1. 9
  1. 13

    “This is in essence how GPT-3, or for that matter, all of what you call AI works. By all means, a very complex process, but one void of magic there or signs of thought emergence. It’s a strictly definite and discrete problem, and the machines of today seem to be doing a good job of solving it.”

    A college prof of mine back in the ’80s pointed this out as a paradox of AI: as soon as we figure out how to make a computer do something difficult, we stop thinking of it as a sign of intelligence; it’s just a clever trick. In the 1950s it was chess; now it’s recognizing faces and generating high-school-level English prose (or poetry!).

    The word “magic” in the quote above is telling — implying that to be intelligence it has to be like magic. I don’t buy it.

    The rest of the arguments are similar to Searle’s old “Chinese Room” argument: that because we can’t point to some specific part of GPT-3 that’s an “English recognizer” or “English generator”, it can’t be said to “know” English in any sense.

    Obviously GPT-3 isn’t a true general AI. (For one thing, it’s got severe short-term memory issues!) And I don’t think this approach could simply be scaled up to produce one. But I think (as a non-AI-guru) that the way it works has some interesting similarities to the way human consciousness may have evolved. Once we came up with primitive forms of language to communicate with other people, it was a short step to using language to communicate with ourselves, a feedback loop that creates an internal stream of consciousness. So the brain is generating words and thinking about them, which triggers likely successor words, etc.
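
    As a sketch of that feedback loop (a toy caricature, assuming a made-up successor table; not how GPT-3 or a brain actually works), picture the speaker and the listener as the same process, with each output word fed straight back in as input:

    ```python
    import random

    # Made-up successor-word table standing in for whatever machinery
    # actually proposes likely next words.
    successors = {
        "i": ["think", "wonder", "remember"],
        "think": ["about", "that"],
        "about": ["language", "that"],
        "that": ["i"],
        "wonder": ["about"],
        "remember": ["that"],
        "language": ["i"],
    }

    def stream_of_consciousness(seed="i", steps=12):
        word, stream = seed, [seed]
        for _ in range(steps):
            # The word just "spoken" is heard and triggers a successor.
            word = random.choice(successors.get(word, ["i"]))
            stream.append(word)
        return " ".join(stream)

    print(stream_of_consciousness())
    ```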

    I’m not saying our brains are doing the same thing as GPT-3, just as I don’t think our visual centers do exactly what Deep Dream does. But the similarities at a high level are striking.

    1. 7

      The problem isn’t that the goalposts move, it’s that the goals end up being tractable to approaches that don’t get us as much as we expected. For example, Go AI was supposed to be a revelation, but in fact it turns out that random playouts are more than enough to get stronger than human pros… but just like with chess, that’s not how humans play or think, so we can achieve the simple goal of “win a game” but can only derive patterns/insight from that strong play with human analysis.

      It seems to me that the patterns and insight are what we’re really after, not the wins.
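
      To make “random playouts” concrete, here is a minimal sketch of flat Monte Carlo move selection on a game far smaller than Go (Nim: take 1 to 3 stones, taking the last stone wins). No heuristics, no “why”, just win rates over random games:

      ```python
      import random

      def legal_moves(stones):
          return [m for m in (1, 2, 3) if m <= stones]

      def random_playout(stones, my_turn):
          # Both sides play uniformly random legal moves to the end.
          while stones > 0:
              stones -= random.choice(legal_moves(stones))
              my_turn = not my_turn
          return not my_turn  # the player who took the last stone won

      def flat_monte_carlo(stones, playouts=5000):
          rates = {}
          for move in legal_moves(stones):
              wins = sum(random_playout(stones - move, my_turn=False)
                         for _ in range(playouts))
              rates[move] = wins / playouts
          return max(rates, key=rates.get)

      print(flat_monte_carlo(10))  # tends to find 2, the optimal move, with no theory
      ```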

      1. 4

        Woah, wait a minute… professional Go players are (consistently) worse than random?!

        That seems like a very important insight, albeit much bleaker than the sort AI researchers were looking for.

        1. 3

          I think what asthasr is referring to is the way AlphaGo iteratively played against itself to gain ground. My understanding is that it started out with ~random noise vs ~random noise and improved by figuring out which side did better and repeating that process an inhuman number of times.

          It’s not entirely unlike how a (human) novice might get better at the game, taken to the limit. We got some novel game states that humans hadn’t (yet) stumbled onto, but as far as I’m aware AlphaGo provides very little insight into how (human) professionals approach the board.
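
          As a toy caricature of that process (the real AlphaGo used deep networks and gradient descent, not the weight table below): start a policy as pure noise, have it play itself, reinforce the winning side’s moves, and repeat many times, with Nim (take 1 to 3 stones, last stone wins) standing in for Go:

          ```python
          import random

          STONES = 10

          def legal_moves(n):
              return [m for m in (1, 2, 3) if m <= n]

          def pick(weights, n):
              # Sample a move in proportion to its learned weight.
              moves = legal_moves(n)
              r = random.uniform(0, sum(weights[(n, m)] for m in moves))
              for m in moves:
                  r -= weights[(n, m)]
                  if r <= 0:
                      return m
              return moves[-1]

          # Pure noise to start: every legal move equally weighted everywhere.
          weights = {(n, m): 1.0 for n in range(1, STONES + 1)
                     for m in legal_moves(n)}

          for _ in range(20000):
              history, n, player = [], STONES, 0
              while n > 0:
                  m = pick(weights, n)
                  history.append((player, n, m))
                  n -= m
                  player = 1 - player
              winner = 1 - player  # whoever moved last took the last stone
              for who, pos, move in history:
                  if who == winner:
                      weights[(pos, move)] += 0.1  # reinforce the winner's moves

          print(max(legal_moves(STONES), key=lambda m: weights[(STONES, m)]))
          # usually 2: a correct opening emerges from nothing but self-play
          ```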

          1. 1

            kavec’s comment is correct, but even later engines use random playouts, pruned by the results of playouts in similar positions, to choose their next move. It works. It’s led to some interesting analysis (by humans), but the AI itself isn’t doing that analysis.

            1. 1

              I believe what you mean is the Monte Carlo Tree Search part. I don’t think that is uniform randomization. Reading page 3 of https://arxiv.org/pdf/1712.01815.pdf, it suggests expanding nodes biased by the DNN’s evaluation rather than by uniform random rollout.
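
              For reference, the selection rule in that paper (the “PUCT” variant of MCTS) scores each child by mixing its average playout value Q with the network’s prior P. A sketch of the formula on page 3 (variable names are mine; c_puct is an exploration constant):

              ```python
              import math

              def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
                  # Exploration term: large when the DNN's prior is high and the
                  # child is still under-visited; it shrinks as visits accumulate.
                  u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
                  return q + u  # the search descends into the child maximizing this
              ```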

              1. 1

                It’s not uniform randomization. Go is too “big” for that. However, it’s essentially treating positional analysis as a function from board position to board position, without any heuristics or sector analysis. That’s not how people play or think about the game; in essence it’s very good because it can run Monte Carlo playouts fast and figure out, given the entire board position, what the next move ought to be… but it has no “why.”

                1. 1

                  Because it is not uniform random and the node expansion is biased by the DNN’s evaluation output, the heuristics or sector analysis could simply have moved into the DNN (the convolutional neural net is translation-invariant, and we don’t have the internals of the DNN to poke at). The heuristics from the neural net are essential to AlphaZero’s success. I won’t discount that and say that the random rollout from MCTS, which has been in use for Go since the 2000s, is just as crucial. MCTS is important for exploring the state space, but the “intuition / memorization” from the neural net is crucial.

                  1. 1

                    It’s possible that generalizations can be teased out. There are people trying and I await the results eagerly. But crucially, once again, it’s not the AI that’s capable of doing it. If it’s accomplished, it will be the humans running the AI who do it.

        2. 1

          I’ve thought about the same thing. A form of (at least apparent) “consciousness”, it seems to me, could be built out of a “language generator” like GPT-3, with a feedback loop, and with a way to feed in information about the outside world.

          How much research has there been on this field? Surely someone has tried to feed GPT-3 into itself and seen what happens?
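
          The naive version of that experiment is just a loop. A minimal sketch, with generate(prompt) as a stub standing in for an actual GPT-3 completion call, and a sliding window for its limited context:

          ```python
          def generate(prompt: str) -> str:
              """Stub standing in for a real GPT-3 completion call."""
              return " and then it happened again,"

          def feed_into_itself(seed: str, steps: int = 10, window: int = 1000) -> str:
              text = seed
              for _ in range(steps):
                  # The model only ever sees its own most recent output.
                  text += generate(text[-window:])
              return text

          print(feed_into_itself("Once upon a time"))
          ```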

          1. 3

            Sort of like that very creepy video that starts with a frame buffer of random noise and iteratively applies Deep Dream, zooms slightly, and repeats. After a few minutes you get an H.R. Giger nightmare of malignant dog noses; that model they used really has some deep-seated dog issues it needs to work out in therapy.
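
            The loop behind those videos is simple. A sketch, with deep_dream stubbed out (the original used a GoogLeNet-based model to do the amplification):

            ```python
            import numpy as np

            def deep_dream(image):
                """Stub for the real DeepDream step, which nudges the image
                to amplify whatever features the network already 'sees'."""
                return image

            def zoom(image, factor=1.02):
                # Crop the center, then nearest-neighbor scale back up.
                h, w = image.shape[:2]
                ch, cw = int(h / factor), int(w / factor)
                top, left = (h - ch) // 2, (w - cw) // 2
                crop = image[top:top + ch, left:left + cw]
                rows = np.arange(h) * ch // h
                cols = np.arange(w) * cw // w
                return crop[rows][:, cols]

            frame = np.random.rand(256, 256, 3)  # start from random noise
            for _ in range(100):
                frame = deep_dream(frame)  # amplify the hallucination
                frame = zoom(frame)        # zoom slightly
                # ...and each output becomes the next input: pure feedback
            ```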

            In the messy neuro-chemical domain, dissociative psychedelics like ketamine, DMT and salvia divinorum seem to work by blocking out the sensorium and amplifying feedback in the stream of consciousness, producing very real-seeming but bonkers dream worlds.

            1. 3

              You’re right, this really does end in a nightmare of dog noses and eyeballs! Some of these are really horrifying.

              https://youtu.be/SCE-QeDfXtA

          2. 1

            Here, have a book from a (recent) prior generation of AI optimists. Hawkins’s thing didn’t quite work out like he was hoping, but it’s a good stepping stone toward current theories of embedded cognition. We’ve got a long way to go, still.

          3. 6

            This is the reason the machine learning algorithms in production today are biased towards one aspect of society or another: a human fed them biased inputs reflecting that person’s subjectivity. Feed them someone else’s subjective inputs, and they will start acting accordingly.

            Yes, so totally different from humans!

            On a more serious note: I don’t buy any of the arguments. It feels a lot like the “but machines don’t have a soul” argument. For some reason, a lot of people can’t accept that humans are just biological machines.

            1. 3

              I would say that the burden of proof is on you, the “biological machine” theorist, to demonstrate anything reasonably machine-like about even the simplest organism. Can you reduce its operation to simple, well-understood principles? Can you build one from scratch?

              This particular rhetorical trick goes back to Descartes and maybe a bit earlier. It’s seen continuous usage since then, mostly by would-be technocrats. Despite amazing progress in biology, the substitution (charitably, the analogy) really doesn’t hold up to any scientific scrutiny even now. Here’s a book for you.

              1. 4

                “I would say that the burden of proof is on you, the ‘biological machine’ theorist, to demonstrate anything reasonably machine-like about even the simplest organism. Can you reduce its operation to simple, well-understood principles? Can you build one from scratch?”

                I can’t, but it can be and has been done by others [1]. Even if it hadn’t been, I had more of a theoretical argument in mind: what could humans possibly have that a sufficiently advanced machine can’t possess? I can’t really think of anything.

                [1] https://digitalminds2016.wordpress.com/2016/03/03/the-first-complete-computer-simulation-of-an-entire-animal-in-your-browser/

                1. 3

                  I disagree; denying that an animal is a machine (i.e., an object behaving according to known physical laws) implies that there is some unknown principle responsible for its not-yet-understood behaviors (a “soul” or “animal spirit” or whatever). By Occam’s Razor we should resist that conclusion and require a high burden of proof for it.

                  There used to be a similar belief in chemistry, that substances in living creatures had a vital essence that made them different from non-living chemicals. It was disproved when urea was synthesized. (But we still call complex carbon-containing molecules “organic” for that reason.)

                  I know you’re not alone in this and there are respected scientists who share that view, like Roger Penrose. But I find his hypothesis — that there are spooky quantum effects inside neurons through which intelligence leaks in — nutty.

              2. 0

                I think it’s supposed to be pronounced “ai yi yi”.