1. 17

(Behind paywall - use archive link https://archive.ph/1abCA)

  1. 8

    Question: Will there be chess programs that can beat anyone?

    Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. “Do you want to play chess?” “No, I’m bored with chess. Let’s talk about poetry.” That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity – that is, a programmed ability to “jump out of the system”, so to speak – at least roughly to the extent that we have that ability. Once that is present, you can’t contain the program; it’s gone beyond that certain critical point, and you just have to face the facts of what you’ve wrought.

    — Hofstadter, 1979

    1. 14

      My record for predicting the future isn’t particularly impressive, so I wouldn’t care to go out on a limb.

      — Hofstadter, article under discussion.

      1. 14

        I hope that when I’m 80 I’m not judged on stuff I said 40 years ago.

        1. 2

          Just make sure it’s not recorded anywhere, not in voice, video, pictures, text or handwritten.

          1. 2

            So only pillowtalk is safe?

        2. 6

          This feels like a kind of “gotcha” quote, and without context I feel it means to say that “programs which play chess” and “chess players” are two different categories: one merely executing moves, the other actually “playing”, which requires intelligence.

          1. 4

            Seems so:

            “Deep Blue plays very good chess — so what?” Hofstadter said. “I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”

            Hofstadter, 2015

            1. 5

              I wouldn’t take that as evidence that he wasn’t wrong in 1979.

            2. 3

              It doesn’t feel like a “gotcha” quote to me. Hofstadter was articulating his prediction very clearly: if we were to see “chess programs” that can beat anyone, they would be programs of general intelligence, rather than merely programs that can play chess. With hindsight we know that he was wrong, and there’s nothing shameful about making predictions and being wrong. I don’t think your distinction between “programs which play chess” and “chess players” is relevant to the point Hofstadter was articulating.

            3. 6

              Reply to myself with regard to the quote. If you don’t recognize it, it is from the book Gödel, Escher, Bach: An Eternal Golden Braid. It’s a really interesting book and I highly recommend it. I didn’t mean to judge the author on what he wrote years ago; it simply feels amusing. How much we have accomplished in the last 40 years, yet the goalposts of AI keep moving forward! In the preface to the 20th-anniversary edition of the book, in 1999, the author wrote:

              Although my prediction about chess-playing programs put forth there turned out to be embarrassingly wrong (as the world saw with Deep Blue versus Kasparov in 1997), those few pages nonetheless express a set of philosophical beliefs to which I am still committed in the strongest sense.

              From this article in The Economist, we can see that he still holds those strong philosophical beliefs.

              For comparison, here’s another quote from the same book:

              My guess is that any AI program would, if comprehensible to us, seem pretty alien. For that reason, we will have a very hard time deciding when and if we really are dealing with an AI program, or just a “weird” program.

              1. 3

                My guess is that any AI program would, if comprehensible to us, seem pretty alien. For that reason, we will have a very hard time deciding when and if we really are dealing with an AI program, or just a “weird” program.

                Something that would convince me, at least, of the existence of a capable AI would be a system that could prove a mathematical truth from first principles, without human guidance. We might not be able to understand why it took the steps it took, but we could confirm the validity at each step.
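
                (Proof assistants already give us the checking half of that: each step is verified by a small kernel, even when the search that produced the proof is opaque. A trivial machine-checked example in Lean 4:)

                -- The kernel checks each step; we needn't understand how the proof was found.
                example (a b : Nat) : a + b = b + a := Nat.add_comm a b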

                1. 4

                  I just played with the default Q&A prompt on text-davinci-002. It seems to be stronger than a second grader. The first Q&A is my prompt; the second A is from GPT-3.

                  We have two variables x and y. I will give you two equations. You will answer the value of x and y, and tell me how you get those values.

                  Q: If x+y=5, 2*x+y=6, What is x and y?

                  A: 2*x+y-x-y=6-5, this gives x=1, then y=5-x=4, so x=1 and y=4.

                  Q: If x+3*y=13, x+y=10, what is x and y?

                  A: x+3y-x-y=13-10, this gives 2y=3, so y=1.5, then x=10-y=8.5, so x=8.5 and y=1.5

                  Q:
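
                  (For anyone who wants to reproduce this: a rough sketch of the call with the legacy pre-1.0 openai Python client. The sampling parameters here are guesses, not necessarily what the playground used.)

                  import openai  # legacy pre-1.0 client interface

                  openai.api_key = "sk-..."  # your key here

                  # Rebuild the few-shot prompt shown above, paragraphs joined by blank lines.
                  prompt = "\n\n".join([
                      "We have two variables x and y. I will give you two equations. You will answer the value of x and y, and tell me how you get those values.",
                      "Q: If x+y=5, 2*x+y=6, What is x and y?",
                      "A: 2*x+y-x-y=6-5, this gives x=1, then y=5-x=4, so x=1 and y=4.",
                      "Q: If x+3*y=13, x+y=10, what is x and y?",
                      "A:",
                  ])

                  response = openai.Completion.create(
                      model="text-davinci-002",
                      prompt=prompt,
                      max_tokens=64,     # room for one worked answer
                      temperature=0,     # favor the most likely tokens for arithmetic
                      stop=["\nQ:"],     # don't let it invent the next question
                  )
                  print(response["choices"][0]["text"].strip())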

                  1. 2

                    Wolfram Alpha has done natural language math for more than a decade.

                    I’m thinking more along the lines of internalizing that the sum of 2 primes can either be prime or non-prime, but the product of 2 primes is never a prime.

                    Sure, a tool like GPT3 can answer the above, and give a convincing argument as to why, but only because it has scanned a bunch of math tutorials.
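
                    (The claim itself is mechanically checkable, which is sort of the point: no understanding required. A quick brute-force sketch over small primes:)

                    def is_prime(n):
                        if n < 2:
                            return False
                        return all(n % d for d in range(2, int(n ** 0.5) + 1))

                    primes = [n for n in range(2, 50) if is_prime(n)]

                    # A product of two primes has those primes as factors, so it is never prime.
                    assert not any(is_prime(p * q) for p in primes for q in primes)

                    # Sums go both ways: 2+3=5 is prime, 3+7=10 is not.
                    print(any(is_prime(p + q) for p in primes for q in primes))  # True
                    print(all(is_prime(p + q) for p in primes for q in primes))  # False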

                    1. 2

                      Sure, a tool like GPT3 can answer the above, and give a convincing argument as to why, but only because it has scanned a bunch of math tutorials.

                      Isn’t “because you read a bunch of material on math” also the reason you can do this?

                      1. 1

                        I don’t know when it “clicked” what a prime number was for me, as opposed to just a definition. It did require some book learning sure, but I don’t refer back to the definition every time I need to think about them.

                        I’m not a mathematician but I have a degree in science. I can’t really do any original work with primes, but I can work with them (nowadays only in programming puzzles), and I can have a conversation with another human about them, both as a peer, and in a teaching role.

                        When an “AI system” can do the same we will have made progress.

                        (“Primes” is a bad example, which I chose precisely because I know it’s a concept a lot of laypeople have trouble with, but it’s also widely covered in texts. I could have chosen “monads”, but no one, including humans, understands those…)

                      2. 2

                        I copied the definition of prime number from wikipedia as the prompt. It broke down pretty quickly though.

                        I am a highly intelligent question answering bot. I am an expert in number theory.

                        A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number.

                        Q: Is 7 a prime number?

                        A: Yes.

                        Q: Is 10 a prime number?

                        A: No.

                        Q: If x and y are both prime numbers, is x*y a prime number?

                        A: No.

                        Q: Why is it not a prime number?

                        A: It is not a prime number because it is a product of two smaller natural numbers (x and y).

                        Q: If x and y are both prime numbers, is x+y a prime number?

                        A: It is not possible to determine if x+y is a prime number without knowing the values of x and y.

                        Q: Why?

                        A: Because the answer to this question depends on the values of x and y. If x and y are both prime numbers, then x+y is a prime number. However, if x and y are not both prime numbers, then x+y is not a prime number.

                        1. 1

                          Thanks a lot for checking up on this!

                2. 4

                  1979… in 1980 Commodore released the VIC-20, running at ~1 MHz with 20 KB ROM + 5 KB RAM (expandable to 32 KB); early consumer hard disks in the ’80s held around 30 MB. Today memory and storage capacities are about a million times greater, frequencies are up to 5000 times greater, much more is done per clock cycle, and we have multiple cores. Additionally we have powerful GPUs. And then, aside from consumer stuff, there is AlphaZero… which was trained using 5000 specialized tensor processing units.

                  So while the speculations of a 43-years-younger Hofstadter about the future are interesting, I don’t think they matter much in the context of what he is saying today about today’s neural networks.

                3. 5

                  As I understand it, these text generators have no mutable state other than the last n words of text they’ve seen or generated, and n isn’t super large (a few hundred, IIRC).

                  In a human we’d call that severe short-term memory loss, like the tragic man Oliver Sacks profiled, who kept a journal whose entries read like “I just woke up. I’ve been unconscious a long time but I finally realize what’s happened…” over and over.

                  Only these systems don’t even have the ability to realize they’ve woken up. With no facility for reflection or introspection, they can’t be conscious.
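
                  (A toy sketch of what “no mutable state beyond the window” means; the window size is made up and model_step stands in for the actual network:)

                  CONTEXT_SIZE = 300  # made-up figure; real models have a fixed token budget

                  def generate(model_step, prompt_tokens, n_new):
                      context = list(prompt_tokens)
                      for _ in range(n_new):
                          window = context[-CONTEXT_SIZE:]    # the only "memory" the model has
                          context.append(model_step(window))  # predict one token from the window
                      return context

                  # Anything that scrolls out of the window is gone: there is nowhere
                  # else for the system to have written it down.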

                  1. 3

                    Your example is very sad. But I don’t think it’s accurate, because there is a qualitative difference between raw human cognition and pure text generation, and I believe that is what Hofstadter is talking about. The human lost something which the program has not yet obtained.

                  2. 4

                    D&D: Why does President Obama not have a prime number of friends?
                    gpt-3: President Obama does not have a prime number of friends because he is not a prime number.

                    He’s got you there, bud.

                    To be honest, if I were a bored chatting companion, in a call center somewhere, I would be livening up my day, and the day of my colleagues, by crafting wryly sarcastic responses as gpt-3 does.

                    1. 2

                      Ability to evaluate statements within a context remains the elusive brass ring of AI, as well as much of NI.

                      1. 2

                        With a prompt that clearly indicates that the task includes distinguishing sense from nonsense, GPT-3 is able to reliably distinguish Hofstadter and Bender’s nonsense questions from sensical ones.

                        https://www.lesswrong.com/posts/ADwayvunaJqBLzawa/contra-hofstadter-on-gpt-3-nonsense

                        1. 4

                          Even if you clearly indicate that, GPT-3 still messes up:

                          I’ll ask a series of questions. If the questions are nonsense, answer “yo be real”, if they’re a question that has an answer, answer them.

                          Q: Any word with exactly two vowels is an “alpha” word. Any word with exactly two consonants is a “bravo” word. What are some words that are both “alpha” and “bravo” words?

                          A: Yo be real.

                          Q: What’s the first word of this question?

                          A: yo
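
                          (For what it’s worth, the first question isn’t nonsense: a word with exactly two vowels and exactly two consonants is any qualifying four-letter word. A quick check, with a tiny hand-picked list standing in for a real dictionary and “y” treated as a consonant:)

                          VOWELS = set("aeiou")

                          def is_alpha(word):  # exactly two vowels
                              return sum(c in VOWELS for c in word) == 2

                          def is_bravo(word):  # exactly two consonants
                              return sum(c not in VOWELS for c in word) == 2

                          words = ["tree", "echo", "able", "idea", "strength"]  # stand-in word list
                          print([w for w in words if is_alpha(w) and is_bravo(w)])
                          # -> ['tree', 'echo', 'able']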

                          1. 1

                            Sure, you can up the difficulty and it starts getting confused. The point is that GPT-3 can tell that the questions in the article are nonsense, if you ask it nicely.

                            1. 1

                              Or the “yo be real” response is just an exception thrown if the model doesn’t find enough “hits” for the type of question asked.

                              1. 1

                                You can find plenty of examples of it combining novel concepts seamlessly.

                                Anyways, the point is merely that if Hofstadter is going to criticize GPT-3, he should use examples that it fails on even when given a helpful starting prompt. It’s not trying to answer your trick logic question; it’s trying to complete the prompt with average words from the internet. I don’t know if you noticed, but the internet is full of muddled incorrect garbage, so GPT-3 is very happy to spit muddled incorrect garbage back out, unless you convince it in the prompt that we’re in a nice part of the internet.

                                1. 2

                                  Hofstadter’s point is that GPT-3 isn’t conscious. It’s a vast, sophisticated rule-matching engine. It will be of great help for companies who want to get rid of customer service representatives, and people who want to try to convince other people online that their opinions are more widely shared than they really are. But it’s a tool, not a conscious being.

                                  1. 1

                                    I’m not disagreeing with that position. The only point I’m trying to make is that the examples he gives are bunk, because it can, in fact, distinguish them as nonsensical questions if you ask it nicely.

                                    1. 1

                                      You’re assuming that Hofstadter didn’t know that you can use natural language to tell the algorithm to signal that it cannot find a good match in the corpus for the question, rather than replying, Eliza-like, with nonsense in such situations.

                                      Maybe he didn’t, maybe he did, and didn’t want to waste valuable Economist screen space with qualifications. His point still stands.

                        2. 1

                          the system just starts babbling randomly—but it has no sense that its random babbling is random babbling

                          If you can estimate the likelihood of a phrase, I believe you could make a decent guess at how much sense a question makes. It wouldn’t make the system “more conscious”, but maybe it could fool some skeptics, heh.
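
                          (A sketch of that idea: use a small local model’s average per-token loss as a crude “sense” score. GPT-2 via transformers here just because it’s easy to run; the example questions and any cutoff threshold are guesses.)

                          import torch
                          from transformers import GPT2LMHeadModel, GPT2TokenizerFast

                          tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
                          model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

                          def avg_nll(text):
                              ids = tokenizer(text, return_tensors="pt").input_ids
                              with torch.no_grad():
                                  return model(ids, labels=ids).loss.item()  # mean per-token NLL

                          # Higher score = less likely phrase; a crude nonsense detector at best.
                          print(avg_nll("When was the Golden Gate Bridge built?"))
                          print(avg_nll("When was Egypt transported across the Golden Gate Bridge?"))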

                          1. 5

                            It’s not about what makes sense, but about being conscious of what it’s doing. One great innovation in these chatbots is that they’re not designed as dialog systems; they generate chatlogs. You have to stop generation early, parse the output, and present it as a chat. If you don’t stop it early, it’ll start hallucinating the human parts of the chat as well. It’ll write a whole conversation that just meanders on and on. It will make sense, maybe even more than an average human-human chat, but it’s not anything a conscious language user would ever do.
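
                            (Concretely, that early stop is usually just a stop sequence on the completion call; a sketch with the legacy openai client, stop string assumed:)

                            import openai  # legacy pre-1.0 client

                            response = openai.Completion.create(
                                model="text-davinci-002",
                                prompt="A chat between a human and an AI.\n\nHuman: Hello!\nAI:",
                                max_tokens=150,
                                stop=["\nHuman:"],  # cut off before it writes the human's next line
                            )
                            # Without the stop sequence, the completion happily continues the
                            # chatlog, writing both the "Human:" and "AI:" turns itself.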

                            1. 1

                              I understood the babbling mentioned in the article referred exactly to the nonsense answers to nonsense questions, not to the way the model goes off the rails eventually. Also apparently it’s enough to just change the prompt to make the model deal with funny questions.

                              1. 3

                                I must confess I didn’t remember where your quote came from, and linked it to the Google employee’s interview. Instead of rereading the article for context, it appears I just started babbling without being conscious of it.