Question: Will there be chess programs that can beat anyone?

Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. “Do you want to play chess?” “No, I’m bored with chess. Let’s talk about poetry.” That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity — that is, a programmed ability to “jump out of the system”, so to speak — at least roughly to the extent that we have that ability. Once that is present, you can’t contain the program; it’s gone beyond that certain critical point, and you just have to face the facts of what you’ve wrought.

This feels like a kind of “gotcha” quote, and without context I feel it means to say that “programs which play chess” and “chess players” are two different categories: one merely computing rather than playing, the other actually “playing”, which requires intelligence.

“Deep Blue plays very good chess — so what?” Hofstadter said. “I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”

It doesn’t feel like a “gotcha” quote to me. Hofstadter was articulating his prediction very clearly: if we were to see “chess programs” that can beat anyone, they would be programs of general intelligence, rather than merely programs that can play chess. With hindsight we know that he was wrong, and there’s nothing shameful about making predictions and being wrong. I don’t think your distinction between “programs which play chess” and “chess players” is relevant to the point Hofstadter was articulating.

Reply to myself with regard to the quote. If you don’t recognize it, it is from the book Gödel, Escher, Bach: An Eternal Golden Braid. It’s a really interesting book and I highly recommend it. I didn’t mean to judge the author for what he wrote years ago. It simply feels amusing: how much we have accomplished in the last 40 years, yet the goalposts of AI keep moving forward! In the preface to the book’s 20th-anniversary edition, in 1999, the author wrote:

Although my prediction about chess-playing programs put forth there turned out to be embarrassingly wrong (as the world saw with Deep Blue versus Kasparov in 1997), those few pages nonetheless express a set of philosophical beliefs to which I am still committed in the strongest sense.

From this article in The Economist, we can see that he still holds those strong philosophical beliefs.

For comparison, here’s another quote from the same book:

My guess is that any AI program would, if comprehensible to us, seem pretty alien. For that reason, we will have a very hard time deciding when and if we really are dealing with an AI program, or just a “weird” program.

Something that would at least convince me of the existence of a capable AI would be a system that could prove a mathematical truth from first principles, without human guidance. We might not be able to understand why it took the steps it took, but we could confirm the validity of each step.
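Proof assistants already provide the “confirm each step” half of this: a small kernel mechanically checks every inference, whoever (or whatever) produced the proof. A toy sketch in Lean 4, assuming only the core `Nat.add_comm` lemma:

```lean
-- Every step below is verified by Lean's kernel; the reader need not
-- trust whoever wrote the proof, only the (small) checker.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Definitional equalities close by reflexivity.
theorem two_plus_two : 2 + 2 = 4 := rfl
```

The open half of the prediction is the search for the proof, not the checking of it.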

I just played with the default Q&A prompt on text-davinci-002. It seems to be stronger than a second grader. The first Q&A pair is my prompt; the second A is from GPT-3.

We have two variables x and y. I will give you two equations. You will answer the value of x and y, and tell me how you get those values.

Q: If x+y=5, 2*x+y=6, What is x and y?

A: 2*x+y-x-y=6-5, this gives x=1, then y=5-x=4, so x=1 and y=4.

Q: If x+3*y=13, x+y=10, what is x and y?

A: x+3y-x-y=13-10, this gives 2y=3, so y=1.5, then x=10-y=8.5, so x=8.5 and y=1.5
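For what it’s worth, both answers in that transcript check out. A small sketch that re-solves each system by Cramer’s rule (`solve_2x2` is a hypothetical helper, not part of the transcript):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1  # assumes the system is non-degenerate
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

print(solve_2x2(1, 1, 5, 2, 1, 6))    # x + y = 5,  2x + y = 6  -> (1.0, 4.0)
print(solve_2x2(1, 3, 13, 1, 1, 10))  # x + 3y = 13, x + y = 10 -> (8.5, 1.5)
```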

Wolfram Alpha has done natural language math for more than a decade.

I’m thinking more along the lines of internalizing that the sum of 2 primes can either be prime or non-prime, but the product of 2 primes is never a prime.

Sure, a tool like GPT-3 can answer the above, and give a convincing argument as to why, but only because it has scanned a bunch of math tutorials.
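That aside, the product half of the claim is mechanically checkable. A minimal sketch, assuming a naive trial-division `is_prime` helper (hypothetical code, not from the thread):

```python
def is_prime(n):
    """Naive trial-division primality test; fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(2, 50) if is_prime(p)]

# A product of two primes always has two smaller factors, so it is never prime.
assert not any(is_prime(p * q) for p in primes for q in primes)

# A sum of two primes can go either way: 2 + 3 = 5 is prime, 3 + 5 = 8 is not.
assert is_prime(2 + 3) and not is_prime(3 + 5)
```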

I don’t know when what a prime number is “clicked” for me, as opposed to being just a definition. It did require some book learning, sure, but I don’t refer back to the definition every time I need to think about primes.

I’m not a mathematician but I have a degree in science. I can’t really do any original work with primes, but I can work with them (nowadays only in programming puzzles), and I can have a conversation with another human about them, both as a peer, and in a teaching role.

When an “AI system” can do the same we will have made progress.

(“Primes” is a bad example, which I chose precisely because I know it’s a concept a lot of laypeople have trouble with, but it’s also widely covered in texts. I could have chosen “monads”, but no one, humans included, understands those…)

I copied the definition of a prime number from Wikipedia as the prompt. It broke down pretty quickly, though.

I am a highly intelligent question answering bot. I am an expert in number theory.

A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number.

Q: Is 7 a prime number?

A: Yes.

Q: Is 10 a prime number?

A: No.

Q: If x and y are both prime numbers, is x*y a prime number?

A: No.

Q: Why is it not a prime number?

A: It is not a prime number because it is a product of two smaller natural numbers (x and y).

Q: If x and y are both prime numbers, is x+y a prime number?

A: It is not possible to determine if x+y is a prime number without knowing the values of x and y.

Q: Why?

A: Because the answer to this question depends on the values of x and y. If x and y are both prime numbers, then x+y is a prime number. However, if x and y are not both prime numbers, then x+y is not a prime number.
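That last answer contradicts the premise it restates: x and y both being prime is precisely the case in which the sum remains undetermined. A minimal counterexample check, assuming a naive trial-division `is_prime` helper (hypothetical code, not part of the transcript):

```python
def is_prime(n):
    """Naive trial-division primality test; fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# 3 and 5 are both prime, yet 3 + 5 = 8 is composite: a direct
# counterexample to "if x and y are both prime, then x + y is prime".
assert is_prime(3) and is_prime(5) and not is_prime(3 + 5)

# 2 + 3 = 5, on the other hand, is prime. The honest answer is "it
# depends": an odd prime plus an odd prime is even and greater than 2,
# hence composite, but 2 plus an odd prime can be prime.
assert is_prime(2 + 3)
```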

1979… in 1980 Commodore released the VIC-20, running at ~1 MHz with 20 KB ROM + 5 KB RAM (expandable to 32 KB); early consumer hard disks in the ’80s held around 30 MB. Today memory and storage capacities are about a million times greater, frequencies are up to 5,000 times greater, much more is done per clock cycle, and we have multiple cores. Additionally we have powerful GPUs. And then, aside from consumer stuff, there is AlphaZero… which was trained using 5,000 specialized tensor processing units.

So while the speculations of a 43-years-younger Hofstadter about the future are interesting, I don’t think they matter much in the context of what he is saying today about today’s neural networks.

— Hofstadter, 1979

— Hofstadter, article under discussion.

I hope that when I’m 80 I’m not judged on stuff I said 40 years ago.

Just make sure it’s not recorded anywhere, not in voice, video, pictures, text or handwritten.

So only pillowtalk is safe?

This feels like a kind of “gotcha” quote, and without context I feel it means to say that “programs which play chess” and “chess players” are two different categories: one merely computing rather than playing, the other actually “playing”, which requires intelligence.

Seems so:

— Hofstadter, 2015

I wouldn’t take that as evidence that he wasn’t wrong in 1979.

Isn’t “because you read a bunch of material on math” also the reason you can do this?

Thanks a lot for checking up on this!
