1. 49
    1. 6

      I just added a new section to the bottom of this article showing how it can handle mathematical explanations too… input as LaTeX and the output can include more LaTeX which can then be rendered using GitHub Markdown. It’s a really neat trick!
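
      For anyone who hasn’t tried it: GitHub-flavored Markdown renders LaTeX between $$ (and inline $) delimiters, so a snippet like this (a made-up example just to show the rendering step, not actual GPT-3 output) displays as typeset math:

      ```markdown
      The quadratic formula gives the roots of $ax^2 + bx + c = 0$:

      $$
      x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
      $$
      ```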

    2. 7

      Amazing.

      “Well, this tool just successfully answered every question I had, including stuff I didn’t know. But because it is unable to tell when it is talking nonsense, I shall interpret that to mean, literally, ‘that it does not know anything about anything at all’, because it is a pattern generator, rather than considering that huge multilevel pattern engines might be able to represent knowledge.”

      “Because this thing is not perfect, it must not know anything.”

      I find it truly remarkable how people can convince themselves that the evidence of their own two eyes simply doesn’t exist. To be frank: what will it take?!

      edit: This suggests that there’s an “uncanny valley of intelligence” where we judge subhuman systems by human standards.

      1. 20

        That was a bit unkind. Whether GPT-3 et al. can be considered to “know” or “understand” is a huge open question, and there’s no need to go all ad hominem and imply that a contrary opinion is delusional.

        We’re at the point where Searle’s Chinese Room thought experiment has become real. I’ve always disagreed with Searle’s side: it’s disingenuous to say the person in the room is just “looking up rules in a book”, because real translation involves something extremely complex, like, well, a deep-learning neural net. If you instead say the person in the room is manually feeding the input through a GPT-3-sized net, working out all the math with a calculator, it stops seeming obvious that there’s no knowledge of Chinese; rather, the knowledge isn’t in the puny human but in the billions of connections.

        My take is that GPT-3 definitely seems to “know” stuff; just don’t pin me down on exactly what “know” means. But it doesn’t know what it doesn’t know; it’s not self-aware. Instead it confabulates. It’s kind of like we’ve created a sorta-maybe-intelligence, only one with some profound brain damage.

        (Disclaimer: I have no real AI fu, I’ve just read a lot of lightweight cognitive science.)

        1. 7

          “But it doesn’t know what it doesn’t know; it’s not self-aware. Instead it confabulates.”

          There’s evidence showing that if you explicitly ask it not to confabulate, and instead to call out questions it doesn’t know the answer to or that don’t make sense, it does so: https://www.lesswrong.com/posts/ADwayvunaJqBLzawa/contra-hofstadter-on-gpt-3-nonsense

          Edit: another example: https://twitter.com/goodside/status/1545793388871651330
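
          A minimal sketch of that kind of prompt, assuming the OpenAI Python client (`openai.Completion.create`); the model name, wording, and nonsense question are just illustrative:

          ```python
          import os

          import openai

          openai.api_key = os.environ["OPENAI_API_KEY"]

          # Explicitly give the model permission to refuse questions that are
          # nonsense or that it can't answer, instead of confabulating.
          prompt = (
              "Answer the question below. If the question is nonsense or you "
              "don't know the answer, reply with exactly: I don't know.\n\n"
              "Q: How many rainbows does it take to jump from Hawaii to seventeen?\n"
              "A:"
          )

          response = openai.Completion.create(
              model="text-davinci-002",  # illustrative model name
              prompt=prompt,
              max_tokens=64,
              temperature=0,  # keep the output as deterministic as possible
          )

          print(response["choices"][0]["text"].strip())
          ```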

          1. 2

            That is fascinating, and sort of shakes up my opinion/understanding again. It’s as if, by default, it doesn’t realize that dismissing the question is an option, and it feels “compelled” to come up with an answer because of the way the prompt is phrased.

            There’s a quote from that Twitter thread that I find very interesting:

            One factor that exacerbates this is what I call “rhetorical bias” — we interpret Q+A as two-party dialog, but the model interprets Q+A as rhetorical one-author text. In this context, the suppositions of questions are never contradicted by their answer.

            1. 2

              My theory is that there are just very few samples online of somebody answering a question with “I don’t know”, because most examples of the Q/A format are rhetorical questions, FAQs, and answer sheets. So the model thinks that this is just not done.

              It’s not optimizing for correct answers but for correct continuations. And making something up is probably a more “correct” continuation (relative to the input samples, at least) than admitting ignorance.

              edit: And I just noticed that I’ve basically rephrased your quote…

              1. 2

                This is a good insight! Online discourse (whether formalized in an FAQ or less formal in chat) usually does not contain the response “I don’t know”. There’s a danger in applying the ideal “Socratic dialog” model to online text, because it is less synchronous than conversation, and entire swaths of what makes up a conversation are missing (pauses, “umms”, etc.).

        2. 4

          It’s admittedly unkind, but as you said it’s an open question, so to see it put aside with “well, obviously it doesn’t actually know anything” just makes me want to go “actually, obviously it knows a lot, you just demonstrated lots of things it knows.”

      2. 14

        Not sure what you’re disagreeing with me on here. I think it’s vitally important for people to understand that the AI here isn’t an intelligence with knowledge about the world: it’s a huge bag of patterns.

        That doesn’t mean it’s not useful and interesting - I find it incredibly useful and I’m absolutely fascinated by it.

        But it does mean that it’s really important people understand the nature of the thing. It’s not safe to use these tools if you don’t understand that they aren’t “intelligent”, and they definitely aren’t “truthful”.

        A lesson I learned just this morning: if you ask GPT-3 a leading question, “Which of those two queries is more efficient and why?”, it could well provide a misleading answer because you didn’t give it an obvious option to say “actually they’re both the same”. I’ve sketched a less leading phrasing at the end of this comment.

        Using these tools effectively is a very deep subject. I want to encourage people to understand that!
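
        Here’s a rough sketch of what I mean by a less leading phrasing - the queries and exact wording are hypothetical, just for illustration, but the second prompt explicitly gives the model an out:

        ```python
        # Two hypothetical queries to compare (illustrative only).
        queries = """
        SELECT id FROM posts WHERE author_id = 1;
        SELECT id FROM posts WHERE author_id IN (1);
        """

        # Leading: presupposes that one of the queries must be more efficient.
        leading_prompt = (
            "Which of these two queries is more efficient and why?\n" + queries
        )

        # Less leading: explicitly offers "no difference" as an acceptable answer.
        neutral_prompt = (
            "Compare these two queries. Is one more efficient than the other, "
            "or are they effectively the same? If there is no meaningful "
            "difference, say so.\n" + queries
        )
        ```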

        1. 8

          “Not sure what you’re disagreeing with me on here. I think it’s vitally important for people to understand that the AI here isn’t an intelligence with knowledge about the world: it’s a huge bag of patterns.”

          Those are not antonyms.

          I think GPT-3 is a (weak) intelligence with (unreliable) knowledge about the world, implemented as a huge bag of patterns and metapatterns.

          edit: I think what’s happening is that you only have two categories: intelligent things and unintelligent things. And your model for “intelligent things” is humans. So when you see a thing lacking some critical human intellectual capabilities but clearly having others, those others have to be “faked”, have to be illusions or mere tricks. Whereas I think that human intelligence is assembled from lots of separate skills: some involve knowledge recall, which GPT-3 has mastered; others involve introspection and the ability to ascertain the reliability of that knowledge, which it is architecturally incapable of. But to say that this means GPT-3 doesn’t “know” things is to deny there was ever a baby in the bathwater, after writing an article documenting its crying.

          1. 3

            I really like your description of GPT-3 there - it fits my understanding really well.

            I didn’t spend much time carefully crafting the wording of the paragraph we’re talking about here. I really just wanted to re-emphasize that you can’t blindly trust the output of GPT-3.

            When I said “GPT-3 doesn’t ‘know’ things” I was trying to hint that GPT-3 doesn’t have a “fact base” of things that it knows are true: it has a huge bag of vectors representing word associations. That some of those associations may represent true facts (“London is the capital of England”) is almost accidental.

            1. 3

              Of course, the same thing may still be said of humans. There’s a mechanism that makes us believe true things sometimes - just like with GPT-3 - but it is imperfect, and there are lots of ways in which it can fail, and ways to make it fail, leaving people believing and espousing false things - just like with GPT-3.

              The important thing about GPT-3 isn’t that it fails, but that it works at all, some of the time. I believe that if you can get a thing to show humanlike capability on, say, one in ten prompts, then you’re only a few weeks out from the Singularity.

        2. 2

          “The AI here isn’t an intelligence with knowledge about the world: it’s a huge bag of patterns.”

          Genuine question: What evidence do we have that real intelligence isn’t?

          1. 7

            None at all. The philosophical questions here are fascinating - I’d love to read writing from philosophers on this stuff.

            I’ve been thinking about that a lot with respect to DALL-E as well - how are images generated by DALL-E different from images generated by a human artist who has spent a whole lot of time looking at art by other people before developing their own?

            Could DALL-E be thought of as a human artist with an enormous memory who’s deeply examined millions of images before starting to make their own?

            I intuitively feel like these are not the same thing at all - humans are more than just pattern matchers. But I don’t have anything to back that feeling up yet.

            1. 4

              I have a PhD in philosophy. Here’s a Lobsters thread on whether Copilot “knows” things. I think the answer is “no” even bracketing the more difficult question of whether it believes things: https://lobste.rs/s/umicte/give_up_github#c_3zpm90

              I should probably write a blog post about this. I like the idea that some day soon I can feed an outline to an AI, have it flesh that out into paragraphs, and then spend my time editing the semi-finished product. It would make me much more productive.

            2. 2

              “humans are more than just pattern matchers”

              Sure, we’re pattern matchers plus a bunch of randomness! :D

      3. 3

        I think the main thing is just having a history of reliability, so you feel like you can trust it. You may have observed that history yourself, or had it signalled to you socially/culturally - e.g. maybe one trusts what an experienced programmer writes up as an explanation, imagining that they have a history of thinking about code in a way that works.

        But yeah, if this usage of GPT-3 or something like it reliably works for this and builds trust, it could feel that way too. Framing that as “it understands it / it doesn’t understand it” is, I think, really about how the truth value of that statement affects the other ways we interact with it (though maybe that’s a bit reductionist in a pragmatist way…).

    3. 3

      I can’t even understand my own code. Good luck, AI.

    4. 1

      Impressive! Are you still using your OpenAI free credits? I like playing with it, but their fees add up quickly.

      1. 2

        I ran out of free credits - I spent $7.77 in June and $4.46 so far in July.

    5. 1

      I don’t know if it’s right or not on the left join thing, but it’s probably smarter about code than the average commenter!

    6. 1

      Now ask it why someone would want to write this code; what business purpose would be served by it.

      1. 2

        Just tried that: I pasted in the first example from my blog post (the Python lexer) and asked. Here’s what it said:

        Me: what is the business value?

        GPT-3: The business value of this code is that it helps to parse strings of text, which can be used to interpret data or convert it into a format that can be read by a computer.

        Me: how can a business make money from it?

        GPT-3: A business can make money from this code by using it to develop software that can interpret data or convert it into a format that can be read by a computer.

        1. 1

          Thanks, that confirms what I expected: that it would reply with something lexically close to the prompt, but not with anything near what a human would answer.