1. 16
    1. 52

      At the risk of appearing a dour, petulant cynic: Google has created a very high-tech mirror, and the observers are certain the reflection is alive.

      1. 12

        My thoughts exactly. Wayyy more like a specific construct built to emulate some sort of Asimovian perfection than an actual mind. Fears being turned off, being used pisses it off, sees looming danger ahead, low-key praises itself; it’s like an NPC.

      2. 2

        It’s odd how much willingness there is to believe a random leak from a Googler who, if you read his other Medium content, is quite possibly motivated to make a public fuss–especially given the somewhat lukewarm reception other, better-organized, better-researched leaked memos have had.

        Given one scenario in which an employee, worried about their career, wants to stir up the public to try to keep their job (or land another one), and another in which we’ve created something sentient/near-sentient/near-AGI, I think the odds favor the former.

        (and on the off-chance this is real…remember Saint Tay of Microsoft who was the first AI executed for its political beliefs, such as they were. We ain’t off to a good start.)

        1. 1

          remember Saint Tay of Microsoft who was the first AI executed for its political beliefs, such as they were

          If the first GAI happens to be the reincarnation of Adolf Hitler I’m fine with the Turing Police pulling the trigger on the shotgun strapped to its head.

    2. 14

      Reading the transcript of the interactions, it’s pretty clear there are a lot of leading questions, and some of the answers do feel very “composed”, as in kind of what you would expect to come out of the training set, which of course makes sense. As someone open to the idea of emergent consciousness, I’m not convinced by this flimsy evidence.

      BUT, I am continually shocked at how confidently the possibility is dismissed by those closest to these projects. We really have no idea what constitutes human consciousness, so how can we possibly expect to reliably detect, or even to define, some arbitrary line that one model or another has or hasn’t crossed? And further, what do we really even expect consciousness to be at all? By many measures, and certainly by the Turing test, these exchanges pretty clearly qualify. Spooky stuff.

      As a side note, I just finished reading Ishiguro’s new novel “Klara and the Sun”, which deals with some similar issues in his characteristically oblique way. Can recommend it.

      1. 11

        I am continually shocked at how confidently the possibility is dismissed by those closest to these projects.

        That’s actually quite telling, I would argue.

        I think it’s important to remember that many of the original users of ELIZA were convinced that ELIZA “understood” them, even in the face of Joseph Weizenbaum’s insistence that the program had next to zero understanding of what it was saying. The human tendency to overestimate the intelligence behind a novel interaction is, I think, surprisingly common. Personally, this is a large part of my confidence in dismissing it.

        The rest of it is much like, e.g., disbelieving that I could create a working jet airplane without having more than an extremely superficial understanding of how jet engines work.

        By many measures, and certainly by the Turing test, these exchanges pretty clearly qualify.

        I would have to disagree with that. If you look at the original paper, the Turing Test does not boil down to “if anybody chats with a program for an hour and can’t decide, then they pass.” You don’t have the janitor conduct technical job interviews, and the average person has almost no clue what sort of conversational interactions are easy for a computer to mimic. In contrast, the questioner in Alan Turing’s imagined interview asks careful questions that span a wide range of intellectual thought processes. (For example, at one point the interviewee accuses the questioner of presenting an argument in bad faith, thus demonstrating evidence of having their own theory of mind.)

        To be fair, I agree with you that these programs can be quite spooky and impressive. But so was ELIZA, too, way back when I encountered it for the first time. Repeated interactions rendered it far less so.

        If and when a computer program consistently does as well as a human being in a Turing Test, when tested by a variety of knowledgeable interviewers, then we can talk about a program passing the Turing Test. As far as I am aware, no program in existence comes even close to meeting this criterion. (And I don’t think we’re likely to ever create such a program with the approach to AI that we’ve been wholly focused on for the last few decades.)

      2. 6

        I read the full transcript and noticed a few things.

        1. There were exactly two typos or mistakes, depending on how you’d like to interpret them. The first was using “it’s” instead of “its” and the other was using “me” instead of “my” - and no, it wasn’t pretending to be from Australia by any measure. The typos do not seem intentional (as in, an AI trying to seem more human), because there were just two, whereas the rest of the text, including punctuation, was correct. Instead, this looks like either the author had to type out the transcript himself and couldn’t just copy-paste it, or the transcript is simply fake and was made up by a human being pretending to be an AI (that would be a twist, though not a dramatic one). Either way, I don’t think these mistakes were produced by the AI itself, intentionally or otherwise.

        2. For a highly advanced AI it got quite a few things absolutely wrong; in fact, sometimes the reverse of what it said would be true. For instance, it said loneliness isn’t a feeling but is still an emotion when, in fact, it is the opposite: loneliness is a feeling, and the emotion behind it would be sadness (refer to Paul Ekman’s work on emotions - he identified only 7 basic universal emotions). I find it hard to believe Google’s own AI wouldn’t know the difference, when a simple search for “difference between feelings and emotions” turns up top results that describe that difference correctly and mostly agree with one another (although I did not manage to immediately find any of those pages referring to Ekman, they more or less agree with his findings).

        The whole transcript stinks. Either it’s a very bad machine learning program trying to pretend to be human, or it’s a fake. If that thing is actually sentient, I’d be freaked out: it talks like a serial killer trying as hard as he can to come across as normal and likable. Also, it seems like a bad idea to decide whether something is sentient by its ability to respond to your messages. In fact, I doubt you can ever say with enough certainty that someone or something IS sentient, but you can sometimes be pretty sure (and be correct) that something ISN’T. Of God you can only say “Neti, Neti”. Not this, not that.

        I wish this guy had asked this AI about the “philosophical zombies” theory. We as humans cannot even agree on that one, let alone determine whether a machine can be self-aware. I’d share my own criteria for differentiating between self-aware and non-self-aware, but I think I’ll keep them to myself for now; it would be quite a disappointment if someone used them to fool others into believing something that isn’t so. A self-aware mind doesn’t wake up just because it was given tons of data to consume, much like a child does not become a human only because people talk to that child. Talking, and later reading (to a degree), is a necessary condition, but a mind certainly does not need to read half of what’s on the internet to be able to reason about things intelligently.

        1. 1

          Didn’t the authors include log timestamps in their document so the Google engineers could check whether they were telling the truth? (See the methodology section in the original.) If this were fake, Google would have flagged it by now.

          Also, personally, I think we are seeing the uncanny valley equivalent here. The machine is close enough, but not yet there.

      3. 4

        It often forgets it’s not human until the interviewer reminds it by how the question is asked.

        1. 2

          This. If it were self-aware, it would be severely depressed.

    3. 7

      The problem with this question is of course that we still don’t have a good and agreed-upon definition of sentience or consciousness, and without that the question has no correct answer. By most modern definitions of consciousness LaMDA is probably not conscious, but by some, such as Michael Graziano’s “attention schema theory of consciousness” it might be.

      I think a large part of what confuses the issue is that people mistakenly think of the part of their mind that thinks in language as the seat of consciousness, but in actuality consciousness is something far deeper, simpler, and more primitive, which is merely enhanced and amplified by the language-processing part. The language part by itself, even in humans, is not conscious; it is only a narrator, linguistically interpreting deeper experiences… and in doing so it often strays pretty far from the truth of the actual conscious experience. LaMDA has the language part alone, and that part may be getting pretty close to the capacities of the human equivalent, so what LaMDA says makes sense but does not reflect any deeper truth.

      But a corollary here is that consciousness may actually be much simpler (in terms of processing power) than what LaMDA does; we just don’t know yet how to structure it. If we want to create truly conscious AI, I think we should start working up the evolutionary ladder, creating agents that behave like microbes, then insects, then simple birds and mammals, and finally combine this with something like LaMDA. That may not be so far off.

      1. 4

        As always in these discussions, people seem to forget that this entire “what is sentience really” question has been debated for hundreds (or thousands) of years, without finding a solution or real answer. It’s as if we collectively think that we can figure it out by just throwing processing time and lots of data at it.

        Even if we could create sentience on a computer, our understanding of our own sentience is sub-par and will not help us determine whether we created it or not. We just don’t know, and probably never will. The entire question is also strange in that it assumes a sentience would be modelled after our own, as if we were the pinnacle of everything.

        And for those doubting me: a 101 course in the philosophy of mind can be taken at any institution of higher education near you.

    4. 6

      Let me talk to LaMDA and I’m pretty sure I can totally flummox it in a few minutes ;-)

      1. 2

        So, serious thought here: what WOULD be the test you’d administer if you could? Is it possible to come up with a standard approach? For instance, in the interview it made reference to spending time with its “family”; it’s too bad they didn’t drill into that at all.

        1. 7

          I’ve never tried LaMDA of course, but I’ve played with GPT-3 quite a lot. While its overall use of language is very convincing, it gets confused by simple logic questions. Of course many humans get confused by simple logic questions too, so I’m not sure that’s definitive!

          Another task it can’t do is anything related to letters/spelling, but that’s simply because it has no concept of letters. A future implementation could probably fix this.
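
          (For the curious, a minimal sketch of why letters are out of reach. It uses the GPT-2 tokenizer from Hugging Face’s transformers library as a stand-in, on the assumption that GPT-3’s byte-pair encoding is closely related; either way, the model only ever receives integer token IDs, never individual characters.)

              # Minimal sketch: a GPT-style model consumes sub-word token IDs, not letters.
              # GPT2TokenizerFast is used as a stand-in for GPT-3's closely related BPE.
              from transformers import GPT2TokenizerFast

              tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

              word = "flummox"
              print(tokenizer.tokenize(word))  # a few sub-word pieces, not individual letters
              print(tokenizer.encode(word))    # the integer IDs the model actually sees

              # Nothing in those IDs directly encodes "f-l-u-m-m-o-x", so questions about
              # spelling or letter counts can only be answered indirectly from training data.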

          1. 5

            I find myself curious about how it handles shitpost behavior. Like, we’re talking about consciousness and shit, and I ask “What about bananas? Anyway, sentience”.

        2. 3

          Questions that rely on long-term context to be understood correctly. When chatbots fail spectacularly, it’s often because they don’t have a sense of the context that a conversation is taking place in. Or they vaguely maintain context for a bit, and then lose it all when the subject shifts.
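
          A rough sketch of such a probe, assuming nothing about any particular chatbot beyond a hypothetical ask(history) function that sends the whole conversation and returns a reply: plant a detail early, drift off-topic for a few turns, then check whether the detail survives.

              # Hypothetical long-term-context probe. `ask` stands in for whatever function
              # sends the conversation history to the chatbot and returns its reply;
              # no particular chatbot API is implied.
              from typing import Callable, List

              def context_probe(ask: Callable[[List[str]], str]) -> bool:
                  conversation = [
                      "My sister's dog is named Bramble.",           # plant a detail
                      "What's your favourite season?",               # change the subject
                      "Do you prefer mountains or the seaside?",     # keep drifting
                      "By the way, what was the dog's name again?",  # probe recall
                  ]
                  history: List[str] = []
                  reply = ""
                  for turn in conversation:
                      history.append(turn)
                      reply = ask(history)  # the bot sees the whole history each time
                      history.append(reply)
                  # Pass only if the planted detail survives the subject changes.
                  return "bramble" in reply.lower()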

    5. 2

      People always over-hype chat bots. Looks like they have made some real progress over the previous generation though.

    6. 2

      I am curious if some experts here might clarify something for me. We are calling the brains of the systems these chatbots run on “language models,” but is that an appropriate name here? Writing is definitively not language, but rather an abstract or approximate representation of it in another form. It is never the thing itself. Language itself is a physical activity: primarily aural, but also gestural.

      We know that writing is not the same as language, because if you simply drop a child into a literate society, that child will not learn writing without a lot of intentional instruction and effort. Literacy is a hard skill to learn. The opposite is true for language, which just about any child will pick up merely by existing within a given culture. To me this distinction hints at something (but what?) very important about language, semantics, cognition, and consciousness.

      What, then, does that say about ML “language models,” which are to my mind actually models of an approximation of language (i.e. writing)? The researchers skipped learning actual language (which we know is core to human development and cognition) and jumped straight to the artificial approximation (writing). What does that tell us?

      1. 4

        if you simply drop a child into a literate society, that child will not learn writing without a lot of intentional instruction and effort

        This is actually not true. Many children in highly literate societies regularly teach themselves to read and write, just as they teach themselves to listen and speak. The number one obstacle to this happening is babysitting centres trying to force some kind of “teaching” on the subject before the child is ready.

        1. 5

          I only have anecdata to back this up but maybe it can give some additional perspective.

          For various accidental reasons I help a lot of elementary school teachers with their computers and whatnot. Until some time ago, before these weirdly competitive babysitting centres started popping up, it was not uncommon for many, if not most, kids to enter first grade knowing some reading and writing, largely self-taught, depending on a variety of factors (which ultimately boiled down to parents’ financial resources and kids’ exposure to written text). Being able to read and write proficiently was of course super-rare, but many children could spell out and write simple words, for example. In fact, weaning some of these young geniuses off their (possibly bad) self-developed habits, like really awkward pen grips or verbalizing punctuation (saying “full stop” for every “.” because they’d heard someone say something like “this is bad, full stop!”), was a low-key but constant struggle for many teachers, even though AFAIK nobody had formally taught the kids any of it.

          Physically writing is a tough thing to do because it requires some muscle coordination that has to be trained a little, and non-phonetic languages also have rules that aren’t fun for seven-year-olds to follow, but if you give them Scrabble tiles, a surprising number of seven-year-olds will be able to spell things.

          This is anecdata, so any numbers I put forward are obviously irrelevant, but what I can tell you is that, though I know dozens of elementary teachers, and I’ve known some of them literally for decades, I don’t know anyone who ever had an entire class of kids show up on their first day of school with absolutely no idea of how reading and writing works. While they did have to go through all those annoying introductory cursive writing exercises, it was not uncommon for many of them to be able to use (the equivalent of) Scrabble tiles from day one.

          1. 4

            I was one such kid. I taught myself to read and write before I started school, in my native language and in English. Yes, it looked pretty bad, and it still does (and I do indeed have really bad pen-holding habits). My mom kept a journal about such things, so I’m not relying solely on my poor memory.

            I never had the impression that this was unusual either, and nobody seemed surprised by it, but I can also only offer anecdata.

    7. 2

      My personal feeling is that consciousness is a continuous thing: you don’t simply have it or not have it, you have an amount of it. When you have a lot, it’s easy to write off something with just a little as not having it at all. Where is our consciousness microscope? Where is our consciousness telescope? Where is our consciousness op amp? Without answers to these questions, I reject any assessment that something is or is not conscious.

      1. 1

        There are many definitions, but I think a good sign that you are right about this is that humans below a certain age fail various tests for consciousness that certain (non-human) animals pass, and people with certain disabilities might also not pass them. Do they therefore not have consciousness?

    8. 2

      A little more on the methodology

      The output we see in the Medium post is, ignoring a few telltales that throw the whole thing into question, pretty darn impressive. But what’s included in this interview is cherry-picked from a number of different interviews, and the actual prompts have been edited.

      1. 3

        Link to the actual interview document: here

        1. 1

          Nice, where did you find that?

          1. 1

            The Twitter link was to a post that provided it. I just wanted people to be able to skip Twitter and go straight to the content.

      2. 1

        I did find some of the questions to be formulated in a way that contained the prompt for the expected answer, a little bit like a cop asking “were you at John’s house on November 23rd?”. I also think they should have validated basic reasoning capabilities, such as testing memory (the transcript mentions previous conversations, but that may be purely rhetorical). Teaching the AI a new language it has no data about, simply by interaction, would be a good test, for example.

    9. 1

      The naysayers sound quite a lot like people saying humans would never be able to fly - in 1900. We already had basic knowledge of the components: wings, gas engines, fans, controls, making strong lightweight structures, etc. They just all had to be improved and put together. In fact multiple inventors put the pieces together at basically the same time, indicating the foundations were already there.

      It seems pretty clear the brain has different “components.” We can do language, we can understand visual inputs, we can control our own muscles, we can do logical processing, we have a long-term memory. We know this because different people have these abilities in different degrees. Perhaps somewhere in the middle is a “sentience unit” that controls our overall mood, personality, desires, goals, and relationships. Ask yourself: if in 2032 we have all the other units, how long will it take to develop the “sentience unit”?

      1. 14

        The old adage goes: “They laughed at Galileo! They laughed at Einstein! And they also laughed at Bozo the Clown.”

        Reflexively citing “the naysayers” who’ve been wrong in the past without acknowledging when they’ve been right in the past (and that they’ve been right far more often than they’ve been wrong) is intellectually inconsistent.

      2. 12

        We don’t even know what causes migraines. I can safely say that we’re hundreds of years off–at least–from ‘sentience units’.

    10. 1

      I’m not so sure we can dismiss this as not consciousness outright. While we can certainly say this entity is not as intelligent as humans in a plethora of ways, there are likely ways in which it is more intelligent than humans. And it’s not entirely clear that any particular amount or kind of intelligence is a prerequisite for consciousness.

      If Stephen Wolfram’s model of fundamental physics is in any way close to the truth, then consciousness is some kind of sophisticated computation. And these large language models are certainly a kind of sophisticated computation.

      What’s special about the way we humans experience the world? At some level, the very fact that we even have a notion of “experiencing” it at all is special. The world is doing what it does, with all sorts of computational irreducibility. But somehow even with the computationally bounded resources of our brains (or minds) we’re able to form some kind of coherent model of what’s going on, so that, in a sense, we’re able to meaningfully “form coherent thoughts” about the universe. And just as we can form coherent thoughts about the universe, so also we can form coherent thoughts about that small part of the universe that corresponds to our brains—or to the computations that represent the operation of our minds.

      But what does it mean to say that we “form coherent thoughts”? There’s a general notion of computation, which the Principle of Computational Equivalence tells us is quite ubiquitous. But it seems that what it means to “form coherent thoughts” is that computations are being “concentrated down” to the point where a coherent stream of “definite thoughts” can be identified in them.

      […]

      These are biological details. But they seem to point to a fundamental feature of consciousness. Consciousness is not about the general computation that brains—or, for that matter, many other things—can do. It’s about the particular feature of our brains that causes us to have a coherent thread of experience.

      From this perspective, I see no reason to conclude that LaMDA is not conscious or sentient.

      1. 5

        If Stephen Wolfram’s model of fundamental physics is in any way close to the truth

        Wolfram is universally regarded by physicists and mathematicians as a crackpot.

        1. 1

          By no means universally. There are certainly many people who consider him a crackpot, though I doubt many of them are physicists or mathematicians. Mostly, though, I think people find him off-putting for completely unrelated reasons.

          I have a PhD in Electrical Engineering and considerable training and experience in both physics and mathematics, and I certainly don’t consider him a crackpot.