After reading the article, I’m still not quite sure of the definition of “the Eliza Effect”.

Further, given some of the very real concerns that Weizenbaum raised about his own invention:

“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

… the last paragraph of the piece seems supremely dismissive, to an almost harmful degree, of the concerns Weizenbaum raised:

“… the idea that humans engage in a kind of play when we interact with chatbots. We’re not necessarily being fooled, we’re just fascinated to see ourselves reflected back in these intelligent machines.”

That doozy of a conclusion reduces the whole article to “Isn’t it bad that humans could be fooled into thinking they were talking to a human?!” instead of elevating the discussion to “If normal people can ‘connect’ on some level with a program, what about the people for whom this program would be implemented as treatment? People with (potentially) not as firm a grasp on reality as others?”

Instead of addressing that issue, we got “Humans like to play - and we’re all just playing - and so we’re still smarter. So there. Nyah.”

Edit: oh, and on top of dismissing Weizenbaum as basically ‘just another crazy old guy, what does he know,’ the piece trumpets some CEO’s work as something ‘we’re totally not fooled by, so we don’t feel like stupid humans, and you can totally tell it’s a chatbot, so it’s all okay.’ All for the sake of what?

Mental health is a serious issue, and great, there are folks out there trying to innovate. But dismissing a pretty foundational figure who raised some very startling, relevant objections, and then sweeping it all back under the rug for clicks, hits (and hurts) way too close to home. Gross.