1. 6

The lack of foresight about the sheer existential horror we would be placing any sentient artificial intelligence in, were we to create one, is a continual mark against futurists for me.

This article makes the interesting claim that the first signs we’ll see of sentience may well be insanity.

  1.  

  2. 13

    “Can Airplanes Suffer From Lost Feathers? A Philosophical Matter to Consider”

    “Can Submarines Get The Bends? A Philosophical Matter to Consider”

    “Can Forks Get Broken Bones? A Philosophical Matter to Consider”

    1. 6

      “Can Cameras Get Dust in Their Eye?”

      Sometimes questions like these do make sense, especially if true AI is achieved through something like brain simulation.

      1. 4

        I got a laugh out of that, but it’s not exactly how I’d have said it. More like:

        “Is 0/0 infinity or zero? A philosophical matter to consider”

        “Why is a raven like a writing desk? A philosophical matter to consider”

        “In JavaScript, what does the ‘this’ variable refer to? A philosophical matter to consider”

        The distinction I’m trying to make is that it’s not just an irrelevant question: there is also no reason, at present, to make assumptions that would lead us to prefer one answer over another.

        1. 5

          I don’t think it’s an irrelevant question; I think the phrasing is so foolish that I doubt there’s much of value here. Humans would be lucky if AI were merely sociopathic, but thinking in terms of human mental illness means ignoring the vast majority of the design space.

      2. 7

        Jokes aside, the article makes an interesting point. The question in the title isn’t quite what the article is about; it was probably meant to be catchy for a general audience, at the likely expense of turning away much of its more specialized one.

        1. 5

          The average pet in a family home is almost certainly utterly insane.

          1. 5

            This is especially bad for parrots. Be warned, a lot of feels. :(

            1. 4

              A lot of feels in this comment too, skip if you need to.

              And then there are common pet species who humans typically don’t care for properly at all, like how people try to keep a turtle in a tiny tank, or a rabbit in a tiny cage. A tank or cage offers nothing to alleviate boredom, which most animals are able to feel (just watch them and it’s obvious), and generally doesn’t offer a chance for adequate exercise. In the case of turtles, many people buy hatchlings under the impression they’ll stay two inches long forever, and release them into the wild sometime before they get to their full size of two or three feet long. That also happens with pigs, who start small and will generally mass more than a human when grown.

              Sorry for the grim thought.

          2. 3

            If the question is rephrased more reasonably, as “Can an AI develop a cognitive bias due to experiential learning which is harmful to itself and/or others?”, the answer is unambiguously yes. This author is out of their element.

            1. 1

              Can an AI develop a cognitive bias due to experiential learning which is harmful to itself and/or others?

              Isn’t a bias a deviation from rationality? For example, confirmation bias is putting more weight on evidence that supports your belief than on evidence against it. How would an AI that is learning from experience be subject to bias? It would have to be built into it from the start.
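              Whether bias must be built in from the start can be probed with a toy experiment. Below is a minimal sketch (the arm names and reward numbers are invented for illustration): a purely greedy two-armed-bandit learner whose single unlucky first sample of the better arm locks it onto the worse arm forever. The harmful preference is self-confirming, in the spirit of confirmation bias, and it emerges from the learning dynamics, not from anything hand-coded.

```python
def greedy_bandit(pull_a, pull_b, steps):
    """Purely greedy agent on a two-armed bandit.

    pull_a / pull_b are callables that return a reward sample.
    Returns the list of arms chosen after one forced pull of each arm.
    """
    totals = {"a": pull_a(), "b": pull_b()}  # one initial pull per arm
    counts = {"a": 1, "b": 1}
    choices = []
    for _ in range(steps):
        # Exploit only: pick whichever arm has the higher sample mean so far.
        arm = max(counts, key=lambda k: totals[k] / counts[k])
        totals[arm] += pull_a() if arm == "a" else pull_b()
        counts[arm] += 1
        choices.append(arm)
    return choices

# Arm "b" is better on average (0.9 vs 0.5), but its first sample is unlucky.
a_rewards = iter([0.5] * 1000)           # mediocre but consistent arm
b_rewards = iter([0.0] + [0.9] * 1000)   # better arm, one bad first draw
choices = greedy_bandit(lambda: next(a_rewards), lambda: next(b_rewards), 50)
print(set(choices))  # the agent never revisits "b"
```

              Any exploration (e.g. acting randomly a small fraction of the time) would eventually correct the estimate; the point is only that experiential learning alone can manufacture a persistent, harmful bias.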