1. 9
  2. 6

    I don’t think I’d call this the “biggest” danger, but it’s certainly a real one, and I wouldn’t be surprised to learn that it’s already happening. I wouldn’t call it specific to artificial intelligence; it’s applicable in any situation with an opaque algorithm. You don’t even need a malicious actor to wind up optimizing for something that hurts people.

    1. 7

      It’s the most present danger.

      1. 2

        I considered trying to express something like that, but couldn’t find a concise way to say it. Yes, thank you.

        1. 2

          I’d say the most present danger is people with power increasingly trusting the output of an opaque algorithm when making decisions, while the people whose lives are affected by those decisions have no appeal, because no one wants to believe their shiny decision-making system is broken.

      2. 6

        I’d argue we’re already at the point of being dependent on algorithms for many decisions. Just shut down Google for a day and see what happens.

        Behind every computer algorithm is a programmer. And behind that programmer is a strategy set by people with business and political motives.

        It’s upsetting how some people deny even the existence of non-commercial programming.

        The danger this author is talking about is that of an oracle. Computers were oracles before AI got big. Even newspapers are oracles in a way: you can’t see how they operate, how they rate incoming news and aggregate it. And of course there is potential for abuse there too, since many people trust their newspapers.

        Also, damn click-bait headlines.

        1. 4

          Nice point about oracles. Systems that drive our decisions, and that we don’t fully understand, have been around for a long time. I think the difference now is both the effectiveness and the immediacy of these oracles.