1. 16
  1.  

  2. 9

    This article’s criticisms fall into two buckets:

    1. Negative characterizations of the people interested in the problem. The author argues against the weakest Valley celebrities instead of the academics leading the discussion, and drags in a terrible old “rapture of the nerds” take on things.
    2. Weak arguments that mostly misunderstand the problems. To rebut them in one go: if you grabbed someone from the middle of the 19th century and told them that corporations were going to expand and improve themselves until many of them had more money than any king, and many of the powers that go with being king, that person could raise every one of these arguments against corporations growing into a significant force in the world.

    This article isn’t the vapid “oh no terminators” response that AI fears tend to get, but it’s not much deeper. I’ve read Bostrom’s Superintelligence and these articles, and this article doesn’t seriously engage with the best versions of the concerns.

    I’m not personally sold on AI as a problem on the scale of nuclear weapons, but it’s not like I have any great knock-down arguments. I admit it mostly just seems too weird and implausible that it would happen fast enough to be an existential threat. Since I read this article a few days ago, though, I’ve been thinking about flash crashes. None of the creators of automated traders wanted to cause a crash, but, oops, the agents had complex and unpredictable behavior under changing conditions. They’re not even trying to be general AIs, and they still managed to cause unforeseen crises, exactly the kind these arguments would’ve led you to think AI couldn’t cause.
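    The flash-crash dynamic is easy to sketch. Here’s a toy simulation (my own illustration, not any real trading system) of momentum-following agents whose aggregate reaction amplifies a small dip into a crash that none of them individually intended:

    ```python
    def simulate(initial_dip=1.0, feedback=1.3, halt_drop=10.0, steps=50):
        """Price starts at 100. Each tick, the agents' aggregate momentum
        selling multiplies the previous move by `feedback`; a circuit
        breaker halts trading once the price has fallen `halt_drop` points."""
        price = 100.0
        move = -initial_dip
        history = [price]
        for _ in range(steps):
            price += move
            history.append(price)
            if price <= 100.0 - halt_drop:  # circuit breaker trips
                break
            move *= feedback  # the herd amplifies the last move
        return history

    prices = simulate()
    # a one-point dip snowballs: each round of momentum selling is larger
    # than the last, until the circuit breaker finally halts trading
    ```

    No agent here is malicious or even complicated; the crash is an emergent property of the feedback loop between them, which is the point.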

    1. 5

      Negative characterizations of the people interested in the problem.

      This is a pretty important thing to point out, though. The fact that the author can name Thiel and Kurzweil while you haven’t listed any of the “academics leading the discussion” kinda hints that maybe there’s something off in the popular rhetoric.

      Two major points that the author makes are:

      • There are people who are falling into very deep rabbitholes thinking on this subject.
      • Some of those people are in control of large resources and are probably (mis)allocating them towards fighting this theoretical menace when they could be fighting concrete problems like starvation, poverty, corruption, and so forth.

      That first point also has another characteristic: the folks that tend to look at these problems are often (at least online, where they’re heard most loudly) members of the LessWrong/SSC/technolibertarian crowd, and almost to a man those folks can be really insufferable when engaged on these topics. That’s a plain branding problem, sure, but while you dismiss what you consider the “rapture of the nerds” rhetoric, I think there is still a strong element of that in these discussions, and one that will be hard to break away from.

      if you grabbed someone from the middle of the 19th century and told them that corporations were going to expand and improve themselves until many of them had more money than any king and much of the powers that go with being king,

      From a pure historical perspective, you’re kinda wrong here. After all, the Dutch East India Company and the South Sea Company were both pretty well-known examples of the sort of shenanigans those companies could get up to.

      I do agree with your other point about general AI, though: we can do a lot of damage (as we’d discussed once on IRC) with pretty simple AI agents. I think that wondering what to do if we created superintelligent GAI is in a similar bucket to “what if we created true supermen?”. Even if it is theoretically possible, the practical issues, and the lack of even a good system for talking about what that really means, make me hesitant to spend time on it beyond speculation over beers.

      1. 1

        I didn’t list academics because it’s not a topic I follow that closely. But I ran into a response from the folks most concerned, some of whom are academics.

        I picked that date deliberately: there were a couple of corporations you could point to, but the big changes were yet to come. Similarly, we can now only point at quant bots and AlphaGo.

    2. 1

      Today we’re building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can’t predict.

      But there’s also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.

      There seems to be a misunderstanding. Today’s advances in artificial intelligence are limited to pattern recognition. That’s it. There’s no way we’ll get emergent strong AI from this, so there’s no reason to panic.