1. 12
  2. 6

    I think the author is missing the point of the concerns about AI.

    Elon Musk’s concern is not just about a superintelligent AI. His concern is about some form of AI which is controlled by a small group of people to gain advantage. So his OpenAI initiative is about democratising AI and making sure that whatever it develops into, it’s widely available so that power is distributed more uniformly.

    Another thing I don’t like about these counterarguments is that they are similar to climate change denial arguments, basically: “But what if nothing bad comes of AI? We don’t know that it will, so best we do nothing to prevent bad outcomes, or, deity forbid, we might expend unnecessary effort.” That’s much too similar to sticking your head in the sand and hoping for the best.

    Another one is: “We don’t even know how to define intelligence! Therefore we don’t need to do anything.” That’s just absurd. This guy makes a similar argument: intelligence is not a single dimension, hence… there’s nothing to worry about?

    Another argument he makes that seems weak to me is that AI would have to operate in the physical world to achieve goals. Given that society & the economy are defined to a larger & larger degree by information flows, that doesn’t stack up. An AI that’s inserted into, say, the policy-making process could do a lot of damage without even being “superhuman”.

    I could go on. Much like the definition of intelligence, the issue of AI safety is multi-faceted.

    1. 8

      I agree with your last sentence, though that’s part of what I don’t like about the sci-fi-ish focus of much of the current AI-safety debate, especially the part that’s happening among wealthy tech types. There’s a lot of questionable stuff being done with AI right now, including at some of the very companies these people are making their money from! I’m happy for people like Nick Bostrom to discuss more speculative future risks as well (he’s a philosopher, that’s his job), but how about putting some resources into the actual bad things that actual companies are doing today?

      Instead, people like Musk seem to be trying to put the focus on aspects of AI safety that are 1) conveniently as unrelated as possible to what tech companies are doing with AI today, and 2) sci-fi/futurist enough to make for nice tech press. There’s also a bit of an attempt to divorce the issue of AI leading to concentration of power/wealth from politics. I think open-source AI software is great, but I don’t think it’s a promising solution to what is just one facet of the ongoing concentration of wealth/power, which, yes, AI will probably accelerate. Obviously techno-libertarians disagree, and they currently dominate the conversation.

      The debate in academia is at least somewhat broader compared to the debate in the Valley and tech press. Still quite a bit of Bostrom-inspired stuff (especially in the UK, where Oxford hired Bostrom and Cambridge naturally had to create its own AI-safety institute to match), but also increasing scrutiny of machine learning, which is often “money laundering for bias”, as the popular quip goes, not to mention used for mass surveillance by both companies and governments.
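
      To make that quip concrete, here’s a minimal sketch (in Python, with entirely synthetic data and made-up feature names, not drawn from any real system) of the mechanism it describes: a model is trained on biased historical decisions with the protected attribute deliberately dropped, yet it reproduces the disparity through a correlated proxy.

          # Toy model of "money laundering for bias": the protected attribute
          # never enters the training data, but a correlated proxy does.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 10_000
          group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
          zipcode = group + rng.normal(0, 0.3, n)  # proxy correlated with group
          skill = rng.normal(0, 1, n)              # identical across both groups
          # Biased historical decisions: equal skill, but group B penalised.
          hired = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

          X = np.column_stack([skill, zipcode])    # protected attribute omitted
          model = LogisticRegression().fit(X, hired)
          pred = model.predict(X)

          for g, name in [(0, "group A"), (1, "group B")]:
              print(f"{name}: historical hire rate {hired[group == g].mean():.2f}, "
                    f"model hire rate {pred[group == g].mean():.2f}")

      The model’s output shows roughly the same disparity as the historical data, only now it comes stamped with algorithmic objectivity; that laundering step is exactly what the quip is pointing at.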

      1. 4

        His concern is about some form of AI which is controlled by a small group of people to gain advantage.

        Is this a “concern” at this point, though? Isn’t it just “an uncomfortable reality”? Haven’t bad things already come out of AI?

        Now, bad things come out of every technology, but in a world where users’ behaviors are modeled for the purpose of selling them crap they might want but certainly don’t need, hasn’t AI already caused clear harm to mankind?

        I don’t bring this point up to bash AI; electrical power generation has also caused clear harms, from accidents and fires all the way up to global warming. The harm isn’t itself the problem; it’s the lack of mitigation that is.

        1. 3

          Elon Musk’s concern is not just about a superintelligent AI. His concern is about some form of AI which is controlled by a small group of people to gain advantage. So his OpenAI initiative is about democratising AI and making sure that whatever it develops into, it’s widely available so that power is distributed more uniformly.

          What are some of his ideas for democratising AI? It’s hard for me to imagine an approach that doesn’t involve democratising technology in general (abolishing proprietary software), or democratising the whole economy (abolishing corporate control). Elon Musk doesn’t appear to be against proprietary software or corporate control, so I am a bit skeptical.

          1. 1

            My understanding is that they will publish their research and open-source their software. One of their stated goals is to make the technology widely available.

        2. 3

          My pet conspiracy theory is that the VIPs were tricked into issuing a warning against improbable AI developments as part of a PR campaign for one or both of that year’s big-budget movies featuring AIs gone mad: “Avengers: Age of Ultron” and “Terminator Genisys”.

          It will get much funnier 50 years from now, when artificial intelligence is still at today’s stage of glorified pattern matching.

          1. 3

            In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.

            This framing of AI safety as an orthodoxy capable of persecuting heresies is an idiotic reversal of the actual conditions. I guess it’s progress from casting those concerned as navel-gazing, sci-fi-worshipping millennialists, but it’s still bullshit.

            Speaking of bullshit, the rest of the paper. His points are either totally confused (redefinition of “general purpose intelligence”), specious (something can’t be infinite, so it can’t be better than something else), or such tired retreads of false arguments that they’re the butt of satire.