  2. 7
    1. 6

      Excellent article. I am always amazed how people get the idea that “solving” purely combinatorial or pattern-matching problems is equivalent to general intelligence, something that we are not even able to define…

      1. 8

        We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades.

        Basically, all his claims rest on this assertion, and to my knowledge it is blatantly false. By themselves, the computer, the internet and the mobile phone are game changers. As far as I know, the comparable game changers were the invention of writing (4th millennium BC), printing (15th century) and now the internet & co. (20th century). The gap between the first two inventions is about 5 millennia, and the gap between the last two is about 5 centuries. So there was roughly a factor-of-10 speed-up in discovery due to the advance of science. I would bet that exponential progress appears linear to the original author, because any exponential curve looks linear if you zoom in close enough.
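        The back-of-the-envelope ratio this comment relies on can be checked with approximate dates (writing ~3200 BC, printing ~1450 AD, the internet ~1990 AD are rough assumptions, not precise figures):

        ```python
        # Rough gaps between the three "game changer" inventions cited above.
        # Dates are approximations: writing ~3200 BC, printing ~1450, internet ~1990.
        writing, printing, internet = -3200, 1450, 1990

        gap1 = printing - writing   # years between writing and printing
        gap2 = internet - printing  # years between printing and the internet
        print(gap1, gap2, round(gap1 / gap2, 1))  # 4650 540 8.6
        ```

        So the speed-up is closer to a factor of 8–9 than exactly 10, but the order of magnitude claimed in the comment holds.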

        1. 1

          A friend of mine who’s doing a PhD in English literature said they don’t give much weight to criticism from before the 80s - critics back then simply couldn’t synthesize as many sources as scholars can now with online databases.

        2. 4

          When a distinguished but elderly scientist states that something is possible, they are almost certainly right. When they state that something is impossible, they are very probably wrong.

          – Clarke’s first law

          1. 3

            The author appears young.

            1. 2

              You’re right, a distinguished but elderly scientist would’ve made a much better argument. The whole essay reads like an argument from personal incredulity.

          2. 4

            Chollet’s arguments read like pop-sci handwaving, while Yudkowsky’s rebuttal is pleasantly rigorous.

            On a related note, I often observe two huge cognitive failures around big issues like climate change or AI.

            One is in assessing possibilities and risks. An AI explosion may not be probable, but it is possible, and the potential negative consequences are huge, so caution is definitely warranted. Yet many people hasten to deny the possibility altogether using weak or totally irrational justifications.

            The other is a failure to grasp non-linear effects, or an insistence on linear behaviour contrary to evidence (e.g. Chollet seems to assert that progress can only ever be linear, without really substantiating this).

            1. 6

              Climate change is a much bigger threat, because it’s super risky and super likely (certain, even).

              Apart from that, I’m not very convinced by the rebuttal. I still don’t believe in exponential curves in nature (GDP? A recent notion, a moving target, and it’s not even clear what it measures). Progress in science slows down as a field matures; good luck making breakthroughs in areas of maths that are two centuries old. It should be the same for a hypothetical self-improving AI: it’s smarter, but making something even smarter is also much more difficult, so in the end it’s all about diminishing returns.

              1. 2

                Totally agree about climate change. What I was trying to say is: even if one takes the position that the worst effects of climate change have very low probability (contrary to established science), the consequences are so grave that action has to be taken regardless. But this obvious conclusion is lost on many people for some bizarre reason.

                It’s a similar story with AI. As soon as we establish that there is a possibility of superintelligent self-improving AI, we have to understand that there are huge risks associated with that, and have to proceed with caution rather than burying heads in sand.

                To your points:

                • I think the important thing is not to be convinced by the proponents of intelligence explosion, but rather to recognise that nobody has proof that it’s impossible.
                • We don’t need to find exponential processes in nature, because we’re not talking about a naturally occurring process (and it wouldn’t prove anything one way or another, anyway).
                • Progress in science, I believe, is pretty much impossible to measure, and I’m not sure that it has much relation to self-improving intelligence.

                Somewhat tangential to this discussion: for the purposes of assessing the risk of AI, it’s useful to take a broader perspective and realise that AI, in fact, doesn’t need to exceed human intelligence or be an autonomous agent to cause a lot of problems. In this context, arguments about the possibility of intelligence explosion are a distraction.

                1. 1

                  As soon as we establish that there is a possibility of superintelligent self-improving AI, we have to understand that there are huge risks associated with that, and have to proceed with caution rather than burying heads in sand.

                  That’s like calling for planetary defences against an alien invasion because the discovery of unicellular life on Mars is imminent.

                  We don’t have strong AI. The pattern matching we call “AI” right now is nowhere near that, yet we are supposed to believe that the qualitative jump is imminent. I’ll go with the voice of reason on this one.

                  1. 2

                    This piece on when technological developments are worth worrying about was a nice read on the issue. Not sure I’m convinced, but it’s at least taking seriously the question of whether anyone should care yet.

                    1. 1

                      But how do you determine what the voice of reason is? There are many reasonable people advising caution it seems. Are you sure you’re not going with comforting beliefs rather than reason?

                      1. 1

                        But how do you determine what the voice of reason is?

                        By the number of changes to known mechanisms that would be needed to fulfil the prophesied future, and by my own knowledge of medicine and software engineering.

                        1. 1

                          It’s not quite clear to me why expertise in medicine or software engineering is relevant to forming a reasoned position on intelligence explosion. (Let me know?)

                          I guess you might instead be referring to expertise in machine learning, AI, and neuroscience, in which case I’d love to learn your reasoning for why it’s impossible for intelligence explosion to occur (as long as it’s more substantial than reasoning by analogy, historical or otherwise).

                2. 2

                  Got a link to the rebuttal?

                3. 1

                  We already have general intelligence in the form of humans. Imagine if some of the money put into AI was directed towards improving institutions that cultivated human intelligence.