1. 3

Meant to link directly to the PDF, but put the wrong URL. Here’s the PDF link:

http://arxiv.org/pdf/1502.06512v1

  1.  

  2. 2

    Well, the problem with the naive version of self-improving software is that you begin with a process and the code of that process. The process begins “less intelligent than a person” but still intelligent, and is supposed to understand its own code, something we presume is quite substantial. But there’s the rub: humans have a great deal of trouble understanding their own “code”, to say the least, and if we accept the naive intelligence continuum, a “less than human” AI improving itself through self-programming seems unlikely.

    But the intelligence-continuum assumption seems flawed, to say the least - a program which beats humans at chess isn’t more or less intelligent than humans but mostly incomparable - except in the game of chess, say.

    My guess would be that the effort required to achieve human-level AI would be 99.9% creating a program with something quantitatively measurable as intelligence, rather than a basket of features and capabilities as programs are today, and 0.1% increasing that intelligence level.

    And anyway, it just feels like all claims and reasoning derived from Lesswrong.com and the “rationality” approach pollute any coherent discussion of the subject. Every single argument comes from assuming an AI that’s a rational, conscious agent and assuming that intelligence is a single quantitative continuum, when rationality-based software as AI was discarded in the first AI winter, well before most serious progress was made, and intelligence as a continuum isn’t a claim backed up by anything but naive intuition.

    1. 3

      I don’t mean to object to most of what you say; I suspect we substantially agree. This comment is about your third paragraph only.

      We can’t quantitatively measure intelligence for humans; IQ tests are the closest there is to an accepted metric, and what they measure is profoundly out of line with everyday experience.

      We can’t even agree that we know intelligence when we see it; there is, in fact, profound disagreement about who is intelligent and what characteristics make them so.

      My personal view is that “intelligence” is such a general concept as to not be useful. I also don’t find the idea of measuring it desirable; why do we need to compare people quantitatively? So we know who’s better?

      My personal guess of the future is that when we get better solutions to problems around computers communicating with natural language, we’ll start to see that many of the systems we already have, have been substantially intelligent for a long while now. They just aren’t able to explain it to us.

      1. 2

        The ability to perform general inductive inference is an important facet of intelligence. Solomonoff induction formalizes inductive inference. http://www.scholarpedia.org/article/Algorithmic_probability
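
        For concreteness, the central quantity is the algorithmic (Solomonoff) prior, which weights every program that could have produced the observed data by its length - a sketch of the standard formulation, not something taken from the linked article:

        ```latex
        % Algorithmic probability of a string x on a universal prefix machine U:
        % sum over all programs p whose output starts with x, each weighted by 2^{-length(p)}.
        % Shorter programs (simpler explanations) dominate the sum.
        M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
        ```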

        My personal view is that “intelligence” is such a general concept as to not be useful. I also don’t find the idea of measuring it desirable; why do we need to compare people quantitatively? So we know who’s better?

        If we’re talking about AGI, then I think measuring the theoretically context-independent performance of one possible AGI with respect to another could be a useful metric.

        As a practical example, chess AIs are rated by Elo.
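
        As a rough illustration, here’s a minimal sketch of the standard Elo update (the K-factor of 32 and the example ratings are arbitrary, not taken from any particular engine rating list):

        ```python
        # Minimal Elo rating update: compute the expected score from the rating gap,
        # then move the rating toward the actual result by a factor K.

        def elo_expected(r_a: float, r_b: float) -> float:
            """Expected score of player A against player B."""
            return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

        def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> float:
            """New rating for A given the actual score (1 = win, 0.5 = draw, 0 = loss)."""
            return r_a + k * (score_a - elo_expected(r_a, r_b))

        # Example: a 2800-rated engine beats a 2700-rated one and gains about 11.5 points.
        print(round(elo_update(2800, 2700, 1.0), 1))
        ```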

        1. 1

          That does make sense. I think I went off on a favorite tangent about intelligence in humans, without really considering that different assumptions apply to AGI; in particular, test conditions are much more repeatable, and there’s a clear and specific purpose for having a test at all.

      2. 1

        The process begins “less intelligent than a person” but still intelligent, and is supposed to understand its own code, something we presume is quite substantial.

        But does a system have to “understand its code” in order to be self-improving? I’m not sure I’d take that as a given. Evolutionary algorithms, for example, get better over time without (so far as we know) having any innate “understanding” of their own workings.
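
        For instance, a toy loop like the one below improves its candidate solutions purely by mutation and selection; nothing in it inspects or “understands” its own source (a generic illustration, not a claim about any particular self-improving system):

        ```python
        import random

        # Toy evolutionary loop: maximize f(x) = -(x - 3)^2 by mutating and selecting.
        # The program gets better at the task without any model of its own code.

        def fitness(x: float) -> float:
            return -(x - 3.0) ** 2

        def evolve(generations: int = 200, pop_size: int = 20) -> float:
            population = [random.uniform(-10, 10) for _ in range(pop_size)]
            for _ in range(generations):
                # Keep the better half, then refill with mutated copies of the survivors.
                population.sort(key=fitness, reverse=True)
                survivors = population[: pop_size // 2]
                children = [x + random.gauss(0, 0.5) for x in survivors]
                population = survivors + children
            return max(population, key=fitness)

        print(evolve())  # converges near 3.0
        ```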

        But there’s the rub: humans have a great deal of trouble understanding their own “code”, to say the least, and if we accept the naive intelligence continuum, a “less than human” AI improving itself through self-programming seems unlikely.

        Here’s my take on that: we already have computers that are more intelligent than humans - for some definition of “intelligent”. Look at chess-playing programs that can pretty much wipe the floor with the best human players these days. I’m not sure that it makes sense to talk about one generic, unified notion of “intelligence”. And in terms of domain-constrained intelligence, we already have “beyond human” level AI. So why do we think it’s so improbable that a program could be (or is) “better than human” at the task “learn to improve yourself”?