1. 5

  2. 20

    I wish people would lay off some of this nonsense.

    The idea that we know something about the computational capacity of the human brain is crazy. Walk up to any neuroscientist and tell them that a brain performs 10^15 computations per second and at best you’ll get a blank stare. Even more than this, the idea that capacity in this sense is meaningful in any way is also nonsense. I can have an efficient algorithm that runs on my phone or an inefficient algorithm that runs on a supercomputer, both of which solve the same task. Who is to say how efficient the algorithms in the brain are? AI has nothing to do with computational power.
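
    The phone-versus-supercomputer point is easy to make concrete. A toy illustration (mine, not the commenter's): two algorithms for the same task whose operation counts differ by orders of magnitude, so raw hardware speed tells you almost nothing.

    ```python
    # Same task (nth Fibonacci number), two algorithms; the counters
    # record how much work each one does.

    def fib_exponential(n, counter):
        """Naive recursion: call count grows exponentially in n."""
        counter[0] += 1
        if n < 2:
            return n
        return fib_exponential(n - 1, counter) + fib_exponential(n - 2, counter)

    def fib_linear(n, counter):
        """Simple iteration: n loop steps."""
        a, b = 0, 1
        for _ in range(n):
            counter[0] += 1
            a, b = b, a + b
        return a

    slow, fast = [0], [0]
    assert fib_exponential(25, slow) == fib_linear(25, fast) == 75025
    print(slow[0], fast[0])  # hundreds of thousands of calls vs. 25 steps
    ```

    A "supercomputer" running the first still loses to a "phone" running the second for quite modest n, which is why computational capacity alone says little about intelligence.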

    AI isn’t a matter of degree. We don’t actually understand the scientific problem of what intelligence is. We know of a few subproblems: object recognition, language, theory of mind, kinematics, etc. But we don’t see the big picture. Who is to say there’s a single key to solving all of them?

    The idea that we’re going to copy the brain in any meaningful way is also nonsense. There is no mechanism to image a brain at the level of detail required. And even if the brain were simply a neural network (in the CS sense of the term, which it is not), recovering the weights, even given the connections, is going to be impossible.

    The idea that somehow “evolution” is going to help is also nonsense. All evolution is, at bottom, is an optimization mechanism. Saying “evolution” doesn’t make the problem easier in any way; it’s the same as saying we’re going to try really hard to solve it.
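
    The "evolution is just an optimizer" point can be made concrete with a minimal sketch (mine; the target string, mutation scheme, and iteration cap are arbitrary). Relabeling this loop "evolution" adds nothing beyond the stochastic hill-climbing it plainly is.

    ```python
    import random

    def evolve(target, generations=20000, seed=0):
        """Mutate one character at a time; keep the child if it matches
        the target at least as well. Plain stochastic hill-climbing."""
        rng = random.Random(seed)
        alphabet = "abcdefghijklmnopqrstuvwxyz "
        fitness = lambda g: sum(a == b for a, b in zip(g, target))
        genome = [rng.choice(alphabet) for _ in target]
        for _ in range(generations):
            child = list(genome)
            child[rng.randrange(len(child))] = rng.choice(alphabet)
            if fitness(child) >= fitness(genome):
                genome = child
            if fitness(genome) == len(target):
                break
        return "".join(genome)

    print(evolve("hello world"))
    ```

    Note that the hard part is hidden in the fitness function: here the answer is known in advance, which is exactly the luxury "just use evolution" proposals for intelligence do not have.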

    The idea that intelligence is this 1D space and that all animals are ordered on it with us at the top is also totally unfounded. We don’t know how to measure the intelligence of an animal, nor do we have a clue about their relative intelligence or about ours relative to them. Maybe the difference between a human and a great ape is trivial or perhaps it’s huge. Who knows?

    I could keep going as everything in this article is totally unfounded in reality.

    1. 4

      I could keep going as everything in this article is totally unfounded in reality.

      Indeed explicitly so - I chuckled at presenting Back to the Future as some sort of scientific evidence.

      1. 2

        Heh. I spoke to an astrophysicist not long ago who was deeply upset at computer scientists repeating the notion of 2^80 as “the number of atoms in the universe”.

        Not all particles are part of atoms, and in the case of dark matter, we can’t reasonably estimate the number of particles, since we don’t know the mass of each. Also, the only number we can even begin to consider is for the observable universe, which is substantially smaller than the entire universe. And no, the interesting stuff is not solely in the observable part. We do have a good number for the mass of the observable universe, but no way right now to guess what portion of it is in supermassive black holes, where it is not useful for computation.

        I volunteered that this specific value probably got repeated so much because it’s the accepted number for cryptographic infinity. “Which is thought of as the amount of computation you could do if you converted the entire universe into a computer and ran it until the end of time, because that’s apparently a thing cryptographers fantasize about.” Executive summary: If that’s really what it’s an estimate of, it’s quite low.
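
        For scale (my arithmetic, not the astrophysicist's): the common rough estimate of ~10^80 atoms in the observable universe is about 2^266, so as an "atoms in the universe" figure, 2^80 falls short by more than 185 doublings.

        ```python
        import math

        atoms = 1e80                # common rough estimate, observable universe only
        print(math.log2(atoms))     # about 265.75, nowhere near 80
        ```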

        Anyway, yeah, this “AI singularity” stuff is coming from the same people who put “first multicellular life” and “transistor” on the same chart. Just for the amusement value.

        1. 2

          The idea that we’re going to copy the brain in any meaningful way is also nonsense. There is no mechanism to image a brain at the level of detail required. And even if the brain were simply a neural network (in the CS sense of the term, which it is not), recovering the weights, even given the connections, is going to be impossible.

          What is your opinion of Blue Brain Project? They use biological neurons and their strategy to model neurons and recover weights seems eminently sound.

          1. 2

            I’ll let hundreds of neuroscientists speak for themselves in the open letter to the European Commission (click “Read the full letter” at the bottom; you can see the list of the 800 or so researchers who have signed so far). There was also some public press. I have yet to meet anyone serious in academic circles who didn’t agree with this position.

          2. 1

            I wish people would lay off some of this nonsense.

            I feel you. The problem is that no one can demonstrate, with exactness, which part is (or is not) nonsense and should be laid off. There are impressive tools, an obviously impressive goal, progress in some areas, but no overall perspective on what’s happening. Sure, most sober minds will say whole brain simulation is a crock (that’s what I believe), but there are a lot of ideas out there and the problem of which approach to “prospect” is very hard. Full brain simulation in overall ignorance of the brain’s function may be a fool’s errand, but trying to simulate the brain may give insights that wind up usable elsewhere. The idea of a single dimension to intelligence also seems to me fatally flawed; it seems like the basic mistake that AGI boosters (or worriers) like Nick Bostrom, Eliezer Yudkowsky and the author of this article make. But it’s hard for me to say that with certainty unless I can demonstrate what intelligence actually is.

            As you say, we don’t know how the brain works and we don’t know what full intelligence is. That might mean the problem is unsolvable and the AGI folks are wasting their time. It might mean the problem is easier than we imagine and a few breakthroughs can prove the naysayers wrong. Extreme ignorance creates these paradoxes. If the problem were colonizing a continent of known size or visiting a planet of known distance, the difficulties would have quantities attached. Without that, we have “unknown unknowns”, with the proviso that since human intelligence has a material structure and software breakthroughs have happened, the prospects aren’t purely speculative. The situation might be comparable to the earlier European conquest of the Americas - the explorers were ignorant of the land and uncertain what the payoff would be - yet all this ignorance didn’t mean they found nothing or that the quest wasn’t worth it.

          3. 6

            A few things that seem dicey to me about this kind of talk:

            We have no meaningful idea of what intelligence and levels of it really are AFAIK. What would it mean for an intelligence to be 100x smarter than a human? Does the idea of that even make sense? How would you measure that? Exactly what would it be able to do that an ordinary person can’t do?

            There’s a bit in there somewhere complaining about how dismissive people tend to be of this stuff, comparing it to how dismissive people were of grandiose predictions about the internet 20 years ago. I say that we are right to do that, and that it’s a good thing. It’s easy to cherry-pick the grandiose predictions that came true and say that people were foolish to dismiss them so glibly. That ignores the kazillions of other grandiose predictions that were available at the time and never came anywhere close to coming true. They were all completely different, and 99% of them were nonsense, so people were right to ignore them until the evidence got clearer.

            All of this is just one more grandiose prediction about the future. Maybe it will come true, and maybe it won’t. Maybe one of the other predictions of totally different futures will come true instead. There’s no way to put any meaningful probability on any of it, so it’s at least perfectly justifiable to ignore it all and wait and see what happens.

            1. 2

              We have no meaningful idea of what intelligence and levels of it really are AFAIK.

              I like this definition: “Ability to accomplish goals, divided by the resources needed.” Where resources can be money, time, computational power, etc. I don’t remember where I read that definition.

              So, for example, if two similar people are given $1,000 and told to use that money in aid of making as many unique people smile as possible in only one day, then on average, the more intelligent person will make more people smile. Because they will figure out better ways of chunking the money to give to people, or ways of distributing the labor, or will be faster at researching agencies to do this work, or something I haven’t thought of.

              What would it mean for an intelligence to be 100x smarter than a human? … Exactly what would it be able to do that an ordinary person can’t do?

              Here are some feats that I think might be doable by an entity with far more intelligence than humans:

              • Make scientific discoveries much quicker. Such as inventing twice-as-fast processors for itself after running for only a week. Or figuring out a design for solar cells that cost half the money after running for a month.
              • Persuade and manipulate people with a high rate of success, by using everything it knows about the person to understand their motivations and how to approach them to gain their trust.
              • Play the stock market with a high average return on investment.
              • Make complex, long-term plans with enough detail that they are still likely to succeed despite their complexity. Plans for goals like overthrowing a country’s government while minimizing violence, or solving world hunger with steps that people will actually be incentivized to carry out.
              1. 1

                I remembered where I read that definition of intelligence: in Facing the Intelligence Explosion – Playing Taboo with “Intelligence”.

                Intelligence = (optimization power) / (resources used)
                

                This definition sees intelligence as efficient cross-domain optimization. Intelligence is what allows an agent to steer the future, a power that is amplified by the resources at its disposal.
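
                For reference, the formal version this informal slogan comes from is Legg and Hutter’s universal intelligence measure (stated here from memory, so treat the notation as approximate):

                ```latex
                % Universal intelligence of agent \pi: expected performance over
                % all computable environments \mu, weighted by their simplicity.
                \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
                % K(\mu): Kolmogorov complexity of \mu (length of its shortest
                % description); V_\mu^\pi: expected total reward \pi earns in \mu.
                ```

                The simplicity weighting 2^{-K(\mu)} is what makes it "cross-domain": simple environments dominate the sum, but every computable environment contributes.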

              2. 2

                We have no meaningful idea of what intelligence and levels of it really are AFAIK. How would you measure that?

                There is the Universal Intelligence Measure and its realization as a practical test, AIQ. You can argue whether AIQ is measuring the right thing, but it is (1) completely well-defined, (2) measurable in practice, and (3) somewhat plausible as a measurement of intelligence.

                http://www.vetta.org/2011/11/aiq/
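
                As a rough sketch of how an AIQ-style estimate works (assumptions flagged: the real test samples environments from a Brainfuck-like reference machine, whereas here the "environments" are toy bandits and description length stands in for Kolmogorov complexity):

                ```python
                import random

                def aiq_estimate(agent, n_envs=500, steps=30, seed=0):
                    """Monte Carlo average of an agent's reward over random
                    environments, each weighted by 2**-length as a stand-in
                    for the 2**-K(mu) simplicity prior."""
                    rng = random.Random(seed)
                    total = weight = 0.0
                    for _ in range(n_envs):
                        length = rng.randint(1, 4)                      # "description length"
                        payout = [rng.random() for _ in range(length)]  # a toy bandit
                        w = 2.0 ** -length
                        reward = sum(rng.random() < payout[agent(t, length)]
                                     for t in range(steps))
                        total += w * reward / steps
                        weight += w
                    return total / weight

                # An agent is a function (timestep, n_actions) -> action.
                print(aiq_estimate(lambda t, n: 0))   # always pull the first arm
                ```

                The real AIQ test differs in every detail, but the shape is the same: sample environments from a simplicity-weighted distribution, run the agent, and average.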

                1. 1

                  We have no meaningful idea of what intelligence and levels of it really are AFAIK.

                  Well, there’s a large body of work describing it in many forms. The problem is that all of this together doesn’t seem to go anywhere near far enough to really characterize intelligence. So it seems pretty definite that there’s something there, and it fits some description, but we face a barrier in describing what it is. And how “thick” is that barrier? Our ignorance leaves us not knowing that either.

                2. 1

                  ‘In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.’

                  Yes we do: nonsensical.

                  1. 1

                    I’m really surprised that the response here has been overwhelmingly negative. I find Tim’s writing very enjoyable, even if speculative.