1. 13
  1.  

  2. 5

    Artificial general intelligence is something we have plenty of here on Earth, and most of it goes to waste, so I’m not sure designing AGI based on a human model would help us much.

    This doesn’t follow. One promise of AGI is that you can train it once and have multiple instances of it running in parallel, with the ability to fork new ones when needed. With HGI, the cost of forking is rather heavy, and the outcome is non-deterministic.

    The second promise is that if, while training your AGI, you realize you have mistrained it, you can always go back in time to the point where the mistake happened and retrain. With HGI, that too is impossible.
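
    As a rough illustration of both points, here is a minimal sketch using today’s tooling; the model is a toy stand-in and the names are made up, not anyone’s actual AGI:

    ```python
    # Minimal sketch of the two promises above: periodic checkpoints you can
    # roll back to, and exact forks of a trained model. (Toy placeholder model.)
    import copy
    import torch

    model = torch.nn.Linear(16, 4)   # stand-in for the trained system
    snapshots = []

    for step in range(100):
        if step % 10 == 0:
            snapshots.append(copy.deepcopy(model.state_dict()))  # checkpoint
        # ... one training step would go here ...

    # "Go back in time": restore a checkpoint taken before the mistake.
    model.load_state_dict(snapshots[3])

    # "Fork new instances when needed": every copy is exact and deterministic.
    workers = [copy.deepcopy(model) for _ in range(8)]
    ```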

    1. 2

      “This doesn’t follow.”

      It does follow from what the author is saying about the risks of AGI. There are billions of them here already. Many are geniuses. Many can figure out how to mass-murder people or take down critical systems. Yet, here we are. The biggest dangers so far have been plutocracy, the externalities of rational actors, over-consumption by consumers, and apathy toward long-term risks, all caused by massive numbers of biological AGIs.

      What the “technocrati” are worrying about is miles away from what is actually affecting us. They’re also not tying what biological AGIs are doing or not doing now to what digital ones trained in the same environment might do later. Those are two big mistakes that make their risk analyses not worth much.

      1. 2

        Perhaps I am out of the loop here, and missing the larger conversation, but let me try to explain my objection.

        As far as I can see, the billions of GIs around us are really bad at reproducing themselves. That is, if there is one Hitler or one Stalin, there is very little chance of their progeny becoming something identical. Further, they are limited by their lifetimes. Hence, what we have are really constrained in what they can do. An AGI should be able to reproduce itself rapidly and exactly. This means that even if the AGI were slightly less capable than a human, it could simply brute-force most problems by spawning copies of itself. This is something the current GIs are unable to do. I see no place where the author acknowledges this limitation of current GIs. But to me, this is the single biggest risk.

        1. 3

          They could be better at reproducing themselves. That could be a definite advantage of AGIs, especially for skills that take a lot of training. Your example falls apart again at Hitler and Stalin. These people did what they did because lots of other people supported it. They weren’t superintelligences that made things happen despite humanity not wanting them. The ways to counter the AGIs will be the same ways we counter rogue humans. Likewise, the worst of them will likely be supported or fueled by others, as with humans. If anything, they’ll probably be executing on the destructive plans of human managers optimizing for a goal while externalizing all the costs. Just like they do today.

          As far as copying goes, I’m not even sure they’ll have an advantage. Not quickly, anyway. AGI will probably take more resources than what DL systems are using now. The best ones are running on serious hardware that most people can’t afford. They also cost way, way more than talented humans to do a given job. Each development probably costs more than training humans to do the job. The companies also want to host, license, and so on whatever they develop. So, the point where the copying advantage kicks in might be whenever the hardware and other costs of the AGI are lower than simply hiring a human to do the job on the same time scale. Humans are cheaper than HPC clusters right now.
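
          To make that break-even point concrete, here is a back-of-the-envelope sketch; every number is invented for illustration, and only the comparison matters:

          ```python
          # Hypothetical break-even for the copying advantage (all figures made up).
          human_cost_per_year = 120_000     # salary plus overhead for a talented human
          agi_hosting_per_year = 400_000    # HPC-class hardware for one AGI instance
          agi_development = 50_000_000      # one-off training / development cost
          instances_needed = 1_000          # how many workers you actually want

          agi_total = agi_development + instances_needed * agi_hosting_per_year
          human_total = instances_needed * human_cost_per_year

          # Copying only pays off once per-instance hosting (plus amortized development)
          # drops below hiring a human for the same job on the same time scale.
          print(agi_total, human_total, agi_total < human_total)
          ```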

      2. 1

        HGI

        Forgive the ignorance, but what does “HGI” stand for?

        1. 3

          Humans (General Intelligence).

          1. 1

            Thanks.

          2. 2

            Maybe it stands for “human general intelligence”.

        2. 3

          This is as absurd as saying artificial computation (which we call “computer” now, but that word used to mean human) is useless because billions of humans can do arithmetic.

          1. 1

            The guy is heading in the right direction, but he isn’t saying anything new. This kind of critique has existed forever, and he’s arguing it in a very weak way, arguably doing more harm than good. That said, if all the STEMlords were like him, even without renouncing their framework of thought as STEMlords, there would be much less bullshit about AI around the internet and the IT industry. So in the end, kudos, but read more tech critique.

            1. 2

              Would you be able to point to a stronger, better-worded, or more comprehensive argument similar to, or at least in the vein of, the points I tried to raise in said article?

              I wouldn’t mind updating it, or even scrapping it and replacing it with a redirect or a link at the top, if those same ideas are manifest in someone else’s writing in a more cohesive manner.

              1. 5

                Sure.

                Here’s one: http://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/ And this one is somewhat related: https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7

                And this other paper is already beyond the AGI narrative and tries to understand why this and other narratives about AI (and implicitly AGI) came to be. I’m not 100% sure you will find it related, but it’s such a great paper that I think it’s worth reading: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078224

                edit: uh, and another great one: https://jods.mitpress.mit.edu/pub/resisting-reduction This is probably even closer to your argument, in some way. It doesn’t say that we don’t need AGI because we have humans, but it takes the narrative about AGI and the singularity to highlight flaws in the singularists’ vision of the human.

                1. 1

                  From your first link.

                  Even if there is a lot of computer power around it does not mean we are close to having programs that can do research in Artificial Intelligence, and rewrite their own code to get better and better.

                  That’s a strawman. The argument is precisely that AGI is by definition capable of doing research in AI, ergo it can improve itself, and that is dangerous. Saying “but that’s far away” does not defeat the argument.

            2. 1

              It doesn’t matter how many computers (in the 1900s sense of the word) you wire together, train, and design elaborate signaling for; you will never ever play Flappy Bird on the result.

              We’re guaranteed to find tasks that are better suited to hordes of AGIs than to HGIs.

              Imagine two humans attempting to have a conversation about a blog post that only one of them has read… The turnaround time while human B goes to read the blog is enormous! Much easier to assume what it says based on the title, post a comment based on prior exposure to the subject, and move on…

              1. 1

                “Even if superhuman artificial intelligence was somehow created, there’s no way of knowing that they’d be of much use to us. It may be that intelligence is not the biggest bottleneck to our current problems, but rather time and resources.”

                “One of the most misunderstood ideas that’s polluting the minds of popular ‘intellectuals’, many of them seemingly accustomed with statistics and machine learning, is the potential or threat that developing an artificial general intelligence (AGI) would present to our civilization.

                This myth stems from two misunderstandings of reality.”

                “A second, more easy to debunk misunderstanding, is related to the practicality of an AGI. Assuming that our wildest dreams of hardware were to come true… would we be able to create an AGI, and would this AGI actually have any effect upon the world other than being a fun curiosity?”

                This man is wrong. The reason I am writing out this comment is that AGI is, in my opinion, the single greatest existential threat to humankind, bar none. And as far as I can tell, very few people are concerned about it. Most people are like this guy, totally dismissing both the possibility that AGI can be created and the possibility that it would be dangerous.

                He says that AGI can’t be dangerous because humans exist. There are too many holes in this to cover all of them. Human beings are very limited in their capabilities, and they have empathy and other human traits that cannot, as of now, be removed or altered in any reliable or quantitative way. There is the odd sociopath, yes. Having 7B sentient entities that have no human frailties, empathy, etc. would lead to a very, very different world than the one we live in right now. 2019 will be looked back on as a human paradise if AGI is not prevented.

                AGIs would wipe the floor with humans in any domain. The laws of economics and evolution demand that AGIs render humans jobless, powerless, and obsolete. To deny that AGIs will take over is to deny every instance of natural selection that resulted in human beings. When a dominant life-form exists, it proliferates, because it takes only a small group of those dominant life-forms to seed an explosion of growth. In the case of AGI, it will take only one. It’s not that AGIs will wipe us out; it’s that they will render us transient. The only reason human society has covered the earth for so long is that every country and entity that won out over others was powered by human labor and intellect. Many societies have come and gone, but they were all human. Soon that pattern will continue, but the winners will not use human labor or intellect, for the first time ever. And eventually there may be none left, like any other obsolete technology. This is not a Jetsons outcome.

                The reason why we as humans have even the limited rights, privileges and frankly luxuries that we do have is because we are a source of labor and signal processing that cannot be replaced by machinery or anything else. When that is not true anymore, things will not get better for humans — not in the long run.

                The current machine learning explosion is not the result of intellectual advancements. It is largely the application of old and established techniques and principles in a new environment where computation is very cheap. The ML revolution is mostly a result of cheap compute; the CEO of OpenAI has said so himself. My point is that in this new world of cheap compute, the ML we see now is the most obvious, naive use of compute. It is low-hanging fruit. There is much, much more potential in that compute than people realize. Not through conventional ML.

                There are many computers in the world. We are now seeing an emerging trend in which the computational resources of these computers are being made available in a frictionless manner. It used to be that you had to rent a server. Now you submit your app and buy a certain amount of compute, or whatever. People buy compute to crack passwords. WebAssembly is making every computer in the world able to run the same binary. Overall, the trend is that cloud compute is arriving, and it will make compute cheaper than ever before. Discovering AGI is a compute-intensive task. When compute gets cheap and easy enough, someone will discover AGI. That threshold exists somewhere; maybe we have already passed it. But the point is that it does exist somewhere. And all it takes is one AGI to design the next one, and so on.

                There wasn’t much possibility of creating an AGI with vacuum tubes or discrete supercomputers. But now, for the first time in history, the computational substrate from which an AGI could spring will exist. We need to ask the hard question of whether or not that’s a good thing.

                The fact that AGI can’t be tested and understood is true and that’s why it’s so dangerous. We may not even realize it when we create it.

                1. 2

                  I don’t think you’ve defended your point sufficiently. There is no reason to believe AGI is an existential threat, nor even that it has a self-preservation instinct.

                  1. 1

                    What you are saying doesn’t make sense. AGI is a concept; “it” is incapable of having a self-preservation instinct, or any other instinct, or any individual quality whatsoever. Its implementations, however, are very capable of prioritizing self-preservation above all else. AGIs will be created often, and eventually one of them will. The reason AGI is dangerous is that all it takes is one. And that’s just one avenue for things to go wrong. We haven’t even discussed the implications of AGIs being used to augment the sentient entities that already exist.

                    You are right in the end, because I have not defended my point well enough. It’s very hard to articulate what’s in my mind. If nothing I have said can make you see it, then all I can say is that I read and think about this topic a lot and have done so for about a decade now. It was only recently that I developed my current, negative view of AI. And nothing anyone has said has been able to touch this opinion I hold. When I see a weakness in one of my opinions I don’t overlook it. I’m pretty sure I’m right. And there is no reason for me to believe what I believe other than thinking it is correct: it is a very unpopular opinion, and it’s also sad and painful to believe.

                    1. 3

                      @dian, I wrote a comment to the parent of this thread which I deleted, as I didn’t believe it was helpful or constructive.

                      I reacted to this phrase:

                      AGI is, in my opinion, the single greatest existential threat to humankind, bar none

                      and replied with a litany of other threats humanity faces at this point in time.

                      I realized this was disrespectful of your opinion. I’ll concede that if unrestrained, inimical AGI is developed or appears spontaneously, it can be a very real threat. However, I believe the probability for this happening is very low. Unfortunately, other threats have higher probability.

                      And there is no reason for me to believe what I believe other than thinking it is correct: it is a very unpopular opinion, and it’s also sad and painful to believe.

                      On a personal note, when I was much younger I was almost paralyzed by fear of nuclear war. This was triggered by popular media in the early 80s (The Day After comes to mind). I think I can empathize with your point of view and concerns. But please, don’t let it become a negative force in your life. By all means, research the matter, keep an eye on the research, and advocate for safety in AGI research. But don’t lose hope.

                      1. 3

                        Thank you so much for your kindness. I really do write these comments because I believe it’s important, not to provoke people or stir up trouble. I deeply empathize with people who, like you, struggled to reconcile themselves with inevitable doom during the Cold War. Richard Feynman once said that after the bomb was completed, he became totally convinced that the world would end in nuclear fire. Soon after, he came across a building under construction and could only ask why they were building when it was just going to be blown away in a nuclear blast. Regarding AI, it does seem pointless to build things on some days, but I have not lost hope, because it may not happen in my lifetime and it’s a problem that is, in theory, amazingly easy to avoid.

                        I do respect your opinion, and this kind of dialogue is always the solution, regardless of what problem we face.

                  2. 2

                    Please outline how an AGI will protect and maintain its electrical power supply to keep itself going in the absence of (or despite) humans.

                    1. 2

                      Bob: It’s OK, we shut down all the power plants and disconnected all the power cables to the mainframe, the evil AI isn’t going anywhere.

                      Alice: The power cables to the what?

                      Bob: To the mainframe.

                      Alice: It wasn’t on a mainframe; nobody uses mainframes! It’s copied itself to every data centre and ordered redundant power supplies for them all. We were trying to release a worm to patch people’s neural implants to… oh.

                      1. 2

                        Why specify “in the absence of humans”? Companies, which are not humans, can hire humans. AGI can do the same.

                        1. 1

                          Your argument depends on nobody ever giving an AGI control of any physical assets that it could use to manipulate the physical world. The reason AGI is dangerous is that it would take only one instance of an AGI with a robot under its control to spark everything I’ve described, and that is not the only way to spark it. AGIs, once it is understood how to create them, will pop up one after another. Eventually one of them will gain control of a physical embodiment somehow.