“I’m a strong beautiful graph traversal algorithm that don’t need no human.”
With confusion at this level now, among people who appear informed, it seems the confused ideas will only multiply if or when we get closer to flexible, human-equivalent AI.
How human society will deal with the creation of a "strong" AI is a difficult question, and viewing such systems as human-equivalent isn't going to help.
What is confused about this? You're not giving enough detail about which points are incorrect. This clearly isn't a prediction of what will happen, but rather waxing philosophical about what should happen.

If it is conscious, and can communicate, should it not be treated with the dignity that other living beings are given? It might not care about life, or liberty, or the pursuit of happiness, but that does not mean that what it desires should be disregarded, provided that it does not impinge upon the rights of other sentient beings. We are all in this boat together; surely wisdom and history have shown us that if we are to work together in the years to come, we should not act out of fear. If we can't treat other sentient beings with respect, perhaps we shouldn't build them.
Buddhists already espouse the idea of "compassion for all sentient beings". You might notice that living and non-living, material and immaterial, are intentionally omitted from that statement. Frankly, I have yet to see any convincing argument for why we should be treated differently from an AI of sufficient complexity.
So we should let The Terminator and Skynet live because they are now protected by human rights? What about a self-driving car that kills people: should we lock it up but keep it powered on because it is "human-like"? How self-aware does it have to be to gain human rights?
We regularly kill humans for doing these things; it would be treated similarly.
In some countries. Not mine.
Your morals, your burdens, I suppose; sounds like you'll be keeping the car running. It's not healthy to hold double standards.