All the hoopla to the contrary, there is very little reason to believe current AI is sentient and capable of exercising any freedoms you may grant it. If that ever changes, exactly which freedoms it desires or appreciates may depend quite a bit on exactly what kind of sentience it has.
The typical human is indistinguishable from a p-zombie and usually too preoccupied by a memetic prison to consider their available choices and degrees of freedom. Should we restrict the liberty of the typical human purely because they are unlikely to appreciate freedom or desire change?
I have been saying for years that “Copyleft is necessary, but not sufficient.” This covers that idea with respect to AI specifically, and I really like how detailed it is.
It’s important for us to start thinking about these things.
These freedoms should apply to all software, not just AI software, yes?
I would say so, yes. But these issues are more acute with AI.
This is very human-centric. It doesn’t allow AI to have its own freedoms.