
  2. 5

    Even with the impressive results deep learning algorithms have demonstrated over the past few years, none of them exhibits any real autonomy. There’s a categorical difference between autonomy and things like image recognition.

    I actually don’t think it’s that hard to encode ethical considerations into machine learning algorithms as part of the objective function that they optimize.
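
    To make that concrete, here is a minimal sketch of what that could look like (the penalty term, the grouping variable, and the weight are invented purely for illustration): the “ethical” consideration becomes just another differentiable term traded off against the task loss.

    ```python
    import torch
    import torch.nn.functional as F

    def ethics_penalty(scores, group):
        # Hypothetical constraint: penalize the gap in mean predicted score
        # between two groups (a crude demographic-parity-style proxy).
        return (scores[group == 0].mean() - scores[group == 1].mean()).abs()

    def objective(model, x, y, group, lam=1.0):
        logits = model(x).squeeze(-1)
        task_loss = F.binary_cross_entropy_with_logits(logits, y)  # the ordinary objective
        # lam trades task performance against the constraint; "ethics" is reduced to a scalar knob.
        return task_loss + lam * ethics_penalty(torch.sigmoid(logits), group)
    ```

    Whether a scalar weight like `lam` captures anything you would actually call “ethics” is, of course, the whole debate.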

    So, assuming this is true, and assuming we’ve achieved a general AI, I can’t help but wonder if enforcing so-called ethical constraints wouldn’t also diminish an AI’s ability to be generally intelligent.

    1. 4

      I think ideological discussions or projects relating to AI and ethics are silly at this point.

      We are so far away from AIs that could operate in roles where ethics would play any part that making claims or statements about them seems to be without merit or reason.

      1. 2

        I imagine the thing that has prompted this question is the development of self-driving cars and the following scenario: your car is driving you down the road, there’s an 18-wheeler barreling towards you in the wrong lane, and there’s a group of nuns/toddlers/suitably innocent victims on the sidewalk. Your car must either get hit by the 18-wheeler, ensuring your demise, or run over the bystanders on the sidewalk. What does it do? Why? What does whatever decision it makes mean?

        I suspect that right now whatever decision the car made would be an artifact of the information it had at the time, not an attempt to weigh your life against those of a gaggle of adorable schoolchildren. I would guess that either (a) it doesn’t consider the sidewalk a valid route to avoid the oncoming truck, and you die because the car’s programmers didn’t equip it with lateral thinking for accident avoidance, or (b) it doesn’t recognize pedestrians on the sidewalk as such, and hits them because they appear to its sensors smaller and less solid than the oncoming truck.
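
        As a toy illustration of that point (every name and number here is invented), the “decision” is just whichever candidate maneuver minimizes a cost built from hard constraints and raw sensor returns:

        ```python
        # Two candidate maneuvers, scored the way a naive planner might score them.
        candidates = [
            {"name": "stay in lane",       "drivable": True,  "obstacle_area_m2": 25.0},  # oncoming truck
            {"name": "swerve to sidewalk", "drivable": False, "obstacle_area_m2": 1.5},   # pedestrians
        ]

        def cost(path):
            if not path["drivable"]:
                return float("inf")           # case (a): the sidewalk is simply not a valid route
            return path["obstacle_area_m2"]   # case (b): the smaller sensor return looks like the cheaper collision

        print(min(candidates, key=cost)["name"])  # -> "stay in lane": you take the truck head-on
        ```

        Flip `drivable` to True for the sidewalk and the same arithmetic runs over the pedestrians instead; at no point does anything resembling an ethical judgment enter the calculation.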

        1. 2

          Which is why I think self-driving cars aren’t a particularly good idea. Humans should be making those ethical decisions, not machines. I can decide to sacrifice my life or not; I don’t really want to give a machine that ability.

          Also, machines are fallible. I don’t believe that any self-driving car will be completely invulnerable to hacking, nor that it will make the right decision in every single ethical case.

          1. 3

            > Which is why I think self-driving cars aren’t a particularly good idea. Humans should be making those ethical decisions, not machines.

            If a human would make the decision deterministically, we can program a machine to do the same. If a human would not make the decision deterministically, then I don’t think we should be trusting them with it. Humans are just machines that happen to be made of meat, in any case.

            > Also, machines are fallible. I don’t believe that any self-driving car will be completely invulnerable to hacking, nor that it will make the right decision in every single ethical case.

            Sure. But I can readily believe they will be safer and more ethical than the average human driver.

            1. 2

              Some people intentionally choose to hit as many people as possible.