
    One approach could be to randomly generate thousands of possible scenarios and crowdsource the ranking of their outcomes in terms of moral preference.

    When the self-driving car detects an impending collision (say, with a school bus), it would look up the closest stored scenario and replicate its most morally acceptable outcome.
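
    As a rough illustration of how that matching could work, here is a minimal Python sketch. Everything in it is hypothetical (the feature encoding, the CORPUS entries, the outcome labels and scores): the idea is just that the corpus stores a crowdsourced moral score per outcome for each scenario, and the car picks the best-rated outcome of its nearest match.

    ```python
    import math

    # Hypothetical corpus: each stored scenario is a small feature vector
    # (e.g. speed, obstacle type, pedestrian count) plus a crowdsourced
    # moral score per outcome, where higher means more acceptable.
    CORPUS = [
        {"features": (30.0, 1.0, 0.0),
         "outcomes": {"brake_straight": 0.9, "swerve_left": 0.4}},
        {"features": (55.0, 2.0, 3.0),
         "outcomes": {"brake_straight": 0.2, "swerve_left": 0.7}},
    ]

    def distance(a, b):
        """Euclidean distance between two scenario feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def choose_action(current):
        """Find the nearest stored scenario and return the outcome the
        crowd ranked as most morally acceptable for it."""
        nearest = min(CORPUS, key=lambda s: distance(s["features"], current))
        return max(nearest["outcomes"], key=nearest["outcomes"].get)

    # The car senses an impending collision and looks up the best match.
    print(choose_action((50.0, 2.0, 2.0)))  # -> swerve_left
    ```

    A real system would need far richer scenario representations and a fast approximate nearest-neighbour index, but the shape of the lookup is the same.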


      You cannot be serious?

      This madness needs to be stopped. It would be better to just nuke the whole planet than continue in the direction everything is unfortunately heading.

      Some call it progress. People are stupid and blind.

      I can’t wait for the day when I can go off the grid to live and die in peace.


        Although opinions vary, it’s generally accepted that morality is sourced from some combination of intrinsic and societal values. In any given situation, a person’s decision is gated by their moral reasoning, which is itself shaped by the perceived and expressed values of their peers: a corpus of data gathered over a lifetime of development and experience.

        The reason we are talking about this in terms of robotics is that self-driving cars, with their lightning-quick responses, are capable of making moral decisions during events that humans have never had to consider. Given a couple of milliseconds during an unavoidable collision, human reaction times don’t grant us the luxury of making any choice at all. Our behaviour is essentially random, or simply a continuation of whatever we were doing before the incident.

        The self-driving car gives us the ability to make a moral decision by proxy. In some people’s eyes, this perhaps goes beyond acceptable morality. It remains necessary to discuss the mechanism by which such a ‘proxy’ decision would work, since many people also consider the alternative (preventing the vehicle from reacting to a situation faster than a human could) unethical.

        If we do decide to allow such automated decisions, we need a way to provide the car with a corpus of moral data similar to our own. In the tightly bound world of a self-driving car, I suggested a way to load the vehicle with a set of example situations and ‘least worst’ outcomes. By ‘crowdsourcing’ the judgement on each situation from multiple people, you avoid some of the dilemmas that come from a single decision-maker imposing their views. This is similar to the theory behind trial by jury, and to the way we arrive at our own ideas of right and wrong.
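
        To make the jury analogy concrete, here is a hedged sketch of one way the crowd’s judgements could be aggregated (all scores and outcome names are invented for illustration). Taking the median per outcome means no single rater’s extreme view can drag the verdict.

        ```python
        import statistics

        # Hypothetical raw judgements for one scenario: each crowd member
        # scores each candidate outcome from 0 (unacceptable) to 1 (acceptable).
        judgements = {
            "brake_straight": [0.9, 0.8, 0.2, 0.85],
            "swerve_left":    [0.3, 0.4, 0.95, 0.35],
        }

        # Median aggregation: like a jury verdict, robust to one outlier rater.
        consensus = {o: statistics.median(s) for o, s in judgements.items()}
        print(max(consensus, key=consensus.get))  # -> brake_straight
        ```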

        The alternatives are to let a single government or organization impose its own morality (imagine a world where Ford’s moral preference in driving style differed from Honda’s, and you bought a car based on its ‘personality’), to let the owner of the car make the decision in advance, or to abdicate moral responsibility entirely by programming the car with the same reaction times as a human (losing the massive safety gains that self-driving cars bring).
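
        For contrast, the ‘owner decides in advance’ alternative might look like a configuration the buyer sets once; every name below is hypothetical, not a real API.

        ```python
        from dataclasses import dataclass

        # Illustrative owner-set moral policy; all field names are invented.
        @dataclass
        class MoralPolicy:
            protect_occupants_weight: float    # 0..1 priority on people inside
            protect_pedestrians_weight: float  # 0..1 priority on people outside
            defer_to_crowd_default: bool       # fall back to crowdsourced corpus

        # An owner who weights both equally but defers to the crowd consensus.
        owner_policy = MoralPolicy(0.5, 0.5, defer_to_crowd_default=True)
        ```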