  2. 7

    This is a step forward: instead of trying to build “moral machines” to be punished and blamed for the errors of their creators (as many AI researchers searching for ways to teach ethics to machines seem to dream of), it is a first admission that human accountability cannot be replaced.

    It also aims for a new intellectual honesty that was previously missing from their work with regulators.

    However, in the text, every bold principle is weakened by vague openings to moral tradeoffs, resulting in a rather bland list of good intentions.

    For example, national cybersecurity already includes offensive tools that exploit zero-days to break into enemy systems. And nobody voted for them to decide whether the risks of a given application are outweighed by the benefits it provides.

    At the end of the day, the good intentions claimed by entrepreneurs cannot replace proper international regulation.

    And while I think this mild text is still a great success for Google’s employees, we cannot rely on internal corporate dynamics to ensure proper control of externalities.

    That’s what law exists for.

    1. 0

      Their blog already punishes me for violating the correct reading order (scrolling up, when apparently I am only ever supposed to scroll down) by covering the text with a panel half the height of the screen. I definitely don’t want an AI that classifies people as “good” or “bad” built by this company.