There are a number of issues in this article.

The main and most obvious error is talking about the fairness of algorithms.

Fairness, like trust, is a human construct that applies only to humans.
Talking about “fair machines” is misleading: it is a byproduct of the anthropomorphic language we should abandon, because it fools experts and laymen alike.

So we should not talk about algorithmic fairness, but about decision transparency and accountability.

I find it funny when I see a self-described “AI/ML expert” state that no human could explain a decision made by a deep learning system:

• first, an artificial neural network does not decide, it computes; “decision” is a loose human interpretation of its output
• second, if you cannot explain a piece of software exactly, it is simply broken
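The first bullet can be made concrete with a toy sketch (the network shape, its weights, and the 0.5 threshold below are invented purely for illustration): the network only computes a number; the “decision” is a threshold someone chose to apply to that number afterwards.

```python
import math

# A "neural network" is nothing but deterministic arithmetic on the input.
# The shape, weights, and threshold here are made up for illustration.
def tiny_network(x1: float, x2: float) -> float:
    # one hidden neuron with a ReLU activation, fixed illustrative weights
    h = max(0.0, 0.8 * x1 - 0.5 * x2 + 0.1)
    # output score squashed into (0, 1) by a sigmoid
    return 1.0 / (1.0 + math.exp(-(1.2 * h - 0.3)))

score = tiny_network(1.0, 0.5)  # just a number, roughly 0.62
# The "decision" is a human-chosen reading of that number: a threshold
# applied after, and outside of, the computation itself.
decision = score >= 0.5
```

Note that nothing in the function “decides” anything: the same input always yields the same number, and the accept/reject cut-off is a separate human choice that could be set anywhere.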

Such people should probably be treated as a sort of apprentice: they can have great insights but should never be trusted with serious tasks.

So, basically, if you cannot explain your software’s computations precisely (transparency), your software should not be applied to human input, nor produce output that affects decisions about humans.

However, transparency is not enough, because software has bugs.

Even if you provide all the sources, data, and information required to understand and explain the computation of your AI (which, incidentally, is what Articles 13 and 14 of the GDPR require if you apply such techniques to the data of European citizens), you are still accountable for errors.

This is where fairness comes into play: it is not during the design or development of the system that you decide which fairness guarantees you will grant! That would be too simple!

It’s when the system is in production that you will be held accountable for any observed discrimination, even if it was not a relevant social issue when the AI was designed.

That’s simply because we create machines to serve humans, and we cannot allow any human to violate the rights of another human through a machine proxy.

Otherwise we would sacrifice humans to machines (or, more precisely, to corporate profits).