1. 5

  2. 1

    Many people will probably not like what they see in these slides because of their political position. It was very refreshing to see a scientific “formalization” of both ethical approaches.

    I see a comparable effect with respect to “equality”. There can be two outcomes: “equal rights” or “equal status”. Many people will claim to support both, but one opposes the other: you can either have equal rights or equal status. To give an example: if you demand a 50% quota for women in leadership positions, you are acting against equal rights, since women are then treated differently from men. You could weigh both outcomes in some way, but one cannot truly exist while the other prevails.

    It would be nice to see more reasoned, philosophically grounded debates on these matters instead of the constant, supercharged reproduction of political dogmas.

    1. 9

      I wonder what you find “refreshing” in this slide deck. I recommend supplementing it with the original talk.

      The author imagines that racism cannot be unintentional, and that racist stereotypes have “an element of truth”. At the same time, they are unwilling to choose security over convenience, to stop calling heuristics “algorithms”, or to question why we have designed certain systems. This combination leads to a worldview where racism is a cost of doing business.

      As an example, the author discusses an algorithm which detects people blinking while a photograph is being taken. “If you can fix it, do that… I certainly can’t make my algorithm detect every single error,” they say. Implicit in this claim are the notions that algorithms are inherently buggy/lossy, that fixing showstopping product issues must be balanced with shipping the product, and most dangerously, that sometimes it’s acceptable to shrug, give up, and have a product with some racist behaviors.

      A legal theme in the USA that pervades the author’s critique of procedural fairness is intersectionality, which has become popular for legal arguments in much the same way that set theory has become popular for mathematical work.

      During the section about “maximizing profits”, the author takes a sickening tone, reducing people to data for the purpose of optimizing business. They fail to point out the typical humanist and socialist arguments against doing so, but claim to have objectivity; this is a glaring blind spot in their conceptualization of people. Indeed, you can hear it in their tone as they talk about how redlining “is considered…a big harm.” They don’t care whether people are harmed, or whether people are considered harmed; they care only about the numbers in their ethical calculus.

      You mention “equal rights”. This seems a wonderful opportunity to remind people what the term means, and also to reinforce why it is primary. Equality of rights ensures that people are treated without bias by The State; the biases of The State’s actions against its people are then assuredly the biases of its officers, and it is The State’s obligation to enforce its own rules against itself. From a highly social action, we gain a highly social institution, and from equality of rights, we gain an existence that is in stark contrast to this corporatized smorgasbord of data.

      The section on modelling criminality is typical of crime science, and its assumptions are completely in line with typical research on the subject.

      The author’s maths are alright, but the conclusions are quite wrong. Their analysis of base rates misses the base rate fallacy, their correlation between race and crime completely omits the well-known hidden variable of socioeconomic status (“wealth”), and their concept of “mathematically guaranteed bias” ignores statistical significance.
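
      For readers who haven’t run the numbers before, here is a toy Bayes calculation in Python (my own made-up figures, not numbers from the talk) showing how the base rate fallacy plays out: even a seemingly accurate risk model mostly flags people outside the target class when the underlying rate is low.

      ```python
      # Toy illustration of the base rate fallacy; all numbers are invented for the example.
      base_rate = 0.05            # P(positive): 5% of people actually belong to the "risky" class
      true_positive_rate = 0.90   # P(flagged | positive)
      false_positive_rate = 0.10  # P(flagged | negative)

      # Total probability of being flagged at all
      p_flagged = (true_positive_rate * base_rate
                   + false_positive_rate * (1 - base_rate))

      # Bayes' rule: probability that a flagged person is actually positive
      posterior = true_positive_rate * base_rate / p_flagged
      print(f"P(positive | flagged) = {posterior:.2f}")  # roughly 0.32
      ```

      With these numbers, roughly two out of three flagged people are false positives, which is the kind of detail a serious base-rate analysis has to confront.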

      They fail to link to the “impossibility theorem”. There exist overviews of the main result and concepts, but I want to offer my own conclusions here. First, note that the authors of the paper imply repeatedly, e.g. on p1 and p3, that their results generalize from decision-making machines to decision-making panels of people. We may comfortably conclude that the problem is with our expectation that bias can be removed from systems, not with the fact that our biases are encoded into our machines. Another conclusion which stands out is that compressing datasets will cause the compressor to discover spurious correlations; stereotyping, at least in this framework, is caused by attempting to infer data which isn’t present, much like decompressing something lossily-encoded. I would wonder whether this has implications for the fidelity of anonymized datasets; does anonymizing cause spurious correlations to form?

      I’m surprised that they waste time doing utilitarian maths and never mention that utilitarian maths leads to utility monsters or The Repugnant Conclusion.

      My choice quote:

      So, these are cases where there’s significant predictive power in demographic factors. So, your algorithm will actually be more accurate if you include this information, than if you exclude it.

      I wonder whether they understand why legislation like the Civil Rights Acts exists. It doesn’t come through in their tone at all; when they talk about the problems of inequality, they don’t discuss systemic racism. They discuss how poor FICO Scores are, but don’t point out that FICO is a corporate entity like the credit bureaus. No consideration is given to systemic improvements. To the speaker, the banks are an immutable wall whose owners always steer it towards profitability by careful management of balance sheets; any harm that they do is inadvertent, “second-order”, a result of tradeoffs made by opaque algorithms and opaque people trying their best to be “fair”.

      Their closing “meta-ethical consideration” is probably as good as it can get, given the constraints they’ve placed on themselves. If we can’t challenge the system, then the best that we can do is carefully instrument and document the system so that it can be examined.

      1. 4

        Thank you for this comment. The slides alone gave me a weird vibe, and the talk cements that feeling.

        Approaching these issues through utilitarianism and formalism feels completely wrong and borders on scientism. After all, the author works for a lending company, so it’s not surprising that he’s trying to paint bias as inevitable.

        1. 2

          I am incredibly grateful for your detailed dissection of this piece. It strikes me as pretty similar to the infamous James Damore memo, at least in its conclusions (trying to justify bigotry by an appeal to science), though the arguments it advances to get there are different. As somebody whose self-declared job as an activist involves figuring out messaging strategy for countering bullshit, I was deeply distressed by these slides: it would be better to have a coherent response ready to go, but I didn’t have time to dissect them in detail, since there’s other stuff going on and no proximate need. I’m sure I’ll be referring back to your comment as necessary.

          1. 2

            Yes, this article is propaganda. Thank you for covering why, much more thoroughly than I was willing to.

            1. 0

              In the event, I just posted it to /r/SneerClub.

          2. 4

            I think that many people may agree with some pieces of what you’re saying, while finding your example to be inaccurate and harmful. I want to point this out so that people can be very thoughtful about whether and how they engage, and in particular so that people can remember to not treat these various separate ideas as if they’re one piece that must be accepted yes/no.

            It’s very easy, when responding to positions that are put forward as if they’re hard truths on contentious issues, to inadvertently accept some piece of a premise without understanding its full context and the harm it causes. People who try to calm things down then sometimes wind up exacerbating harms, instead. I think very highly of lobste.rs users and your critical thinking ability, but I still want to urge care in replies to this thread.

            I’m intentionally not taking a position on the actual topic, at this time, so that I can keep my personal feelings out of this to the extent possible, though that’s never 100%.

            1. 2

              Thank you for your thoughtful answer. If you look closely, I also haven’t taken any position on this; I have just given an example of such a concept.

              Where I should’ve been clearer is that the slides do, in the end, give an idea of how to “solve” such a problem, namely by treating the process as a constrained optimization problem. Applied to “equal rights” versus “equal status”, that means you could, for instance, declare “equal rights” as your main goal, subject to constraints on certain structural conditions that steer the process towards a more equal-status outcome. The thing is, as long as we discuss these purely as philosophical ideas there is little wiggle-room, yet a good solution has to sit somewhere in between both concepts.

            2. 2

              Isn’t your argument failing to do exactly what you praise the article for doing? To quote the last slide:

              formalize your ethical principles as terms in your utility function or as constraints

              Things like “equal rights” and “equal status” seem very poorly defined in this context compared to the concepts of procedural and group fairness etc. outlined in the article.

              1. 3

                What I think is important is that the presenter first gives the two “extremes”, namely the goal of utilitarianism and the goal of equity (each with their pros and cons), and in the end gives a possible solution to both by applying trade-offs to one goal so that the other is at least partially respected.

                That’s what I meant by the following sentence.

                You could weigh both outcomes in some way, but one cannot truly exist while the other prevails.

                From what I understand, “equal rights” means that any individual has the same rights, regardless of any traits or abilities; “equal status” consequently means that, regardless of any traits or abilities, there is a “fair” outcome for each individual, such that the structures reflect an equal status for all sub-groups. Granted, it is relatively easy to define for men and women, but much harder for other matters.

                Let me make my point by limiting ourselves to the men-versus-women case for now; I agree that equal status, at least, is harder to define for other cases. Neither equal status nor equal rights on its own is the golden way.

                Going with equal status would be crazy, e.g. for engineering positions: only a minority of graduates are female, so enforcing equal status would strongly discriminate against men, to the point that men far more qualified than the competing women wouldn’t get a position. Turned the other way around, equal status would also discriminate against women in female-dominated jobs.

                Going with equal rights only would possibly not bring change to areas that are male- or female-dominated for structural reasons. People weigh the importance of these structures differently, but sometimes equal rights alone are not enough to bring about the desired change, because written law and societal norms often differ. Again, I’m taking no position here, as this is not the point.

                The presenter thus proposes, in the end, a weighted approach that allows a middle ground incorporating both factors. For instance, one could pursue “equal rights” as the objective while encoding structural changes as constraints on this optimization problem, or one could regularize the optimization problem with a weighted “status” term. In the long run this gets complex if you think it through, so these are just thoughts. Still, it’s nice to see that it has been formalized like that.
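
                To make that slightly more concrete, here is a minimal sketch in Python (my own toy formulation and made-up numbers, not the presenter’s actual model) of treating a weighted “status” term as a regularizer on an otherwise utilitarian objective:

                ```python
                # Toy sketch of "formalize your ethical principles as terms in your utility
                # function or as constraints": pick a score threshold that maximizes utility
                # minus a weighted penalty on the gap in selection rates between two groups.
                # All data and numbers below are invented for illustration only.

                # Hypothetical applicant pool: (group, score, value_if_selected)
                applicants = [
                    ("men",   0.9, 1.0), ("men",   0.7, 0.5), ("men",   0.4, 0.2),
                    ("women", 0.8, 0.8), ("women", 0.6, 0.3), ("women", 0.3, -0.4),
                ]

                def utility(threshold):
                    """Purely utilitarian objective: total value of everyone selected."""
                    return sum(v for _, s, v in applicants if s >= threshold)

                def status_gap(threshold):
                    """A crude 'equal status' term: gap in selection rates between groups."""
                    rates = {}
                    for group in ("men", "women"):
                        scores = [s for g, s, _ in applicants if g == group]
                        rates[group] = sum(s >= threshold for s in scores) / len(scores)
                    return abs(rates["men"] - rates["women"])

                def objective(threshold, lam):
                    """Utility regularized by a weighted 'status' penalty."""
                    return utility(threshold) - lam * status_gap(threshold)

                for lam in (0.0, 1.0, 5.0):
                    best = max((t / 100 for t in range(101)),
                               key=lambda t: objective(t, lam))
                    print(f"lambda={lam}: best threshold ~ {best:.2f}")
                ```

                With lambda at 0 the purely utilitarian threshold wins; as lambda grows, the optimum shifts to a threshold where the groups’ selection rates coincide. That is roughly the trade-off knob described above, and turning the “status” term into a hard constraint rather than a penalty gives the constrained-optimization variant.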

                To give one more example, unrelated to the matters of equality discussed above: if you gave a machine the task of solving hunger in Africa, a valid solution in the machine’s sense would be to nuke the entire continent and wipe out all life on it. No humans = no hunger. This is because the machine hasn’t been given proper “constraints” on the solution. The main objective of “minimizing” hunger has to be paired with constraints that are easily forgotten when we, as humans, solve such problems. These constraints concern ethics, sustainability and so forth. The same problem, albeit a bit less dramatic than my example, was presented in the article. To come full circle, the point of the article is to give AI a sense of ethics it understands. We humans have a natural “limit” when it comes to realizing solutions. Big corporations and governments are usually constrained only by the outer limits of the law and often act unethically. By thinking about laws and the lawmaking process, or just about algorithm design, we are simultaneously thinking about problems whose solutions lead to better lawmaking as well as better AI design.

                tl;dr: I may have been a bit terse in my earlier wording, so I’ve explained it in more depth here.

                1. 2

                  I see what you’re getting at. As sanxiyn said, “equal rights” and “equal status” do seem to map to procedural and allocative fairness. I’m not sure that I agree that “going with equal status would be crazy”, but obviously there are tradeoffs there, and this way of casting the discussion allows you to discuss those tradeoffs specifically, which is nice.

                  Thank you for taking the time to explain your thoughts in more depth.

                2. 1

                  I think “equal rights” and “equal status” are approximately procedural fairness and group fairness. I think both are the wrong things to focus on, as they result in things like FICO being biased against Asians. See my other comment.

              2. 1

                Consider a set of protected features A, a set of non-protected features B, and a prediction target C. Group fairness wants the prediction calibrated with respect to A, procedural fairness with respect to B, and utilitarianism with respect to C. (That is, under utilitarianism, the same C should get the same prediction, and so on.)

                Using the FICO example, a fixed threshold is group-unfair to blacks, procedurally fair, and utilitarian-unfair to Asians. I think utilitarian unfairness is the largest problem, as highlighted by Simpl’s Hyderabad example.
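
                For anyone trying to picture the A/B/C framing, here is a small sketch in Python (my own toy data and deliberately simplified definitions, not the actual FICO analysis from the talk) of how a single fixed threshold can be procedurally fair, and even group-fair by a crude approval-rate measure, while people with the same outcome C are still treated differently across groups:

                ```python
                # Toy sketch: A = protected feature (group), B = non-protected feature (score),
                # C = target (whether the person actually repays). All data is invented.

                # (group, score, repays)
                people = [
                    ("X", 700, True), ("X", 640, True), ("X", 610, False), ("X", 580, False),
                    ("Y", 660, True), ("Y", 630, True), ("Y", 600, True),  ("Y", 560, False),
                ]

                THRESHOLD = 620  # one fixed cutoff for everyone, i.e. procedurally fair w.r.t. B

                def approved(score):
                    return score >= THRESHOLD

                for group in ("X", "Y"):
                    members = [p for p in people if p[0] == group]
                    approval_rate = sum(approved(s) for _, s, _ in members) / len(members)
                    repayers = [p for p in members if p[2]]
                    # Share of genuine repayers who are denied anyway: same C, different prediction.
                    denied_repayers = sum(not approved(s) for _, s, _ in repayers) / len(repayers)
                    print(f"group {group}: approval rate {approval_rate:.2f}, "
                          f"denied despite repaying {denied_repayers:.2f}")
                ```

                With these invented numbers both groups face the same cutoff and even end up with the same approval rate, yet people who would in fact repay are denied more often in one group. That is roughly the flavour of “utilitarian unfairness” described above, though the real FICO data and Simpl’s Hyderabad example would of course need their own numbers.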