1. 20

In a nutshell: Blacks who don’t commit subsequent offenses get higher “risk scores” than whites who do commit subsequent offenses.

  2. 31

    The article slightly hints at it, but the role played by outsourcing to private companies with ‘proprietary’ opaque scores seems to be an important part of the whole sorry affair. Besides making the process less transparent, it provides a veneer of sophistication on top of what is probably, from the information available, just elementary correlational analysis, which, if presented openly, many judges would balk at using. “You shouldn’t release so-and-so because he grew up in a trailer park, and people who grew up in trailer parks are more dangerous than people who grew up in nice middle-class homes” sounds obviously unjust, but the outsourced scoring system can effectively make the same recommendation and seem superficially less bad.

    1. 10

      This is key. Many times machine learning algorithms tell us basically what we could have learned from much simpler methods, but since the ML algorithms tend to be more opaque (or just straight-up proprietary), have a certain mystique about them, and operate on larger data sets, people view them differently. Machine learning algorithms are also often highly sensitive to their training data, which is often secret or proprietary as well.
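
      To make that concrete, here is a minimal sketch (synthetic data, hypothetical feature names, not any vendor's actual method) of how little might sit behind such a score: a plain logistic regression over a few background variables, dressed up as a bespoke 1-to-10 scale.

          # Minimal sketch: a "proprietary risk score" that is nothing more
          # than logistic regression on a handful of background variables.
          # All data, features, and weights here are synthetic/hypothetical.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 1000
          X = np.column_stack([
              rng.integers(0, 10, n),   # prior arrests
              rng.integers(0, 2, n),    # parent ever incarcerated? (0/1)
              rng.integers(18, 70, n),  # age
          ])
          # Synthetic labels, loosely correlated with the first feature.
          y = (X[:, 0] + rng.normal(0, 3, n) > 6).astype(int)

          model = LogisticRegression().fit(X, y)
          # Rescale the probability into an opaque-looking "1 to 10" score.
          risk_score = np.ceil(model.predict_proba(X)[:, 1] * 10).astype(int)
          print(risk_score[:10])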

      Personally, I think that in order for any algorithm to be used against a person in court (or in sentencing / incarceration decisions) the state should be required to provide the algorithm itself (a description, possibly mathematical), the implementation (the source code and associated documentation), and access to any data that were used to train it. The state should also be required to describe, in a general sense, why the algorithm produced the results it produced. In some cases, and for some classes of algorithms, this might be impossible. Results from those algorithms should be entirely inadmissible in court.

      There were some bits that I found particularly interesting:

      When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

      It seems to me that a coin flip isn’t the right benchmark here. I’d be interested to know how well a small panel of trained social workers and corrections officials would have done.
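
      The mechanically fair baseline isn’t 50% either: with any base rate away from one half, the trivial “always predict the majority class” rule already beats a coin without learning anything. A quick sketch with made-up numbers:

          # Sketch: why a coin flip is the wrong benchmark. With a base rate
          # away from 50%, "always predict the majority class" beats the coin
          # while using no information at all. All numbers are invented.
          import numpy as np

          rng = np.random.default_rng(1)
          base_rate = 0.45                    # hypothetical re-offense rate
          y = rng.random(10_000) < base_rate  # "truth"

          coin = rng.random(10_000) < 0.5     # coin-flip predictions
          majority = np.zeros(10_000, bool)   # always "won't re-offend"

          print("coin flip accuracy:", (coin == y).mean())      # ~0.50
          print("majority accuracy: ", (majority == y).mean())  # ~0.55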

      Also, how many crimes were committed as a result of unjustified incarceration? One of the best ways to turn someone into a criminal is to stamp “FELON” on their permanent record because it cuts off almost all legal financial support options.

      The survey asks defendants such things as: “Was one of your parents ever sent to jail or prison?” “How many of your friends/acquaintances are taking drugs illegally?” and “How often did you get in fights while at school?” The questionnaire also asks people to agree or disagree with statements such as “A hungry person has a right to steal” and “If people make me angry or lose my temper, I can be dangerous.”

      Morally, I believe a hungry person has the right to steal. I think most people would agree with me, at least in some cases. Apparently that makes us more likely to commit scary, dangerous crimes. Honestly, this entire thing seems tuned to lock up poor people as much as anything else, and that scares the hell out of me.

      1. 3

        Personally, I think that in order for any algorithm to be used against a person in court (or in sentencing / incarceration decisions) the state should be required to provide the algorithm itself (a description, possibly mathematical), the implementation (the source code and associated documentation), and access to any data that were used to train it. The state should also be required to describe, in a general sense, why the algorithm produced the results it produced. In some cases, and for some classes of algorithms, this might be impossible. Results from those algorithms should be entirely inadmissible in court.

        There is some movement in U.S. constitutional law that might lead in this direction, under the label “confrontation clause formalism”. The 6th Amendment’s Confrontation Clause says that “In all criminal prosecutions, the accused shall … be confronted with the witnesses against him”. Formalists read this expansively to mean that a defendant has a right to cross-examine anyone giving evidence against them, including in cases where someone has used technological means to produce the evidence. An important test case was Melendez-Diaz v. Massachusetts (2009), which narrowly held (5-4) that when a prosecutor enters a forensic drug test into evidence, the defendant has a right to demand that the laboratory technician who performed the test appear in court for cross-examination, so the defense can pose questions about the evidence they produced. It hasn’t gone there yet, but it’s not a far jump from a right to cross-examine a forensic lab technician to a right to cross-examine whoever is responsible for the operation of computer software used to produce evidence.

        As an aside: interestingly, this split between Confrontation Clause formalists and “pragmatists”, who argue that the clause should be read narrowly to refer only to traditional testimonial witnesses, doesn’t fit the normal liberal/conservative axis. Until his recent death, Antonin Scalia led the formalist, “pro-defendant” side, joined by several liberal justices along with Clarence Thomas, and opposed by several conservative justices and Stephen Breyer. There is therefore some worry about what will happen now: Souter has been replaced by Sotomayor, who may or may not side with the formalists as Souter did, and it remains to be seen who will replace Scalia.

        1. [Comment removed by author]

          1. 7

            Throw out technology, or the state? I’m going to assume you mean the state.

            This is remarkably idealistic.

            More idealistic than imagining that we can do away with the state entirely, on a meaningful time frame, without (the state) effectively destroying the world in the process? I’m not so sure I agree.

      2. 8

        Older 538 article: http://fivethirtyeight.com/features/prison-reform-risk-assessment/

        One thing to note is that the Pennsylvania scoring system is public.

        1. 3

          One thing to note is that the Pennsylvania scoring system is public.

          Interesting; I hadn’t seen that. It does seem (maybe as a result) to use much less secondary/demographic data than the Northpointe product discussed in this article. For example, the Pennsylvania one (going by the rubric I see in the sidebar of that 538 article, at least) takes into account the person’s own criminal record, but the Northpointe one also adds points for their parents' criminal record. Which probably increases predictive accuracy, but feels rather less fair.

          Even the Pennsylvania one I don’t like, though, because it ends up sentencing people based on things that have not actually been found to be true in a proper court proceeding. For example, it takes into account number of prior arrests, not prior convictions, so someone who in the past was wrongfully arrested and cleared ends up having that count against them anyway.
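
          Rubrics like these generally reduce to an additive point score, which makes the arrest-vs-conviction problem easy to see. A hypothetical sketch (not the actual Pennsylvania instrument; factors and weights invented):

              # Hypothetical additive point score, for illustration only; NOT
              # the actual Pennsylvania instrument. The point: "prior arrests"
              # adds points whether or not any arrest led to a conviction.
              def risk_points(age, prior_arrests, prior_convictions, offense_grade):
                  points = 0
                  points += 2 if age < 21 else (1 if age < 26 else 0)
                  points += min(prior_arrests, 5)      # arrests, not convictions
                  points += min(prior_convictions, 5)
                  points += {"misdemeanor": 0, "felony": 2}[offense_grade]
                  return points

              # Identical conviction records; one person was arrested and
              # cleared twice, and still scores higher.
              print(risk_points(30, 0, 0, "misdemeanor"))  # 0
              print(risk_points(30, 2, 0, "misdemeanor"))  # 2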

          1. 2

            Yeah, parents' arrest record struck me as especially bad. I don’t mind (I even support) the idea of using a person’s own choices to make predictions, but they absolutely must be that person’s choices, not somebody else’s.

            1. 1

              It seems like they are optimizing for prevention (minimizing false negatives) rather than fairness (which is what the US constitution requires, IIRC).
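
              That optimization is effectively just a threshold choice on the score. A small sketch with synthetic scores and labels shows the trade:

                  # Sketch: lowering the cutoff on a risk score reduces false
                  # negatives (missed re-offenders) at the price of more false
                  # positives (people wrongly flagged). All data is synthetic.
                  import numpy as np

                  rng = np.random.default_rng(2)
                  y = rng.random(10_000) < 0.3  # 30% truly re-offend
                  score = np.clip(0.3 + 0.4 * (y.astype(float) - 0.5)
                                  + rng.normal(0, 0.2, 10_000), 0, 1)

                  for cutoff in (0.3, 0.5, 0.7):
                      flagged = score >= cutoff
                      fn = (~flagged & y).mean()  # missed re-offenders
                      fp = (flagged & ~y).mean()  # innocents flagged
                      print(f"cutoff {cutoff}: FN {fn:.2%}, FP {fp:.2%}")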

        2. 7

          Counterpoint: https://www.chrisstucchio.com/blog/2016/propublica_is_lying.html

          I’d be interested to hear if there’s something wrong with Chris’s analysis.
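
          One possibility is that both analyses are arithmetically right at once. If I’m reading Chris correctly, his point is that the score is roughly equally calibrated across races; ProPublica measured error rates instead, and a calibrated score must produce unequal false positive rates whenever the groups’ base rates differ (Chouldechova’s impossibility result). A toy demonstration with synthetic groups:

              # Sketch of the core tension: a score that is perfectly
              # calibrated within each group still yields unequal false
              # positive rates when base rates differ. Data is synthetic.
              import numpy as np

              rng = np.random.default_rng(3)

              def base_and_fpr(mean_score, n=200_000, cutoff=0.5):
                  s = np.clip(rng.normal(mean_score, 0.15, n), 0.01, 0.99)
                  y = rng.random(n) < s  # outcome drawn at the scored rate,
                                         # i.e. calibrated by construction
                  flagged = s >= cutoff
                  return y.mean(), (flagged & ~y).sum() / (~y).sum()

              for group, mean in (("low base rate ", 0.35), ("high base rate", 0.55)):
                  base, fpr = base_and_fpr(mean)
                  print(f"{group}: base rate {base:.2f}, FPR {fpr:.2f}")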

          1. 3

            Anyone interested in this topic would do well to read “Weapons of Math Destruction” by Cathy O'Neil (I am reading it right now, and highly recommend it). It touches on a variety of ways in which pernicious algorithms or the veneer of being mathematical (and, ergo, objective) can hide serious flaws and do real harm to the lives of the people who interact with them. Cathy also maintains a blog that covers the same subject.

            1. 1

              I’ll try to check it out. An old one that was good is How to Lie with Statistics.

            2. 3

              In a nutshell: Blacks who don’t commit subsequent offenses get higher “risk scores” than whites who do commit subsequent offenses.

              The fact is that blacks are more likely to commit crimes. So to predict that a black person is more likely to commit crime than a white person is simply statistical inference. Pointing to a few cases where a black person didn’t commit future crimes while a white person did is simply cherry-picking. You have to look at the predictive ability of the system as a whole.

              If this had been a system that predicts how much milk a cow produces, and it was sometimes wrong, it would not be a big deal. It is only a problem because it predicts something that touches on race, and that is taboo.

              I’m sure somebody will soon start selling a PC filter for machine learning/AI applications so they stop saying all these racist things.

              1. 14

                Or they’re more likely to be arrested for crimes because of discriminatory policing. I don’t know; we should be open to the idea that culture affects recidivism, whether those cultures are based on race, gender, socio-economic status, whatever. But there are a lot of potential factors, and just pulling out a single aspect and saying that is definitively a problem may be disingenuous. It’s REALLY hard to say, without knowing the algorithm and training data, how it arrived at these conclusions. I find it hard to justify using that sort of black box to make decisions. And will it become a self-fulfilling prophecy? So many open questions.

                1. 4

                  Culture definitely plays into it. This was obvious to just about any white person who, like me, went to black schools. They were telling us many of the things you read in comments about discrimination before they were even old enough to face those things themselves. They described what it did to them, though, as if they personally had the experience. They didn’t all do it to the same degree: some did it more than others, and some were unstoppable in their confidence. My theory as an adult is that their parents passed a mix of truth and defeatist bullshit to the negative ones as part of their upbringing. Not that different from racist whites passing down negative stereotypes to their kids. I started seeing it as an element of how human groups work rather than a white or black thing.

                  Now, it is important in the white and black discussions. Paying attention to the negative lessons from parents, plus damaging ones from peers, shows the kids were basically taught to expect less: that their hard work or education would be less valuable, that drug dealers are about the only ones making money, and that it’s a dog-eat-dog world. Simultaneously, they were highly fad-driven… maybe more than kids at white schools… in a way that doesn’t help when you add the popularity of “thug life” and “money, bitches, and murder.” It shocks nobody from these places, black or white, that many people just coast on welfare by choice, sell drugs, or harm other people. The mentality passed down from parents, peers, and environment is a huge component. Wanting to blame everyone else isn’t much different from the white CEOs in another thread taking credit for all success stories but blaming any failures on external market issues. Groups of people rarely want to own up to their own contributions to problems in front of adversaries, but black folks discuss it plenty in black media (esp. music & blogs).

                  So, those are just some of my early experiences. Subsequent experiences, including interviewing people at a tattoo shop for a year, seemed to confirm it, albeit with odd surprises. I interviewed plenty of white racists with occasional points that weren’t so dumb either. I just kept listening. Those closest to the rough parts of the murder capital had an “it’s the jungle” mentality even in private conversations. And for racism? Most of the blacks I interviewed, although not all, were racist against at least one group in terms of jokes or beliefs. Some were racist against many groups, and a select few added to that a distrust of black people as well (must be lonely, haha). I always pushed the buttons until I found out, since what’s on the surface is usually bullshit to varying degrees. They eventually told the truth if they thought I was sympathetic, or at least non-threatening and open to their viewpoint. The problem was universal among all the groups. It’s why I treat it all the same, focusing on meritocracy, with the long-term solution of removing as much bias as possible and interim measures like blind auditions.

                  Cautionary note: this is not to deny in any way that external factors like layoffs or racism contributed to creating the culture. It’s that the culture, once it exists, reinforces and greatly expands the problems. Most keep aiming hate at the original problem or at people outside the group, while that cultural aspect is just as important. Both sides must be tackled simultaneously. Both sides refuse to admit they’re part of the problem. Stalemate of hate.

                  1. 3

                    Culture seems like as big a factor as you can get; I don’t think it’s disingenuous to think it’s definitely a problem.

                  2. 11

                    Please read the article. What they found is that one of the widely used tools consistently overestimates recidivism for black offenders and underestimates it for whites. Also, we aren’t talking about cows here; we’re talking about people’s freedom. Even if it leads to “less accurate” sentencing, we shouldn’t be taking people’s employment status into account when deciding how long they’re put in jail. That’s a thin proxy for exactly the kind of class discrimination our legal system is supposed to be above.
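
                    Concretely, the comparison ProPublica ran is per-group error rates. A sketch of that computation, on made-up toy arrays rather than their data:

                        # Per-group false positive / false negative rates: the
                        # quantities behind "overestimates for one group and
                        # underestimates for the other". Toy data, not theirs.
                        import numpy as np

                        def error_rates(high_risk, reoffended):
                            high_risk = np.asarray(high_risk, bool)
                            reoffended = np.asarray(reoffended, bool)
                            fpr = (high_risk & ~reoffended).sum() / (~reoffended).sum()
                            fnr = (~high_risk & reoffended).sum() / reoffended.sum()
                            return fpr, fnr

                        # Run once per group, e.g.:
                        print(error_rates([1, 1, 1, 0, 0, 1, 0, 1],
                                          [1, 0, 1, 0, 0, 0, 1, 1]))
                        print(error_rates([0, 1, 0, 0, 1, 0, 0, 0],
                                          [0, 1, 1, 0, 1, 0, 0, 1]))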

                    1. 7

                      The fact is that blacks are more likely to commit crimes. So to predict that a black person is more likely to commit crime than a white person is simply statistical inference.

                      This is bordering on “guilty until proven innocent.” Note that statistical inference is not a two-way street: even though it may be true that a certain demographic commits more crimes, it is not true that some person is a criminal because they are a part of said demographic.
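
                      The base-rate arithmetic makes that asymmetry plain. A toy example with invented rates:

                          # Invented rates, for illustration only: even if one
                          # group's offense rate were double another's, nearly
                          # everyone in BOTH groups is innocent, so group
                          # membership says almost nothing about an individual.
                          rate_a, rate_b = 0.02, 0.01
                          print(f"relative risk, A vs B: {rate_a / rate_b:.1f}x")
                          print(f"group A non-offenders: {1 - rate_a:.0%}")  # 98%
                          print(f"group B non-offenders: {1 - rate_b:.0%}")  # 99%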

                      1. 15

                        I’d also note that these numbers are not the rate of “crimes”; they are the rate of convictions (or, sometimes, of arrests). When some neighborhoods are policed more than others, and some people are more likely than others to be arrested and charged for the same actions, it’s a mistake to trust the numbers that come out of that process even to the extent of assuming they reflect some real pattern in what people do. A large portion of what the numbers embody is simply the way policing is done.
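
                        A toy simulation makes the mechanism visible: identical true offense rates plus unequal enforcement produce unequal arrest statistics, which then masquerade as unequal behavior. All rates below are invented:

                            # Same true offense rate in both neighborhoods; the
                            # heavily policed one just catches more of them, so
                            # its recorded arrest rate looks three times worse.
                            import numpy as np

                            rng = np.random.default_rng(4)
                            n, true_rate = 100_000, 0.05  # invented

                            for name, p_caught in (("lightly policed", 0.2),
                                                   ("heavily policed", 0.6)):
                                offended = rng.random(n) < true_rate
                                arrested = offended & (rng.random(n) < p_caught)
                                print(name, "arrest rate:",
                                      round(arrested.mean(), 3))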

                        1. 8

                          There was an example of that very problem once on HN. I believe it was in California, where they got the GPS records showing where patrols spent most of their time. They put that on a map with the crime distribution. The crime distribution was 50/50, or 60/40, or something in the middle for white vs black neighborhoods. Yet cops were in black neighborhoods for over 90% of their patrols.

                          Hmm. If they’re not patrolling based on crime rate, what else could’ve motivated that split? The cops then changed the subject.

                    2. 2

                      I don’t think it’s possible to code something that covers every single edge case. People should use technology as a tool, but we still need human oversight and common sense.

                      1. 4

                        This is a perfect example of where experienced-human-in-the-loop decision making should be used. It’s how safety-critical industries handle weird edge cases, the reason being that computers usually screw them up worse than people do, since they lack common sense.

                        Avoiding incarcerating the innocent is worth taking such precautions.
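
                        One common shape for that kind of oversight is a “reject option”: the model only offers a recommendation on clear-cut cases and defers everything else to a person. A loose sketch (thresholds and labels are placeholders, not any real system):

                            # Reject-option sketch: offer a recommendation only
                            # on high-confidence cases; route everything
                            # ambiguous to a human. Thresholds are placeholders.
                            def route(risk_probability, low=0.2, high=0.8):
                                # A human still signs off on every outcome.
                                if risk_probability >= high:
                                    return "model recommends: high risk (human confirms)"
                                if risk_probability <= low:
                                    return "model recommends: low risk (human confirms)"
                                return "no recommendation: defer to human judgment"

                            for p in (0.05, 0.5, 0.9):
                                print(p, "->", route(p))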