This is kind of a scary thing. These predictive models are being trained on faulty data yet are being used in situations that can have a meaningful and detrimental impact on human life. I really think any model or service that uses any type of facial or human recognition should be banned from any government use until we have standardized, vetted data sets and test sets, and expected results.
Who knows what else they’re missing. I thought this would be a poorly researched video but it wasn’t. Thanks.
> any model or service that uses any type of facial or human recognition should be banned from any government use until we have standardized, vetted data sets and test sets, and expected results.
I think with these types of things we are making a mistake by only focusing on governments. The reality is that for many people, privately owned institutions have a bigger effect on their lives. And the shift towards the biggest power being held by private institutions is only getting stronger.
That, combined with a situation where a company’s actions and ethics (at large) carry less and less weight, among other things because stocks are traded as part of bigger, combined packages (funds), makes me wonder where we are heading.
I would argue that this itself is a setup for such behaviors, and that it reinforces itself. It’s the classic cycle we see when fighting crime (small and large) with more violence, bigger punishments, fewer rights and alienation from society, which reinforces criminal structures. There are good, well-researched and well-documented examples of this; compare the War on Drugs with Norway’s successes from doing the opposite.
What I want to say by that is that this is self-reinforcing even completely without AI, and likely for other products as well. Think about loans. As soon as you have statistics showing that black people earn less on average, making it harder to repay loans, a company optimizing for profit will be less likely to give out loans to black people, thereby reducing their financial options and, at large, reducing their income.
So in short: this is an issue of systems set up to reinforce themselves. Of course AI is becoming a big part of that. And it’s also not a problem that should be fixed only for government interaction, because for many people government is not the biggest part of their life.
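To make the loan example concrete, here is a toy simulation of that loop. Every number in it is invented; it is only meant to show the mechanism, not any real lending model:

```python
# Toy simulation (all numbers invented) of the self-reinforcing loan loop
# described above: a profit-maximizing lender approves credit based on a
# group's average income, and being denied credit in turn depresses that income.

income = {"group_a": 50_000.0, "group_b": 40_000.0}  # arbitrary starting averages
APPROVAL_THRESHOLD = 45_000.0                        # lender's cutoff

for year in range(10):
    for group in income:
        if income[group] >= APPROVAL_THRESHOLD:
            income[group] *= 1.03  # access to credit: income grows a little
        else:
            income[group] *= 0.99  # denied credit: fewer opportunities, income shrinks
    print(year, {g: round(v) for g, v in income.items()})
```

The gap widens every year even though the rule never mentions race: the statistic the lender optimizes against is itself produced by the lending decisions.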
Racism is a real problem that is socially and culturally constructed (see the race-as-social-construct argument). Isn’t it naive to think that an algorithm will help “solve” or reduce the impacts of racism on society? The machine itself might not be programmed to be racist, but the people reading its output are bound to interpret it through their own biases, thus leading to racist ideas being spread and reinforced. Another subtle thing to keep in mind is that most people see ML/stats models as “objective”, which leads people to absolve themselves of the guilt of being racist and instead just say “but that’s what the machine told me” (see the controversies around The Bell Curve).
In short, race is a social issue that is to be solved through social means and we won’t be able to code our way out of it.
It’s also just the fact that people who do any kind of image processing algorithm work will inevitably end up using their own face to test these sorts of things. I certainly have cameras pointed at me, not random strangers, while working on video-streaming-related image processing code, and I would use myself as a test subject if I wrote face recognition algorithms, or automated soap dispensers, or if I just used somebody else’s image processing library.
If most people working on these kinds of technologies are white people, even in a society with no humans with racial bias and no pre-existing systemic racism, the default outcome is that the products are racially biased. We have to be conscious about this stuff and actively work against bias (be it racial, gender or otherwise) in the technology we make.
It’s a hard problem. It’s so easy for me to work on image processing algorithms with my own face as a test subject, but I’ll likely end up with an algorithm which works really well on white guys but might not work as well for other demographics. I could do blackface, but… no. I could do user testing with a diverse group of users, but that’s expensive, time-consuming and difficult; users’ faces from user testing certainly won’t be as integrated into the development process as my own face is.
Don’t get me wrong, society has a racism problem. But even if it didn’t, we would still be making racially biased technology. There are solutions here, but they’re difficult and require active and conscious work to implement. Because the default state of technology is to be biased in favor of people who are similar to its creators.
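For a sense of what that self-testing loop looks like in practice, here is a minimal sketch of the kind of throwaway script a developer might run against their own webcam. It uses OpenCV’s stock Haar cascade; the detector is only as demographically balanced as the images it was trained on, and whoever sits in front of the camera during development effectively becomes the test set:

```python
import cv2

# Stock frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam, i.e. the developer's own face
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection quality here reflects the cascade's training data, not "faces in general".
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("self-test", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```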
We absolutely can’t code our way out of it, but I think most people who caution about the encoding of biases into predictive models approach it more from the belief that we ought not to code ourselves even further into it. Although objects in this technical domain merely reflect the biases of their creators, they still represent an intensification of those biases both because of their perceived impartiality and because they’re increasingly empowered to automate high-stakes decisions that at least would have had the chance to be flagged by a conscientious human operator in a manual workflow.
There are structural forms of racism that are hard for any of us to see, let alone for ML models. For example, thoughtless use of postal codes in many use cases strengthens redlining.
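A quick way to check for that kind of proxy effect is to see how well the protected attribute can be predicted from the postal code alone; if it can, dropping the attribute itself doesn’t remove the bias. A minimal sketch with scikit-learn, where the file and column names (applicants.csv, postal_code, race) are made up for illustration:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("applicants.csv")  # hypothetical data set
X = df[["postal_code"]]             # the "harmless" feature
y = df["race"]                      # the protected attribute

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Postal codes are categorical, so one-hot encode them before the classifier.
proxy_model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    LogisticRegression(max_iter=1000),
)
proxy_model.fit(X_train, y_train)

# Accuracy far above the base rate means postal code leaks the protected attribute.
print("proxy accuracy:", proxy_model.score(X_test, y_test))
print("base rate:     ", y_test.value_counts(normalize=True).max())
```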
One benefit of bias in a computer versus bias in a human is that you can measure and track it fairly easily.
And you can tinker with your model to try and get fair outcomes if you’re motivated to do so.
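As a sketch of what that measuring looks like: given an ordinary labeled test set with a (hypothetical) demographic column, you can report and track the model’s error rates per group. The tiny data frame below is invented purely for illustration:

```python
import pandas as pd

# Toy evaluation results: one row per test example.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   1,   0,   1],
    "prediction": [1,   0,   0,   0,   1,   0,   0],
})

def per_group_rates(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for group, g in df.groupby("group"):
        pos = g[g["label"] == 1]
        neg = g[g["label"] == 0]
        rows.append({
            "group": group,
            # false negative rate: positives the model missed
            "fnr": (pos["prediction"] == 0).mean() if len(pos) else float("nan"),
            # false positive rate: negatives the model flagged
            "fpr": (neg["prediction"] == 1).mean() if len(neg) else float("nan"),
            "n": len(g),
        })
    return pd.DataFrame(rows)

print(per_group_rates(results))
```

A persistent gap in false-negative or false-positive rates between groups is a concrete, trackable number, and closing that gap is exactly the kind of thing “tinkering with your model” can aim at.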
I think this point is undersold. I’d rather have a professionally designed model making certain classes of decisions than any human.
We are automating racism. But fixing racist automation is a more tractable problem than fixing racism.
I think we need to stop using the word AI. This is pattern fitting and pattern guessing.