Twitter’s racist algorithm is also ageist, ableist and Islamophobic, researchers find
Aug. 9, 2021
"While the photo-cropping incident was embarrassing for Twitter, it highlighted a broad problem across the tech industry. When companies train artificial intelligence to learn from their users’ behavior — like seeing what kind of photos the majority of users are more likely to click on — the AI systems can internalize prejudices that they would never intentionally write into a program."
From Carlos: It's hard not to see in this story about machine learning something instructive about human learning: when we train little human intelligences to learn from their adult models' behavior, they can internalize prejudices that the adult models would never intentionally teach.
As disturbing as this analogue is (essentially what is now called implicit bias), there is also hope in this story: if we can be more aware, careful, and intentional as models of inclusive perception and treatment of others, the algorithms (habits of mind) we pass on to our little ones might not be as flawed as those we've absorbed.
Full story here.