Heightening the danger of harm is the human inclination to trust that technology is more objective than our own decision making. But economists and data scientists are just as likely as call screeners to hold mistaken cultural beliefs about poor white families and families of color. When systems designers program their assumptions into these tools, they hide consequential political choices behind a math-washed facade of technological neutrality.
Much has been written lately about the role of bias in AI and machine learning, and rightfully so.