Sorelle Friedler Studies Programming and Prejudice
The assistant professor of computer science was part of a team of researchers that discovered a way to see if algorithms can be biased like human beings, as well as a method for fixing them if they are.
All of us are at the mercy of computer algorithms in some way. They determine what ads we see on Google, what movies are suggested to us on Netflix, the product recommendations we are given on Amazon, and, perhaps more seriously, even the jobs we're hired for and the loans we receive, since companies often use algorithms as a quick way to screen applicants. Software like this may appear to operate without bias because it uses computer code to reach conclusions. But, as new research shows, those algorithms might not be as objective as they seem.
Haverford Assistant Professor of Computer Science Sorelle Friedler was part of a team of computer scientists from the University of Utah and the University of Arizona that discovered a new way to see if these algorithms can be biased like human beings, as well as a method for fixing them if they are.
"Algorithms are already being used to make decisions that affect people's lives and livelihoods, and this trend is only increasing," says Friedler. "Often, one of the selling points of using an algorithm is that it will be less biased than the current human process. While it is possible to create algorithms that reduce bias, the use of an algorithm does not on its own guarantee that. It's important that computer scientists, as well as policymakers, understand the limitations and work to make algorithmic decisions fair."
The types of software used to sort through job applications are what computer scientists call “machine-learning algorithms.” They scan resumes for keywords or data, such as GPA, detecting and learning patterns of behavior, like humans, so they can better predict outcomes. But, like humans, these algorithms can also introduce unintentional bias.
"If the original data, for example hiring decisions as made by people, was biased, then the algorithm may replicate that bias," says Friedler. "These algorithms are designed to find and exploit patterns. There isn't any sense of what patterns might be 'off limits,' so previously discriminatory choices might be found and imitated."
Friedler and her collaborators developed a technique that can figure out if this software discriminates unintentionally and violates the legal standards for fair access to employment, housing, or other opportunities. They presented these findings last week at the Association for Computing Machinery's 21st annual Conference on Knowledge Discovery and Data Mining in Sydney, Australia.
This technique uses a machine-learning algorithm, similar to those it is testing, to see if it can accurately predict a person's race or gender from the data being analyzed, even though race and gender are explicitly hidden. If it can, then there is potential for bias.
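To make the idea concrete, here is a minimal sketch in Python of that kind of test, assuming scikit-learn and illustrative feature names (a GPA column and a ZIP-code income proxy); it is a simplified stand-in for the researchers' method, which uses a more careful error measure than plain accuracy.

# Sketch, not the authors' exact procedure: train a classifier to predict a
# hidden protected attribute (e.g., gender) from the remaining features.
# If it predicts much better than chance, the other features carry enough
# information to enable biased decisions. The column names and the 0.6
# accuracy threshold below are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def protected_attribute_is_predictable(X, protected, threshold=0.6):
    """Return True if `protected` can be predicted from the features X
    noticeably better than chance (here, mean CV accuracy > threshold)."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, protected, cv=5, scoring="accuracy")
    return scores.mean() > threshold

# Synthetic example: features that happen to correlate with the hidden attribute.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)                   # hidden race/gender label
gpa = 3.0 + 0.3 * protected + rng.normal(0, 0.2, 1000)      # correlated feature
zipcode_income = 50 + 20 * protected + rng.normal(0, 5, 1000)
X = np.column_stack([gpa, zipcode_income])

print(protected_attribute_is_predictable(X, protected))     # likely True here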
"We hope that when an algorithm is being created to make decisions in hiring or granting loans that our test will be applied on the data used to train the algorithm," says Friedler. "If the test shows that biased results are possible, the algorithm designers could consider alternative plans that allow those decisions to be made in an algorithmic and yet unbiased way."
One such plan, according to the researchers, is to redistribute the data being analyzed so that the algorithm can no longer see the information that could be used to create the bias.
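As an illustration of what "redistributing the data" can look like, the hedged sketch below aligns each group's values for a feature to a common distribution by within-group rank, so the feature no longer reveals group membership while each applicant keeps their relative standing within their group. This quantile-alignment code is a simplified assumption, not the published repair algorithm.

import numpy as np

def repair_feature(values, groups):
    """Map each value to the overall value at the same within-group rank,
    so the repaired distributions look the same across groups."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    repaired = np.empty_like(values)
    all_sorted = np.sort(values)
    n = len(values)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        order = np.argsort(values[idx])
        ranks = np.empty(len(idx))
        ranks[order] = np.arange(len(idx))
        quantiles = ranks / max(len(idx) - 1, 1)             # rank within the group, in [0, 1]
        repaired[idx] = all_sorted[(quantiles * (n - 1)).astype(int)]
    return repaired

# After repair, GPA no longer indicates which group an applicant came from,
# but the ordering within each group is preserved.
groups = np.array([0] * 5 + [1] * 5)
gpa = np.array([3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9])
print(repair_feature(gpa, groups))
# -> both groups map to the same values: 3.0, 3.2, 3.4, 3.6, 3.9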
"Any attempt to guarantee fairness in algorithmic decision making relies on an understanding of what we mean by a 'fair' decision," says Friedler. "Since algorithms do not understand human nuance, this must be precisely specified. Developing a mathematical definition of fairness that conforms with our human understanding of the term seems like an important next step for this work."
-Rebecca Raber