
SALT LAKE CITY — Computers may be designed to be neutral in the way they operate, but research conducted by a scientist at the University of Utah indicates that they may only be as unbiased as the people who program them.

A team of computer scientists from the University of Utah, University of Arizona and Haverford College in Pennsylvania has developed a method to determine whether an algorithm used for hiring decisions, loan approvals and other significant responsibilities could be biased in the same way a person can be. An algorithm is a procedure or formula for solving a problem.

The research suggests that software might appear to operate without predisposition or preference because it strictly uses computer code to reach conclusions — one of the main reasons many companies use algorithms to help winnow the pool of job applicants during the hiring process. But Suresh Venkatasubramanian, lead researcher and associate professor at the U.’s School of Computing, said the algorithms are also subject to the inadvertent biases of the programmers who create them.

The scientists developed a technique to determine whether such software programs discriminate unintentionally and violate legal standards for fair access to employment, housing and other opportunities. The team also devised a method to fix the potentially troubled algorithms, he said.

“What we’re trying to do is link an algorithmic notion of bias that the law says is discrimination for hiring and housing and other things like that, and if you can figure out a way to mitigate it somehow,” Venkatasubramanian said. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”

Because a number of companies use algorithm-driven software to help filter job applicants during the hiring process, he said, the issue of potential bias has become increasingly prevalent. Sorting through applications manually can be overwhelming, but a well-written computer program can scan résumés for keywords or numbers and assign each applicant an overall score.
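To illustrate the kind of screening the article describes, here is a minimal, hypothetical sketch of keyword-based résumé scoring. The keywords, weights and applicants are invented for illustration and are not drawn from any real screening product.

```python
# Hypothetical keyword weights; a real screener's vocabulary would be far larger.
KEYWORD_WEIGHTS = {"python": 3, "sql": 2, "management": 2, "bachelor": 1}

def score_resume(text: str) -> int:
    """Scan a résumé's text for weighted keywords and return an overall score."""
    words = text.lower().split()
    return sum(weight for keyword, weight in KEYWORD_WEIGHTS.items()
               if keyword in words)

# Made-up applicants for illustration only.
applicants = {
    "alice": "Experienced in Python and SQL with a bachelor degree",
    "bob": "Retail management background",
}
scores = {name: score_resume(text) for name, text in applicants.items()}
```

A program like this ranks applicants purely by the keywords its author chose to reward, which is exactly where a programmer's assumptions can slip in unnoticed.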

The programs also can “learn” as they analyze more data, he explained. Known as machine-learning algorithms, they can change and adapt like humans so they can better predict outcomes, similar to the way companies like Amazon use algorithms to learn customer buying habits or more accurately target advertising.

“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian said.

The research evaluates whether software algorithms could be prejudiced under the legal definition of disparate impact — the theory that a policy may be considered discriminatory if it has an adverse effect on any group based on race, religion, gender, sexual orientation or other protected status.
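One common yardstick for disparate impact in U.S. employment law is the "four-fifths rule": if one group's selection rate falls below 80 percent of another group's, the policy may warrant scrutiny. A minimal sketch of that check, using made-up numbers:

```python
def disparate_impact_ratio(selected_a: int, total_a: int,
                           selected_b: int, total_b: int) -> float:
    """Ratio of group A's selection rate to group B's selection rate."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical hiring outcomes: 30 of 100 from group A, 60 of 100 from group B.
ratio = disparate_impact_ratio(30, 100, 60, 100)  # 0.30 / 0.60 = 0.5
flagged = ratio < 0.8  # below the four-fifths threshold, so flagged
```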

The team’s research revealed that algorithms could be tested for possible bias, ironically, by using another machine-learning algorithm. If the test reveals a possible problem, Venkatasubramanian said, it would be relatively easy to correct.
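The article does not detail the team's test, but the general idea can be sketched as follows: if a second, simple model can predict a protected attribute from the remaining data, those features leak the attribute, and the original algorithm could produce biased outcomes. The data and the one-nearest-neighbor "auditor" below are illustrative assumptions, not the researchers' actual method.

```python
def predict_protected(train, query):
    """Guess the protected attribute for `query` via one nearest neighbor."""
    nearest = min(train, key=lambda row: sum((a - b) ** 2
                                             for a, b in zip(row[0], query)))
    return nearest[1]

# (features, protected_attribute) pairs; the features strongly track the
# attribute in this made-up data, so the auditor predicts it perfectly.
train = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((0.9, 0.8), "B"), ((1.0, 0.9), "B")]
test = [((0.15, 0.15), "A"), ((0.95, 0.85), "B")]

accuracy = sum(predict_protected(train, x) == y for x, y in test) / len(test)
# High accuracy suggests the features encode the protected attribute,
# so an algorithm trained on them could discriminate even without seeing it.
```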

The fix is to redistribute the data being analyzed — for instance, the job applicants’ information — so the algorithm can no longer see the attributes that could create the bias, he said.
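As a rough illustration of that repair idea, one can transform a feature so its distribution looks the same for every group while preserving each group's internal ordering. This is only a toy version; the published repair method is more involved.

```python
def repair_feature(values_by_group):
    """Map each value to its within-group percentile rank in [0, 1]."""
    repaired = {}
    for group, values in values_by_group.items():
        order = sorted(values)
        n = len(values)
        repaired[group] = [order.index(v) / max(n - 1, 1) for v in values]
    return repaired

# Hypothetical test scores that differ sharply between two groups.
scores = {"A": [50, 60, 70], "B": [80, 90, 100]}
fixed = repair_feature(scores)
# After repair, both groups span the same [0, 1] range, so the raw gap
# between groups no longer reveals group membership.
```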

While algorithmic bias is a relatively new concept, the idea seems credible to real-world analysts.

Jason Taylor, chief technology officer at South Jordan-based customer experience software and research firm MaritzCX, said the integrity of program data could certainly be manipulated, albeit unintentionally, during the algorithm development process.

“A lot of (the problem stems) from a business-driven decision about where you invest your money and how much emphasis you place on the integrity of the data you render to the people using your systems,” he said. “It’s about whether or not businesses are spending their money in a way that has the professional integrity of guaranteeing that the data they present is done in a non-biased way.”

Email:; Twitter: JasenLee1