If you’re suffering from kidney failure in the United States, your chances of being put on the transplant waiting list may depend on the output of an algorithm called the CKD-EPI eGFR equation. It combines a handful of variables, including blood creatinine level, age and sex, to estimate the glomerular filtration rate (GFR).

The GFR is a measure of how well your kidneys filter blood. The lower your score, the less well your kidneys are working and the higher up you go on the waiting list for a transplant. This all sounds very fair. Using an algorithm to assess organ function surely removes any scope for human bias. Right?

Sadly, no. A recent study published in the Journal of General Internal Medicine found that 33% of the African-American patients it examined, more than 700 people, would have been given a lower score had they been white. The bias, the study found, was built into the algorithm itself, which boosted the GFR score of black patients by almost 16%, making their kidneys appear healthier than they were and pushing them further down the transplant waiting list.
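
To see where that 16% comes from, here is a sketch in Python of the published 2009 CKD-EPI creatinine equation, the version at issue in the study; the patient values below are invented for illustration. The race coefficient on the final line does all the damage: the same blood test yields a healthier-looking score for a black patient.

```python
def ckd_epi_2009_egfr(creatinine_mg_dl, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2) per the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9        # published constants by sex
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient: an automatic ~16% boost
    return egfr

# Same invented patient, same blood test -- only the race flag differs.
print(round(ckd_epi_2009_egfr(1.8, 55, female=False, black=False)))  # 41: CKD stage 3b
print(round(ckd_epi_2009_egfr(1.8, 55, female=False, black=True)))   # 48: stage 3a, less severe
```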

This illustrates something that is becoming more obvious almost by the day: using algorithms and artificial intelligence (AI) to make decisions is not a guarantor of impartiality. Humans design, train and deploy AIs. That leaves plenty of scope for the introduction of human bias and frailty into the functioning of these systems. And the results for those affected can be catastrophic.

Not getting a kidney transplant when you need one is among the severest examples of discrimination by algorithm blighting the lives of those subject to its arbitration. But it is far from the only one. In Ireland, for instance, it was recently revealed that the algorithm that exam bodies used to estimate pupil grades had been written in a way that marked boys down. In Scotland, the algorithm performing the same task was found to be four times more likely to mark down, and fail, a pupil from a school in a socially deprived area than a pupil from a fee-paying private school.

The Scottish example is particularly instructive, because it’s highly unlikely that anyone deliberately intended the algorithm to produce this result. First Minister Nicola Sturgeon has publicly asked voters to judge her on both the state of education and her success in closing the attainment gap between rich and poor. The algorithm’s bias was embarrassing and politically inconvenient for her.

How then, if no one intended it, did the algorithm become prejudiced in the first place? There are several ways in which an AI can become biased:

  • Bias is introduced in the design and programming of the AI, for instance by choosing which variables to include in the model and which to exclude.
  • The choice, collection and sampling of data can introduce flaws and biases into the AI, particularly when it is being trained (see the first sketch after this list).
  • The AI can misidentify cause and effect, for instance flagging someone as at risk of a medical condition simply because they have accessed healthcare frequently.
  • The AI’s subjects can identify the variables and models in use and then game them, search engine optimisation (SEO) being a good example.
  • Bias can be introduced during optimisation: an AI programmed to optimise for high returns may learn to exclude poorer minority groups from eligibility for loans (see the second sketch after this list).
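
The first sketch below shows the training-data problem in miniature. Everything in it is invented: a one-feature threshold "classifier" is fitted to a sample dominated by group A, so the cut-off it learns reflects group A's pattern and rejects qualified members of group B.

```python
def fit_threshold(samples):
    """Pick the cut-off that best separates positives from negatives."""
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in samples}):
        acc = sum((x >= t) == y for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Invented data: group A's positives sit above ~70, group B's above ~50.
group_a = [(80, True), (75, True), (72, True), (60, False), (55, False), (40, False)]
group_b = [(58, True), (52, True), (45, False)]

# A skewed sample -- ten parts A to one part B -- so the model fits A's pattern.
threshold = fit_threshold(group_a * 10 + group_b)
print(threshold)  # 72: the qualified group B applicants (58, 52) are rejected
```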

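The second sketch shows optimisation doing the damage. The numbers are again invented: a lending policy tuned purely for expected return refuses all of group B, not because its members are less reliable, but because their thin credit histories drag their predicted repayment scores below the break-even point.

```python
PROFIT_IF_REPAID = 1_000   # illustrative profit on a repaid loan
LOSS_IF_DEFAULT = 5_000    # illustrative loss on a defaulted loan

def expected_return(repay_probability):
    return (repay_probability * PROFIT_IF_REPAID
            - (1 - repay_probability) * LOSS_IF_DEFAULT)

def optimise_policy(applicants):
    """Approve exactly those applicants with a positive expected return."""
    return {name: expected_return(p) > 0 for name, p in applicants.items()}

# Invented repayment scores. Break-even is 5000/6000, roughly 0.83; group B's
# sparse records pull even its reliable applicants' scores below that line.
applicants = {
    "A1": 0.95, "A2": 0.90, "A3": 0.60,   # well-documented group A
    "B1": 0.80, "B2": 0.78, "B3": 0.55,   # equally reliable, thin-file group B
}
print(optimise_policy(applicants))
# {'A1': True, 'A2': True, 'A3': False, 'B1': False, 'B2': False, 'B3': False}
```
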
What then can those who design, build and use AI do to prevent bias and unfairness being built into or learned by the systems they create? The first, and simplest, step is to attract a more diverse workforce to the sector.

If you want to build AIs that do not embed racial or sexual bias, a team that includes more women and people from a range of ethnic backgrounds is essential. First, a more diverse team is more likely to be alert to these issues from the outset. Second, it is better placed to spot the problems in training data, models and algorithms that lead to bias.

Greater diversity isn’t the only thing that will help eliminate bias in AI. Teams can also introduce stricter auditing and testing regimes, both during design and when the AI is being trained on test data. Just as public bodies now routinely carry out impact assessments to see how their decisions and actions will affect minority or marginalised communities, so can AI designers.
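
What might one of those audits look like in practice? A common check, sketched below with invented decisions and group labels, is the "four-fifths rule": no group's selection rate should fall below 80% of the best-off group's.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Pass only groups whose rate is at least 80% of the best group's."""
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical audit data: (group, approved?) pairs from a model's test run.
decisions = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 55 + [("B", 0)] * 45
rates = selection_rates(decisions)
print(rates)                     # {'A': 0.8, 'B': 0.55}
print(four_fifths_check(rates))  # {'A': True, 'B': False} -- group B fails
```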

AI designers can also be deliberate and conscious in their choice of training data, testing the connections and patterns in that data both before and during training, so that unfair and biased outcomes can be spotted and eliminated early.
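
One such pre-training test, sketched below with invented records, measures how strongly each apparently neutral feature correlates with a protected attribute. A strong correlation is a warning that the feature can act as a proxy for the attribute even when the attribute itself is excluded from the model.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented records: postcode_band looks neutral but tracks the protected attribute.
protected     = [0, 0, 0, 0, 1, 1, 1, 1]
postcode_band = [1, 1, 2, 1, 4, 5, 4, 5]
income        = [30, 42, 38, 35, 33, 40, 36, 41]

print(pearson(protected, postcode_band))  # ~0.96: postcode is a likely proxy
print(pearson(protected, income))         # ~0.16: little cause for concern
```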

With the right approach, the right checks and balances, and a conscious decision to aim for AIs that are unbiased by design, it is possible to build systems that make use of the huge volumes of data available today, something only AI can do efficiently, without introducing new injustices into society.

So yes, a robot can be biased. But only if we fail to take the time and care to ensure it doesn’t end up that way.

To learn more about AI design and how you can become an expert in artificial intelligence, enrol on the University of Bath’s Artificial Intelligence online MSc. Taught by leading experts in AI, the course covers how to design and build AI systems, how to analyse their operation and propose novel solutions to the problems of building successful algorithms, and how to create AIs that meet the highest ethical, legal and professional standards.

Find out more about the Artificial Intelligence online MSc at the University of Bath by requesting information.
