Machines don’t add bias, people add bias to machines

This week the news broke that Amazon had scrapped an AI recruiting tool that was trained using Machine Learning.

The intended use case was ‘given a list of 100 resumes, the AI would select the top 5 for hiring’. It was scrapped because it was found to be heavily biased in favour of male applicants over female applicants. The AI was trained on ten years’ worth of applicant data.

Whilst some readers might be surprised that ten years of application data doesn’t paint a gender-neutral picture, I find myself completely unsurprised that Machine Learning didn’t work for this use case.

History is biased too

It’s easy to understand the use case for such an AI application. Humans are often biased. We have, and sometimes rely on, ‘gut feelings’, which makes it difficult to justify our reasoning after the fact.

AI promises something of a utopia because machines should provide an unbiased view. They only care about the facts... the data! Not the font size or the layout that the applicant has chosen. So it’s natural to think that AI would be a perfect fit for scanning 100 CVs and selecting the top 5 in an unbiased fashion.

Machine Learning models, however, build up their intelligence from data alone, so the data itself is quite distinctly a prerequisite. If we want to adopt Machine Learning for an AI that performs hiring tasks, then we have to look to the past and train it on existing hiring decisions… decisions carried out by humans.

Trust in the Machine is gained by trusting the Humans

So we have a Catch-22 situation. We want an unbiased AI, but if we turn to Machine Learning to build it then we require data based upon human decisions made in the past, and we have already admitted that humans are often biased. We are essentially putting our faith back in the humans.

So how did the AI become so gender biased?

A noteworthy point in the article was Reuters’ claim that “it was built on data accumulated from CVs submitted to the firm mostly from males”.

Given the nature of Machine Learning, it becomes far more apparent why the outcome was so gender biased.

The Amazon case is a very interesting one. We can look to statistical learning to understand the data we are feeding into Machine Learning. For example, some might argue that the training data should have been 50% male and 50% female. But think about CVs for a second and all of the other potential biases that might have affected selection for hire in the past: an applicant’s age, the type of language used in the application (as highlighted in the Reuters article), or even the fact that they have a particular hobby or play a particular sport in their spare time. Is it possible for us to filter our data set down to one that is fair across all of those potential dimensions? I fear that this is extremely unlikely for these types of problems.
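To make this concrete, here is a minimal sketch of how one might audit a data set for balance before any training begins. It assumes a hypothetical CSV export of historical applications with illustrative columns gender, age_band and hired (1 = hired, 0 = rejected); none of these names come from the Amazon case.

```python
import pandas as pd

# Hypothetical CSV export of historical hiring decisions.
# Column names are illustrative: gender, age_band, hired (1 = hired, 0 = rejected).
applications = pd.read_csv("historical_applications.csv")

# For each attribute we care about, compare the hire rate across groups.
# A large gap between groups is a warning sign before any training begins.
for attribute in ["gender", "age_band"]:
    hire_rate = applications.groupby(attribute)["hired"].mean()
    print(f"\nHire rate by {attribute}:")
    print(hire_rate)
    print(f"Max gap between groups: {hire_rate.max() - hire_rate.min():.2%}")
```

Even a simple audit like this only surfaces the biases we thought to look for; the ones we never thought to check remain baked into the data.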

So what can we do?

Understanding your data is an important aspect of any decision to use Machine Learning. Its size, shape and spread can provide insight into its potential performance, which helps manage our expectations. Please keep an eye out for a future post on this; in the meantime, a first pass might be as simple as the sketch below.
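As a rough illustration, again assuming the hypothetical applications file from the earlier sketch:

```python
import pandas as pd

# Same hypothetical applications file as the earlier sketch.
applications = pd.read_csv("historical_applications.csv")

# Size and shape: how many examples and features do we actually have?
print(applications.shape)

# Spread: summary statistics expose skew in the numeric columns...
print(applications.describe())

# ...and normalised value counts expose imbalance in the categorical ones.
print(applications["gender"].value_counts(normalize=True))
```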

Prior to even understanding your data, it is important to understand what types of tasks Machine Learning is a best fit for. In the Amazon case, I mentioned at the beginning of this article that I was not surprised that Machine Learning did not work. I am not saying that Machine Learning is not a best-fit solution for an AI that performs hiring tasks; in fact, on the face of it, it seems well suited to the task. What I will categorically say, however, is that Machine Learning is not a best fit for an AI performing tasks in an unbiased way unless it is built upon data sourced from humans in an unbiased way.

Black Pepper recently carried out some data discovery work for Jaguar Land Rover to determine whether Machine Learning was a suitable solution to their business problem.

At ITDF 2018, we presented a distilled, generic process that can help interested parties introduce Machine Learning to their business.
