As humans, we struggle to make decisions without some sort of unconscious bias creeping in, usually shaped by how we were brought up. That’s okay, though, because now we can trust Artificial Intelligence (AI) to make unbiased decisions for us. Can’t we?

Free from bias

AI is taking a bigger and bigger role in our lives, helping make day-to-day tasks a little bit easier. There’s AI like Google’s Duplex that lets you make reservations on the phone, and Amazon’s Alexa that answers all your queries. Then there’s Tesla, which is using AI to drive its cars. And some banks are using it to help with loan and mortgage applications.

We’re now trusting AI to make important, life-changing decisions. One reason for that is the belief that AI can weigh the data alone, without falling prey to any bias. Whether we’re aware of it or not, we’ve all inherited certain biases, whether from our cultural upbringing or our socio-economic class. They influence our decisions and the way we interact with other people. That’s why in San Francisco, AI is being used to remove racial bias from the justice system, and why in Sweden recruiters are using it to stop first impressions from influencing the outcome of a job interview.

But is AI really free from bias? It’s being designed to be more human, to improve our experience of engaging with it. In inheriting our character, is it also inheriting our biases?

Learning bad habits

Conway’s Law states that “organisations which design systems are constrained to produce designs which are copies of the communication structures of these organisations”. Put simply, AI made by humans is going to act like humans, and as a result it’s going to inherit our bad habits.

This isn’t a debate about whether we’re making AI too human; it’s about whether we’re creating it in the right way in the first place. A recent article by Mutale Nkonde, an expert in AI governance, has prompted discussion of whether AI will become the civil rights issue of our time. Nkonde’s argument is that while AI is enabling us to do wonderful things, there’s a bigger problem behind the scenes. Take Google, for example: it lists 893 people working on “machine intelligence”, and only one of them is a black woman. How can a system reflect all of us when it is being programmed by just some of us?

As it becomes more intelligent, AI is going to make more and more of our decisions for us. As it does, we won’t be able to keep up with its thought processes: it can take in far more disparate data than any human could handle. So how will we be able to tell whether it’s basing its decisions on the data alone, without any bias? The thing about computers is that you get back what you put in. That’s why it makes sense that if we want unbiased AI, it should be developed by people who are representative of all of us. Otherwise, could our advances in technology become a step backwards for equality?

Posted by John on 5 July 2019