Can artificial intelligence be unbiased?

On 7 March, a symposium on biases was held at the University of Bern. Gülser Corat, former Director for Gender Equality at UNESCO, argued that the current design of artificial intelligence often promotes bias and explained how that could change.

uniAKTUELL: Mrs. Corat, while you were Director for Gender Equality at UNESCO, you conducted research on gender bias in virtual assistants such as Apple’s Siri or Amazon’s Alexa. What did you find?

Gülser Corat: In 2019, over 90 percent of voice assistants had female voices, names, and personas. Some companies project that, within the next few years, people will have more conversations with digital assistants than with their spouses. And because most voice assistants are presented as female, obedient and obliging, this sends a signal that women are docile and eager-to-please helpers, available at the touch of a button or with a blunt command like “hey”.

How is this related to artificial intelligence (AI)?

This “servility” of the virtual assistants illustrates the gender biases coded into AI products and influences both how people speak to female voices and how women respond to requests and express themselves. To change course, we need to pay much closer attention to how, when and whether AI technologies are gendered and, crucially, who is gendering them.

Where do the biases come from?

New technologies such as AI are produced by us, human beings who live in today’s societies. As such, new technologies, including AI, reflect society and its values, prejudices, and biases. All of these are embedded in the information – or data – that is used to “teach” the AI, as well as in the algorithms programmed to make it fulfil certain tasks.

So, the problem is essentially biased data?

Yes, that is correct. Biased data is a big problem. The most common AI model used today is data-driven machine learning, and this model uses huge amounts of data, most of which comes from the internet. Just one example is gender bias in the data on Wikipedia: only 17 percent of Wikipedia pages about people are about women.
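To make the mechanism concrete: a minimal sketch in Python, not from the interview, showing how a data-driven model that simply learns statistics from a skewed corpus reproduces that skew. The tiny “corpus” and the occupations in it are invented for illustration.

```python
# Minimal sketch (assumption: an invented toy corpus) of how skewed
# training data yields skewed associations in a data-driven model.
from collections import Counter

corpus = [
    "she is a nurse", "she is a nurse", "she is a teacher",
    "he is an engineer", "he is an engineer", "he is a doctor",
    "she is an assistant", "he is a pilot",
]

# Count which pronoun co-occurs with each occupation in the corpus.
pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, occupation = words[0], words[-1]
    pair_counts[(occupation, pronoun)] += 1

# A model that learns P(pronoun | occupation) from this data will
# simply mirror the imbalance in the corpus.
for occupation in ("nurse", "engineer"):
    she = pair_counts[(occupation, "she")]
    he = pair_counts[(occupation, "he")]
    total = she + he
    print(f"{occupation}: P(she)={she/total:.2f}, P(he)={he/total:.2f}")
```

On this toy corpus the model assigns “nurse” entirely to “she” and “engineer” entirely to “he” – not because the world is that way, but because the data it was given is.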

Is there a way to counteract this?

With the think tank «No Bias AI?», which I founded in 2022, we are exploring a different model called “knowledge-based machine reasoning”. The advantage of such machine reasoning systems is that data sets are not the determining element in their training and that they can “understand” a problem and its context. Their results are therefore explainable and justifiable – unlike those of machine learning, which is often characterized as a “black box”.
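To illustrate the contrast: a minimal sketch of the knowledge-based idea, not taken from No Bias AI?’s work. Here knowledge is written down as explicit, auditable rules, and every conclusion carries a human-readable justification. The facts and rules are invented for illustration.

```python
# Minimal sketch (assumption: invented facts and rules) of rule-based
# forward chaining, where every conclusion is explainable.
facts = {"applicant_income_stable", "applicant_has_collateral"}

# Each rule: (conditions, conclusion). The knowledge is explicit.
rules = [
    ({"applicant_income_stable", "applicant_has_collateral"}, "low_risk"),
    ({"low_risk"}, "approve_loan"),
]

explanations = {}

# Forward chaining: apply rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            explanations[conclusion] = conditions
            changed = True

# Unlike a black-box model, the system can justify each output.
for conclusion, conditions in explanations.items():
    print(f"{conclusion} because {sorted(conditions)}")
```

The point of the sketch is the `explanations` trace: because the rules are stated rather than learned as opaque weights, a biased rule can be inspected, challenged, and corrected.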

"New technologies participate in shaping the values we hold. It is critical that we pay attention to how they are developed and implemented."

And beyond the technical approach?

Some of the measures we list in our report «I'd blush if I could» include: stop making digital assistants female by default, develop a neutral ‘machine gender’ for voice assistants, discourage gender-based insults and abusive language, and develop the advanced technical skills of women and girls so they can steer the creation of new technologies alongside men.

Why do you focus on bias?

Because I am most concerned about AI technologies being deployed at scale without a thorough understanding of the consequences and of how these technologies affect people and communities. This relates not only to gender but also to race, ethnicity, religion, socio-economic factors, age, and disability. New technologies also participate in shaping the values we hold. So, it is critical that we pay attention to how they are developed and implemented.


About the person

Gülser Corat

was Director for Gender Equality at UNESCO from 2004 to 2020. In 2019 she published the landmark study “I’d Blush if I Could: Closing Gender Divides in Digital Skills through Education”, which found widespread inadvertent gender bias in the most popular AI virtual assistants. For her follow-up research “Artificial Intelligence and Gender Equality” she was named one of the 100 Most Influential People in Gender Policy in 2021. In 2022, she launched the global think tank No Bias AI? with dual objectives: to reflect on a possible paradigm shift in AI from data-driven machine learning (ML) to knowledge-based machine reasoning (MR) and to develop gender audit tools to address gender bias in ML algorithms and datasets. Gülser Corat is a TED and international keynote speaker and a member of Women Executives on Boards and Extraordinary Women on Boards.
