Salzburg Global Fellow Mutale Nkonde provides insights on tackling algorithmic bias and fostering diversity in AI development
Mutale Nkonde was born in Zambia and grew up in the United Kingdom. Now based in New York City, she runs AI for the People, a nonprofit that works to advance policies that reduce algorithmic bias.
Júlia Escrivà Moreno, Communications Intern, Salzburg Global: How would you explain algorithmic bias to someone on the street?
Mutale Nkonde, CEO of AI for the People: Algorithmic bias is a situation where the racism that people experience in their everyday lives is expressed by machines. For example, there is this idea that all Black people look alike and that we can't be told apart. That's a racist assumption. But facial recognition, an AI system that uses computer vision, also frequently fails to recognize Black faces. It's this idea that in everyday life, people do not want to see the individuality in Black people, and that becomes part of the way the system operates.
JEM: Could there be a future where DEI experts are involved in the creation of AI models or in AI policymaking?
MN: I certainly see that future... There's a very famous paper that was published... called "Annotators with Attitudes"... These scientists found that in the datasets being built to identify toxic speech on social media, Black speech, African American Vernacular English (AAVE), was more often tagged as toxic when it wasn't; it was just normal speech. Saying "such and such was the bomb," which in AAVE means that it's good, was tagged as a threat. "Something is fire," which in AAVE means that something is cool or hip, was tagged as arson, whereas racist speech, such as likening somebody to a monkey, wasn't tagged as toxic... Even in development, these ideas of racial bias are integrated by the people tagging the data.
I think where people of color really come in is, first, doing that research. The people who do that research tend to be the people impacted by it. Second, working with policy groups and research groups like ours to bring that to the fore. That's actually an example we're going to use in a race and AI paper we're writing for the UN Human Rights Council, because they're trying to understand how this happens from a design perspective. How do we create a policy intervention? But it's also something that we can take to the US Congress as well as to companies. I think the people doing this critical work are the people most impacted. Our role is to hire, hopefully, some of those people and empower them to build amazing careers beyond our organization.
JEM: What are some ways that we can make AI models more inclusive? Who should be in charge?
MN: I think everybody needs to do it... I'm often in situations where I'm told Black people don't work in AI, Black women don't work in AI. That's both a racist and sexist assumption. I just don't think enough Black people work [in AI], and I don't think enough women work [in AI]. It's all of our responsibility to make sure that our teams look like the rest of humanity.
JEM: When you look to the future, do you see a more inclusive and less discriminatory AI, or the opposite?
MN: I think we see the AI that we advocate for. If our voices are not in these policy spaces, then we are going to see the AI of the people around the table, who have traditionally been white and older, and who will dismiss inclusive technology as "woke" when an inclusive technology is an effective technology. If you cannot effectively design for the future that we're in, then you're just playing. I also think that we have the future that we create, and we get the future we deserve. In every moment, we should be involved in that. I would like to think that, in some small way, I'm creating a more inclusive future for AI and other technologies.
JEM: Is responsible AI possible?
MN: Responsible AI is a goal. I think if we have capitalistic goals for AI, it can never be responsible. However, there are alternative ways of looking at capitalism. One of the people I really admire is Joseph Stiglitz, the Nobel Prize-winning economist... He always says that there can be a capitalism in which workers are protected, in which non-discrimination is a goal, and in which industry and commerce, in this case AI, can have pro-social applications and can be regulated. He calls that responsible capitalism. I think if we have responsible capitalism, then we can have responsible AI.
JEM: Have the discussions you've been a part of during the program "Creating Futures: Art and AI for Tomorrow's Narratives" given you any further ideas for your work?
MN: For sure. I hosted a discussion looking at protest, confidentiality and privacy, and the storage of video data... The group I was with... made me think about using encrypted networks like Signal, for example in the United States, as a way to transmit information in moments of protest. Those of us who have been involved in the Palestinian encampments (I was part of the first one at Columbia) now have all of this video data that's very politically sensitive, and we can't put it on our university servers because we were protesting at the university. I really appreciated being in a group with somebody who'd been involved in the civil rights protests and had faced very similar questions; I would never have had that conversation otherwise.
JEM: Can you share a takeaway from this week?
MN: I think the questions being raised about AI futures are the questions that we've been raising about humanity. This is a good inflection point to really lean into these issues of representation, diversity, and inclusion, which are all conversations about power. Who has power? Who doesn't have power? Where is that power distributed? It's exciting to be so central to those discussions because I certainly stand for the redistribution of power and the rewriting of history. It's been interesting to do that work at Salzburg Global, because so much of bringing critical people like me into a space that has such a deep history of oppression is also creating a new power and reclaiming that space.
Mutale Nkonde joined around 50 participants, including artists, technologists, futurists, curators, and activists, at Salzburg Global's annual Culture, Arts and Society program in May 2024. "Creating Futures: Art and AI for Tomorrow's Narratives" explored emergent possibilities at the intersection of creative expression, technology, and artificial intelligence.
This article appears in our Shorthand story, which includes more coverage from the program "Creating Futures: Art and AI for Tomorrow's Narratives".