A practical approach for addressing bias in artificial intelligence

November 14, 2022

University of Pennsylvania Professor and U-M alum Desmond Patton says building more ethical technological systems starts with bringing more voices to the table.

A portrait of Desmond Patton with a city skyline as a backdrop
Credit: Columbia University/SAFE Lab

Over the past decade, there’s been no shortage of examples of human biases creeping into artificial intelligence systems. Back in 2020, Robert Williams, a Black Farmington Hills resident, was arrested and jailed after a police facial recognition algorithm wrongfully matched him to security footage of a shoplifting suspect, a reflection of such systems’ well-documented weakness in accurately identifying people with darker skin. In 2019, researchers demonstrated that a software system widely used by hospitals to identify patient risks was favoring white patients for many types of care. And a few years ago, Amazon largely abandoned a system it had been using to screen job applicants after discovering that it consistently favored men over women.

How human biases get baked into AI algorithms is a complicated phenomenon, one we explored with UM-Dearborn Computer Science Assistant Professor Birhanu Eshete and then-UM-Dearborn Associate Professor Marouane Kessentini in a story last year. As noted in that piece, bias doesn’t have a single source, but bias problems are often rooted in the ways AI systems classify and interpret data. The power of most artificial intelligence systems rests in their ability to recognize patterns and put things into categories, and that process typically starts with a training period in which they learn from us. For example, consider the image recognition algorithm that lets you find all the photos of cats on your phone. That system’s intelligence began with a training period in which the algorithm analyzed known photos of cats that were selected by a human. Once the system had seen enough correct examples, it acquired a new ability: it could generalize the features essential to cat-ness and determine whether a photo it had never seen before was a photo of a cat.
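
To make that training step concrete, here’s a minimal, hypothetical sketch in Python. It is not the code behind any real photo app: the two made-up numeric features, the labels and the scikit-learn classifier are all stand-ins chosen for illustration.

```python
# Minimal, hypothetical sketch of supervised training: a generic classifier
# learns "cat vs. not-cat" from human-labeled examples, then generalizes to
# an example it has never seen. Feature vectors stand in for real image data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each photo is summarized by two numeric features (invented here).
# A human supplies the labels: 1 = cat, 0 = not a cat.
cat_photos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
other_photos = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
X_train = np.vstack([cat_photos, other_photos])
y_train = np.array([1] * 100 + [0] * 100)  # the human judgment calls

# Training period: the model looks for patterns that separate the labels.
model = LogisticRegression().fit(X_train, y_train)

# Generalization: classify a photo the model has never seen before.
new_photo = np.array([[1.8, 2.1]])
print("Looks like a cat?", bool(model.predict(new_photo)[0]))
```

Everything the model “knows” about cat-ness comes from the labeled examples a person handed it, which is exactly where human judgment enters the picture.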

The important thing to note about that example is that the algorithm’s intelligence rests on a foundation of human judgment calls. In this case, the key judgment is the initial selection of photos a person decided were of cats, and in that way the machine’s intelligence is embedded with our “bias” for what a cat looks like. Sorting cat photos is innocuous enough, and if the algorithm makes a mistake and thinks your dog looks more like a cat, it’s no big deal. But when you ask AI to take on more complex tasks, especially ones entangled with consequential human concepts like race, sex and gender, the mistakes algorithms make are no longer harmless. If a facial recognition system has questionable accuracy identifying darker-skinned people because it’s been trained mostly on white faces, and somebody ends up wrongfully arrested as a result, it’s obviously a huge problem. That’s why figuring out how to limit bias in artificial intelligence tools, which are now used widely in banking, insurance, healthcare, hiring and law enforcement, is seen as one of the most crucial challenges facing AI engineers today.
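
As a rough illustration of how skewed training data can translate into unequal error rates, here’s a toy Python sketch on synthetic data. It is not a real face recognition model; the group names, features and thresholds are invented, and the only point is that a classifier can look accurate overall while performing much worse on the group it saw least during training.

```python
# Toy sketch with synthetic data (not a real face-recognition system):
# train one classifier on data that over-represents "group A", then
# measure accuracy separately for each group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Invented "examples": five features each; the two groups have slightly
    # different feature distributions and decision rules.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Skewed training set: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, equal-sized test sets for each group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("Group A accuracy:", round(model.score(Xa_test, ya_test), 2))
print("Group B accuracy:", round(model.score(Xb_test, yb_test), 2))
```

An overall accuracy number averaged across both groups would hide most of that gap, which is why evaluating performance separately for each group is a common first check for this kind of bias.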

University of Pennsylvania Professor and U-M School of Social Work alum Desmond Patton has been helping pioneer an interesting approach to tackling AI bias. At his recent lecture in our Thought Leaders speaker series, Patton argued that one of the biggest problems — and one that’s plenty addressable — is that we haven’t had all the relevant voices at the table when these technologies are developed and the key human judgments that shape them are being made. Historically, AI systems have been the domain of tech companies, data scientists and software engineers. And while that community possesses the technical skills needed to create AI systems, it doesn’t typically have the sociological expertise that can help protect systems against bias or call out uses that could harm people. Sociologists, social workers, psychologists, healthcare workers — they’re the experts on people. And since AI’s bias problem is both a technical and a human one, it only makes sense that the human experts and the technology experts should be working together.

Columbia University’s SAFE Lab, which Patton directs, is a fascinating example of what this can look like in practice. The team is trying to create algorithmic systems that can use social media data to identify indicators of psycho-social phenomena like aggression, substance abuse, loss and grief, with the ultimate goal of being able to intervene positively in people’s lives. It’s an extremely complex artificial intelligence problem, so they’re throwing a diverse team at it: social workers, computer scientists, computer vision experts, engineers, psychiatrists, nurses, young people and community members. One of the most interesting things they’re doing is having social workers and local residents qualitatively annotate social media data, so that the programmers building the algorithms work from interpretations grounded in local context. For example, Patton says, one of their programmers once called him with a concern that the system was flagging the N-word as an “aggressive” term. That might be an appropriate classification if they were studying white supremacist groups. But given that their communities of focus are Black and brown neighborhoods in big cities, the word was being used in a different way. Having that kind of contextual knowledge gave them a means to tweak the algorithm and make it better.
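
The snippet below is a hypothetical sketch, not SAFE Lab’s actual pipeline: the example term, labels and data structures are invented for illustration. It just shows one simple way community annotations could override a generic keyword lexicon, so that a term a naive filter would flag as “aggressive” gets re-scored according to how it’s actually used locally.

```python
# Hypothetical sketch (not SAFE Lab's real system): community annotations
# take precedence over a generic keyword lexicon when labeling terms.
from dataclasses import dataclass

# A naive, off-the-shelf style lexicon that flags terms without any context.
GENERIC_LEXICON = {"strapped": "aggression", "beef": "aggression"}

@dataclass
class Annotation:
    term: str
    context: str    # e.g., "colloquial/in-group", "threat", "grief"
    label: str      # the label community annotators assign
    annotator: str  # who provided the judgment

# Illustrative judgments supplied by social workers and local residents.
community_annotations = [
    Annotation("strapped", "colloquial: short on money", "neutral", "community RA"),
]

def label_term(term: str) -> str:
    """Prefer community annotations; fall back to the generic lexicon."""
    for ann in community_annotations:
        if ann.term == term:
            return ann.label
    return GENERIC_LEXICON.get(term, "neutral")

print(label_term("strapped"))  # "neutral", not "aggression"
print(label_term("beef"))      # still "aggression" until someone annotates it
```

A production system would presumably also feed those annotations back into model training, but the design idea is the same one Patton describes: the people closest to the context supply the labels.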

Patton says SAFE Lab’s work is also drawing on the hyper-local expertise of community members. “The difference in how we approach this work has been situated in who we name as domain experts,” Patton said. “We [hire] young Black and brown youth from Chicago and New York City as research assistants in the lab, and we pay them like we pay graduate students. They spend time helping us translate and interpret context. For example, street names and institutions have different meanings depending on context. You can’t just look at a street on the South Side of Chicago and be like, ‘that’s just a street.’ That street can also be an invisible boundary between two rival gangs or cliques. We wouldn’t know that unless we talked to folks.”

Patton thinks approaches like this could fundamentally transform artificial intelligence for the better. He also sees today as a pivotal moment of opportunity in AI’s history. If the internet as we know it does morph into something resembling the metaverse — an encompassing virtual reality-based space for work and social life — then we have a chance to learn from past mistakes and create an environment that’s more useful, equitable and joyful. But doing so will mean no longer seeing our technologies strictly as technical, but as human creations that require input from a fuller spectrum of humanity. It’ll mean universities training programmers to think like sociologists in addition to being great coders. It’ll take police departments and social workers finding meaningful ways to collaborate. And we’ll have to create more opportunities for community members to work alongside academic experts like Patton and his SAFE Lab team. “I think social work allows us to have a framework for how we can ask questions to begin processes for building ethical technical systems,” Patton says. “We need hyper-inclusive involvement of all community members — disrupting who gets to be at the table, who’s being educated, and how they’re being educated, if we’re actually going to fight bias.”

###

Want to dig deeper into this topic? Check out another recent installment in our Thought Leaders speaker series, where U-M Professor Scott Page explains why diverse teams outperform teams of like-minded experts.