It seems to be an unbreakable human habit to unleash technologies into the world and ponder the consequences later. The most recent dramatic example of this phenomenon is the artificial intelligence boom. For years, critics — even many who believe the technology can do a lot of good in the world — have been sounding the alarm that AI applications are creating thorny safety and fairness problems. Autonomous vehicles have injured and killed pedestrians. Algorithms used in healthcare, banking and law enforcement encode human racial biases. More recently, companies behind generative AI models like ChatGPT have apparently developed their models using unfathomable amounts of copyrighted material — without bothering to ask content creators.
AI’s reach into our lives is bound to get broader, and Associate Professor of Computer and Information Science Birhanu Eshete wants his students to be prepared — not only to create powerful problem-solving technologies, but to reckon with their human implications. That’s the mission behind a new trustworthy AI course that Eshete created and launched in Winter 2024, which blends a project-based curriculum focused on specific technical AI applications with explorations of the technology's social, political and economic contexts.
With a two-pronged approach and a sprawling topic, it wasn’t the easiest course to design. Eshete says trustworthy AI is a label that encompasses a diverse set of issues, from security, privacy and safety to transparency, ethics, fairness and regulation. Any one of those seven topic areas he used to organize his class could be a course on its own, he says. “But the value of an overview course is that students get introduced to a pretty comprehensive set of issues — and then they can go deeper into anything that interests them,” he says. Plus, by reinforcing that almost anything AI touches has human implications, he can drive home to a bunch of future AI professionals that AI is hardly just a technical discipline.
For example, in the ethics section of the course, the class pondered looming real-world applications of AI like autonomous trucking. Eshete says proponents of long-haul autonomous trucks typically advance a safety argument — namely, that highway accidents involving trucks are often catastrophic and the result of human error. If the job could be done more precisely by machines, then we could reduce the number of injuries and fatalities. “So it might sound ethically right to make trucks autonomous, because we could save lives,” Eshete explains. “But on the flip side, you’re going to push thousands of truck drivers out of a job, and they and their families will suffer. In fact, the World Economic Forum estimated that roughly 75% of jobs have the potential to be replaced by robots and artificial intelligence. Imagine the disruptions to society. So workforce replacement is not just a technology issue. It’s a human issue.”
You can also see the complexity and variety of the trustworthy AI space in the range of topics students chose for their final projects. One team analyzed skin cancer detection algorithms for racial and gender bias and explored ways to enhance fairness while protecting privacy. Another group explored techniques for reducing the error rate of stop sign detection algorithms used in autonomous vehicles. A third team investigated techniques for familiarizing a model with adversarial inputs designed to fool it. And the final group of students explored how to “poison” databases of customer reviews, a technique attackers use to manipulate consumer sentiment about products.
Doctoral student Firas Ben Hmida, who’s also a research assistant in Eshete’s lab, says the trustworthy AI topic has been a real eye-opener for him. “When I started at my engineering college back home in Tunisia, I started off in cybersecurity, and my interest, probably like everybody else’s, was advanced attacks and how to defend against them,” Ben Hmida says. “But then, as I started working with Professor Eshete, you see that there is this whole other side to cybersecurity and AI. Outside the course, even several of the projects we’re working on in the lab deal with trustworthy AI, so I’m even thinking my PhD is now going to be about trustworthy AI.”
Eshete heard that kind of feedback pretty consistently from students. “I got a lot of comments like, ‘It allowed me to look at AI beyond neural networks,’” Eshete says. “That’s great to hear because you don’t know how students are going to react when you spend half your lecture in an AI course talking about Socrates, Plato and Aristotle and ethical philosophy.” In fact, Eshete says the feedback was the most positive of any course he teaches. Specifically, he says students appreciated that the overview format allowed them to explore many topics in a single course. They also liked the project-based format, which eliminated the need for exams and quizzes.
Eshete plans to teach the class on campus every winter semester, with the exception of Winter 2025, when he’ll be on sabbatical. He’ll still be teaching the trustworthy AI course, though — just at Addis Ababa University, his alma mater in Ethiopia, where he'll be spending part of his leave. “I’m actually really looking forward to teaching the course in two different countries, because any of these AI trustworthiness issues could mean many different things depending on the context,” he says. “For instance, if you’re thinking about fairness and race here in the U.S., we’re accustomed to thinking about things in terms of Black and white. But in Ethiopia, everybody is Black. So the context of race is slightly different, and we think more about different ethnic groups. So it will be interesting to see how students respond to thinking about these issues for AI applications in high-stakes contexts.”
###
Story by Lou Blouin