At the crossroads of technology, trust and human emotion

November 13, 2023

Assistant Professor Areen Alsaid explains why it’s not enough for technologies like AI and autonomous vehicles to function well. We also have to trust them.

Computer Science senior Caleb Nickens, left, assists Industrial and Manufacturing Systems Engineering Assistant Professor Areen Alsaid with her ongoing study of autonomous vehicle driving and human emotion. Photo/Kathryn Bourlier

Even as driverless taxis roam the streets of San Francisco, there’s still good reason to believe that autonomous vehicles may not be on the fast track to becoming a regular part of our lives. Many of the remaining hurdles are technological. But Assistant Professor of Industrial and Manufacturing Systems Engineering Areen Alsaid thinks an even more fundamental challenge is whether humans ultimately feel like they can trust AVs. Indeed, surveys show that a majority of Americans still have their doubts. A 2022 survey by the Pew Research Center revealed roughly two in three Americans would turn down the chance to ride in a driverless vehicle, with safety concerns being a major reason. About three in four said they were either unsure about or disagreed with the idea that widespread AV use would be good for society. 

Trepidation over new technologies is nothing unusual, and it’s possible, if not likely, that attitudes will change. But Alsaid says that with technologies that have the potential to do harm, like AI and AVs, we shouldn’t assume people will naturally get past their trust issues. Distrust can just as easily deepen, which makes first impressions important. “This is a technology that has the potential to make the planet greener, to provide mobility to people who cannot drive anywhere and make transportation a whole lot more equitable. But if we fail to address these concerns early on in the design process and later in the implementation stages, we might end up in a state of what we call ‘technology disuse,’” Alsaid says. She points to the recent decline in use of some social media platforms, like Facebook and Twitter, after those platforms started to lose users’ trust. Likewise, it will be worth watching whether the less-than-perfect rollout of AV taxis in San Francisco affects the technology’s longer-term prospects.

Building trust has become a high-stakes domain as companies unveil more automated technologies, particularly ones like AVs and ChatGPT that have humanlike abilities. Alsaid thinks successful trust-building would be well served by developing much more robust tools for understanding and measuring trust. Typically, researchers rely on tools like surveys, and Alsaid says asking people how they feel about technology is indispensable, especially as a means of validating other techniques. But subjective user data has its limitations. Getting good insight from a survey, after all, depends on people having accurate insight into why they feel a certain way — and being able to clearly articulate it. For many, trust boils down to a gut feeling built on an amalgam of experiences, and that can be hard to describe. So researchers like Alsaid are increasingly incorporating other approaches to reveal what helps humans learn to trust new innovations.

One of Alsaid’s central theses is that trust and comfort are tightly intertwined: When we’re comfortable, both physically and emotionally, it contributes to developing trust. If we’re uncomfortable, it can lead to feelings of distrust. One of the benefits of bringing comfort into the equation is that comfort, unlike trust, is typically more visible to researchers. Physiological cues like heart rate variability or facial expressions can be giveaways to our emotional states, which is why things like eye tracking technology and heart rate monitoring have become go-to tools for detecting fatigue and distracted driving.

But Alsaid says more sophisticated approaches are necessary. “For example, with regard to driving behavior, when people start to drive poorly, that was always assumed to be a good indicator that someone is distracted or fatigued,” Alsaid says. “But some research shows that people who are cognitively distracted — they’re mind wandering or thinking about something other than driving — they actually tend to be better lane keepers and maintain better speed than those who aren’t distracted.” Alsaid says user states are also highly context dependent: A higher heart rate could signal anxiousness, but it could also be the case that “someone just had an extra cup of coffee that morning.” Bottom line: If we can feel happy without smiling, appear anxious while actually feeling fine, or be totally zoned out but still come off like focused drivers, we probably need more nuanced tools for assessing user emotional states.
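To make the context problem concrete, consider heart rate variability itself. A minimal Python sketch like the one below (our illustration, not code from Alsaid’s lab; the readings are invented) computes RMSSD, a standard variability statistic, and shows why the raw number alone can’t separate an anxious driver from a merely caffeinated one.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between beat-to-beat
    (RR) intervals, a common heart rate variability statistic.
    Lower values are often read as higher stress or arousal."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Two hypothetical drivers with nearly identical readings (RR in ms).
anxious_driver = [810, 790, 805, 795, 800, 788]
caffeinated_driver = [812, 791, 803, 797, 799, 790]

for label, rr in [("anxious", anxious_driver), ("caffeinated", caffeinated_driver)]:
    print(f"{label}: RMSSD = {rmssd(rr):.1f} ms")

# Both drivers produce nearly the same number: without context
# (time of day, caffeine, traffic), the physiological signal by
# itself cannot tell us which emotional state we are observing.
```

The ambiguity is the point: the measurement is easy, and the interpretation is where the research lives.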

It’s still early days, but Alsaid is working on new approaches that could support more robust emotion-detection systems. In one study, she explored the potential of machine learning, a form of artificial intelligence that’s adept at finding patterns, to analyze facial expressions from people immersed in driving simulators. That work yielded a new web tool for efficiently annotating video frames with corresponding human emotional states — a task that has often been a practical bottleneck in building better user-state estimation tools. Her latest study uses generative AI to create hundreds of images of different driving contexts; participants react to the images and, importantly, tell researchers why the images elicit those feelings. It’s a first stab at building a tool that could better capture the individualized context of user emotional states, which Alsaid says is vital to getting highly accurate data. “If the ultimate goal is to be able to monitor a driver’s state in real time, so we can have systems that respond to those needs and build more comfort and, in turn, trust, we will need tools that can make sense not only of physiological cues, but also the context in which they are happening,” Alsaid says. “We’re still a long way from being able to do that, but it’s an exciting challenge.”
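To see why annotation is such a bottleneck, it helps to look at the shape of the pipeline. The sketch below is purely illustrative — it is not Alsaid’s tool, and the file names are placeholders — but it shows the loop any such system has to support: sample frames from simulator video, save them, and leave a label column for a human annotator to fill in before a facial-expression model can be trained.

```python
import csv
import cv2  # OpenCV, assumed installed via `pip install opencv-python`

VIDEO_PATH = "simulator_session.mp4"  # placeholder file name
SAMPLE_EVERY_N_FRAMES = 30            # roughly one frame per second at 30 fps

def sample_frames(video_path, step):
    """Yield (frame_index, image) pairs sampled from the video.
    Each sampled frame still needs a human-assigned emotion label,
    which is the slow step that annotation tools try to speed up."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield index, frame
        index += 1
    cap.release()

# Write sampled frames to disk and record them in a label sheet that
# an annotator (or a labeling web tool) fills in afterward.
with open("labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame_file", "emotion_label"])
    for idx, frame in sample_frames(VIDEO_PATH, SAMPLE_EVERY_N_FRAMES):
        frame_file = f"frame_{idx:06d}.png"
        cv2.imwrite(frame_file, frame)
        writer.writerow([frame_file, ""])  # label left blank for a human
```

An hour of 30 fps simulator video is more than 100,000 frames; even sampling one per second leaves thousands for a person to label, which is exactly the drudgery a well-designed annotation interface can cut down.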

Even if researchers like Alsaid crack that code, she says, we’d still have our work cut out for us. Creating an empathetic AV that can accurately interpret human emotional states, while interesting, isn’t an end in itself. The ultimate goal would be to use that intelligence to inform a response from the AV that enhances the experience in some way, especially regarding safety. For example, before we have fully autonomous vehicles, we may have Level 3 and 4 vehicles that require intervention from the driver in certain scenarios. That means the car would not only have to interpret the outside world and detect when we’ve totally zoned out, but also prompt us to re-engage after long stretches of not paying attention to the road. The design of such prompts could be very consequential. Alsaid says you’d need a system that, among other things, gives a user enough time to refocus on the task at hand, doesn’t cry wolf and isn’t annoying or distracting. “So even if we know a person is in a certain state, we still don’t always know what to do about it!” Alsaid says. “We know that many people find alerts annoying and people begin to ignore them.” That’s probably not something we can afford to get wrong if we expect the machines to one day earn our trust.
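Those constraints — enough time to refocus, no crying wolf, no nagging — translate naturally into alerting logic. Here is a hedged sketch (our illustration, not any production AV system; every threshold is invented) of a take-over alert policy that requires sustained evidence of disengagement before it chimes, escalates only if the lapse continues, and backs off after the driver responds.

```python
from dataclasses import dataclass

@dataclass
class AlertPolicy:
    """Toy take-over alert policy for a Level 3-style vehicle.
    All thresholds are invented for illustration only."""
    confirm_ticks: int = 5    # consecutive 'disengaged' readings before a chime
    escalate_ticks: int = 15  # consecutive readings before an urgent alert
    cooldown_ticks: int = 30  # quiet period after the driver re-engages
    _streak: int = 0          # current run of 'disengaged' readings
    _cooldown: int = 0        # ticks of enforced silence remaining

    def update(self, driver_disengaged: bool) -> str:
        """Called once per monitoring tick; returns the action to take."""
        if self._cooldown > 0:        # don't nag right after a response
            self._cooldown -= 1
            return "silent"
        if not driver_disengaged:     # attentive: reset, back off if we alerted
            if self._streak >= self.confirm_ticks:
                self._cooldown = self.cooldown_ticks
            self._streak = 0
            return "silent"
        self._streak += 1             # demand sustained evidence, not one blip
        if self._streak == self.escalate_ticks:
            return "urgent_takeover_request"
        if self._streak == self.confirm_ticks:
            return "gentle_chime"
        return "silent"

# A three-tick glance away stays silent; a long lapse chimes, then escalates.
policy = AlertPolicy()
readings = [True] * 3 + [False] + [True] * 20
for t, disengaged in enumerate(readings):
    action = policy.update(disengaged)
    if action != "silent":
        print(f"tick {t}: {action}")
```

The design choice worth noting is the hysteresis: a single glance away never triggers anything, and each alert fires once per lapse rather than repeating every tick — a small concession to Alsaid’s point that alarms people learn to ignore are worse than no alarms at all.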

###

Want to learn more about Assistant Professor of Industrial and Manufacturing Systems Engineering Areen Alsaid’s work? Check out the website for her Safe, Empathetic, & Trustworthy Technologies Lab.

Story by Lou Blouin