Designing autonomous vehicles to be pedestrian friendly

February 10, 2021

For autonomous vehicles to become mainstream, we’ll have to train them to play nice with everyone who uses the road.

Pedestrians cross safely at a crosswalk in front of an autonomous vehicle, which is using a message board to signal that it is safe to cross.

If you read the first piece in our series on autonomous vehicles (AVs), you know that one of the most intriguing questions surrounding the future of AVs is exactly why we need them in the first place. The promise of improved safety is often a primary argument: If we can make cars that drive more safely than humans, then we could save lives. But Associate Professor Shan Bao, who’s a specialist in human factors, says it’s important to remember that safety includes more than just driver safety. “If you look at the past five years of national U.S. accident data, pedestrian fatalities have been steadily increasing, and 2019 was the deadliest year yet,” Bao says. “It’s clear pedestrians are becoming more and more vulnerable, so both manufacturers and government agencies are now very interested to see if AVs can be part of the solution.”

Creating pedestrian-safe, let alone pedestrian-friendly, autonomous vehicles is still very much a work in progress. Bao says current sensor accuracy is still limited when it comes to detecting smaller objects like pedestrians or bicycles. And driverless car systems capable of navigating the hundreds of complicated real-world scenarios that unpredictably break the rules, like jaywalking, are still a long way off. Interestingly, though, solving for so-called “edge cases” can’t simply be approached as a technology problem, according to Bao. If the goal is to create AVs that are at least as safe for pedestrians as human-driven cars, we first must have a very thorough understanding of how drivers currently interact with pedestrians. Otherwise, the training we provide AVs about what conditions to be prepared for is destined to be incomplete, and bad things are likely to happen.

A headshot of Associate Professor Shan Bao

Bao is currently immersed in such an effort, with hopes that it can provide something like an “edge case library” of future safety testing scenarios for AVs. One of the most interesting aspects of that work is a deep analysis of studies on “naturalistic driving behavior,” which offers insight into how human drivers and pedestrians typically navigate situations when they’re competing for the same space. Bao is indexing hundreds of such scenarios, but even considering a few reveals the complexity of the challenge. “If you’ve ever been to New York City, you know that pedestrians don’t obey the crosswalk signals,” Bao explains. “So even if the traffic light is green, a car won’t be able to go because there are pedestrians in the way. A human driver knows to safely nudge their way out toward the intersection — to indicate their intentions to pedestrians. But an AV today, it would basically detect pedestrians and refuse to move. It’d be stuck there forever.”

This kind of subtle communication and negotiation happens all the time between pedestrians and human drivers. We use eye contact and hand gestures to signal who gets the right-of-way at neighborhood intersections. If you’re driving on a residential street and notice a kid playing in a driveway, you intuitively know to slow down in case their ball unexpectedly rolls out into the street. If you see someone approaching a crosswalk with their eyes glued to their phone, you’re a little more prepared to stop. Since the deployment of test AVs in cities, researchers even have their eye on a new kind of edge case: pedestrians who, trusting that AVs will defer to them, feel empowered to break the rules even more.

It’s a huge challenge to code cars that can mimic all the little techniques we’ve mastered to safely share the road. But with help from researchers like Bao, who are helping account for more and more of them, developers can slowly begin chipping away at solutions. For example, Bao says researchers in Europe are now experimenting with driverless cars that use LED lights or audible commands to indicate to pedestrians that it’s safe to cross in front of them. And programmers are increasingly aware that their algorithms must account for the fact that humans come in all shapes and sizes, have different physical abilities that impact crossing speeds, and may sometimes be getting around with assistive technologies like wheelchairs.

Interestingly, a new phase of Bao’s work is applying this kind of anthropological lens to the current AI-powered technology. Her team recently got their hands on data from the MCity shuttle project — a low-speed Level 4 autonomous shuttle that operated through 2019, moving people and interacting with pedestrians in a real-world environment on UM-Ann Arbor’s North Campus. Equipped with multiple optical video sensors, the shuttle provides hundreds of hours of video footage that Bao can use to observe how it behaves in the presence of pedestrians. On the whole, she says the shuttle, which still had a human safety monitor onboard, does a pretty great job: Its systems for detecting pedestrians are about 95 percent accurate, a figure she says is no doubt aided by a low-density environment and a 25 m.p.h. max speed. But she’s already noticing some areas for improvement. For example, sometimes the shuttle appears to react to pedestrians it has no conflict with, such as a person who has already finished crossing. Even so, many folks would argue it’s far better to have an AV that’s overly cautious than the other way around.

In fact, Bao thinks that’s a safety ethos that is likely to shape the AV landscape for years to come. “With currently available features, like crash warning systems, you get false positives pretty frequently,” Bao explains. “But drivers have learned to tolerate them because the expectation is if it’s the real thing, the warning can save your life. I think it will be the same thing as AVs develop. The technology will start out overly safe, overly conservative, and gradually get better and better, step by step. And I think you could say we are still taking our first steps.”

In many ways, it’s human factors, not technological ones, that are causing the long development timeline. “Machines follow rules. Humans are complicated,” Bao says. And as long as we keep up our unpredictable, rule-bending habits, the machines face an uphill climb to keep up.


Story by Lou Blouin. This is the second story in a series exploring the future of autonomous vehicles. For more on this topic, check out the rest: "Why your first driverless car is decades, not years, away," “How we’ll ultimately learn to trust autonomous vehicles,” and "Building hack-resistant driverless cars." If you're a member of the media and would like to contact Associate Professor Shan Bao for an interview, drop us a line at [email protected].