How an Autonomous Vehicle’s “Perception Module” Killed a Pedestrian

 

Last week, the National Transportation Safety Board released its preliminary crash report on the pedestrian fatality caused by an autonomous vehicle (AV) in Tempe, Arizona, this past March.

Trust The Economist to wade right into the muddy waters. Since the report has received little coverage in the rest of the media, we’ll join the fray.

The NTSB confirmed what had previously been reported: the AV’s emergency braking system had been disabled. But why?

There are three computer systems that run the autonomous vehicle.

The first is the “perception” system, which identifies nearby objects. The second is the “prediction” module, which games out how those identified objects might behave relative to the autonomous vehicle.

The third module acts on the object movements forecast by the prediction module. Also called the “driving policy”, this system controls the speed of the car and turns the vehicle as required.
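
To make the division of labour concrete, here is a minimal Python sketch of how such a three-stage pipeline might fit together. Every type, function name and threshold below is hypothetical, chosen for illustration rather than taken from any real self-driving stack.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative only: hypothetical types and interfaces,
# not the internals of any actual autonomous-vehicle software.

@dataclass
class DetectedObject:
    label: str                        # e.g. "pedestrian", "bicycle", "vehicle", "unknown"
    position_m: Tuple[float, float]   # (ahead, lateral) relative to the car, in metres
    confidence: float                 # classifier confidence, 0.0 to 1.0

@dataclass
class PredictedPath:
    obj: DetectedObject
    will_cross_our_lane: bool         # simplified stand-in for a full trajectory

def perception(sensor_frame: dict) -> List[DetectedObject]:
    """Module 1: identify nearby objects from raw sensor data."""
    # A real system fuses camera, lidar and radar; here we pretend
    # the frame already contains labelled detections.
    return [DetectedObject(**d) for d in sensor_frame["detections"]]

def prediction(objects: List[DetectedObject]) -> List[PredictedPath]:
    """Module 2: estimate how each object will move relative to the car."""
    # Toy heuristic: anything within 3 m of our lane centre is assumed to cross it.
    return [
        PredictedPath(obj, will_cross_our_lane=(abs(obj.position_m[1]) < 3.0))
        for obj in objects
    ]

def driving_policy(paths: List[PredictedPath], speed_mps: float) -> dict:
    """Module 3: choose a speed command given the predictions."""
    if any(p.will_cross_our_lane for p in paths):
        return {"brake": True, "target_speed_mps": 0.0}
    return {"brake": False, "target_speed_mps": speed_mps}

# One pass through the pipeline on a made-up sensor frame.
frame = {"detections": [
    {"label": "bicycle", "position_m": (25.0, 1.5), "confidence": 0.62},
]}
print(driving_policy(prediction(perception(frame)), speed_mps=17.0))
# -> {'brake': True, 'target_speed_mps': 0.0}
```

The point of the structure is that the driving policy only ever sees what perception and prediction hand to it, so a mistake in the first module propagates all the way to the brakes.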

It’s no surprise that the perception module is the most challenging to program, but it is also the one required to ensure that everyone can use the road surface safely. Sebastian Thrun of Stanford University recalls that in the Google AV project’s infancy, “our perception module could not distinguish a plastic bag from a flying child.”

And that may be what happened to the pedestrian killed while walking a bicycle across the street in Arizona. Although her movement was detected by the perception module a full six seconds before the fatal crash, it “classified her as an unknown object, then as a vehicle and finally as a bicycle, whose path it could not predict.”

And here is the sad — and scary — part: “Just 1.3 seconds before impact, the self-driving system realised that emergency braking was needed. But the car’s built-in emergency braking system had been disabled, to prevent conflict with the self-driving system; instead a human safety operator in the vehicle is expected to brake when needed.”

“But the safety operator, who had been looking down at the self-driving system’s display screen, failed to brake in time. Ms Herzberg was hit by the vehicle and subsequently died of her injuries.”

Because random braking can cause problems of its own, such as being rear-ended by other drivers, an AV does not slow down whenever its perception system gets confused. That is why there are human safety drivers: to “troubleshoot” when the car cannot make the right choice.
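
A rough sketch of that design trade-off, again purely illustrative: the confidence threshold, timings and function name below are invented, not the logic of the vehicle involved. In a system built along these lines, automatic braking fires only on a confident detection; an uncertain one is left to the safety driver.

```python
def plan_braking(detection_confidence: float,
                 time_to_impact_s: float,
                 confidence_threshold: float = 0.8) -> str:
    """Hypothetical decision gate illustrating the trade-off described above."""
    if time_to_impact_s < 2.0 and detection_confidence >= confidence_threshold:
        # A confident, imminent detection triggers automatic braking
        # in this made-up design.
        return "AUTOMATIC_EMERGENCY_BRAKE"
    if time_to_impact_s < 2.0:
        # An uncertain detection is not acted on automatically, to avoid
        # "random braking"; the human safety operator is expected to intervene.
        return "ALERT_HUMAN_OPERATOR"
    return "CONTINUE"

# A low-confidence detection 1.3 seconds before impact is handed to the human.
print(plan_braking(detection_confidence=0.55, time_to_impact_s=1.3))
# -> 'ALERT_HUMAN_OPERATOR'
```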

The problem is that humans are fallible and do not pay attention all the time. While AVs will be safer than today’s vehicles, whose “accidents” (really crashes) are caused by human error 94 per cent of the time, it is the fine-tuning of the prediction module that will increase consumer confidence in their ability to keep other road users safe too.

As Amnon Shashua, senior vice-president of Intel Corporation, states: “Society expects autonomous vehicles to be held to a higher standard than human drivers.”

That means zero road deaths, and zero deaths of vulnerable road users in particular. This crash needs to be examined carefully to ensure it never happens again.


Photo: TheInternetofBusiness