The ethical puzzle behind autonomous vehicles

March 20th
(Waymo)

This post is sponsored by Brilliant, which offers courses like Logic, The Joy of Problem Solving, and Probability Fundamentals, as well as foundational courses like Statistics Fundamentals, Programming with Python, and Algorithm Fundamentals. Sign up for Brilliant through Diagram and get 20% off today.

Self-driving vehicles are, in theory, safer than human drivers, but the ethics of deploying them are far more complicated.

The argument for autonomous vehicles is simple: vehicle-related accidents are among the leading causes of death in the US, killing more than 30,000 people a year. Self-driving vehicles should lower that number, because they make far fewer errors than human drivers -- they don't get sleepy or distracted, and they don't text behind the wheel.

The technology's most ambitious proponents argue that even marginally safer autonomous vehicles would save lives, and that companies should be encouraged to deploy them at scale in the near term. "Every day we delay, we’re literally killing people," Gary Shapiro, CEO of the Consumer Technology Association, said recently.

The utilitarian argument for deploying marginal improvements isn't that simple, though. Even if the total number of deaths and injuries goes down, complicated questions remain: how will a driving AI make decisions in a crisis, and who will be responsible for the accidents that do happen?

Decision making

Since autonomous vehicle technology became an investment focus for large tech and car companies, writers and developers have argued over whether a philosophical thought experiment known as the trolley problem applies to it.

The trolley problem goes like this: you're standing next to a lever by a track while a trolley speeds toward a group of people. Pulling the lever diverts the trolley to another track, where it will kill one person. If you do nothing, the trolley kills the group; if you pull the lever, it kills only one.

Many think the problem is unrealistic, but its question is very real: when a self-driving car inevitably crashes, how will it respond, and who will it choose to save?

In one survey, 90% of respondents said they would send the trolley down the second track, killing one to save the larger group. Researchers have elaborated on the original dilemma to pose harder versions: what if, to save the group, you had to push a bystander onto the track? Would you be justified in killing an innocent person to save the rest?

It's not hard to imagine an autonomous vehicle facing a similar problem: barreling toward a group at a crosswalk (or toward someone especially vulnerable), it may need to consider swerving onto the sidewalk, where a healthy, middle-aged bystander is minding his own business.

The problem gets even more complicated when the driver's safety is involved. Given the choice between saving oneself or someone younger, it's not clear who the average person would choose.

"A move toward autonomous vehicles means we must determine some standard of value for human life and program it into our vehicles in anticipation of future tragedies," Benjamin R. Dierker wrote in fee.org.

In 2018, MIT researchers published the results of Moral Machine, an online quiz that asked people to think through various trolley-problem scenarios involving an autonomous vehicle's passengers and bystanders. The quiz posed the question in a variety of ways, changing the bystanders from elderly people to doctors and children. In some cases, the choice is between a single driver and a handful of people.

Based on the responses of the 2 million people who took the quiz, the researchers reported:

The strongest preferences are observed for sparing humans over animals, sparing more lives, and sparing young lives. Accordingly, these three preferences may be considered essential building blocks for machine ethics, or at least essential topics to be considered by policymakers.
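
To see what "programming a standard of value" might even look like, here is a minimal, entirely hypothetical Python sketch that scores possible crash outcomes using the three preferences reported above. Every field, weight, and scenario in it is invented for illustration; no real autonomous-vehicle system is known to work this way.

```python
# A purely illustrative toy: score candidate crash outcomes using the three
# preferences the Moral Machine study reported (spare humans over animals,
# spare more lives, spare the young). All numbers below are made up.

from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible result of an unavoidable crash (hypothetical)."""
    humans_spared: int
    animals_spared: int
    avg_age_of_spared: float  # in years; lower means younger lives spared


def preference_score(o: Outcome) -> float:
    # The weights are arbitrary. Choosing them is exactly the ethical problem
    # the article describes -- it is not an engineering detail.
    HUMAN_WEIGHT = 10.0
    ANIMAL_WEIGHT = 1.0
    YOUTH_BONUS = 0.05  # small bonus per year below an arbitrary reference age

    score = HUMAN_WEIGHT * o.humans_spared + ANIMAL_WEIGHT * o.animals_spared
    score += YOUTH_BONUS * max(0.0, 80.0 - o.avg_age_of_spared) * o.humans_spared
    return score


# Two hypothetical outcomes: swerve (spares three young pedestrians) versus
# stay the course (spares one middle-aged passenger and a dog).
swerve = Outcome(humans_spared=3, animals_spared=0, avg_age_of_spared=30.0)
stay = Outcome(humans_spared=1, animals_spared=1, avg_age_of_spared=45.0)

best = max([swerve, stay], key=preference_score)
print("Preferred outcome:", best)
```

The point of the toy is that the hard part isn't the code; it's deciding what the weights should be, and who gets to decide them.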

Responsibility

Self-driving vehicles need to be trained to make preferential decisions, whatever those may ultimately be. The question, then, is what the correct decisions are, and who will ultimately be responsible for them.

In 2018, a semi-autonomous test vehicle operated by Uber struck and killed a 49-year-old woman. The car wasn't entirely autonomous -- it had a safety driver hired to intervene and take manual control in an emergency. At the time of the pedestrian's death, the safety driver was streaming video on her phone.

Investigators later learned that the car detected the pedestrian six seconds before hitting her, but didn't brake until 1.3 seconds before impact.
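
For a sense of scale, here is a rough back-of-the-envelope sketch. The article doesn't give the car's speed, so the 40 mph figure and the braking deceleration below are assumptions for illustration only, not numbers from the investigation.

```python
# Back-of-the-envelope arithmetic (assumed figures, not from the report):
# how far a car travels in the 6 seconds after detection versus the
# 1.3 seconds it actually left itself to brake.

ASSUMED_SPEED_MPH = 40.0                 # assumed urban speed
speed_m_s = ASSUMED_SPEED_MPH * 0.44704  # convert mph to meters per second

DECELERATION = 7.0  # m/s^2, a rough value for hard braking on dry pavement

distance_in_6s = speed_m_s * 6.0
distance_in_1p3s = speed_m_s * 1.3
stopping_distance = speed_m_s ** 2 / (2 * DECELERATION)

print(f"Travel in 6.0 s:        {distance_in_6s:.0f} m")
print(f"Travel in 1.3 s:        {distance_in_1p3s:.0f} m")
print(f"Distance needed to stop: {stopping_distance:.0f} m")
```

Under those assumptions the car covers roughly 100 meters between detection and impact but needs only about a quarter of that to stop; waiting until 1.3 seconds before impact leaves barely the distance required.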

Whose fault was it? The safety driver, the software developers, or the victim? The National Transportation Safety Board laid blame on all parties, including the state of Arizona for allowing the vehicle on the road. The safety driver did not go to prison, and Uber settled out of court with the victim's family. In March 2020, Uber resumed testing its autonomous vehicles in San Francisco.

"Our testing area will be limited in scope to start, but we look forward to scaling up our efforts in the months ahead and learning from the difficult but informative road conditions that the Bay Area has to offer," a spokesperson said.

If you'd like to explore these ethics yourself, try playing MIT's Moral Machine game.