Last September, Uber, the app-based ride service, rolled out a small fleet of self-driving cars in the US city of Pittsburgh. Reporters delivered breathless accounts, largely along the lines of: “Not me! I would never opt for a self-driving car!”
Two things unsettled the reporters.
One was the threat to their sense of autonomy. Self-rule is a core value in constitutional democracies, enshrined in a culture that deifies the lone cowboy or the lone driver setting off into the sunset. A car is often one’s first major possession. In the California of my childhood, wide open freeways were an expression of individual choice and power.
The other thing that unsettled them was how the machines would act if faced with the “trolley problem”. It’s a favourite of every first-year ethics class. Imagine you are a trolley or tram driver on a set of fixed tracks and the brakes fail. Just ahead of you, a group of five people are crossing the tracks. You cannot stop, but you can pull a lever and switch the speeding trolley to a side track. Sadly, there is also a person crossing there. Can you kill the one to save the five? Most people say yes. Does it matter if the lone person is Einstein, and the five are members of a criminal gang? Does it matter if the one is your elderly parent and the five are innocent children? And do you want to make a rule for all trolley operators that it is always better to kill one to save five?
So how does the self-driving car solve this problem? I asked J. Storrs Hall, a Virginia-based artificial intelligence expert who has given machine ethics a great deal of thought and is the author of Beyond AI: Creating the Conscience of the Machine.
Hall thinks the public worry about self-driving cars comes from the mistaken idea that they would employ some disturbing utilitarian machine ethics to arrive at a solution. They wouldn't.
The trolley problem is just as unsolvable for a machine as it is for philosophers. In his view it's not actually a moral problem; it's a technical problem. You design the machine to cause as few accidents as possible. You test it and compare the results with what human drivers actually do. And then you iterate. It doesn't need to be perfect; it only needs to be better.
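Hall's "test, compare, iterate" idea can be sketched in a few lines of code. The snippet below is my own illustration, not anything Hall or any carmaker actually runs: the crash figures, function names and the notion of a numbered "policy version" are all invented for the example. It simply shows the loop he describes, measuring a driving policy against a human baseline and refining it until it is measurably better, not perfect.

```python
import random

# Illustrative baseline only; not a real statistic.
HUMAN_CRASHES_PER_MILLION_MILES = 4.2

def simulated_crash_rate(policy_version: int) -> float:
    """Stand-in for real road testing or simulation of a driving policy.
    Here we simply pretend each design iteration reduces crashes a little."""
    base = 6.0 / (1 + 0.3 * policy_version)   # pretend improvement per iteration
    noise = random.uniform(-0.2, 0.2)         # measurement noise
    return max(base + noise, 0.0)

def develop_until_better(max_iterations: int = 20) -> int:
    """Test the current policy, compare it with the human baseline,
    and keep iterating until it causes fewer crashes than human drivers."""
    for version in range(1, max_iterations + 1):
        rate = simulated_crash_rate(version)
        print(f"policy v{version}: {rate:.2f} crashes per million miles")
        if rate < HUMAN_CRASHES_PER_MILLION_MILES:
            return version   # not perfect, just better than the human baseline
    raise RuntimeError("no policy beat the human baseline within the budget")

if __name__ == "__main__":
    print("deployable policy version:", develop_until_better())
```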
Ultimately, the question for an ethicist is: even with an imperfect algorithm, is the self-driving car a more ethical choice?
Hall and I agree: the answer is a resounding yes.
Nearly 1.3 million people die in road crashes each year – 3,287 deaths a day, on average. An additional 20 to 50 million are injured or disabled. More than half of all road traffic deaths occur among young adults aged 15-44. About 32% involve alcohol.
And that’s just the tip of the iceberg when it comes to the thoughtless or downright criminal stupidity of drivers. Many other accidents are caused by driving under the influence of drugs, texting behind the wheel or sheer exhaustion.
The safety programming of a self-driving car would eliminate the risks posed by such behaviours. Self-driving cars would also give autonomy to those who currently lack it, such as the elderly and infirm.
There are other values to consider. Pouring resources into a wildly advanced car instead of, say, better public transportation systems is a choice for one sort of world over another.
Like any technology, self-driving cars are not perfect. But the chance to stop the needless tragedies on our roads means the choice to develop them is not even a close call. No doubt there will be mistakes and dreadful accidents. But cars that prevent risky behaviours in the first place would undoubtedly cause less carnage than we have today.
In Hall’s view, the real trolley problem is this: how long do we tolerate the status quo instead of speeding up the development of self-driving cars? I agree. In the not-so-distant future, the question for ethicists will be: why did we wait so long to fix this?