Is it better to allow multiple people to die through inaction, or to save them by causing one to die instead? This is the basis of the ethical problem commonly known as the trolley problem. The scenario is simple: an out-of-control tram is racing towards five people tied to the tracks. It looks like the people on the tracks are about to become mincemeat, but you can save them. You just happen to be standing in front of a lever that, if pulled, will switch the tram to a different set of tracks. The catch is that there’s one person tied to that track, and by pulling the lever you will be directly responsible for their death. What’s the most ethical decision to make?
As a thought experiment, the trolley problem has been used to befuddle philosophy students since the 1960s, and more recently it has become popular as a meme, where the circumstances are decidedly more humorous. But it turns out there’s an emerging technology where the ethical considerations of the trolley problem stop being hypothetical: driverless cars.
Consider this: you’re an engineer tasked with designing the decision-making protocols of a driverless car. You know that no matter how closely the vehicle follows the road rules, there will be occasions when that car is involved in an accident. Some of those accidents will involve fatalities. And sometimes, on those rarest of occasions, the vehicle is going to have to make a choice about who dies, because every available course of action causes death.
This is where the simplicity of the trolley problem begins to expand into the ugly, all-sales-are-final real world. What do you, as the engineer, program the car to prioritise? Is the passenger, presumably the person who bought the car from you, your priority? What if there are multiple pedestrians likely to die? What if there are multiple passengers? Do you take age into account? What about social standing?
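To see how uncomfortable those questions get, here’s a deliberately crude sketch of what such a priority policy might look like in code. Every name, field and weight in it is invented for illustration; no automaker has published anything resembling this.

```python
from dataclasses import dataclass


@dataclass
class Party:
    """One person or group the vehicle could harm. All fields hypothetical."""
    count: int           # how many people
    is_passenger: bool   # inside the car, or on the road?
    avg_age: float       # does age matter? should it?


def harm_score(party: Party,
               passenger_weight: float = 1.0,
               youth_weight: float = 0.0) -> float:
    """Toy 'cost of harming this party' under arbitrary, adjustable biases.

    Higher score = worse outcome. With the default weights this is pure
    headcount. Set passenger_weight above 1 and the car protects its
    occupants; set youth_weight above 0 and it favours the young.
    """
    score = float(party.count)
    if party.is_passenger:
        score *= passenger_weight
    # A crude "years of life left" bonus, scaled per person.
    score += youth_weight * max(0.0, 80.0 - party.avg_age) * party.count
    return score


def choose_course(options: dict[str, Party]) -> str:
    """Pick the course of action with the lowest harm score."""
    return min(options, key=lambda name: harm_score(options[name]))


# The original trolley problem, restated as inputs:
print(choose_course({
    "stay_on_course": Party(count=5, is_passenger=False, avg_age=40),
    "swerve":         Party(count=1, is_passenger=False, avg_age=40),
}))  # -> "swerve"
```

Each of the questions above becomes a weight in a function like this one, which is precisely what makes them so uncomfortable: somebody has to pick the numbers.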
The Massachusetts Institute of Technology has built an interactive way to experience this moral dilemma, the Moral Machine, which lets you make choices in a variety of circumstances and then compare your results against others’. I, for example, favoured saving the young over the old to such a degree that I now wonder if not growing up with grandparents has fucked me up. The thing is, my answers, and every other participant’s, aren’t right; they’re just embodiments of our personal biases. And so the engineer tasked with programming the driverless cars of tomorrow has to consider their own biases, and the biases of their company.
It’s those biases that could get automakers in trouble. In October 2016, Mercedes executive Christoph von Hugo tried to answer the trolley-problem question for driverless cars, and it went about as well as you might expect. He implied that the company’s future autonomous vehicles would prioritise the car’s driver and passengers (you know, the ones who paid for the car), even if it meant sacrificing pedestrians. “If all you know for sure is that one death can be prevented, that’s your first priority,” he said. The ensuing media storm led to Mercedes explaining that von Hugo had been misquoted and that the company believed in providing “the highest safety” to all road users.
So if we’re all biased, if the variables are wild and uncontrollable, if death is sometimes unavoidable, what do you, the engineer holding the proverbial lever, use to guide your programming decisions? The answer isn’t all that comforting for anyone who plans on sharing the road with these new vehicles. In a paper published earlier this year in the Northwestern University Law Review, Stanford University researcher Bryan Casey argues that ethics takes a back seat to legal liability: automakers will program autonomous vehicles to make whatever decision is least likely to get them sued.
“Profit-maximising firms will design their robots to behave not as good moral philosophers, but as Holmesian bad men—concerned less with ‘ethical rule[s]’ than with the legal rules that dictate whether they will be ‘made to pay money’ and can ‘keep out of jail’,” Casey writes.
“Far from following a ‘clear and consistent’ moral code, optimised systems will instead follow an amoral code that reflects the messy economic realities of society’s imperfect legal regimes. These robots will not maximise morality, but minimise liability.”
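Casey’s “Holmesian bad man” compresses into a few lines of code. To be clear, this is my reading of his thesis, not anything from the paper itself, and every name and number below is invented:

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A possible manoeuvre and its (hypothetical) legal exposure."""
    name: str
    p_liable: float          # estimated probability a court finds the firm liable
    expected_damages: float  # estimated payout if it does, in dollars


def minimise_liability(actions: list[Action]) -> Action:
    """The 'Holmesian bad man' as a one-liner: ignore ethics and pick
    whichever action costs the firm the least in expectation."""
    return min(actions, key=lambda a: a.p_liable * a.expected_damages)


# Note what's absent: no count of lives, no ages, no passengers versus
# pedestrians. Those factors matter only insofar as courts price them in.
swerve = Action("swerve", p_liable=0.9, expected_damages=2_000_000)
stay = Action("stay_on_course", p_liable=0.3, expected_damages=5_000_000)
print(minimise_liability([swerve, stay]).name)  # -> "stay_on_course"
```

Notice that the liability-minimising choice needn’t match the harm-minimising one: with these made-up numbers, the car stays on course and kills five, because a court may be slower to punish inaction than a deliberate swerve.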
Yikes. Not exactly the utopian future we might have imagined, right? Perhaps, then, the only smart decision is to stay off the tracks altogether and hop on the tram instead.