Aris Chatzistefanou
Source: Efsyn
Would you buy a car that was programmed to kill you if that could save the lives of two or more pedestrians? It might seem like a purely hypothetical question, but it has lately troubled governments, philosophers and car manufacturers around the world, and none of them can find an answer.
It is 2030, and you are cruising along a mountain road in your self-driving car when, out of nowhere, three children jump into your path. Within a few tenths of a second, the car’s control system must choose between swerving, which means your certain death, and continuing straight ahead, which kills the three children.
The decision will be made by an algorithm, written by the car’s manufacturer, that weighs the pros and cons of each scenario and picks the course of action expected to cause the fewest deaths and the least damage.
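To see how crude such an algorithm can be, here is a minimal, purely hypothetical sketch of a utilitarian decision rule of the kind described above; the scenario data, cost weights and function names are illustrative assumptions made for this article, not any manufacturer’s actual code.

```python
# Hypothetical sketch of a utilitarian collision-avoidance choice.
# All names, weights and scenarios are illustrative assumptions,
# not any real vehicle's decision logic.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str              # e.g. "swerve" or "continue"
    expected_deaths: float
    expected_injuries: float
    property_damage: float   # in arbitrary monetary units

def harm_cost(o: Outcome) -> float:
    # Assumed weights: a death counts far more than an injury,
    # which counts far more than property damage.
    return (o.expected_deaths * 1_000_000
            + o.expected_injuries * 10_000
            + o.property_damage)

def choose_action(outcomes: list[Outcome]) -> Outcome:
    # Pick the course of action with the lowest total expected harm.
    return min(outcomes, key=harm_cost)

if __name__ == "__main__":
    scenario = [
        Outcome("swerve", expected_deaths=1.0,
                expected_injuries=0.0, property_damage=30_000),
        Outcome("continue", expected_deaths=3.0,
                expected_injuries=0.0, property_damage=5_000),
    ]
    print(choose_action(scenario).action)  # -> "swerve": the driver dies
```

Even in this toy form, all the moral weight hides in the arbitrary cost coefficients, which brings us straight back to the question of who gets to choose them.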
Today, the programmers of those algorithms face the same problems that moral philosophy has been grappling with for decades. This is essentially another variation of the famous Trolley Problem, introduced in the 1960s: a runaway trolley is speeding down the railway. On the tracks ahead, five people are tied up. A bystander can pull a lever and divert the trolley to a different set of tracks; however, that will kill an unsuspecting railway worker doing maintenance there.
Studies show that 90% of the people asked think that the person at the lever should sacrifice the maintenance worker in order to save the five.
In a more complex variation of the same problem, introduced by the American philosopher Judith Jarvis Thomson, a bystander can only save the five people by throwing a heavy object onto the tracks, and the only available “object” suitable for the job is an overweight gentleman who happens to be standing nearby.
Would the bystander kill a human being with his bare hands in order to save five others? How would the decision be influenced if the two knew each other? Or if the overweight man was the one who had tied the five to the tracks in the first place? Would it matter whether he was a good or a bad person?
Thomson introduced an even more extreme scenario (no trams involved this time) in which a doctor has five dying patients, each of whom needs an organ transplant to survive.
When a stranger who happens to be a compatible donor for every patient knocks on the door, the doctor wonders whether he should kill the stranger to save the five people in need of his organs.
The self-driving car is not self-aware. It’s just driving; it’s not thinking.
Eric Schmidt – former Google CEO
The more complex a Trolley Problem becomes, the clearer it is that there are no predefined and absolute moral answers. Morality, as Lenin himself said, takes different forms in different eras and societies. As social conditions and the relations of production change, so do morals.
But does any of the above actually have anything to do with the new autonomous cars, or are these just thought experiments that keep philosophers busy?
Self-driving cars raise an obvious question: who will write the algorithm? In other words, who will be given the authority to decide who lives and who dies?
If the choice is left to the insurance company, the accidents with the lowest economic cost will be preferred. Car manufacturers, on the other hand, will seek to protect the vehicle’s passengers rather than pedestrians or other drivers.
Otherwise, no one would buy a car that might sacrifice its own driver for the greater good.
Studies have shown that the same people who answered that it would be better for one person to die instead of five also stated that they would not buy a car that would make such a decision.
It would be naïve to believe that the new technology will bring equality to a system that feeds off inequality. It would be equally naïve to believe that rich drivers would have no way to bypass an algorithm that could kill them. By allowing the invisible hand of the market to take the steering wheel, we will probably end up mourning more victims than ever before.
In theory, the state should mandate and enforce the decisions that cause the fewest possible victims. But that could only happen if the state treated and valued the lives of all its citizens equally, something that, as we know from experience, almost never happens.
Some will argue that the deeper problem is that the state itself is nothing but a tool securing the dominance of the economic elite, and is therefore all the more likely to succumb to the demands of the most powerful lobbies.
Before we label all those people conspiracy theorists, we would do well to remember that Obama’s campaign manager, David Plouffe, joined Uber just as the company started facing problems with the American authorities in charge of regulating its field of activity.
Today Uber, along with Google, is leading the research on self-driving cars, and both will soon need people who know how to pull strings inside the White House.
Technology is once again here to drive us toward a safer and easier future. But with whom will we share the keys to our new car?
Info
Moralmachine.mit.edu
A platform from the Massachusetts Institute of Technology where the user must choose the lesser of two evils in various accident scenarios, taking into consideration factors such as the number and age of the potential victims, their gender, their occupation and so on.
Translation: Panos Chatzistefanou