Algorithms that Kill: The Ethics Behind Self-Driving Cars

By: Marta A.

As Arna M. outlined in the February 2016 issue, self-driving cars are not so far away. Engineering-wise, we are there, but these vehicles raise another big issue: the ethics behind them. About 90% of traffic accidents are caused by human error, and those would largely disappear with self-driving cars. But what about the rest? Bad weather, vehicle malfunction, and other factors can also cause accidents, and will continue to do so even with self-driving cars.

Let’s consider a scenario: a self-driving car is on a busy three-lane highway behind a truck, and some cargo falls off the truck. The car can either hit the cargo, swerve into the car on the left, or swerve into the scooter on the right. The car on the left carries a family with two children; a middle-aged man is riding the scooter on the right. The self-driving car is going too fast to stop. Should it hit the cargo, risking its passenger but sparing the most people? Should it protect its passenger and swerve left or right? And if it swerves, should it hit the family or the man on the scooter? The solution most people would think of is to program the car with set algorithms that minimize harm.
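To make that idea concrete, here is a deliberately simple Python sketch of what a "minimize harm" rule could look like. Everything in it is hypothetical: the options, the casualty counts, and the probabilities are made up for illustration, and no real self-driving system is this crude. It only shows what minimizing harm means once you actually write it down.

```python
# A deliberately naive sketch of a "minimize harm" decision rule.
# All names and numbers are hypothetical, not a real autonomous-driving API.

from dataclasses import dataclass

@dataclass
class Option:
    action: str                # e.g. "hit the cargo", "swerve left"
    people_at_risk: int        # how many people the maneuver endangers
    harm_probability: float    # made-up chance the maneuver actually harms them

def expected_harm(option: Option) -> float:
    """Expected number of people harmed if this option is taken."""
    return option.people_at_risk * option.harm_probability

def choose_action(options: list[Option]) -> Option:
    """Pick the option with the lowest expected harm."""
    return min(options, key=expected_harm)

# The highway scenario above, with invented probabilities:
options = [
    Option("brake and hit the cargo", people_at_risk=1, harm_probability=0.9),   # the passenger
    Option("swerve left into the family car", people_at_risk=4, harm_probability=0.7),
    Option("swerve right into the scooter", people_at_risk=1, harm_probability=0.8),
]

print(choose_action(options).action)  # -> "swerve right into the scooter"
```

Notice what the rule does with these numbers: it coldly selects the man on the scooter, because one person at 0.8 probability is the smallest expected harm. Change any of the invented numbers slightly and the victim changes too.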

Another scenario that complicates harm-minimizing algorithms: two self-driving cars end up head-on on a bridge because of a vehicle malfunction. Both are going too fast to stop, so a crash is unavoidable. Suppose one car carries three elderly men and the other two teenagers. Which car should swerve and fall off the bridge to minimize harm? Who should be saved? This case is more complicated: one could argue that the teenagers have their lives ahead of them while the elderly men have lived most of theirs, but then again, killing the elderly men ends three lives instead of two.
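Extending the earlier sketch makes the problem obvious. Once the rule has to compare whose lives are at stake, someone must assign each person a numerical weight. The weights below are entirely made up; the point is that the "right" answer flips depending on which weighting you choose.

```python
# Continuing the hypothetical sketch: compare the expected "value lost"
# for each group of people, under two different made-up weightings.

def value_lost(life_values: list[float]) -> float:
    """Total 'value' lost if this group is the one that goes off the bridge."""
    return sum(life_values)

# Weight every life equally: sacrificing the teenagers loses less (2 < 3).
elderly = [1.0, 1.0, 1.0]
teenagers = [1.0, 1.0]
print(value_lost(teenagers) < value_lost(elderly))  # True: the teenagers' car swerves

# Weight by (hypothetical) years of life remaining, and the answer flips.
elderly_by_years = [10.0, 10.0, 10.0]
teenagers_by_years = [60.0, 60.0]
print(value_lost(teenagers_by_years) < value_lost(elderly_by_years))  # False: the elderly men's car swerves
```

Neither weighting is objectively correct, which is exactly the problem: whoever writes the numbers decides who dies.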

This brings me to an important point: before we can have self-driving cars, someone will need to devise the algorithms that decide who is saved in an accident. That means putting a value on a person’s life, as in the previous example, where you value either three elderly people’s lives or two teenagers’ lives more. What if the scenario involves choosing between an average citizen and a head of state? There would also be legal issues after an accident. Would those who devised the algorithms be held responsible in court for its aftermath? If any of this happened with a manual (human-driven) car, no one could really be blamed, because the driver never made a reasoned decision; they acted in the moment, without time to understand the full consequences. With self-driving cars, on the other hand, someone designed the decision long before the accident.

In conclusion, the moral and ethical questions behind self-driving cars are not easy ones, and they must be resolved before we can all have access to these latest-generation vehicles. Along with the issue of hacking, it is striking that the real obstacle to self-driving cars does not lie on the engineering side at all.
