Would a Self-Driving Car Kill You Rather Than Allow You to Cause a Collision?

So here’s a scary thought: self-driving cars are meant to give you and everyone around you the safest driving experience possible, but what happens when that artificial intelligence faces an ethical conundrum and must choose between your life and the life of someone you could potentially crash into? Would it save you and kill the other person? Or would it kill you?

Many questions have been raised about the safety of self-driving cars. While Google remains adamant that its autonomous automobiles won’t launch you out of the driving seat at random intervals or anything, many people are still skeptical about how the technology will behave once it reaches consumers, and whether the cars will be able to handle roads outside their test areas.

Researchers at the University of Alabama at Birmingham have raised the important question of what course of action a self-driving car would take if it had to choose between endangering the safety of its passengers and endangering the safety of pedestrians and other drivers. “Ultimately, this problem devolves into a choice between utilitarianism and deontology,” said UAB’s Ameen Barghi. “Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” he continued, adding that deontology argues that “some values are simply categorically always true.” “For example, murder is always wrong, and we should never do it,” he concluded.

What Ameen is suggesting is that, in a hypothetical life-or-death situation, the self-driving car would have to “choose” between saving a larger group of people who could be put in harm’s way by a collision and essentially sacrificing its passengers’ safety. The Google Car can make decisions in fractions of a second, reading its surroundings to avoid obstacles on the road and to recognize pedestrians and other vehicles. The question posed by the UAB researchers is how the car’s artificial intelligence will be programmed to respond to such a situation if it ever presents itself.
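
To make the contrast concrete, here is a deliberately simplified, hypothetical sketch in Python of the two rules Barghi describes. The maneuvers, risk counts, and the `actively_sacrifices` flag are invented purely for illustration and bear no relation to how Google’s cars are actually programmed.

```python
# Toy illustration of utilitarian vs. deontological decision rules.
# Not real self-driving software; all names and numbers are made up.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passengers_at_risk: int    # people inside the car likely to be harmed
    bystanders_at_risk: int    # pedestrians / other drivers likely to be harmed
    actively_sacrifices: bool  # True if the maneuver deliberately harms someone

def utilitarian_choice(options):
    # Pick the maneuver with the fewest total people at risk.
    return min(options, key=lambda m: m.passengers_at_risk + m.bystanders_at_risk)

def deontological_choice(options):
    # Refuse any maneuver that deliberately sacrifices someone;
    # among what remains, still prefer the least harmful outcome.
    permissible = [m for m in options if not m.actively_sacrifices] or options
    return min(permissible, key=lambda m: m.passengers_at_risk + m.bystanders_at_risk)

options = [
    Maneuver("stay the course", passengers_at_risk=0, bystanders_at_risk=3,
             actively_sacrifices=False),
    Maneuver("swerve into barrier", passengers_at_risk=1, bystanders_at_risk=0,
             actively_sacrifices=True),
]

print(utilitarian_choice(options).name)    # "swerve into barrier" -- fewest people harmed
print(deontological_choice(options).name)  # "stay the course" -- won't sacrifice the passenger
```

The point of the sketch is simply that the two philosophies can recommend opposite maneuvers in the very same scenario, which is exactly the programming dilemma the UAB researchers are raising.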

Google has released details of its Google Car test runs, and though it hasn’t given a detailed rundown of the 11 accidents the cars have been involved in since testing began six years ago, the tech giant assured the public that none were particularly dangerous and that all of them were, it says, the fault of surrounding drivers rather than the Google Car. It remains to be seen whether the autonomous vehicles will maintain such a low accident count once they reach consumers. When they do, how will Google program them to respond in the kinds of life-threatening situations posed by the UAB researchers, in which the vehicle must choose between protecting the safety of its passengers and the safety of those around it?

(Via Science Daily)

Photo: Getty Images
