THE MORAL MACHINE EXPERIMENT


AUTONOMOUS VEHICLES AND MORALITY

On this blog we have discussed before how things will have to change when fully autonomous vehicles finally hit the streets. We’ve discussed how the unpredictable behavior of pedestrians and drivers poses major problems for an AI attempting to operate a motor vehicle without causing accidents or fatalities. In a closed environment these programs perform perfectly on measures of performance and safety, but once you throw humans into the mix everything becomes more complicated. That discussion can be found here.

Moving forward, developers of these autonomous vehicles have realized that in situations where a collision is inevitable, the program must be able to make moral decisions regarding the lives of humans. Never before in our history have we allowed machines to make these kinds of decisions, which means there is a lot to lose if we don’t do this properly. For example, if the brakes were to fail, the AI may have to choose between swerving right into a brick wall, killing its passengers, or swerving left and killing a group of pedestrians crossing the road. Many scenarios like this could arise, and the AI has to be equipped to handle all of them in real time.
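To make the shape of this real-time decision concrete, here is a minimal Python sketch. Every name and number in it is a hypothetical stand-in (no manufacturer’s actual logic looks like this), and the equal-weight harm function is exactly the part the rest of this post argues about:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_fatalities: int
    pedestrian_fatalities: int

def expected_harm(m: Maneuver) -> float:
    # Placeholder policy: every life counts equally.
    # Deciding what belongs here is the entire moral debate.
    return m.passenger_fatalities + m.pedestrian_fatalities

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Pick whichever option the policy scores as least harmful.
    return min(options, key=expected_harm)

# The brake-failure example from above:
options = [
    Maneuver("swerve right into wall", passenger_fatalities=2, pedestrian_fatalities=0),
    Maneuver("swerve left into crosswalk", passenger_fatalities=0, pedestrian_fatalities=3),
]
print(choose_maneuver(options).name)  # -> swerve right into wall
```

The computation itself is trivial and runs in a fraction of a second; the hard part is that someone has to write `expected_harm`.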

STRICTER JAYWALKING LAWS

We have mentioned before that one thing we can do to help minimize the need to solve complicated moral dilemmas is to impose strict restrictions on when and where pedestrians can cross the road. That would remove some of the unpredictability from an autonomous vehicle’s environment, which would increase its performance on whatever safety metric it was programmed with.

Unfortunately, this alone would not completely solve the problem, as the passengers in these vehicles would still be at risk. Also, there will always be people who break jaywalking laws and who don’t necessarily deserve to be killed by an autonomous vehicle because of it. Obviously we could put something in the law stating that “all pedestrians walk at their own risk,” or something along those lines, but that would not solve the problem either. And the close proximity of sidewalks to roads in major cities dictates that these vehicles perform perfectly. That includes making difficult moral decisions about who lives and who dies.

MORAL DILEMMAS WITH NO EASY ANSWERS

An autonomous vehicle must be programmed with strict rules regulating whom to prioritize. On the surface it might seem simple. Children, for example, have their whole lives ahead of them and the elderly obviously do not, so it might make sense for the AI to favor children in those instances. However, there are counterarguments, such as the fact that young people have a greater chance of surviving such a collision. Not to mention that a rule like this fails to take into account many other variables that determine intrinsic value.

Another common example is men versus women. Most people instinctively think that, due to reproductive value, the AI should favor women over men, but when you consider that there are over 7 billion people on this planet, that point proves to be a subconscious human bias rather than a legitimate argument. There are all kinds of other criteria to consider as well, such as whether a law-abiding citizen should be prioritized over a criminal, or whether a healthy person should be prioritized over an unhealthy one. These examples highlight just how imperfect humans are, which means that ideally we shouldn’t be attempting to decide what kind of intrinsic value other humans have. But unfortunately we must press forward into an imperfect world and make the most of things for the sake of innovation that will benefit us all.
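To see why encoding such preferences is so fraught, consider this hedged sketch of a rule-based priority score. Every attribute and weight below is an invented placeholder, and choosing those weights is precisely the moral problem being described:

```python
# Hypothetical weights; none of these values come from any real
# system, and picking them IS the moral dilemma.
WEIGHTS = {
    "child": 1.5,
    "elderly": 0.5,
    "law_abiding": 2.0,
    "healthy": 1.25,
}

def priority_score(attributes: set[str]) -> float:
    # Multiply the weights of whichever attributes apply;
    # attributes not listed are treated as neutral (1.0).
    score = 1.0
    for attr in attributes:
        score *= WEIGHTS.get(attr, 1.0)
    return score

print(priority_score({"child", "law_abiding"}))  # 3.0
print(priority_score({"elderly"}))               # 0.5
```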

THE MORAL MACHINE EXPERIMENT

A group of researchers decided to start a global conversation about these moral dilemmas. They accomplished this by creating an experiment they called the Moral Machine: an online platform that presented scenarios involving prioritizing the lives of some people over others based on attributes like gender, age, and perceived social status. It gathered over 40 million decisions from 233 countries and territories.

They plan to present this data to the companies that create and test moral algorithms, in the hope that they will take the opinion of the public into consideration when programming preferences into the AI. Experiments like this are helpful because no person or group of people really wants to make decisions like this alone.

One could argue that people high up the chain in large corporations like Amazon or Google would have no problem making these decisions, since people with less empathy tend to have an easier time climbing those ladders, and they would be right. However, when it comes to the regulation of self-driving cars and the moral decisions they make, corporations alone will not oversee the creation of regulations, and rightfully so.

In their paper the researchers wrote:

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision.

We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation.

Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

They have decided that the best way forward is to let the people decide what kinds of ethical decisions these vehicles should make. It is very possible that this is the lesser of two evils, since we don’t yet have superintelligent AI to help us with decisions like this. When lawmakers decide which regulations to impose on autonomous vehicles, public opinion will be a strong influencing force, perhaps more so than with any issue before it.

THE RESULTS

The results can be summed up with this graphic:

[Image: Moral Machine experiment results]

According to the results, you would be relatively safe from ending up on the wrong end of a moral decision made by a machine, so long as you do not belong to one of these demographic groups:

-Male

-A passenger

-Unhealthy

-Poor

-Unlawful

-Elderly

-An animal

ANALYSIS

Now, a few of these make sense, but this final list is still deeply flawed and riddled with human biases and irrationality. For example, the idea that pedestrians should be favored over passengers forces us to return to the issue of people crossing the road at all. For consumer confidence in these vehicles to be high enough that people actually purchase them and trust them with their lives, the car should not favor pedestrians over passengers, for obvious reasons.

We already have a solid system that restricts when and where pedestrians can cross the road, but those rules would have to be strictly enforced. Once that is the case, any pedestrian walking when they weren’t supposed to would be considered unlawful, and thus would no longer be valued over a passenger. On the other hand, once pedestrians are given a green light, the autonomous vehicles must stop, and if one of them has a brake failure it would be tragic to strike the pedestrians who diligently waited for the signal telling them to cross.

However, if the vehicles were designed to favor passengers over pedestrians, it would force pedestrians to be extra careful even when crossing the road legally, which would ultimately help create an environment for self-driving cars a little freer from unpredictability. An idea that comes to mind is simply building pedestrian bridges that rise over the roads in major cities, removing pedestrians from the roadway. Then the only remaining concern would be cars striking other cars.

The reader must also keep in mind that the above analysis is incomplete and meant only to provoke deep reflection on these issues, because at the end of the day this will affect each and every one of us.

SYNOPSIS

We could explore many of the other issues that arise from the results of this experiment, but that would extend beyond the scope of this post. I will say, however, that one possible solution to all these problems is that the AI could be programmed not to prioritize anyone at all. In the event of an imminent collision, the program could simply roll an imaginary die and let chance decide who lives and who dies. A computer program can easily perform this task in a fraction of a second, which means the vehicle would still be able to respond to the situation in real time.
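Implementing that lottery would be straightforward. Here is a sketch, assuming the feasible maneuvers have already been enumerated by the rest of the driving stack:

```python
import secrets

def choose_at_random(options):
    # secrets draws from the OS's cryptographic RNG, so the outcome
    # is neither predictable nor tunable by the manufacturer; the
    # call takes microseconds, well within a real-time control loop.
    return secrets.choice(options)

options = ["swerve right into wall", "swerve left into crosswalk"]
print(choose_at_random(options))
```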

This would force the manufacturers of these vehicles to make sure their product never ends up in a situation like that, by building more rigorous safety tests into their process. We’ll leave the remaining moral questions for the diligent reader to think about.

For anyone wishing to check out some of the moral dilemmas that were used in the Moral Machine experiment, they can be found here. Also, for further reading, the research paper can be found here.
