This has always been an interesting subject. As AI is increasingly used to make ethical decisions, more and more possible solutions have been put forward to ensure these algorithms make the most morally sound choices. One example is the Moral Machine experiment, conducted not long ago.

The researchers who conducted the Moral Machine experiment addressed this problem by asking volunteers for their opinions. This seemed to work: the public helped decide, in part, who lives and who dies on a road full of autonomous vehicles. It isn't just self-driving cars, either. Autonomous military weapons will have to make ethical decisions about sparing innocent civilians, and crime-assessment tools will need to use the proper criteria. However, a new potential solution has come to light, one that may steer us in the proper direction. One can only hope.


One example of the questions presented in the Moral Machine experiment.


It's no secret that typical algorithms are not designed to make moral trade-offs the way we expect them to. They are designed to optimize a single mathematical objective, such as maximizing the number of people saved. When you add conditional criteria, such as accounting for "freedom" or "human rights," the answers get murky.
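To make the limitation concrete, here is a minimal sketch (the scenario names and numbers are hypothetical, not from any real system) of what a single-objective algorithm looks like: every outcome is collapsed to one number, and there is simply nowhere to express a competing value.

```python
# Hypothetical outcomes for an ethical dilemma, each scored on a single
# objective: the number of people saved. All names and values are invented
# for illustration.
outcomes = {
    "swerve_left": {"people_saved": 3},
    "swerve_right": {"people_saved": 1},
    "brake": {"people_saved": 2},
}

# A conventional algorithm just maximizes the one number it was given.
# Values like "freedom" or "human rights" have no slot in this calculation.
best = max(outcomes, key=lambda action: outcomes[action]["people_saved"])
print(best)  # swerve_left
```

The murkiness the article describes appears the moment you try to add a second, incompatible criterion: there is no principled way to fold it into that single `max` call.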

If we wish for future AI programs to give us what we want, then we have to address the way programs are designed. Humans tend to want multiple incompatible things, and this will not be easy to address with computer code. If we were dealing with the superintelligent AI of the future, we would have only one shot to program in the ethical code we wish it to follow, and getting it wrong could have severe consequences. Therefore, it is imperative that this issue be addressed as soon as possible, before that point is reached.


There exist many dilemmas with no clean solution, in which humans must decide under great uncertainty. That uncertainty, in part, allows us to narrow down the possibilities and settle for the option that seems to be the lesser of many evils. These kinds of dilemmas aren't limited to algorithms; they have existed for centuries, and philosophers and ethicists have studied them for nearly as long. Peter Eckersley, director of research for the Partnership on AI, has suggested that we find a way to build uncertainty into our algorithms to better equip them to weigh the multifaceted issues surrounding ethics and morality.


Eckersley puts forth two possible ways to express this mathematically. The first technique is known as partial ordering. If we begin with the premise that a typical algorithm is programmed to prefer friendly soldiers over friendly civilians and friendly civilians over enemy soldiers, then we have a starting point with little room for uncertainty. We would simply adjust that premise so the algorithm prefers friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but does not specify a preference between friendly soldiers and friendly civilians. This gives the algorithm more room for decision making.
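A partial ordering like the one just described can be sketched as a set of pairwise preferences. This is an illustrative toy implementation, not Eckersley's actual formalism: an option is "acceptable" if no other option strictly dominates it, and because no pair ranks friendly soldiers against friendly civilians, both survive.

```python
# Hypothetical partial ordering over groups to protect, expressed as
# pairwise (preferred, less_preferred) tuples. Note there is deliberately
# NO pair ranking friendly soldiers against friendly civilians.
preferences = {
    ("friendly_soldier", "enemy_soldier"),
    ("friendly_civilian", "enemy_soldier"),
}

def dominates(a, b):
    """True if the ordering strictly prefers protecting a over b."""
    return (a, b) in preferences

def acceptable(options):
    """Options that no other option strictly dominates: the choice set."""
    return [o for o in options if not any(dominates(p, o) for p in options)]

groups = ["friendly_soldier", "friendly_civilian", "enemy_soldier"]
print(acceptable(groups))  # ['friendly_soldier', 'friendly_civilian']
```

The incompleteness is the point: where a total ordering forces one answer, the partial ordering leaves two incomparable options, and that ambiguity is the "room for decision making" the text mentions.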

The second way this could be implemented is referred to as uncertain ordering. In this method you would have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers; a quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers.
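One simple way to picture an uncertain ordering is as a small probability distribution over complete preference lists, sampled at decision time. This is a hedged sketch of that idea only; the probabilities and group names come from the example above, and the sampling scheme is an assumption.

```python
import random

# Hypothetical uncertain ordering: complete preference lists, each with a
# probability, using the 75% / 25% split from the running example.
orderings = [
    (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
    (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
]

def sample_ordering(rng=random):
    """Draw one complete preference list according to its probability."""
    r = rng.random()
    cumulative = 0.0
    for prob, order in orderings:
        cumulative += prob
        if r <= cumulative:
            return order
    return orderings[-1][1]  # guard against floating-point rounding

print(sample_ordering())
```

Under this scheme the algorithm never commits to a single moral ranking; which total ordering governs a given decision is itself a matter of probability.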

At this point the algorithm could compute multiple scenarios and provide a menu of options for humans to deliberate on. To illustrate with a medical example: normally an algorithm would recommend the single best treatment based on the output from its training data. Using the techniques described above, it would instead put forth three possible treatments: one maximizing patient life span, one minimizing patient suffering, and one minimizing cost. The idea is essentially to keep the machine unsure and have it hand off the ethical decision making to humans.
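The medical example above can be sketched as a "menu" function: score every candidate under each objective separately and return one recommendation per objective, rather than collapsing them into a single winner. The treatment names and scores below are invented purely for illustration.

```python
# Hypothetical treatment candidates scored under three objectives.
# All names and numbers are made up for this sketch.
treatments = {
    "treatment_a": {"life_years": 8, "suffering": 6, "cost": 90_000},
    "treatment_b": {"life_years": 5, "suffering": 2, "cost": 40_000},
    "treatment_c": {"life_years": 4, "suffering": 3, "cost": 15_000},
}

def menu(candidates):
    """One recommendation per objective, for humans to deliberate on."""
    return {
        "maximize_life_span": max(candidates, key=lambda t: candidates[t]["life_years"]),
        "minimize_suffering": min(candidates, key=lambda t: candidates[t]["suffering"]),
        "minimize_cost": min(candidates, key=lambda t: candidates[t]["cost"]),
    }

print(menu(treatments))
# {'maximize_life_span': 'treatment_a',
#  'minimize_suffering': 'treatment_b',
#  'minimize_cost': 'treatment_c'}
```

The machine does the bookkeeping for each objective but deliberately refuses to pick among them; the trade-off between life span, suffering, and cost stays with the humans.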


A method like this would not address the instantaneous, real-time ethical decisions that self-driving cars must make. In those cases, all of the decision making would have to be coded into the program ahead of time, and whatever the vehicle decided to do in a given situation would have to be accepted.

Nevertheless, this is a problem that can be put off no longer. The complexity of our world is changing so fast that it will be impossible for any single person to understand every aspect of military response systems or the entire ethical problem governing autonomous vehicles and their decisions. We will have to pass off some of that complex decision making to AI programs to keep things running efficiently. Experts believe that while Eckersley's approach is a step in the right direction, more research is needed to address all concerns.
