It is no secret that the more we develop AI technology, the more we will want to utilize robots in the workforce. Large companies have a strong incentive to employ cheap, efficient robots that can out-produce their human counterparts. Some jobs will be easy to automate; others will not. For a long time, humans and robots will have to work together, which raises all kinds of moral concerns.

Early on, robots were employed for dangerous tasks such as detecting and disarming mines. Humans are understandably grateful for these robots and don't seem averse to them in any way. More recently, however, robots have been entering fields that involve dealing directly with people, working as household helpers and nursing assistants. There is also a robot news anchor in China and a robot that interviews job applicants in Sweden.

As an increasing number of machines equipped with the latest artificial intelligence software take on a wider range of tasks, the question of how they are perceived by humans becomes that much more important. Researchers at Ludwig-Maximilians-Universitaet (LMU) in Munich have begun looking deeper into the psychological reasons behind why we view robots in certain ways. The aim of their study was to determine the degree to which people show consideration for robots and behave toward them in accordance with moral principles.


The core question the researchers hoped to answer was: "Under what circumstances and to what extent would adults be willing to sacrifice robots to save human lives?" This differs from many past studies, which focused only on how much humans dislike robots or find them unsettling.

In the study, participants were faced with a moral dilemma: Would they be prepared to put a single individual at risk in order to save a group of injured persons? In each scenario presented, the potential victim was either a human, a humanoid robot designed to look and sound as human as possible, or a robot that was quite clearly a machine. Distinguishing between the human-like robot and the obviously mechanical one mattered because humans tend to help those who look and sound like themselves and to avoid those who seem different. Study after study has confirmed this pattern in how humans respond to people of other races and ethnic backgrounds compared with people of their own.


The results largely indicated that the more human-like the robot appeared, the less likely participants were to sacrifice it. On the surface, this more or less confirms what psychologists have been saying for years about human behavior. The results also suggest that the participants attributed a certain degree of moral status to the robot.


There is more than one way to interpret these results. On the one hand, you could argue that we should make every effort to humanize robots so as to elevate their moral status and have them viewed as equals. On the other hand, for certain robots in certain roles, it may be better not to humanize them at all.

When we elevate a robot's moral status to equal our own, we risk losing sight of what the robot is truly there for: to help us, not necessarily to be a citizen or our equal. The debate over whether robots should become autonomous citizens of nations, despite the fact that this has already happened once, will likely be deferred until we can create a truly conscious AI. The danger of overly humanizing robots before they are truly conscious is that the humans working with them may lose sight of reality and of what is truly important.

Regardless of one's interpretation, this area of psychology will no doubt be explored further and further until we can reach a place of mutual understanding between us humans and our new robot friends.
