WHEN ROBOTS DISCLOSE THEIR NON-HUMAN NATURE, THEIR PERFORMANCE DECREASES
A team out of New York University recently studied how people interact with a machine they believe to be human, and how such interactions change once the bot's true nature is revealed. They found that bots communicate far more efficiently than their human counterparts, but only when they are allowed to hide their non-human nature. Recent developments in AI technology make it increasingly difficult to tell a bot from a human, and this study attempts to shed light on the implications going forward.
A team of researchers led by Talal Rahwan, associate professor of Computer Science at NYU Abu Dhabi, conducted an experiment to study how people interact with bots they believe to be human, and how those interactions change once the truth is disclosed. In their paper titled "Behavioral Evidence for a Transparency-Efficiency Tradeoff in Human-Machine Cooperation," published in Nature Machine Intelligence, the researchers describe an experiment in which participants were asked to play a cooperation game with either a human or a bot. The game was the Iterated Prisoner's Dilemma, which creates situations in which either party can act selfishly to take advantage of the other, or cooperate to achieve a mutually beneficial outcome.
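The incentive structure of the game can be sketched in a few lines of code. The payoff values below are the textbook defaults for the Prisoner's Dilemma (temptation 5, reward 3, punishment 1, sucker's payoff 0), not necessarily the values used in the study, and the two strategies shown are standard illustrative examples rather than the behavior of the study's participants or bots.

```python
# Illustrative sketch of the Iterated Prisoner's Dilemma.
# Payoffs are the textbook defaults, not those from the paper.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both rewarded
    ("C", "D"): (0, 5),  # cooperator exploited by the defector
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both punished
}

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

# Tit-for-tat cooperates first, then mirrors the opponent's last move.
tit_for_tat = lambda last: "C" if last is None else last
always_defect = lambda last: "D"

print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The repeated rounds are what make the game interesting: a lone defection pays off once, but two players who sustain cooperation out-earn a pair stuck in mutual defection, which is why persuading a partner to cooperate matters so much in this setting.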
THE PRISONER'S DILEMMA AND PREJUDICE AGAINST ROBOTS
The Prisoner's Dilemma has been used in past experiments involving only humans. The general consensus is that more agreeable people tend to cooperate, while disagreeable people tend to act selfishly. However, other variables may determine how a person behaves in such an experiment. For example, humans tend to be less inclined to cooperate with people who seem very different from themselves than with someone who appears more similar. This seems to be one of those instincts ingrained in our DNA. So it stands to reason that humans will have a harder time cooperating with robots.
That is exactly what happened in this experiment. The results showed that bots posing as humans were much better at persuading their human partners to cooperate in the game; as soon as their true nature was revealed, cooperation rates plummeted. This presents a problem for developers of AI systems that routinely converse with humans. Should they make bots more human-like and hide that fact in order to communicate more effectively? Or should a bot's nature be disclosed so that people know what they are dealing with, accepting that less cooperation between human and bot will be the result?
A visual representation of the prisoner’s dilemma problem.
Rahwan states, "Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are." Rahwan also points out that Google Duplex, an automated voice assistant capable of generating human-like speech and designed to make calls and book appointments on behalf of its user, could be unethical. Google Duplex's speech is so realistic that the person on the other end of the line may have no idea they are talking to a bot. The findings of this study suggest that if we prohibit bots from passing as humans, we pay an efficiency cost; if we don't, we open the floodgates for bots that pose as humans to manipulate people.
Because humans are hard-wired to cooperate with those most like them, allowing future AI programs, whose computing capabilities far exceed the human mind's, to pose as human would invite serious problems with manipulation. That is not a direction we need to go in. Instead, while we can work on making bots appear more human-like, they need to disclose the fact that they are robots, and people simply need to accept that we will have to work closely with robots whether we like it or not. It is the same issue of prejudice that has plagued humanity since the beginning, and it will continue to do so until we find a meaningful way to address it.