Chat bots that can connect with people are a fairly new concept. The more primitive machines of the past were not capable of conveying emotion. Recently, the focus has shifted toward creating bots that can not only hold real conversations with people but connect with them at a deeper level than previously thought possible. This idea has many practical uses in areas like counseling and therapy. In fact, these so-called “empathetic” bots have already demonstrated more therapeutic value than anyone expected. At the same time, skeptics point to studies claiming that the more human-like bots become, the more we tend to view them as creepy. A recent study out of Penn State attempts to shed some light on this.


The results of the study were not as conclusive as the researchers had hoped. The general takeaway is that most people appreciate a chat bot that offers sympathetic or empathetic responses, but that reaction depends on how comfortable the individual is with the idea of a machine that “feels”. Still, the participants clearly preferred the emotional machine to the non-emotional one, which is no surprise given the number of people turning to online chat bots for emotional support or health advice.

Healthcare providers are always looking for ways to cut costs. Technology like this could ensure that patients feel comfortable enough with conversational agents to share information about their physical and mental health. That would allow providers to screen a far larger volume of potential patients than ever before. It could also help filter out people whose problems are mild enough for the bot to handle, freeing up human staff. Of course, for this to work, the machines would have to be very good at simulating the emotional support humans need, since machines cannot actually “feel” yet. That day may well be coming, though.


Since we have established that chat bots cannot actually feel, the question becomes: to what extent should these bots emulate feelings? Many empathetic cues can be programmed into these machines with the help of voice-assistant technology. However, programs like this could feel too personal for some people and end up turning them away from the technology.

Among the 88 volunteers in this study, the researchers discovered that those who were skeptical of machine emotion responded much more positively to these bots than those who believed in it. In other words, the people who believed that machines can convey emotion were the ones who reacted negatively to the bot’s expressions of sympathy and empathy. The nonbelievers, who made up a majority of the sample, viewed these expressions as courteous. The reasons for this had the researchers stumped for a while.



These 88 volunteers were asked to interact with one of four chat bots, each programmed to deliver responses according to one of four conditions: sympathy, cognitive empathy, affective empathy, or advice only (the control). The sympathetic bot would respond with statements like “I am sorry to hear that”. The cognitive-empathy bot could acknowledge the user’s feelings and would say something like “that issue can be quite disturbing”. The affective-empathy bot would show that it understood why a person might feel the way they do, with responses like “I understand your anxiety about the situation”. The control bot was merely an emotionless drone giving dry, machine-like responses.
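The four conditions above can be sketched as response templates layered on top of the same underlying advice. This is a minimal illustration, not the study's actual implementation: the cue phrases come from the article, but the function and structure are hypothetical.

```python
# Hypothetical sketch of the four response conditions described above.
# The cue phrases are quoted from the article; everything else is assumed.

RESPONSE_CUES = {
    "sympathy": "I am sorry to hear that.",
    "cognitive_empathy": "That issue can be quite disturbing.",
    "affective_empathy": "I understand your anxiety about the situation.",
    "control": "",  # advice-only condition: no emotional cue at all
}

def respond(condition: str, advice: str) -> str:
    """Prefix the same piece of advice with the condition's emotional cue."""
    cue = RESPONSE_CUES[condition]
    return f"{cue} {advice}".strip()
```

The point of this design is that only the emotional framing varies between conditions; the informational content of the advice stays identical, so any difference in how participants react can be attributed to the cue.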

According to the findings, sympathy and affective empathy worked best among those who were skeptical of a program’s ability to emulate feelings. The cognitive-empathy bot was initially expected to do best, but its responses were a bit too detached, approaching the problem from a thoughtful yet antiseptic point of view. This, the researchers believed, reinforced stereotypes of machines in the participants’ minds and made that bot less popular.


When all was said and done, the researchers concluded that for the skeptics, the bots’ capabilities exceeded expectations, which explains the positive responses. For the people who were already believers, the bots’ inability to emulate true emotion likely caused them to fall short of expectations. Whether something exceeds or falls short of our prior expectations plays a major role in how we feel about it; humans have a hard time approaching things objectively. This largely explains the counterintuitive findings.

In previous studies, researchers had people simply read conversations between other humans and the four kinds of bots described above. There, too, the affective-empathy and sympathy bots came out on top. These two approaches clearly warrant further investigation. Future research can examine how sympathetic and empathetic interactions work for issues beyond health and sexuality, as well as how people feel when these interactions are delivered by physical robots.
