A NEW NEURAL NETWORK CAN CATEGORIZE FEELINGS
A new neural network, aptly named EmoNet, has been designed to automatically take in and categorize a person's emotions. The AI was built by researchers at the University of Colorado and Duke University. The long-term goal is to create an AI that fully understands human emotions and reacts accordingly; such a system could have uses in treating a wide array of mental health disorders.
Programs like EmoNet reopen the debate over whether we should be building emotional AI at all. Regardless of where you stand, there is a serious trust issue that must be overcome before such an AI would be useful in the real world.
HOW DOES EMONET WORK?
EmoNet is a neural network trained on thousands of videos depicting 20 identifiable human emotions. Some emotions, such as ‘anger’ and ‘happiness’, are relatively easy for machines to categorize because their definitions are clear and distinct. Reportedly, the network had far more trouble with more abstract emotions such as ‘surprise’ or ‘confusion’. Below is a sample of the emotions depicted in the training videos; it is easy to see which ones might be difficult for an AI to grasp, since humans barely understand some of these emotions themselves.
Specific emotions in the videos:
– Aesthetic appreciation
– Empathetic pain
– Sexual desire
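To make the categorization step concrete, here is a minimal sketch of how an image (or video frame) can be scored against a fixed set of emotion categories. This is an illustrative toy, not EmoNet's actual architecture, which the article does not detail: it trains a single softmax layer on random stand-in features, where a real system would use a deep network on real frames. All names, sizes, and data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four of the 20 emotion categories, as stand-ins.
EMOTIONS = ["anger", "happiness", "surprise", "confusion"]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-in for video frames: 100 samples of 64 pixel features each,
# with randomly assigned emotion labels.
X = rng.normal(size=(100, 64))
y = rng.integers(0, len(EMOTIONS), size=100)

# One linear layer trained with cross-entropy via gradient descent --
# a toy version of the final classification layer of a deep network.
W = np.zeros((64, len(EMOTIONS)))
b = np.zeros(len(EMOTIONS))
for _ in range(200):
    grad = softmax(X @ W + b)
    grad[np.arange(len(y)), y] -= 1.0     # dL/dlogits for cross-entropy
    W -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean(axis=0)

# Each image gets a probability per emotion; the highest wins.
pred = softmax(X @ W + b).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

The key idea is only that the network's output is a probability distribution over a fixed list of emotion labels, which is exactly why clearly separable categories like ‘anger’ are easier than fuzzy ones like ‘confusion’: the latter overlap heavily in the training signal.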
After the videos were shown to the neural network, human volunteers were brought in, and their brain activity was measured while they viewed emotion-evoking images from the videos. This improved the model drastically: the brain data revealed more concrete differences among the more abstract emotions.
Each image is categorized as a particular emotion.
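One common way to relate a model's emotion scores to measured brain activity (an assumption on my part; the article does not say which method the researchers used) is a simple linear mapping fit by ridge regression: if the model's features predict brain responses well, the two representations plausibly align. Everything below, including the data shapes and variable names, is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: the network's 20 emotion scores for 50 images,
# and simulated brain responses at 10 measurement sites for those images.
model_features = rng.normal(size=(50, 20))
true_map = rng.normal(size=(20, 10))
brain_responses = model_features @ true_map + 0.1 * rng.normal(size=(50, 10))

# Ridge regression, closed form: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
X, Y = model_features, brain_responses
W = np.linalg.solve(X.T @ X + lam * np.eye(20), X.T @ Y)

# How much of the brain-response variance the model's features explain.
predicted = X @ W
r2 = 1 - ((Y - predicted) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()
print(f"variance explained (R^2): {r2:.2f}")
```

A high variance-explained score would indicate that the categories the network learned line up with distinctions the brain actually makes, which is the kind of evidence that could sharpen the model's handling of abstract emotions.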
REAL-WORLD USES AND DEPLOYMENT PROBLEMS
Products like this have been touted as potentially life-saving for people with mental health conditions, since they could help doctors reach a diagnosis sooner. Of course, critics point to the many potential problems with trying to give machines these kinds of capabilities.
The biggest problem is that human emotional states are too complex for an AI without actual intelligence to infer. Systems like this assume that a person's emotional state can be read directly from their expression and facial movements. Assumptions like these can influence everything from judgments in legal cases to the diagnosis of mental illness, and there are real consequences for misreading them. How people express emotions such as anger or fear varies widely, not only across cultures but across individuals. And someone with an angry look on their face is almost never feeling just anger; they are feeling many other emotions, some of which they may not even be aware of.
The bottom line is that you cannot accurately gauge what a person is feeling from facial expressions or tone of voice alone. A smile can communicate far more, or far less, than mere happiness, and likewise a frown can come from many sources besides sadness. An AI without actual intelligence will never accurately gauge human emotions, which means any attempt to use AI for emotionally aware tasks, such as diagnosing mental disorders, will yield inaccurate results. That is dangerous.
The only way such an AI might work is if it possessed not only human-level general intelligence but also the ability to read other humans' emotions with human accuracy. A digital emulation of a human brain might pull it off, but we are far from accurately mapping all the connections in the human brain. Until we develop far more advanced AIs, they will not be of much value in identifying human emotions.