IMAGE RECOGNITION AIs AND THEIR FLAWS
It is well known that computers have long had an edge over humans at arithmetic and other well-defined calculations. One area where they have not excelled relative to humans, however, is image recognition. In some cases, all it takes for a program to call an apple a car is changing a handful of pixels. Even with recent advances in neural networks that loosely mimic the human brain, there are still easy ways to fool these systems.
Certain images can trick a computer into mistaking random scribbles for trains, fences, and even school buses. What is hard for computers is typically easy for humans, so researchers at Johns Hopkins University conducted a unique experiment. Most research in this field is about getting computers to think like people; this time, the question was whether humans can think like machines. Research like this can reveal just how closely these neural networks actually mimic the human brain.
Just by adding a layer of colored static, a computer can be fooled into thinking a panda is a gibbon.
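To make the "layer of colored static" idea concrete, here is a minimal sketch of the fast gradient sign method, the standard way such perturbations are generated. It uses a toy linear classifier with made-up weights and a four-"pixel" image (all values here are illustrative, not from the study): each pixel is nudged by a tiny amount in whichever direction increases the classifier's loss, and the prediction flips even though the image barely changes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, true_label, eps):
    """One-step fast gradient sign method: shift every pixel by +/- eps
    in the direction that increases the loss on the true label."""
    p = sigmoid(float(w @ x))
    # Gradient of the binary cross-entropy loss with respect to the pixels.
    grad = (p - true_label) * w
    return x + eps * np.sign(grad)

# Toy linear classifier and a toy 4-pixel "image" (illustrative values).
w = np.array([0.5, -0.3, 0.2, 0.1])
x = np.array([0.2, 0.1, 0.3, 0.4])

clean_pred = int(w @ x > 0)                        # 1 -- say, "panda"
x_adv = fgsm_perturb(x, w, true_label=1, eps=0.2)
adv_pred = int(w @ x_adv > 0)                      # 0 -- now "gibbon"
```

No pixel moves by more than 0.2, yet the classification flips; real attacks do the same against deep networks, with perturbations small enough to be invisible to people.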
The common assumption is that computers misidentify objects in ridiculous ways that humans never would. But how do we truly know humans can't make the same mistakes? Until now, that had never been tested. To find out, the Johns Hopkins researchers essentially asked people to "think like a machine." Machines have a very small vocabulary when it comes to naming images, so the researchers showed participants dozens of fooling images that had already tricked computers and gave them the same labeling options the machines had.
Participants were asked which of two labels the computer had assigned to each object: the computer's real conclusion or a random junk answer. For example, the researchers would present an image that seemed to be nothing more than a blob and ask whether the computer had labeled it a bagel or a pinwheel. People strongly agreed with the computer's conclusions, choosing the same option 75 percent of the time. Next, the researchers made the task harder: participants had to choose between the computer's favorite answer and its second-best guess, so both options were legitimate rather than one being random junk.
This time, people agreed with the machine's first choice 91 percent of the time. Even when the researchers changed the task again so that there were 48 possible labels for each picture, people still strongly agreed with the machines. In total, 1,800 people were tested in this manner.
The findings suggest that modern computers may not be as different from us as we think, and they demonstrate how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences, was one of the lead researchers on the experiment. His take on the results: "We found if you put a person in the same circumstance as a computer, suddenly the humans tend to agree with the machines. This is still a problem for artificial intelligence, but it's not like the computer is saying something completely unlike what a human would say."
In other words, more research is clearly needed, but what we do know is that even when neural networks make mistakes, they make very human-like mistakes. That ultimately means the technology is moving in the right direction. The issue left to solve is that of the fooling images: being able to fool these neural networks with junk images is a serious security problem. But that is a topic for another post.