THE BLACK BOX PROBLEM


A PROBLEM OF UNDERSTANDING

Artificial intelligence is a large part of daily life for computer and smartphone users. We rely on algorithms every day to perform various tasks quickly and efficiently. It is imperative that these algorithms work properly and that we understand how they work, because a deep understanding of what is going on is necessary for further progress. However, when we use AI to tackle larger and more complicated problems we run into a “black box problem” that makes it difficult to explain what is going on under the hood.

It is a serious problem, but at the moment it is largely limited to very large deep learning models and neural networks. These networks break a problem down into millions or even billions of pieces and then reassemble them step by step to solve it. Because the human brain doesn’t work that way, we have no good way of knowing what exactly the algorithm is doing or what methods it is using. This has been called the “black box problem” because the AI behaves like a sealed box with no way of looking inside. That not only prevents us from gaining the deep insight required to tweak these algorithms, it also causes all kinds of issues with trust. Trusting the AI will become more and more important as it starts playing an even larger role in our lives than it already does.

[Image: a black box] It can be difficult to understand where the output really comes from.

DECISIONS WITH HIGHER STAKES

Various experts have pointed out that these algorithms tend to work by deriving a simple understanding from massive amounts of data, while our brains tend to do the opposite. It is worth noting, however, that a small percentage of humans are capable of thinking like these algorithms, and they tend to suffer from a similar black box problem. This is beyond our current scope, but further psychological study of these individuals may turn out to be very useful in the future.

The bigger issue with deep learning is that it works through raw calculation and can reach a correct conclusion without using steps that we would view as “logical”. On top of that, when a deep learning program performs 200 million calculations to reach that conclusion, we have no way of dissecting how it got from point A to point B.
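To make that concrete, here is a minimal sketch (in Python with NumPy, using made-up layer sizes purely for illustration) of why those calculations resist inspection: the output is just the end of a long chain of multiply-adds and nonlinearities, and no intermediate value corresponds to a step a person would recognize as reasoning.

```python
import numpy as np

# A toy "deep" network: 50 layers of 512 x 512 weights, i.e. roughly
# 13 million multiply-adds per forward pass. The sizes are invented for
# illustration; real models are orders of magnitude larger.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((512, 512)) * np.sqrt(2.0 / 512) for _ in range(50)]

def forward(x):
    for w in layers:
        x = np.maximum(w @ x, 0.0)  # matrix multiply + ReLU, repeated 50 times
    return x

y = forward(rng.standard_normal(512))
# Every intermediate quantity is just a vector of numbers. There is no point
# in this chain where one can say "here is the step where the model decided X",
# which is exactly the black box problem.
print(y[:5])
```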

Some critics say that since we don’t really understand how humans reason either, and since the algorithms seem to work as intended, there is no real problem. The obvious issue here is that higher-stakes decisions, such as those in medical or military applications, require more transparency and call for a deeper understanding of what is going on. Data scientists are constantly making trade-offs between predictive power and explainability, and there is no good way to draw that line when the stakes are high. Therefore, some kind of method that allows us to gain insight into the black box is necessary.
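As a hypothetical illustration of that trade-off (the dataset and the two models below are arbitrary choices, not anything prescribed here), a shallow decision tree can be printed and audited rule by rule, while a larger ensemble usually predicts somewhat better but offers no comparable readable explanation:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable: a depth-3 tree whose entire decision logic fits on one screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Higher capacity: a 300-tree forest that typically scores better,
# but has no human-readable summary of how it decides.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("tree accuracy:  ", tree.score(X_test, y_test))
print("forest accuracy:", forest.score(X_test, y_test))
print(export_text(tree))  # the full audit trail; nothing equivalent exists for the forest
```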

ISSUES WITH TRUST

Most of the time the black box problem only presents itself in large-scale projects taken on by tech giants like Google or Amazon. Unfortunately, those are exactly the situations in which it is most imperative to be able to glance inside the box. AI is based on statistical thinking, which, in a lot of ways, is the art of uncertainty.

People may not trust the machine because they can’t see how it can have an accurate understanding of the problem at hand. And no one wants to be told by a machine that their thinking is biased or that they are wrong without having the “why” explained to them. Fortunately, steps are being taken to address the black box problem and, in turn, solve the underlying issues of trust.

NEW METHOD ATTEMPTS TO PEEK INSIDE THE BLACK BOX

Computer scientists at the University of Maryland have been attempting to develop a new method for peeking inside this mysterious, metaphorical black box. The researchers reduce the size of the input and compare the resulting output with the output that came from the original, larger input.

Many algorithms are programmed to output an answer even if they don’t know one or if the answer is nonsense. In certain cases researchers have been able to get these applications to derive a relevant answer from a one-word inquiry, which allows more insight into the algorithm’s decision-making process. Providing multiple very small inputs, as opposed to one larger one, can inadvertently reveal how the algorithm reacts to specific language.

In one case a machine learning algorithm that analyzed photos was asked what color the flower in the photo was, and it correctly responded with “yellow”. The researchers then trimmed their question from “what color is the flower?” down to “flower?”, and the program once again answered correctly with “yellow”. This demonstrates that even nonsensical inputs can yield correct answers, which reveals more about the algorithm than was previously known. Because many algorithms will force themselves to provide an answer regardless of whether the input or data is adequate, this method might allow programmers to design novel algorithms that are aware of their own limitations, opening the floodgates for an even deeper level of understanding than ever before.
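A rough sketch of that input-reduction idea is below. The `answer(question, image)` callable is a hypothetical stand-in for whatever visual-question-answering model is being probed, not a real API; the loop simply keeps deleting words from the question for as long as the model’s answer stays the same.

```python
def reduce_question(question, image, answer):
    """Greedily drop words from `question` while the model's answer is unchanged.

    `answer` is assumed to be a callable taking (question_text, image) and
    returning the model's predicted answer string.
    """
    words = question.split()
    prediction = answer(" ".join(words), image)
    shrank = True
    while shrank and len(words) > 1:
        shrank = False
        for i in range(len(words)):
            candidate = words[:i] + words[i + 1:]
            if answer(" ".join(candidate), image) == prediction:
                words = candidate  # this word was not needed for the prediction
                shrank = True
                break
    return " ".join(words)

# e.g. "what color is the flower?" may shrink to just "flower?" (or even a
# single word) while the model keeps answering "yellow" the whole way down.
```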
