HISTORY OF THE AI WORLD PART 3


    The 1980s were when AI really began to take off. The revival of neural networks around 1986 sparked a great deal of progress: at least four different groups independently reinvented the back-propagation learning algorithm, which had first been derived in 1969 by Bryson and Ho. Many connectionist models of intelligent systems were improved upon as well. These models were viewed as direct competitors both to the symbolic models of Newell and Simon and to the logicist approaches. It was at this point that the field of AI began to divide into smaller areas of study, and its history began to merge with what we now know as the modern artificial intelligence industry.
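The idea behind back-propagation can be illustrated with a deliberately tiny sketch: a single sigmoid neuron trained by gradient descent on one (input, target) pair. All numbers here are hypothetical and chosen only to show the chain-rule update that back-propagation generalizes to multi-layer networks; this is not any of the original groups' code.

```python
import math

def sigmoid(z: float) -> float:
    """Standard logistic activation."""
    return 1.0 / (1.0 + math.exp(-z))

def train(x: float, target: float, w: float = 0.0, b: float = 0.0,
          lr: float = 0.5, steps: int = 200) -> float:
    """Train one sigmoid neuron on a single example and return its prediction.

    Minimizes the squared error E = (y - target)^2 / 2 by gradient descent.
    """
    for _ in range(steps):
        y = sigmoid(w * x + b)               # forward pass
        # backward pass: error signal via the chain rule,
        # dE/dz = (y - target) * y * (1 - y) for the sigmoid output y
        delta = (y - target) * y * (1 - y)
        w -= lr * delta * x                  # gradient step for the weight
        b -= lr * delta                      # gradient step for the bias
    return sigmoid(w * x + b)

# After training, the prediction moves close to the target of 1.0.
print(round(train(1.0, 1.0), 2))
```

In a multi-layer network the same error signal is propagated backward through each layer, which is what gives the algorithm its name.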

    At this point it became common for researchers to build on existing theories rather than propose new ones, and the industry as a whole had to focus on producing results in the real world. The scientific method became more and more relevant, to the point where AI had to embrace fields like control theory and statistics, despite having been founded, in part, to escape the limitations of those very fields. A new industry called data mining even emerged as a consequence of this merging.


The Bayesian network, invented to allow efficient representation of, and reasoning with, uncertain knowledge, was another major breakthrough of the 1980s. It solved many of the problems that had plagued uncertain reasoning in the 1960s and 1970s: it allowed systems to learn from experience directly, and it combined the strengths of classical symbolic AI and neural networks. A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph. For example, a Bayesian network could depict the probabilistic relationships between diseases and symptoms, or between a set of rules and a desired outcome. Today Bayesian networks remain a leading approach to medical diagnosis and other reasoning under uncertainty, and some hope that, given the right dataset and a sophisticated enough algorithm, models like these could one day help cure cancer.
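The disease–symptom example above can be sketched as a two-node network: a directed edge from a disease variable to a symptom variable, with inference by brute-force enumeration of the joint distribution. The variable names and probabilities below are hypothetical, chosen only to illustrate the mechanics.

```python
# Conditional probability tables (CPTs) for a two-node DAG: Flu -> Fever.
# All numbers are made up for illustration.
P_FLU = 0.10                      # prior: P(Flu = true)
P_FEVER_GIVEN = {True: 0.90,      # P(Fever = true | Flu = true)
                 False: 0.20}     # P(Fever = true | Flu = false)

def joint(flu: bool, fever: bool) -> float:
    """Joint probability factorized along the DAG: P(Flu) * P(Fever | Flu)."""
    p_flu = P_FLU if flu else 1 - P_FLU
    p_fever = P_FEVER_GIVEN[flu] if fever else 1 - P_FEVER_GIVEN[flu]
    return p_flu * p_fever

def p_flu_given_fever() -> float:
    """Posterior P(Flu = true | Fever = true) by enumerating the joint."""
    numerator = joint(True, True)
    evidence = joint(True, True) + joint(False, True)
    return numerator / evidence

print(round(p_flu_given_fever(), 3))  # 0.09 / 0.27 = 1/3, prints 0.333
```

The payoff of the network structure is that the joint distribution factorizes into small local tables, so a network over many variables never needs one giant table over all of them.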


By the mid-1990s AI researchers had begun looking at problems related to the "whole agent": they wanted to create an intelligent program that excelled at multiple unrelated tasks given enough data. Until the early 2000s, however, they did not have access to large enough datasets to do this. In trying to build a complete agent, researchers also realized that certain subfields of AI needed to be reorganized so their results could be tied together. For example, sensory systems like vision and speech recognition cannot deliver reliable descriptions of the environment without some kind of reasoning or planning system to handle decision making under uncertainty. During this time the idea of human-level AI was also being discussed, despite the obvious limitations.

    They figured out that, in addition to all the other problems, they would also need far vaster knowledge bases, and a lot of brainstorming took place over where these datasets would come from. A related subfield of AI, called Artificial General Intelligence (AGI), was founded around this time, though it did not hold its first conference until 2008. Present at that conference were even some people who had attended the 1956 Dartmouth conference, such as Ray Solomonoff, who was not satisfied with the amount of progress made since the 1950s. History was made once again.


    Cultivating extremely large datasets was the main focus of the early 2000s. Because researchers earlier in AI's history had been plagued by a shortage of data, it was theorized that many issues would be solved by simply adding more of it. In part this was right, but it proved not to be quite that simple. As soon as these datasets became available, it became clear that researchers had to be pickier about the learning algorithms they used. They also found that the problem of expressing all the knowledge a system needs can often be solved by learning methods rather than by hard-coding the data into the system. This sparked a debate between machine-learning researchers and knowledge engineers that still rages on today.


    Self-driving cars also took off in the early 2000s, when a robotic car named STANLEY sped through the Mojave Desert at a whopping 22 miles per hour. It was a Volkswagen vehicle equipped with cameras, radar, and laser rangefinders to sense the environment, and with onboard software to command the steering, braking, and acceleration. Two years later another self-driving car, named BOSS, won the Urban Challenge by driving safely in traffic through the streets of a closed Air Force base, obeying the rules of the road and avoiding pedestrians and other vehicles. More recently, Google developed a self-driving car that navigated public roads throughout the US for months without any major incidents.


All of the progress that has already been made sets the stage for a fast-paced future in which intelligent programs are a very big part of our lives. Advances in artificial intelligence subfields like speech recognition, autonomous planning and scheduling, logistics, game playing, spam fighting, and many more are the direct result of all the progress that has taken place since that fateful Dartmouth conference in 1956. That conference had a major influence on how the history of AI evolved into the industry we know today. For further reading, check out my mini eBook "AI and the New World," which you get for free when you subscribe to my email list. The first section of the book goes more in depth into what our future will really look like as a result of all this progress.
