HISTORY OF THE AI WORLD PART 2


    The early years of AI history were filled with successes. In the 1950s, computers were primitive and seen as machines that could do arithmetic and nothing more. Two leading researchers from Carnegie Tech, Allen Newell and Herbert Simon, created the General Problem Solver, or GPS. This program was designed to imitate human problem-solving protocols. The order in which GPS considered goals, subgoals, and possible actions turned out to be very similar to the way humans approach problems. It was more than likely the first program to demonstrate that machines can think “humanly,” at least in a limited way.


    Meanwhile, at IBM, Nathaniel Rochester and his colleagues produced a handful of AI programs that were considered breakthroughs at the time. Herbert Gelernter created the Geometry Theorem Prover, which was able to prove theorems that many mathematics students found quite difficult. Then there was Arthur Samuel, who wrote checkers programs that ended up playing at a strong amateur level. His program beat him multiple times, which showed that computers were capable of much more than most people believed.


    Given their successes, these early AI researchers made many bold predictions for the future. A famous quote by Herbert Simon from 1957 highlights this: “It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

    When Simon referred to the “visible future,” it is clear that he and his colleagues believed the technology would hit even greater milestones in the years to come. Simon also predicted that within ten years of that quote a computer would be world chess champion and that significant mathematical theorems would be proven by machines.

    These two predictions ended up coming true in roughly forty years instead of ten, which highlights Simon’s overconfidence despite his genuine success in AI. For all the breakthroughs, the early researchers struggled with a few big problems. Whenever they tried their programs on a wider range of problems, the programs seemed to fail miserably. For example, a chess program could play brilliantly without knowing what chess is or realizing it was playing a game, but it could not do anything outside of chess.


    One of the biggest problems was that many early programs knew nothing about their subject matter; they succeeded only by means of simple syntactic manipulations. This problem arose in the early machine translation efforts, begun in 1957, that were supposed to speed up the translation of Russian scientific papers after the Sputnik launch. It was believed at the time that simple syntactic transformations based on the grammars of English and Russian would be enough to extract the meaning of sentences. Researchers realized only after the fact that real translation requires serious background knowledge in order to accurately convey the content of a sentence. Machine translation has improved enormously since then, but it still has a long way to go.

    Another notable difficulty was that most of these AI programs solved problems by merely trying out different combinations of steps until a solution was found. Initially this was fine, because the problems given to these computers were small and had few possible combinations. But when harder problems were presented, the issue became much more pronounced. Most believed at the time that all they had to do was use faster hardware so the computer could try more possible solutions in a shorter amount of time. However, this did not address the underlying cause of the problem: the number of combinations grows exponentially with problem size, so no realistic hardware speedup can keep pace. Knowledge-based systems were created in the late 1960s specifically to address these issues.
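The combinatorial explosion behind this failure is easy to see with a back-of-the-envelope calculation. The sketch below (an illustration written for this post, not taken from any early AI program) counts how many positions a blind, try-everything search must examine as it looks further ahead; the branching factor of 35 is the commonly cited average number of legal moves in a chess position.

```python
# Toy illustration of combinatorial explosion in blind search:
# count the nodes a brute-force search visits up to a given depth.

def brute_force_nodes(branching_factor: int, depth: int) -> int:
    """Total positions examined: b^0 + b^1 + ... + b^depth."""
    return sum(branching_factor ** d for d in range(depth + 1))

# With ~35 legal moves per chess position, each extra ply of
# lookahead multiplies the work by ~35; by depth 6 the search
# already faces billions of positions.
for depth in (2, 4, 6):
    print(depth, brute_force_nodes(35, depth))
```

No amount of "just use faster hardware" helps for long: each additional ply multiplies the cost by the branching factor, which is exactly the point the early researchers missed.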


    Early problem solving from the first decade or so of AI research relied mainly on general-purpose search, attempting to string elementary reasoning steps together to find a solution. These approaches were criticized as weak methods because they did not scale to more complex problems. The alternative came in the form of the knowledge-based system, which draws on a large body of domain-specific knowledge from which stronger reasoning can be applied, thereby solving higher-level problems.

    The first program of this kind was DENDRAL, developed by a team of AI researchers at Stanford in 1969. The program was tasked with inferring molecular structure from the information provided by a mass spectrometer, and it had a bank of chemical knowledge on which to base its reasoning and problem-solving steps.

    This was a significant step in the right direction, because DENDRAL was the first successful knowledge-intensive system: its expertise was encoded in large numbers of special-purpose rules. It generated all possible molecular structures consistent with a given formula and predicted what mass spectrum would be observed for each. Many later systems adopted the same approach.
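The generate-and-test pattern that DENDRAL made famous can be sketched in a few lines. The example below is a toy written for this post, not the actual DENDRAL program: it enumerates candidate C/H/O compositions for a target integer mass (the "generate" step) and then prunes them with a single stand-in chemistry rule (the "test" step); real DENDRAL reasoned over full structures and spectra, not just formulas.

```python
# Toy generate-and-test in the spirit of DENDRAL (illustrative only):
# generate every candidate composition, then let a domain rule prune.

from itertools import product

def generate_candidates(target_mass: int, max_atoms: int = 10):
    """Generate step: every (C, H, O) count whose integer mass
    (C=12, H=1, O=16) matches the target."""
    for c, h, o in product(range(max_atoms + 1), repeat=3):
        if c * 12 + h * 1 + o * 16 == target_mass:
            yield {"C": c, "H": h, "O": o}

def chemically_plausible(counts: dict) -> bool:
    """Test step: a stand-in domain rule. Carbon's four bonds limit
    how many hydrogens a CxHyOz formula can carry (H <= 2C + 2)."""
    return counts["C"] > 0 and counts["H"] <= 2 * counts["C"] + 2

# Mass 46 admits several raw candidates; the rule keeps only the
# plausible ones, such as C2H6O (ethanol's formula).
survivors = [c for c in generate_candidates(46) if chemically_plausible(c)]
for s in survivors:
    print(s)
```

The design point is that the generator is dumb and exhaustive, while all the intelligence lives in the rules that reject implausible candidates, which is why adding more domain knowledge directly made such systems stronger.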


    Fast forward to the 1980s, and the progress from the successes of the 1950s through the knowledge-based systems of the late 1960s all comes together. This period saw the first successful commercial expert systems, which caused the entire industry to boom from a few million dollars to billions of dollars. The mid-1980s also saw the return of neural networks, and AI successfully adopted the scientific method. All of this will be covered in the next post in this series.
