The Myths of Artificial Intelligence

by Attila Narin

December 1993



Every decade technology and science provide us with new keywords like ‘virtual reality’, ‘fuzzy logic’ and ‘artificial intelligence’. Among these, ‘artificial intelligence’, the idea of making computers and machinery think, learn and even correct their own mistakes just as human beings do, is a concept that has brought about countless discussions, disagreements, arguments, misunderstandings and false hopes. Myths and fiction influence people who regard the computer as an almighty tool. Since these people usually do not know much about computers and algorithms, it is necessary to present what is really happening behind the scenes of this scientific discipline. The fact that even experts are split into two schools of thought does not make it easier to discuss the real essence of artificial intelligence. Some believe in artificial intelligence and are convinced that it will soon exist. Others argue against it and regard it as impossible to make computers act intelligently. These disagreements stem from different points of view and different definitions of intelligence. Considering the true and deep meaning of intelligence, it becomes clear that computers can never act intelligently like human beings.

Defining Intelligence

To understand the differing beliefs of experts in this field, it is essential to briefly discuss the definitions of intelligence. In the English language, intelligence can also refer to a large collection of data, a compilation of knowledge, as it is maintained by the CIA (Central Intelligence Agency), for example. This definition might justify the point of view of experts arguing for artificial intelligence.

A typical statement by a critic of artificial intelligence is as follows: “[Roger] Penrose does not believe that true intelligence can be presented without consciousness and, hence, that intelligence can never be produced by any algorithm that is executed on a computer.” (Noyes 535)

Binet, the psychologist who developed the IQ test, made the statement: “Intelligence is what intelligence tests measure.” (Weizenbaum, Wer erfindet 74) Despite the obvious irony, Binet gave the correct answer, since this test has become the main criterion for measuring intelligence in our society. Thus the meaning of intelligence has drifted away from its original sense and lost much of its value. Today advanced computer systems are capable of successfully passing such intelligence tests.

Another interesting definition is cited by Douglas Hofstadter:

It is interesting that nowadays, practically no one feels that sense of awe any longer - even when computers perform operations that are incredibly more sophisticated than those which sent thrills down spines in the early days. [..] There is a related “Theorem” about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of “real thinking”. The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This “Theorem” was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: “AI is whatever hasn’t been done yet.” (Gödel 601)

Briefly discussing the etymological background of the word “intelligence” will lead to the explanation and understanding of its real meaning. The word “intelligence” is derived from the Latin word “intellegere”: to understand, to perceive, to recognize and to realize. Dividing the word into its two parts will reveal further details. “Legere”, the second part of the word, by itself means to select, to choose and to gather. The first part of the word comes from the prefix “inter-”, which generally means “between”. Interpreting the combination of these parts indicates that intelligence is the ability to establish abstract links between details that do not necessarily have any obvious relationship.

What is the real and original definition of intelligence? Intelligence must not be understood as only the ability to solve problems. Knowing all facts and rules, and having access to every piece of information, is not sufficient to provide intelligence. The essential part of intelligence, as the Latin word suggests, is the ability to look beyond the simple facts and givens, to understand their connections and dependencies, and thus to be able to derive new abstract ideas. A human being does not use intelligence only to solve problems; that is merely one field where intelligence is applied. Intelligence is used to coordinate and master a life; it is reflected in our behavior, and it motivates us to achieve our aims, which are mainly devised by our intelligence as well.

What Makes the Computer Intelligent?

When people refer to a computer program or algorithm as being intelligent, what is it that makes it appear so? There are certain qualities of the computer that make its actions or responses seem intelligent. Most essentially, a computer is much faster than the human brain when it comes to searching data, number crunching, playing a game, applying rules or finding general solutions to a problem. A computer appears to be intelligent, because only meaningful responses or solutions to a specific question are filtered out and displayed. Due to the speed, it seems as if the algorithm is not even considering any obviously wrong attempts. That, of course, is not true. In most cases the program will consider every possibility, even those that are destined to fail right from the beginning. The essential keyword here is backtracking, meaning that the algorithm, when searching for a solution, will go back and take every possibility into consideration and thus perform an exhaustive search (Bratko 22).
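The exhaustive, backtracking style of search described above can be sketched in a few lines. The following Python fragment is a hypothetical illustration, not taken from the cited sources: it searches for a way to place N non-attacking queens on an N x N chessboard, considering every possibility and going back whenever a partial placement fails.

```python
def safe(cols, col):
    """Check that a queen in the next row at `col` attacks none placed so far."""
    row = len(cols)
    return all(c != col and abs(c - col) != abs(r - row)
               for r, c in enumerate(cols))

def queens(n, cols=()):
    """Return one solution as a tuple of column indices, or None."""
    if len(cols) == n:
        return cols
    for col in range(n):                     # consider every possibility ...
        if safe(cols, col):
            result = queens(n, cols + (col,))
            if result is not None:
                return result
    return None                              # ... and backtrack on failure

print(queens(4))  # prints (1, 3, 0, 2)
```

Even this tiny search tries many placements that are destined to fail; only the final, meaningful answer is displayed, which is part of what makes the program appear intelligent.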

Besides the speed, there are other methods of making the computer seem intelligent. There are algorithms that dissect a certain task into a number of subtasks when the task itself is too complex to be solved right away. These subtasks may in turn be split into subtasks of their own, until eventually the solutions to all the subtasks can be found and the solution to the initial task can be assembled. This method is referred to as recursion. (Hofstadter, Metamagikum)
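As a sketch of this idea (a hypothetical Python illustration, not from the sources cited here), the classic Towers of Hanoi puzzle is solved by splitting the task of moving n disks into two smaller subtasks of moving n - 1 disks:

```python
def hanoi(n, src, dst, spare):
    """Move n disks from src to dst: solve two smaller subtasks around one move."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, spare, dst)     # subtask: clear the way
            + [(src, dst)]                    # move the largest disk
            + hanoi(n - 1, spare, dst, src))  # subtask: rebuild on top

print(hanoi(3, 'A', 'C', 'B'))  # seven moves, starting and ending with ('A', 'C')
```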

In this context it is appropriate to introduce so-called heuristics, “popularly known as rules of thumb, educated guesses, intuitive judgments or simply common sense. In more precise terms, heuristics stand for strategies using readily accessible though loosely applicable information to control problem-solving processes in human beings and machine.” (Pearl, vii) Heuristics are usually used when time is essential and when a solution is still acceptable, even though it is not necessarily the optimum.
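A minimal illustration of a heuristic, assuming a simple travelling-salesman setting (the cities below are invented): instead of exhaustively trying all routes, a greedy ‘nearest neighbour’ rule quickly produces a route that is acceptable but not necessarily optimal.

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy heuristic: always visit the closest unvisited city next.

    Fast and usually acceptable, but with no guarantee of optimality.
    """
    tour = [0]
    unvisited = set(range(1, len(cities)))
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbour_tour([(0, 0), (5, 5), (1, 0), (1, 1)]))  # prints [0, 2, 3, 1]
```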

A common way to represent data for an application that involves primarily symbolic processing is to use expert systems. An expert system includes the knowledge of one or more experts. If the information does not come from experts, the data collection is referred to as a knowledge-based system. Besides containing data, an expert system also utilizes a so-called ‘inference machine’ to derive answers based on rules in the information collection. (Noyes 13)
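The rule-based derivation of answers can be sketched as a toy forward-chaining inference loop; the rules and facts below are invented purely for illustration:

```python
# Each rule maps a set of premises to a conclusion.
RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

def infer(facts, rules):
    """Forward chaining: apply rules repeatedly until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_feathers", "cannot_fly"}, RULES))
```

The program derives ‘is_bird’ and then ‘is_flightless_bird’; the chain of rule applications is what the literature calls inference.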

Another approach to providing the computer with intelligence is implementing neural networks. Following Kruse (and most other literature on neural networks), the idea can be summarized as follows: a neural network is supposed to resemble parts of the human brain. It consists of neurons that are grouped into layers, and combinations of layers make up the network. Each neuron can influence the output of other neurons to a certain degree. The idea is to feed an input vector into the network; it propagates through the layers while the input pattern is gradually transformed into the output vector. By comparing the output vector with the desired result, the degree to which one neuron influences another can be adjusted according to the error of the result. That resembles the process of ‘learning’. The innovation brought about by neural networks is that a correct output can be obtained even when the input vector is ambiguous or not perfectly correct. However, a mathematician would agree that propagating information through a neural network is nothing but performing matrix computations.
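That last point can be made concrete. The sketch below propagates a vector through two layers with hand-picked, purely illustrative weights; it shows that the network is nothing but repeated matrix-vector products passed through a squashing function:

```python
import math

def forward(vec, layers):
    """Propagate an input vector through weight matrices: just matrix maths."""
    for matrix in layers:
        # matrix-vector product, then a sigmoid 'activation' per neuron
        vec = [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, vec))))
               for row in matrix]
    return vec

# two layers: 2 inputs -> 2 hidden neurons -> 1 output neuron
layers = [[[4.0, 4.0], [-4.0, -4.0]],   # hypothetical, hand-picked weights
          [[6.0, 6.0]]]
print(forward([1.0, 0.0], layers))       # a single output value close to 1
```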

So where is the intelligence coming from? Since an algorithm is usually developed for a specific purpose, it is not very difficult for the programmer to foresee and consider all possible situations and demands on the program. The algorithm can be written in such a manner that it will react correctly to every possible situation. Exactly this is meant when referring to a program as being intelligent. Obviously, it is the programmer who makes the computer seem intelligent.

Applications of Intelligent Algorithms

In order to understand the necessity for so-called intelligent algorithms, it is helpful to know the various domains in which such algorithms are used. This section gives a rough overview by summarizing the possible fields of application (found in Hofstadter, Gödel 601) and provides a very brief algorithmic background.

Mechanical translation: This process involves translating text from one language into another. There are direct and indirect ways to do this. The former involves looking up the words in a dictionary and performing minor word rearrangements. The latter performs the translation via an intermediary internal language.
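The direct method can be caricatured in a few lines; the tiny German-to-English dictionary below is invented for illustration, and the absence of any grammar handling shows why the approach is so crude:

```python
# A toy 'direct' translator: word-for-word dictionary lookup.
DICTIONARY = {"der": "the", "hund": "dog", "schläft": "sleeps"}

def translate(sentence):
    """Look up each word; mark unknown words instead of guessing."""
    return " ".join(DICTIONARY.get(word, f"[{word}?]")
                    for word in sentence.lower().split())

print(translate("Der Hund schläft"))  # prints: the dog sleeps
```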

Game playing: One of the most famous applications of intelligent algorithms is making the computer play games like checkers, go, bridge, poker and tic-tac-toe, to mention a few. The most popular example is probably chess. There are different methods of approaching the problem. Searching from the start for a perfect line of play that wins the game will exhaust most computers, especially in complex games like chess; for the optimal answer the algorithm would have to perform an exhaustive search utilizing the principle of backtracking. Therefore programmers use heuristics to improve performance and to prune possibilities that will not lead to the desired result.
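A sketch of such a game-playing search, using the very simple game of Nim (n sticks, each player removes one to three, whoever takes the last stick wins): the minimax-style search below examines every line of play, which is feasible only because the game is tiny.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(sticks):
    """True if the player to move can force a win from this position."""
    if sticks == 0:
        return False          # the previous player took the last stick and won
    # try every move; the position is won if some move leaves a lost one
    return any(not wins(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

print([n for n in range(1, 13) if not wins(n)])  # prints [4, 8, 12]
```

The losing positions are the multiples of four; in chess, by contrast, no such exhaustive analysis is feasible, which is exactly why heuristic pruning is needed.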

Applications in mathematics: This field involves proving theorems and symbolically manipulating mathematical expressions. This process is based on performing searches according to rules in expert systems, deriving one expression from the previous one until the correct path to the desired result is found. It also utilizes problem reduction, dividing one task into several subtasks, as discussed earlier.
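Problem reduction in symbolic mathematics can be sketched as recursive differentiation: the derivative of an expression is assembled from the derivatives of its subexpressions. The tuple representation below is invented purely for illustration.

```python
# Expressions are nested tuples: ('+', ('*', 'x', 'x'), 'x') stands for x*x + x.

def d(expr, var):
    """Differentiate expr with respect to var by reducing to subexpressions."""
    if expr == var:
        return 1
    if not isinstance(expr, tuple):    # a constant or another variable
        return 0
    op, a, b = expr
    if op == '+':                      # sum rule: (a + b)' = a' + b'
        return ('+', d(a, var), d(b, var))
    if op == '*':                      # product rule: (ab)' = a'b + ab'
        return ('+', ('*', d(a, var), b), ('*', a, d(b, var)))
    raise ValueError(op)

print(d(('+', ('*', 'x', 'x'), 'x'), 'x'))
```

The unsimplified result, ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 1), corresponds to 1*x + x*1 + 1; simplifying it to 2x + 1 would be a further symbolic-manipulation step of the same kind.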

Computer vision: Computer vision is mainly concerned with recognizing characters or pictorial elements. Especially when processing characters, programmers in this field usually utilize neural networks to provide the necessary tolerance, since characters written by humans always deviate from one another to some degree. When recognizing pictorial elements, for example the face of a person, similar methods are applied.

Computer hearing: Here the computer is to recognize spoken words from a limited vocabulary (for example the names of the ten digits) or to understand continuous speech in fixed domains. As in the field of computer vision, programmers try to provide fault tolerance, for example with neural networks.

Understanding and producing natural languages: The aim here is to make the computer capable of parsing complex sentences and producing text. In addition, efforts are made to implement algorithms that paraphrase longer pieces of text, resolve ambiguous references, or use knowledge of the real world to understand complete passages. All these applications require extensive syntax descriptions of the natural language and a broad vocabulary. Once implemented, these algorithms are rather static, since the program itself usually cannot be influenced during runtime. Probably the best-known program developed in this field is Eliza, by Joseph Weizenbaum. This program conducts a rather simplified psychiatric interview. It is written in such a manner that the computer does not need much information about the outside world in order to participate in a meaningful conversation. Although unintended, it made many people believe that the computer was a real doctor and an intelligent being.
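The style of processing behind a program like Eliza can be hinted at with a few lines of keyword spotting; the keywords and canned replies below are invented for illustration, not Weizenbaum's actual script:

```python
# Keyword spotting with canned templates: no understanding of the world at all.
RESPONSES = {
    "mother": "Tell me more about your family.",
    "always": "Can you think of a specific example?",
    "sad":    "I am sorry to hear you are sad.",
}
DEFAULT = "Please go on."

def reply(sentence):
    """Return the canned response for the first keyword found, else a default."""
    for keyword, response in RESPONSES.items():
        if keyword in sentence.lower():
            return response
    return DEFAULT

print(reply("My mother is kind"))    # prints: Tell me more about your family.
print(reply("The weather is nice"))  # prints: Please go on.
```

That so shallow a mechanism could convince people they were talking to a doctor illustrates how easily the appearance of intelligence is produced.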

Creating original thoughts or works of art: In this discipline, the computer is to write poetry, write stories, create computer art, or compose music. These processes involve randomly choosing words or elements out of a data collection or vocabulary and matching specific syntactic patterns by choosing the correct type of word or tone for a certain position in a sentence, poem or piece of music.

Analogical thinking: This is the field in which computers are made to pass the intelligence tests mentioned before. It is the attempt to recognize geometrical shapes and to infer the rules hidden within the images. It also involves constructing proofs in one domain of mathematics based on those in related domains. In the former application, objects need to be recognized in more than one picture, and the relationships of the objects to each other need to be examined. The latter involves mimicking the way in which an existing theorem has been proven and coming up with a new proof in the new domain.

Learning: The most important keyword here is ‘adjusting parameters’. Adjusting parameters refers to the way, for example, neural networks are trained: applying different learning functions, fine-tuning thresholds, changing the degree to which one neuron influences another, etc.
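As a minimal sketch of ‘adjusting parameters’, a single artificial neuron can be trained to behave like a logical AND gate by nudging its weights in proportion to the output error (a perceptron-style toy example, invented for illustration):

```python
def train_and_gate(epochs=10):
    """Train a single neuron to act as a logical AND gate."""
    w = [0, 0]
    bias = 0
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - out       # 'learning' = adjusting the parameters
            w[0] += error * x1
            w[1] += error * x2
            bias += error
    return w, bias

w, bias = train_and_gate()
print([1 if w[0] * a + w[1] * b + bias > 0 else 0
       for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # prints [0, 0, 0, 1]
```

Nothing here resembles human learning; the program merely shifts numbers until the errors on four fixed examples disappear.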

Fundamental Problems of Artificial Intelligence

In 1950, Alan Turing wrote a very prophetic and provocative article on artificial intelligence. It was entitled “Computing Machinery and Intelligence” and appeared in the journal Mind. The paper begins with the sentence: “I propose to consider the question ‘Can machines think?’” (Turing) In order to discuss these “loaded words” he introduces what he calls the “imitation game”; nowadays it is known as the Turing Test. The test is briefly summarized here.

A human interrogator is placed in a room from which he or she can communicate with two different entities in a separate room, one of them a person, the other a computer. The communication is established via terminals. The interrogator does not know which of the two is the computer. By asking questions of any type of either communication partner, the interrogator is to find out which is which. The computer is allowed to give slow or confusing answers in order to lead the interrogator to the wrong decision. If the interrogator fails to distinguish between human and computer, the computer is said to have passed the Turing Test. “No computer has yet passed this pragmatic test in its full generality, and some people think none ever will.” (Noyes 532)

To better visualize a very fundamental problem of the Turing Test and of artificial intelligence in general, this paragraph summarizes one of the best-known examples designed to demonstrate that a computer does not have the same understanding as a human being. This example was devised by John Searle (Searle) and is known as the Chinese Room argument. Suppose an English-speaking person who does not understand Chinese is locked inside a room that is completely secluded from the outside. Inside the room there are several containers of Chinese symbols, together with a rule book, written in English, that specifies the syntactic rules for how these symbols are to be manipulated. This process only requires recognizing a symbol sequence and matching it to one described in the book. The person is not provided with any semantic rules relating the Chinese symbols to any meaning in English. The person locked inside the room now receives a sequence of Chinese symbols, and the rule book contains instructions for passing another sequence of Chinese symbols back out of the room. To the people outside the room, the sequences of symbols are known as questions and answers, respectively. The rule book has been written so well that the responses of the English-speaking person are indistinguishable from those of a native speaker.

It is obvious that simply by manipulating the symbols the person inside the room can never understand or learn Chinese. A program inside a computer behaves in the same way as the person inside the room. The rule book corresponds to the instructions of a computer. It is impossible for the program to ever have more than a formal understanding of the symbols. Anybody who is not familiar with the details of the program is led to believe that the computer is acting intelligently. Only the designers of the program know exactly how the algorithm is going to respond.

Another fundamental problem of artificial intelligence is the acquisition of knowledge and data. In this context Joseph Weizenbaum has presented an excellent compilation of thoughts:

First (and least important), the ability of even the most advanced of currently existing computer systems to acquire information by means other than what [Roger C.] Schank called “being spoon-fed” is still extremely limited. [..]

Second, it is not obvious that all human knowledge is encodable in “information structures”, however complex. A human may know, for example, just what kind of emotional impact touching another person’s hand will have both on the other person and on himself. [..]

Third, and the hand-touching example will do here too, there are some things people come to know only as a consequence of having been treated as human beings by other human beings. [..]

Fourth, and finally, even the kinds of knowledge that appear superficially to be communicable from one human being to another in language alone are in fact not altogether so communicable. Claude Shannon showed that even in abstract information theory, the “information content” of a message is not a function of the message alone but depends crucially on the state of knowledge, on the expectations, of the receiver.

(Weizenbaum, Computer Power 208)

Deeper Issues of Artificial Intelligence

It is evident that there are deeper issues one needs to be concerned with when dealing with artificial intelligence. How can a computer acquire an understanding of abstract notions like feelings or intuition? How will it gain experience? What will motivate the computer to solve a problem or to act intelligently? How would mood influence the behavior of the computer and its algorithms? All these terms apply to human psychology and philosophy, but they have no counterpart at all in the computer. “There is an aspect to the human mind, the unconscious, that cannot be explained by the information-processing primitives, the elementary information processes, which we associate with formal thinking, calculation, and systematic rationality.” (Weizenbaum, Computer Power 223)

No doubt, computers are capable of making decisions, and they are capable of conducting a psychiatric conversation. They could, for instance, flip a coin in a much more sophisticated way than any human being would be able to do. But the question is whether they ought to be given such tasks. They may even arrive at correct decisions in some cases, but a human being should always question the output of a computer. Joseph Weizenbaum concludes his discussion of artificial intelligence with a similar thought:

There have been many debates on “Computers and Mind.” What I conclude here is that the relevant issues are neither technological nor even mathematical; they are ethical. They cannot be settled by asking questions beginning with “can.” The limits of the applicability of computers are ultimately stable only in terms of oughts. What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.

(Weizenbaum, Computer Power 227)


There is no way that a computer can ever exceed the state of simply being a data processor, dealing with zeros and ones, doing exactly what it is told to do. Electronic machinery will probably gain even more importance in the future, but it will never reach the point where a machine has a life or could be called intelligent. The most meaning the word intelligence can possibly carry in this context is the ability of computers to make decisions in very technical and simple matters so that they require less attention and involvement from humans. That is the trend in computer technology: more power but less interfacing with the user. Trivialities that a human being would decide in a well-defined fashion can be assigned to the computer for the sake of efficiency. Otherwise computers should only be used for their traditional strengths: number crunching, speed, accuracy and precision. Having computers make essential decisions that require wisdom, tact, intuition and experience is like giving kids a gun to play with.

What is it that artificial intelligence enthusiasts dream of achieving some day? Is the achievement of evolution, which has brought about the most sophisticated elements of life, and the natural way of reproduction not good enough? It must be an ancient motivation within humans that makes them strive to create intelligent machinery, since there have been numerous attempts throughout history. However, only in recent decades has there been the right tool to superficially mimic intelligence. Even if the technological and mathematical aspects of intelligence should ever be solved, there remains the question of why there is a need for intelligent machinery. Is not intelligence the essence of existing as a human being? There is no necessity to push our alienation from values and thinking to its limit. But who knows what technological madness mankind is destined to face in the future...

Annotated Bibliography

Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid.
New York: Vintage Books, 1979, Chapter XVIII.

In this chapter, Hofstadter introduces the reader to Alan Turing, one of the first persons involved in artificial intelligence. Turing devised a test to reveal the intelligence of a computer, called the Turing Test. This test and Turing’s thoughts are discussed. In addition, a brief history of AI is presented. Hofstadter lists the fields where artificial intelligence is applied today. Finally, Hofstadter focuses on different methods and principles essential to AI and gives comprehensive descriptions, mentioning, for example, computer chess, computer music, applications to mathematical problems, theorem proving, problem reduction, and knowledge representation.

Searle, John R. “Minds, Brains and Programs.”

Mind Design: Philosophy, Psychology, Artificial Intelligence.
Ed. John Haugeland. Cambridge, MA: The M.I.T. Press, 1981. 281 - 306.

Turing, Alan M. “Computing Machinery and Intelligence.”

Computers and Thought.
Feigenbaum, Edward A. and Feldmann, Julian, eds.
New York, NY: McGraw-Hill, 1963. 11 - 35.
(Originally published in Mind, Vol. 59, October 1950, 433 - 460.)

Weizenbaum, Joseph. Wer erfindet die Computermythen?

Freiburg, Germany: Herder, 1993.

This book is written in the form of an interview with Joseph Weizenbaum. He discusses general aspects of the computer and our society: does the computer really save time? Is the computer helpful to humans, and does it make life easier? He reveals at what point the computer myths turn into lies, and he discusses keywords like ‘artificial intelligence’ and ‘virtual reality’. Weizenbaum advises the reader to be skeptical and not to take certain myths for granted. Most myths are caused by misunderstandings due to the changing meanings of words such as ‘problem’ and ‘virtual’. He depicts the degree to which the computer determines our life, our way of thinking, and our society.

Weizenbaum, Joseph. Computer Power and Human Reason.

San Francisco, California: W. H. Freeman, 1976.

This book first introduces the reader to the computer, focusing on the computer as a tool; it examines the sources of its power, summarizes how computers work internally, and discusses the significance of the programmer. After these more introductory sections, the author turns to the theories and models behind the computer. He describes the computer from a philosophical point of view, then discusses the essence of artificial intelligence and its applications. Weizenbaum provides valuable and detailed information, cites the achievements and arguments of other prominent computer scientists, and argues his own point of view. He concludes his work with his philosophy on the impact and influence of computers on society and mankind.

More technical writings on artificial intelligence:

Bratko, Ivan. Prolog, programming for artificial intelligence. 2nd ed.

Singapore: Addison-Wesley, 1990.

In this book, Ivan Bratko presents an introduction to the programming language Prolog. This language provides essential built-in features required in order to program for artificial intelligence. Here, the idea and the significance of backtracking, recursion, expert-systems, etc. are discussed. Not recommended for anybody outside the field of computer science.

Hofstadter, Douglas R. “Metamagikum.” Spektrum der Wissenschaft

May 1983: 15 - 19.

This paper is a part of a series of articles surveying the programming language LISP. Hofstadter presents one of the best introductions to the theory of recursion.

Kruse, Hilger, et al. Programmierung Neuronaler Netze.

Bonn, Germany: Addison-Wesley, 1991.

This text focuses on neural networks. It provides a very short comparison of the human neural system and the abstract models related to the computer. The main part consists of the mathematical background and the methods of implementation of different architectures of neural networks. Not recommended for anybody outside the field of computer science or mathematics.

Noyes, James L. Artificial Intelligence with Common Lisp.
Lexington, Massachusetts: D. C. Heath, 1992.

Besides offering a comprehensive introduction to Common Lisp (the American equivalent to the European Prolog), this reading includes valuable information on artificial intelligence in general. Although the book mainly focuses on the programming language, each chapter is accompanied by definitions, general explanations, references and gives a profound overview of the fundamentals of artificial intelligence. Only recommended to some degree for somebody outside the field of computer science.

Pearl, Judea. Heuristics.

Reading, Massachusetts: Addison-Wesley, 1984.

A very technical writing on heuristics and their mathematical models. Not recommended for anybody outside the field of computer science or mathematics.

Copyright © 1993, 1995 Attila Narin. All Rights Reserved.