In its quest to create intelligent machines, the field of artificial intelligence has split into several camps over which methods and theories are most promising. These rival theories have led researchers to adopt one of two basic approaches: bottom-up or top-down. Bottom-up theorists believe the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons, while the top-down approach attempts to mimic the brain's behavior with computer programs.
The human brain is a web of billions of cells called neurons, and understanding its complexities is one of the last frontiers of scientific research. AI researchers who favor the bottom-up approach aim to construct electronic circuits that act like the neurons in the human brain. Although much of how the brain functions remains unknown, it is these complex networks of neurons that give humans their intelligent characteristics. By itself, a neuron is not intelligent, but grouped together, neurons can pass electrical signals through networks.
The neurophysiologist Warren McCulloch and the mathematician Walter Pitts proposed a hypothesis to explain the fundamentals of how neural networks make the brain work. Based on experiments with neurons, McCulloch and Pitts showed that a neuron might be treated as a device for processing binary numbers. Binary numbers (represented as ones and zeros, or "true" and "false" statements) are an important tool of mathematical logic and were also the basis of the electronic computer. The same concept underlies computer-simulated neural networks, also known as parallel computing.
George Boole formalized the true/false nature of binary logic in 1854 in his postulates concerning the "Laws of Thought." Boole's principles make up what is known as Boolean algebra, a branch of logic built on the "AND," "OR," and "NOT" operators. For example, according to the Laws of Thought, consider the following statements (for this example, assume that all apples are red and that no oranges are purple):
- "Apples are red" is True.
- "Apples are red AND oranges are purple" is False.
- "Apples are red OR oranges are purple" is True.
- "Apples are red AND oranges are NOT purple" is also True.
Using Boole's principles, McCulloch and Pitts wrote a paper on neural network theory, "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943). Their thesis dealt with how networks of connected neurons perform logical operations. It also stated that, at the level of a single neuron, the release or non-release of an impulse is the basis on which the brain makes true/false decisions. Using what is known as feedback theory, they described the loop between the senses, the brain, and the muscles, and concluded that memory could be defined as the signals circulating in a closed loop of neurons. Although we now know that logic in the brain occurs at a higher level than McCulloch and Pitts theorized, their contributions were important to AI because they showed how the firing of signals between connected neurons could cause the brain to make decisions. McCulloch and Pitts' theory remains the basis of artificial neural network theory.
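The McCulloch-Pitts unit itself is simple enough to sketch in a few lines. The following Python snippet assumes the textbook formulation (binary inputs summed against a fixed threshold); the function and names are illustrative, not McCulloch and Pitts' own notation:

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: fires (returns 1) when enough binary
    inputs are active to reach the threshold, otherwise stays silent."""
    return 1 if sum(inputs) >= threshold else 0

# With the right threshold, a single unit computes a logical operation.
AND = lambda a, b: mp_neuron([a, b], threshold=2)  # fires only if both inputs fire
OR  = lambda a, b: mp_neuron([a, b], threshold=1)  # fires if either input fires

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```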
Building on this theory, McCulloch and Pitts designed electronic replicas of neural networks to show how electronic networks could carry out logical processes. They also suggested that neural networks might one day be able to learn and recognize patterns. The results of their research, along with two of Norbert Wiener's books, served to increase enthusiasm, and laboratories of computer-simulated neurons were set up across the country.
Two major factors have inhibited the development of full-scale neural networks. First, it is expensive to build a machine that simulates neurons: although the cost of components has decreased, such a computer would have to be thousands of times larger to approach the scale of the human brain. The second factor is current computer architecture. The standard von Neumann design, found in nearly all computers, lacks an adequate number of pathways between components. Researchers are now developing alternate architectures for use with neural networks.
Even with these inhibiting factors, artificial neural networks have produced some impressive results. Frank Rosenblatt, experimenting with computer-simulated networks, built the Perceptron, a machine that could mimic aspects of the human thought process and recognize letters. However, when new top-down methods became popular, parallel computing was put on hold. Now neural networks are making a return, and some researchers believe that, with new computer architectures under development, parallel computing and the bottom-up theory will be driving factors in creating artificial intelligence.
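Rosenblatt's machine was hardware, but the learning rule it embodied is easy to sketch in software. The toy example below trains a single perceptron on the AND function using the classic error-correction update; it is a minimal illustration, not a reconstruction of Rosenblatt's letter-recognition setup:

```python
# A minimal perceptron sketch: learn the AND function with the
# classic update rule w += lr * (target - prediction) * input.
def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # a few passes suffice for this tiny, separable problem
    for x, target in samples:
        error = target - predict(weights, bias, x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on correct weights; letter recognition works the same way, just with many more inputs.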
Although AI technology is increasingly integrated into everyday life, AI researchers continue their efforts to automate intelligence, looking for ways to develop systems that perceive and respond to their surroundings and use language to converse.