The Great AI Debate
(Compare and Contrast Paper)
In a Star Trek episode entitled "Measure of a Man," a villainous cybernetics researcher obtains authorization to dismantle Data, the android, to learn more about its construction. Data's friends meet this action with resistance, and a legal battle ensues to determine whether Data is a life form with rights.
Riker, appointed to present the researcher's case, argues that because Data is composed of circuits and wires, he is nothing more than a sophisticated computing machine. (His case seems almost rock-solid when he forcibly switches Data off during the trial.) Later, Data's defense offers testimony to show that because Data has had many human-like experiences, including an intimate relationship with another crew member, he must be ruled a sentient life form with all the rights of a human.
Normally, Star Trek has a reputation for portraying future society as having solved the problems that vex us today. However, "Measure of a Man" raises issues that are still being debated in the 20th century. If Star Trek is any reliable predictor of our world's future (hah!), then the issue of whether machines can be alive won't be resolved any time soon.
Akin to the debate over whether machines can be alive is the debate over whether machines can be intelligent. This is the great Artificial Intelligence debate, one that has not been resolved and probably never will be.
Advocates of a view called "strong AI," such as Marvin Minsky, believe that computers are capable of true intelligence. These "optimists" argue that what humans perceive as consciousness is strictly algorithmic, i.e., a program running in a complex but predictable system of electro-chemical components (neurons). Although the term "strong AI" has yet to be conclusively defined [Sloman 1992], many supporters of strong AI believe that the computer and the brain have equivalent computing power, and that with sufficient technology it will someday be possible to create machines that enjoy the same type of consciousness as humans.
Some supporters of strong AI expect that it will someday be possible to represent the brain using formal mathematical constructs [Fischler 1987]. However, strong AI's dramatic reduction of consciousness to an algorithm is difficult for many to accept.
The "weak AI" thesis claims that machines, even if they appear intelligent, can only simulate intelligence [Bringsjord 1998], and will never actually be aware of what they are doing. Some weak AI proponents [Bringsjord 1997, Penrose 1990] believe that human intelligence results from a superior computing mechanism which, while exercised in the brain, will never be present in a Turing-equivalent computer.
To promote the weak AI position, John R. Searle, a prominent and respected scholar in the AI community, offered the "Chinese room parable" [Searle 1980]. This parable, summarized by [Baumgartner 1995], is as follows:
He imagines himself locked in a room, in which there are various slips of paper with doodles on them, a slot through which people can pass slips of paper to him and through which he can pass them out; and a book of rules telling him how to respond to the doodles, which are identified by their shape. One rule, for example, instructs him that when squiggle-squiggle is passed in to him, he should pass squoggle-squoggle out. So far as the person in the room is concerned, the doodles are meaningless. But unbeknownst to him, they are Chinese characters, and the people outside the room, being Chinese, interpret them as such. When the rules happen to be such that the questions are paired with what the Chinese people outside recognize as a sensible answer, they will interpret the Chinese characters as meaningful answers. But the person inside the room knows nothing of this. He is instantiating a computer program -- that is, he is performing purely formal manipulations of uninterpreted patterns; the program is all syntax and has no semantics.
In this parable, Searle demonstrates that although the system may appear intelligent, it is in fact just following rules, without intent or knowledge of what it is accomplishing. He says that machines lack intentionality. Searle's argument has been influential in the AI community and is referenced in much of the literature.
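The computational core of the parable is easy to make concrete. The Python sketch below is not from Searle's paper; the two-entry rule table and the symbol names are hypothetical placeholders. It shows the kind of purely formal lookup the person in the room performs: shapes in, shapes out, with no meaning attached anywhere in the program.

```python
# Hypothetical sketch of the Chinese room as pure symbol manipulation.
# The rule table below is a made-up stand-in for Searle's rulebook:
# it pairs input "doodles" with output "doodles" by shape alone.

RULE_BOOK = {
    # "when squiggle-squiggle is passed in, pass squoggle-squoggle out"
    "squiggle-squiggle": "squoggle-squoggle",
    "squoggle-squoggle": "squiggle-squiggle",
}

def operator(slip: str) -> str:
    """Return the output slip dictated by the rule book.

    This is all syntax and no semantics: a lookup keyed on the
    shape of the incoming symbol, with nothing in the program
    representing what any symbol means.
    """
    return RULE_BOOK.get(slip, "")  # unknown doodles get no reply

if __name__ == "__main__":
    # Observers outside the room may read the exchange as a sensible
    # conversation; inside, it is only pattern matching.
    print(operator("squiggle-squiggle"))  # -> squoggle-squoggle
```

However elaborate the rule book becomes, the operator's job never changes, which is precisely the gap Searle points to between simulating understanding and actually having it.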
It is tempting for spiritually inclined people to conclude that the weak AI vs. strong AI debate is about mind-body duality, or the existence of a soul, and whether a phenomenon separate from the body is necessary for intelligence. Far from it: the predominant opinion in the AI community, on both sides of the strong/weak issue, is that the mind is a strictly physical phenomenon [Fischler 1987]. Even Searle,