
Searle’s Chinese Room Argument February 2, 2009

Posted by Kyle in Philosophy of mind.

Imagine you’re locked inside a room, isolated from the real world. In this room with you are a Chinese keyboard, a monitor, and a rule-book written in English that tells you what sequences of Chinese symbols you should send in response to certain sequences of Chinese symbols that appear on the monitor. It does not provide any word-translations, however. All the book has are syntactical rules for manipulating Chinese symbols. They say nothing about what the Chinese symbols mean or represent. Will you ever be able to learn Chinese while stuck in this room? Didn’t think so.
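To make the setup concrete, here is a minimal sketch in Python of what the man in the room is doing, as I read the thought experiment: matching an incoming string of symbols against the rule-book and sending back whatever string the book lists. The RULE_BOOK entries, the operator_reply name, and the symbols themselves are all made up for illustration.

```python
# A toy rule-book: pure symbol shuffling with no semantics attached.
# The entries below are invented; the glosses in the comments are for the
# reader only, since the operator in the room never sees them.
RULE_BOOK = {
    "你好吗": "我很好",        # roughly "How are you?" -> "I'm fine"
    "你会说中文吗": "会",      # roughly "Do you speak Chinese?" -> "Yes"
}

def operator_reply(incoming: str) -> str:
    """Look up the incoming symbol string and return the response the book lists.
    No step here involves knowing what any of the symbols mean."""
    return RULE_BOOK.get(incoming, "请再说一遍")  # default string the book prescribes
```

Nothing in this lookup ever consults what the symbols stand for, which is exactly the situation the thought experiment puts the man in.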

This is Searle’s proof that true artificial intelligence is impossible. Or is it?

The Chinese Room argument comprises a thought experiment and associated arguments by John Searle (Searle 1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a “mind” or “understanding”, regardless of how intelligently it may behave.

Source: Wikipedia

A formalization of his proof runs as follows:

(1) If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
(2) I could run a program for Chinese without thereby coming to understand Chinese.
(3) Therefore Strong AI is false.

Source: Stanford Encyclopedia of Philosophy

First and foremost, there’s a critical flaw in this formalization, specifically the phrasing of premise (2). The fact that you “could” run the program without understanding proves nothing on its own. The proof only goes through if you couldn’t possibly run a program for Chinese that did give you an understanding of Chinese. But that is exactly what the argument is supposed to establish, so as formalized it assumes its own conclusion. I’m not sure whether this formalization is Searle’s own or comes from the author of the Stanford article, so I won’t dwell on it any further and will proceed despite this technicality.
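For what it’s worth, here is one way the argument’s shape might be symbolized. This rendering is my own gloss, not Searle’s or the SEP’s, and reading “could” as a possibility operator is an assumption on my part.

```latex
% One possible symbolization (my own gloss, not from the source).
% Runs(x,p): "x runs program p"; U(x): "x understands Chinese";
% c: some particular candidate program for Chinese.
\begin{align*}
(1)\quad & \text{StrongAI} \rightarrow \exists p\,\forall x\,\bigl(\mathrm{Runs}(x,p) \rightarrow U(x)\bigr)\\
(2)\quad & \Diamond\bigl(\mathrm{Runs}(\mathrm{me},c) \land \lnot U(\mathrm{me})\bigr)\\
(3)\quad & \therefore\ \lnot\text{StrongAI}
\end{align*}
```

Read this way, (2) concerns one particular program and what is merely possible, while (1) only claims that some program would suffice, and that gap is roughly the worry about “could” sketched above.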

There have been numerous replies to Searle’s Chinese Room argument: the Systems Reply, the Virtual Mind Reply, the Robot Reply, the Brain Simulator Reply, the Other Minds Reply, and the Intuition Reply, to name a few. Searle notes that the most common replies are the Systems Reply and others, like the Robot Reply, that build upon the same notion. In short, the Systems Reply suggests that while the man in the room may not himself understand Chinese, the system as a whole, of which the man is but a part, does. The Robot Reply builds on this by suggesting that if you built a highly sophisticated robot with all kinds of sensors for sight, sound, touch, etc., and released it into the real world, allowing it to interact with native Chinese-speaking people, the robot might eventually come to actually understand Chinese.

I personally hold this position. It seems intuitive that no person locked inside a room would ever come to understand anything, so why should we expect a computer to? To the man in the Chinese Room, even English phrases and sentences would be utterly meaningless without all of the actual, real-world experiences that relate to the words forming those phrases. How could anyone ever come to know what a horse is without being shown one, having never heard of nor seen any kind of animal or living organism whatsoever? Similarly, knowing that 12 is 2 * 6 may be rather trivial and meaningless. But also knowing that 12 is twice as many as 6, that it’s the number of eggs in a dozen, and so on and so forth, is not quite as trivial. Now we’re getting somewhere; now we’re in the realm of understanding.

Another common reply is the Brain Simulator Reply, where an exact copy of a human brain is constructed, neuron by neuron, either with the same elements a brain is made of or with other materials that simulate every single neuron firing that would occur in an actual Chinese-speaker’s brain. This raises an interesting question: At what point does a simulation become ‘the real thing’?
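As a very rough illustration of what “simulating every single neuron firing” could mean at the smallest scale, here is a toy leaky integrate-and-fire neuron in Python. The model choice and every parameter here are arbitrary placeholders; an actual neuron-by-neuron brain simulation would wire together billions of far more detailed units.

```python
# A toy leaky integrate-and-fire neuron: the sort of minimal unit a
# neuron-by-neuron simulation might be built from. All numbers are arbitrary.
def simulate_neuron(input_current, dt=1.0, tau=10.0, v_rest=-65.0,
                    v_threshold=-50.0, v_reset=-65.0):
    """Return the membrane-voltage trace and the steps at which the unit 'fires'."""
    v = v_rest
    trace, spikes = [], []
    for t, current in enumerate(input_current):
        v += dt * (-(v - v_rest) + current) / tau  # decay toward rest, pushed by input
        if v >= v_threshold:                       # threshold crossed: record a spike
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return trace, spikes

# Example: a constant input strong enough to make the unit fire repeatedly.
voltages, spike_times = simulate_neuron([20.0] * 100)
```

Whether stacking up enough of these units would ever amount to understanding, rather than a mere simulation of it, is exactly the question the Brain Simulator Reply raises.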

Are artificial hearts simulations of hearts? Or are they functional duplicates of hearts, hearts made from different materials? Walking is a biological phenomenon performed using limbs. Do those with artificial limbs walk? Or do they simulate walking?

Source: Stanford Encyclopedia of Philosophy

Is there even such a thing as “true AI”? Or at that point is it just ‘I’?

Another interesting question arises when we look at the coming into being of consciousness and understanding in terms of their development in nature. If natural selection acts only on behavior, then nature does not care whether the entity actually understands or not; behaving as though it understands is all that matters.

How do we know other people understand Chinese? By their behavior. So if a computer/robot behaves indistinguishably from an actual, understanding human being (which Searle grants is possible), and we know humans have understanding, then why should we assume anything different for the machine? If the argument is that we know how the machine works, then this seems to be merely an argument from ignorance.

Further, there’s a large and rapidly growing body of evidence suggesting that the human brain is a sort of “computational machine” that could, at least in principle, be reconstructed from the ground up. Suppose we did just that. Suppose we literally created a duplicate copy of someone’s brain, atom for atom, using highly advanced technology. Does this new brain have a “mind”? Does it understand? Is it conscious? The obvious answer to me is “yes,” at least if we say any other brain has these qualities. If the two are physically identical, then what reason do we have to think the duplicate is any different from a naturally developed human brain? This raises several further questions, namely at what point these things arise, and even what exactly they are.

“The many issues raised by the Chinese Room argument will not be settled until there is a consensus about the nature of meaning, its relation to syntax, and about the nature of consciousness. There continues to be significant disagreement about what processes create meaning, understanding, and consciousness,” and it doesn’t seem there can be any answer to the Chinese Room argument (and so to its truth) until these questions are answered first (Stanford Encyclopedia of Philosophy).
