In its simplest form the test states that if an interrogator having a conversation over a keyboard (online chat, basically) with a second party cannot distinguish between a real person and a computer program, then the computer (program) could be said to be thinking.
Even though this once purely hypothetical thought experiment has now been carried out in reality (most notably at the Loebner Prize, which has been held annually since 1991), no one has yet been willing to stick their neck out and claim that a machine has definitely passed and, ipso facto, can think.
The fictional date of HAL becoming operational at the HAL Laboratories in Urbana, Illinois has long since passed. Now, ten years after the date of the ill-fated Discovery mission, HAL still seems a very long way away.
It’s also because Artificial Intelligence seems to be a moving target. Whilst we are still a long way from HAL, you do get the impression that even if intelligent talking computers were constructed, there would be those who would continue to insist that they weren’t “really” thinking, that they were only mimicking human intelligence. There has even been a pre-emptive attempt to debunk the Turing Test.
Like the Turing Test originally was, it's a thought experiment. It is known as the Chinese Room and was devised by the philosopher John Searle.
The Chinese Room is a large wooden box in which sits an Experimenter. On his or her desk there is a set of complex instructions – whether these are in the form of data on a computer’s hard disk or a series of handsome leather-bound volumes is entirely irrelevant. The Experimenter also has access to a large number of sheets of paper and several pencils.
There is a slot in one wall of the room. Through this slot people post questions in Chinese. The Experimenter doesn’t understand Chinese – but this doesn’t matter. Without needing a translation, he or she can look up the Chinese characters in the data, and by following a long series of steps listed in the instructions, eventually come up with and write down an answer to the question, also in Chinese characters, and post it back through the slot.
However long this might take (which doesn’t matter as it’s a thought experiment), the interrogator on the outside has now received an intelligible answer. If this went on long enough, it would be possible for the Chinese Room to pass the Turing Test.
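The mechanical lookup the Experimenter performs – symbols in, symbols out, no comprehension anywhere in between – can be sketched as a toy program. Everything here is an invented placeholder: the rule book is a tiny lookup table, whereas a Room capable of passing the Turing Test would need instructions vastly more elaborate than this.

```python
# Toy sketch of the Chinese Room procedure. The "rule book" stands in for
# Searle's instructions; the entries below are invented examples, not a
# real conversational system.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",   # "How are you?" -> "I'm fine, thanks"
    "你是谁": "我是一个房间",   # "Who are you?" -> "I am a room"
}

def operator(slip: str) -> str:
    """Match the incoming symbols against the rule book and copy out the
    prescribed answer. The operator never interprets the symbols; the
    procedure works identically whether or not anyone understands them."""
    return RULE_BOOK.get(slip, "请再说一遍")  # fallback: "please say that again"

print(operator("你好吗"))
```

The point the sketch makes concrete is that `operator` contains no understanding of Chinese – any understanding in play resides in how the rule book came to be written.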
But, crows the Experimenter, I don’t speak a word of Chinese!
The room has passed the Turing Test but the Experimenter wasn’t thinking about the questions - he or she didn’t even hear them. The Experimenter was able to simulate their half of an intelligent conversation without actually understanding it at all. This proves that machines can’t actually think, whatever the Turing Test might claim.
Or does it?
I would say not. All that the Chinese Room experiment proves is that the Experimenter can follow instructions without understanding Chinese. And just because he or she doesn't understand Chinese, it doesn't mean that an understanding of Chinese isn't going on somewhere...
The notion of the Chinese Room disproving the possibility of artificial intelligence is a demonstration of Cartesian Theatre thinking. By saying that the Experimenter doesn't understand Chinese ergo no understanding of Chinese is going on, the Experimenter is being cast in the role of the homunculus, the soul - the room itself and the instructions themselves being relegated to mere machinery.
In fact the opposite may well be true.
The Experimenter is no more than part of the machine - why should he or she understand Chinese any more than the plywood making up the box's exterior, or the pencils used to draw the Chinese characters, do? Consciousness and understanding are things that arise from a whole system, and whilst our Experimenter doesn't understand Chinese him- or herself, the system as a whole most demonstrably does understand Chinese. And certainly a knowledge of Chinese would have been required when the instructions were written – when the Chinese Room learned.
Of course it would be nonsense to describe the Chinese Room discussed here as conscious, seeing as all the instructions do is allow responses to be produced, very slowly, and all it can do is conduct simple conversations in Chinese. It has no inner life, and as such I assume it would always answer the same question in the same way. Similarly, whilst Deep Blue may be a chess champion, it had none of the other qualities that made Garry Kasparov a conscious being. However, this is simply a question of scale. A large enough Chinese Room (or perhaps a city-sized building consisting of millions of Chinese Rooms, each with a different purpose) with myriad operators could well be conscious and self-aware when viewed as a whole, even though this would be on a far different scale from the one we are used to.
However, scales of time and space are irrelevant when it comes to the discussion of intelligence and consciousness - as we demonstrated in The Experiment of Thugg 2.0 back in September 2010. All that matters is that the data is processed; the inclusion of an Experimenter who doesn't understand Chinese is irrelevant and human-centric. Your eyes don't speak English, and yet you are reading and understanding this. A Chinese Room would probably be far more efficient with a computer and a printer attached to the input and output, and yet it would produce exactly the same results as one with an Experimenter inside.
The Chinese Room appears to be a remnant of dualistic thinking. By denying the right of machines to be conscious the proponents of such theories are surreptitiously positing the existence of a separate soul as the seat of understanding and consciousness.
I thought we’d got past that.