You’ve just bought the latest in personal-assistant robots. You say to it: “Please put the dirty dishes in the dishwasher, then hoover the lounge, and then take the dog for a walk”. The robot is equipped with a microphone, speech-recognition software, and extensive programming on how to do tasks. It responds to your speech by doing exactly as requested, and ends up taking hold of the dog’s leash and setting off out of the house. All of this is well within current technological capability.
Did the robot understand the instructions?
Roughly half of people asked would answer, “yes of course it did, you’ve just said it did”, and be somewhat baffled by the question. The other half would reply along the lines of, “no, of course the robot did not understand, it was merely following a course determined by its programming and its sensory inputs; its microprocessor was simply shuffling symbols around, but it did not understand”.
Such people — let’s call them Searlites — have an intuition that “understanding” requires more than the “mere” mechanical processing of information, and thus they declare that a mere computer can’t actually “understand”.
The rest of us can’t see the problem. We — let’s call ourselves Dennettites — ask what is missing from the above robot such that it falls short of “understanding”. We point out that our own brains are doing the same sort of information processing in a material network, just to a vastly greater degree. We might suspect the Searlites of hankering after a “soul” or some other form of dualism.
The Searlites reject the charge, and maintain that they fully accept the principles of physical materialism, but then state that it is blatantly obvious that when the brain “understands” something it is doing more than “merely” shuffling symbols around in a computational device — though they cannot say what that extra something is. They thus regard the issue as a huge philosophical puzzle that needs to be resolved, and one which may even point to the incompleteness of the materialist world-view.