On understanding, intuition, and Searle’s Chinese Room

You’ve just bought the latest in personal-assistant robots. You say to it: “Please put the dirty dishes in the dishwasher, then hoover the lounge, and then take the dog for a walk”. The robot is equipped with a microphone, speech-recognition software, and extensive programming on how to do tasks. It responds to your speech by doing exactly as requested, and ends up taking hold of the dog’s leash and setting off out of the house. All of this is well within current technological capability.

Did the robot understand the instructions?

Roughly half of people asked would answer, “yes of course it did, you’ve just said it did”, and be somewhat baffled by the question. The other half would reply along the lines of, “no, of course the robot did not understand, it was merely following a course determined by its programming and its sensory inputs; its microprocessor was simply shuffling symbols around, but it did not understand”.

Such people — let’s call them Searlites — have an intuition that “understanding” requires more than the “mere” mechanical processing of information, and thus they declare that a mere computer can’t actually “understand”.

The rest of us can’t see the problem. We — let’s call ourselves Dennettites — ask what is missing from the above robot such that it falls short of “understanding”. We point out that our own brains are doing the same sort of information processing in a material network, just to a vastly greater degree. We might suspect the Searlites of hankering after a “soul” or some other form of dualism.

The Searlites reject the charge, and maintain that they fully accept the principles of physical materialism, but then state that it is blatantly obvious that when the brain “understands” something it is doing more than “merely” shuffling symbols around in a computational device. Though they cannot say what. They thus regard the issue as a huge philosophical puzzle that needs to be resolved, and which may even point to the incompleteness of the materialist world-view.

The two sides then declare each other’s intuitions to be really weird, and utterly fail to agree.

John Searle, of course, presented his well-known “Chinese Room” thought experiment as a claimed refutation of the idea that mechanical processing of information was sufficient for understanding.[1] The thought experiment supposes that a man who cannot speak Chinese is in a room filled with books of instructions for giving Chinese-language responses to Chinese-language questions. By mechanically following the instructions he can produce Chinese-language responses sufficient to engage in conversation with a native Chinese speaker and pass the Turing Test.

Searle then claims that, because the man doesn’t understand the conversation, that shows that the mechanical processing of information going on in the room is insufficient for understanding.

The “systems reply” points out that the man is included in the scenario as a piece of stage-magician’s trickery, there to divert attention from where the action actually is. The man could be removed and replaced with a mechanical device (the Searlites can hardly object). The room as a whole, though, does understand, assert the Dennettites.

[Cartoon: “understanding”]

What, then, is the Searlites’ argument that the mechanical room, replete with instructions, does not “understand”? Actually, they don’t have an argument; they just declare it intuitively obvious.

Searle then introduces a ploy to keep the man centre stage, and attempts to rebut the “systems reply” by suggesting that the man memorises the entire instruction set and then walks out of the room. Searle asserts that, since the man has only rote-memorised a whole set of rules, he still does not actually “understand” Chinese.

Let’s examine that claim. According to the thought experiment, the man is then walking around conversing with Chinese speakers and giving them no inkling of any lack of understanding. He must thus be giving sensible responses to (Chinese-language versions of) “What brilliant weather today!”, and “Is the train station that way?” and “Would you be so kind as to carry this heavy shopping bag for me?”.

That means that the man must be able to link strings of Chinese-language phonemes to what he is seeing with his visual system and to other knowledge he has about how the world works, and then use that information in constructing his reply. And yet Searle insists that the man cannot be “understanding” Chinese.

Well, let me tell you about a French boy, born in Paris to French parents but now living in London. He is a native speaker of French, though while at school and with his English-speaking friends he talks English fluently. But he doesn’t actually “understand” English. What he’s done is merely rote learn enough about how to respond to English-language speech with more English-language speech, such that he passes as understanding it. But he doesn’t. He understands French, but he doesn’t understand English; he is merely simulating that understanding in order to get by.

I hope that readers will regard that idea as preposterous. Of course the boy understands English! There is nothing about the concept “understanding” that is not fulfilled by that boy’s capability with the English language. Yet Searlites would have us believe that it would be entirely possible for the boy to have no actual understanding of English, despite being fully fluent in English in the company of his friends.

After all, that is what they are claiming about the man who has memorised the Chinese Room. They even claim this lack of understanding to be obvious! And yet they never put their finger on what is missing. They don’t give an account of what “understanding” actually is, nor of what is missing in the case of the Chinese Room or of the English-speaking French boy. They give no operational test for the presence of “understanding”, nor any method for distinguishing an English-speaking French boy who does “understand” English from an English-speaking French boy who does not.

This, to me, reveals the utterly absurd position that people are driven to by their initial ideological intuition that understanding must be more than the “mere” shuffling around of symbols in a computational device. I say that quite deliberately, since Searlites accuse Dennettites of exactly the same: being driven to an absurd position by the reverse ideological commitment (for which see below).

The only resort of the Searlites would be to argue that the man, who has memorised and wandered out of the Chinese Room, only passes the Turing Test in a very restricted sense, with a rule banning any reference to anything in the real world. Indeed, Searle needs to insist that the man “doesn’t know that the story refers to restaurants and hamburgers, etc”.

The problem here is that nearly all speech does refer to the real world. An alien who knew the rules of grammar, but nothing about the world, might reply to “Did you like the music?” with “It was bright orange”, or “not unless Peter starts with a Q”, and thence fail the Turing Test.

If someone asks how many fingers they are holding up, any valid reply would necessitate knowing what the answer meant. Or, if one wants to be facetious, one could simply hold up a hamburger to Searle’s man and ask “what do you call this?”. This shows that Searle’s conception of a competent but non-understanding conversationalist is simply incoherent.

If, on the other hand, the Searlites want to admit that the man could not actually converse in Chinese, because he would not know about the linkages between Chinese-language phonemes and real-world knowledge, and so could not give a valid reply to “what do you call this?” and the multitude of similar questions, then I’d happily grant that, yes, that man does indeed lack understanding of Chinese. The remedy, then, would simply be to program in those missing linkages.

The whole point about speech is that most of the concepts in it are about the real world. Indeed, when constructing AI programs, the hardest bit to get right is not the algorithms for processing information, but the databases containing all the information about how the world works. For example, Google’s software is now pretty good at translating Chinese to English because it uses a dataset of 1.8 trillion tokens gathered from Google Web and News pages. Searle’s conception of a competent language speaker who is incompetent at the real-world linkages of the phonemes is simply self-contradictory.

What this whole argument lacks is an agreement about what “meaning” and “understanding” actually are. Someone like me would give a prosaic and straightforward account. I’d say that the “meaning” of a piece of information is how it relates to other pieces of information. Thus the “meaning” of the word “water” is a whole set of linkages to other concepts, including “wet”, “transparent”, “rain”, “puddle”, “drink” and many more. For a computational device to “understand” the “meaning” of a symbol is then simply to be able to access these linkages and to use them appropriately in processing information.
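A deliberately minimal sketch of that “linkages” account might look like the following; the toy network of concepts here is invented for illustration, not a real lexicon:

```python
# The "meaning" of a symbol is modelled as its set of linkages to
# other symbols. The specific concepts and links are illustrative.
MEANINGS = {
    "water": {"wet", "transparent", "rain", "puddle", "drink"},
    "rain":  {"water", "wet", "cloud", "umbrella"},
    "fire":  {"hot", "bright", "burn", "smoke"},
}

def understands(symbol: str) -> bool:
    """On this minimal account, a device 'understands' a symbol
    if it can access that symbol's linkages to other symbols."""
    return bool(MEANINGS.get(symbol))

def related(a: str, b: str) -> bool:
    """Using the linkages: are two symbols directly connected?"""
    return b in MEANINGS.get(a, set()) or a in MEANINGS.get(b, set())

print(understands("water"))   # linkages present: "understood"
print(understands("ūdens"))   # no linkages programmed in
print(related("water", "rain"))
```

On this picture, “measuring” understanding reduces to asking how many such linkages a device can access and use correctly.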

Thus, if I say to my iPhone “Siri, Quelle heure est-il?”, and the iPhone correctly responds: “Il est deux heures”, then the iPhone has “understood” the “meaning” of my question, because it has correctly interpreted the strings of phonemes, pattern-matched them in a way that caused it to consult its inbuilt clock, returned that information to a speech processor, and then sent signals for the appropriate stream of sound to its speaker.

The fact that this is all entirely mechanical and deterministic doesn’t change that conclusion. After all, what do you think our brains are doing when we understand a question?

If, on the other hand, I were to say to my iPhone, “Siri, cik ir pulkstens?”, then the iPhone would not understand for the simple reason that it has not (yet) been programmed with the Latvian language. Thus it does not “know” (has not been programmed with access to) the linkages between Latvian-language phoneme-strings and other relevant symbols within its memory.

Thus, for now, my iPhone understands French but does not understand Latvian (though that might change in a future update).
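The iPhone example can be sketched as follows. This is a toy illustration of the linkage account, not Siri’s actual implementation; the phrase table and reply templates are invented:

```python
# A device "understands" a phoneme-string if it has been programmed
# with the linkages from that string to the relevant internal
# resources (here, a clock). No Latvian entry has been programmed in.
import datetime

HANDLERS = {
    # French "What time is it?" -> consult the clock, reply in French
    "quelle heure est-il": lambda now: f"Il est {now.hour} heures",
}

def respond(utterance: str) -> str:
    key = utterance.lower().rstrip(" ?")
    handler = HANDLERS.get(key)
    if handler is None:
        # No linkages for this phoneme-string: no understanding
        return "Je ne comprends pas"
    return handler(datetime.datetime.now())

print(respond("Quelle heure est-il?"))  # consults the clock and answers
print(respond("Cik ir pulkstens?"))     # not programmed: fails to understand
```

Adding a Latvian entry to the table would, on this account, simply be adding the missing linkages.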

That sort of argument, though, causes the Searlites to roll their eyes. They insist that there must be something more to it than that! Such an account could not possibly be all there is to “meaning” and “understanding”, that’s way too easy!

Searle is supposed to have produced an argument that the mere manipulation of symbols (syntax) cannot possibly produce meaning (semantics). I’ve never quite fathomed what this argument actually is, though it seems to involve the claim that the way symbols are shuffled in a computer is unrelated to their “meaning” and thus that the shuffling of symbols alone cannot produce meaning. And yet, again, this claim is made without any account of what “meaning” actually is.

In the account of meaning that I’ve just given, Searle’s claim is just untrue. Manipulating symbols in terms of their meaning (= in terms of their linkages to other symbols) is precisely and exactly what the computer is indeed doing.

I tried the above account of “understanding” on the philosopher Massimo Pigliucci on his website. Dr Pigliucci responded with:

As for my iPhone “understanding” things, I’m simply baffled by your insistence on this. And I do attribute it entirely to your ideological commitment.

But then, interestingly, he added:

As for iPhone intuitions, let me get this straight: are you claiming that my iPhone has the sort of internal mental states, a feeling of understanding, like I do when I read your words? Because if not, then you are simply playing word games.

Now I find that revealing. Dr Pigliucci is suggesting that understanding only counts as “understanding” if it involves a first-person subjective reflection on that understanding. Thus, one needs to consciously know that one understands in order to understand.

Let me clarify that I’m not suggesting anything of the sort regarding an iPhone. Of course an iPhone is not doing a first-person conscious reflection about the tasks that it is doing — it is merely doing those tasks. What I am suggesting, though, is taking the most straightforward and minimalist account of “understanding” that one can, one that fulfils the basic definition but no more.

According to the Oxford English Dictionary, what is “meant” by something is what it is “intended to convey or refer to” or what it “signifies”. Further, to “understand” something is merely to “perceive the intended meaning”. Which bit of that is not fulfilled by the above iPhone example, or by the personal-assistant robot?

But perhaps this explains the mutual bafflement of the two camps. While the Dennettites are taking a minimalist interpretation of the concept “understanding”, the Searlites may be doing the opposite: they are wrapping the concept up with the whole issue of consciousness. That would make the Chinese Room not about “understanding” per se, but about consciousness. (And it would perhaps have been helpful if Searle had said that in his original paper.)

The argument would then be that the mechanically operating room is not conscious, and that without being conscious the room cannot be understanding. The man is then a necessary part of the room, since he supplies the consciousness; and since that consciousness is divorced from the speaking-Chinese function of the room, the room cannot be “understanding”.

If the two camps are making such fundamentally different interpretations of what the whole discussion is actually about, that would explain their bafflement at each other’s intuitions on the topic.

It seems to me that conflating issues such as “meaning” and “understanding” with consciousness is unhelpful, and is a way to ensure we cannot easily make progress. The classic way to understand any issue is to break it down into component parts and to try to understand the parts. Thus, to make progress, we should seek accounts of “meaning” and “understanding” that are distinct from the issue of consciousness (an issue which is, I will readily concede, much harder to resolve).

One objection needs dealing with. The objector might point out that, in the same way that my personal-assistant robot responds correctly to the instructions issued, a thermostat responds correctly to a fall in temperature, and switches on a heater. Does the thermostat then “understand”?

To answer we need to realise that biological phenomena are very often continua rather than binary. Properties such as “intelligence”, “awareness”, “meaning”, and “understanding” will always be matters of degree. That follows from the fact that every animal possessing such capabilities developed from a single fertilized egg that lacked them, and acquired them gradually. It also follows from the biological absurdity of a child being radically different from its parents: if such traits were binary, then at some point a parent who entirely lacked “intelligence”, “awareness” and “understanding” would have to give birth to a child who possessed them. A binary conception of such traits is therefore ruled out; they must be matters of degree.

One can always, then, “turn down the dial” of “intelligence” or “awareness” until all that is left is much simpler behaviour. This is not a problem for the conception of “understanding” that I’ve presented, it is simply how biology works, and is thus a necessary feature of any correct account of the concepts.

We use terms such as “intelligence” and “understanding” only for behaviours above a certain degree of complexity — though the threshold is, of course, merely one of convenience. Certainly, today’s ten-year-olds have no problem with saying that an iPhone exhibits some degree of “intelligence”, “awareness” and “understanding”, even though those degrees are much less than those possessed by a human.

Meanwhile, this whole discussion reveals how much our arguments depend on our intuitions and how, as a result, it is easy to misunderstand others who think differently.

Notes: [1] John Searle, “Minds, Brains, and Programs”, Behavioral and Brain Sciences 3 (1980); the original paper is available online as a PDF.


43 thoughts on “On understanding, intuition, and Searle’s Chinese Room”

  1. basicrulesoflife

    Every scientific discussion should start with clear basic notions. Please, could you define what it means to understand? Is there a difference between animals, humans, robots? How can we measure ‘understanding’? Imants Vilks.

    1. Coel Post author

      I agree with you, which is why I gave my account of what “understanding” is in the article. I would say there is no difference between animals, humans and robots (or, rather, only in degree, not in the basics of the concept). I define “meaning” in terms of the linkages between pieces of information, and I define “understanding” in terms of being able to access and correctly use those linkages. Thus, in principle, one could “measure” these concepts by a simple count of such linkages.

  2. taurisstar69

    This Chinese room theorizes that a human can memorize a training book on Chinese language to a degree that he could engage in conversation with correct answers. We can’t do that and that is why we understand things. We couldn’t memorize a symbolic language and remember the correct responses by the symbols we have studied, only through understanding the symbols could we actually converse. The Chinese room should tell you that a human must understand the language otherwise the Chinese room experiment is a fabricated myth.
    Humans use emotional driven ideas, not computer processing math.

  3. keithnoback

    Going back to the personal assistant robot, if you tell it to go jump in the lake and it is unable to respond, “I don’t understand” in a meaningful way, then it doesn’t understand your other instructions when it acts upon them.
    In that case, when it says “I understand”, it is expressing something synonymous with “I can do that”, which is a statement about its own status rather than a statement about its perspective on the world.
    It lacks a capability to recognize semantic content as such – to put things in context and recognize when they are out of context – rather than just information about the fixed associations between symbols and their meanings.

  4. Pingback: Understanding and the Chinese Room | The Heretical Philosopher

    1. Coel Post author

      Hi Neil,

      [Just tried to post this at your site, but it didn’t seem to work; I may try again.]

      Let me illustrate with a thought experiment. […] The difference, I suggest, is that the human messenger understood the instructions. But the robot was just following mechanical rules without any understanding.

      Interesting thought experiment. My reply would be along the lines that the robot did understand the instructions, but that the human had a much broader understanding, and understood more of the surrounding context, and thus was able to correct for the earthquake.

      The robot’s mistake is thus the sort of mistake a child might make, if they had partial understanding but lacked the understanding of an adult. Thus I don’t accept that the thought experiment reveals any fundamental difference between humans and computers.

      I agree with the “intentional” way of looking at things, but it is just a way of looking at things. One could look at both the robot and the human in both the intentional way and the mechanical way.

  5. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    Again, I want to point out that you are hardly doing justice to Searle’s position, though you do appear to be trying to do so.

    As you know, I don’t agree with Searle any more than you do, but the strict materialist Dennettian account really does seem to me to have problems (that I believe can only be rectified with Platonism). Let’s leave that for now though and I’ll try to highlight where you’re doing Searle’s argument a disservice.

    We should agree to begin with that the guy in the room (Searle, say) is not just memorising a Chinese/English dictionary. Ex hypothesi, Searle is just manipulating symbols whose meanings are completely opaque to him. He could do so and still carry on a conversation in Chinese if he were, for example, running a low-level simulation of the brain of a Chinese speaker (and I guess some wrapper code that translated the brain’s inputs and outputs into Chinese characters).

    As I’ve brought up to you before, Searle may not be able to express in English what he has just said in Chinese. He may be expressing opinions in Chinese which he vehemently disagrees with in English, and answering to the name of Guo Minfang rather than John Searle. Ex hypothesi, therefore, the Chinese mind really is not simply the same as the mind we would normally consider to be his own. His own mind (the mind that can speak English, the mind he identifies with and is directly conscious of) really does not understand Chinese.

    Where I part from Searle is that while I agree with Searle that he does not understand Chinese, I agree with you that there is still understanding going on. But it really is problematic to say that it is Searle doing the understanding, as long as we identify (properly, I feel) Searle with his mind and his subjective experience and not just his physical body. I think you’re glossing over that. If not Searle, then what is it that is understanding Chinese? Searle says that Searle is all there is when he memorises the algorithm and leaves the room. Perhaps you agree with him. I would say that there also exists the algorithm to simulate a Chinese brain, but that is a problem for you because you don’t want to grant (I believe) that an algorithm is a real thing.

    I would also recommend this talk by Mark Bishop which I think excellently illustrates another apparent problem with the Dennettian view (which, again, I think is resolved with Platonism).

    http://tedxtalks.ted.com/video/Dancing-with-Pixies-Professor-M

    1. Coel Post author

      Hi DM,

      Ex hypothesi, Searle is just manipulating symbols the meaning of which are completely opaque to him. He could do so and still carry on a conversation in Chinese if he were for example running a low level simulation of the brain of a Chinese speaker (and I guess some wrapper code that translated the brain’s inputs and outputs into Chinese characters).

      Let’s suppose Searle is conversing competently in Chinese, using this virtual brain. Thus, someone could ask (in Chinese) “how many fingers am I holding up?”, and Searle can give a correct answer. That means that the virtual brain has access to Searle’s visual system, ears and vocal cords. But, we presume that Searle’s own-brain also has access to the visual system, ears and vocal cords. Therefore the own-brain can notice the correlation between the number of fingers and the Chinese phoneme-string that he utters.

      Further, since the own-brain has memorised the virtual-brain, the own-brain can interrogate any part of the virtual-brain. Thus, own-brain can ask itself, “what would be the difference in {output phoneme-string} given ({input-phoneme-string} + {seeing 2 fingers}) versus ({input-phoneme-string} + {seeing 3 fingers})?”

      Further, to be competent at all normal Chinese questions, the above must apply to any normal question a Chinese speaker could ask, including the wide range of questions that relate to the external world and require knowledge of the external world.

      Thus, I don’t agree that it is possible for the Chinese to be “completely opaque” to Searle’s own-brain. That could only be the case if one also duplicated the visual system and the auditory system, and gave Searle’s own-brain no access to the visual and auditory systems being used by the virtual-brain. If you did that, then Searle’s own-brain could indeed lack understanding of the Chinese, since it would have no access to any of the real-world linkages of any of the phoneme-strings.

      Note that the above process by which I am asserting that Searle would come to understand Chinese — noticing correlations between external-world sensory input and utterances of phoneme-strings — is exactly how children learn their first language.

      Essentially, each of us *is* a “Searle”, someone who has memorised a whole set of linkages between phoneme-strings and external-world sensory inputs. I assert that it is nonsensical to suggest that any of us could go through this process to Turing-Test competence without “understanding”.

      The only way round this argument, it seems to me, is to give the own-brain no access to what is going on in the virtual-brain (including no access to the visual system, auditory system, etc). If you do that, then the two brains are essentially decoupled, and I’ll entirely grant that the decoupled brain does not “understand”.

      I would say that there also exists the algorithm to simulate a Chinese brain, but that is a problem for you because you don’t want to grant (I believe) that an algorithm is a real thing.

      No, I grant that the algorithm would be real, since it would be implemented. As a result of Searle memorising the CR, the algorithm would be manifest as real-world patterns of neural connections in Searle’s brain. I also grant that the “virtual-brain” part of Searle’s brain would be a key part of the understanding.

      What I am saying is that, given the amount of access that Searle’s rest-of-brain has to that virtual brain and how it links to the senses, it is absurd to suggest that it has not developed “understanding” through the process of memorising and developing competence. I regard that as ludicrous as the idea that the English-speaking French boy lacks understanding of English.

    2. Disagreeable Me (@Disagreeable_I)

      Hi Coel,

      > we presume that Searle’s own-brain also has access to the visual system, ears and vocal cords.

      I meant to address this, but I wonder whether you realise you’re changing Searle’s thought experiment to suit you a little bit. Searle never imagines that he is walking around interacting with the real physical world. He always imagines he is interacting only with meaningless (to him) Chinese symbols. Even when he leaves the room, he is not walking around talking to people and describing what he sees, he is just being handed Chinese squiggles and writing Chinese squiggles in response. In Searle’s thought experiment, the Chinese mind does not have access to Searle’s eyes, ears, vocal cords and so on. Only to the characters, which (we can assume) the Chinese Room algorithm translates into a sensory stimulus of some kind for the virtual mind, whether visual or aural.

      > the own-brain can interrogate any part of the virtual-brain

      Not really. The algorithm he has only knows how to process input symbols. He can’t really short circuit that process and interrogate parts of the brain because he doesn’t have a procedure for doing that, and in a complex system like this, that is an impossible task. All he can do is pass in any Chinese symbols he likes and get some Chinese symbols out after doing a lot of processing.

      > Thus, own-brain can ask itself, “what would be the difference in {output phoneme-string} given ({input-phoneme-string} + {seeing 2 fingers}) versus ({input-phoneme-string} + {seeing 3 fingers})?”

      To do that (or something analogous, given that there is no provision for passing images of fingers to the CR algorithm) he would need to have candidate input symbol strings. But since he can’t write Chinese, I don’t know how you can expect him to generate these candidate input strings.

      But, yes, I will readily concede that, given enough time and near infinite patience and ingenuity, Searle could potentially use the CR algorithm as a kind of Rosetta stone and decipher the meaning of the symbols eventually. But he can happily converse in Chinese using the algorithm long long long before such understanding could come about. Similarly, if we use your modified thought experiment where sensory stimulus is catered for, Searle is more or less in the position of gradually learning Chinese by following a Chinese speaker about and observing what they say in different situations. But such understanding only comes slowly. So competence in Chinese is not at all the same thing as understanding Chinese for Searle.

      > is exactly how children learn their first language.

      Indeed! But, again, you’re ignoring the fact that this takes time. By the time Searle learned Chinese in this way, he would have already been conversing in Chinese for thousands if not millions of sentences. If competence precedes understanding, then competence is not the same as understanding.

      > If you do that, then the two brains are essentially decoupled, and I’ll entirely grant that the decoupled brain does not “understand”.

      Well, yes. That’s the thought experiment in a nutshell. That’s Searle’s whole point, that one can be competent without understanding. So we shouldn’t mistake competent machines for exhibiting human understanding.

      > it is absurd to suggest that it has not developed “understanding” through the process of memorising and developing competence

      But you’ve only argued that given a large amount of time and effort, Searle could maybe gradually learn Chinese by using the algorithm. Searle would probably happily agree with you, because he never says otherwise. He is only arguing that initially one would be competent without understanding.

    3. Coel Post author

      Hi DM,

      Searle never imagines that he is walking around interacting with the real physical world. He always imagines he is interacting only with meaningless (to him) Chinese symbols.

      Suppose someone hands him Chinese squiggles that amount to “how many fingers am I holding up?”, while holding up three fingers. Can Searle answer? According to the thought experiment, he is giving answers indistinguishable from those of a native speaker. Therefore the virtual-mind **needs** access to the visual system, et cetera. From there the rest of my response follows.

      But, if the person cannot answer, then, yes, agreed, he does not understand Chinese. Or, rather, he has very limited understanding. He knows about linkages *internal* to the Chinese grammar, but does not know how these squiggles or phonemes link to the real world. Thus his “understanding” is vastly limited and impoverished.

      The solution, however, would simply be to supply those linkages to real-world items. Thus, give the virtual-mind access to the visual system, etc. The thing that would be missing in “Searle”, is not anything mysterious about syntax vs semantics, but simply those linkages between information symbols and the external world.

      This is no different from an iPhone either knowing or not knowing the linkages between consulting a clock and Latvian phonemes for asking the time.

      He can’t really short circuit that process and interrogate parts of the brain because he doesn’t have a procedure for doing that, …

      Yes he does, he has memorised the entire thing. That means he can ask himself, about any part of the process: “if the input at this stage were {particular-squiggle-string} then what would the output squiggle-string from that stage be?”.

      He can ask that about every “CPU cycle” or equivalent. Indeed, he *must* be able to answer such a question, because that is exactly the process he is undertaking when cranking the handle on the virtual-mind.

      To do that (or something analogous, given that there is no provision for passing images of fingers to the CR algorithm) …

      Then blatantly he fails the test of giving answers indistinguishable from those of a native speaker, and blatantly he does indeed not understand.

      > Similarly, if we use your modified thought experiment where sensory stimulus is catered for, Searle is more or less in the position of gradually learning Chinese by following a Chinese speaker about and observing what they say in different situations. But such understanding only comes slowly.

      I’d say that such understanding would come at exactly the same rate as the competence. If he can link the phonemes to real-world items then he is both competent and understands; if he can’t then he is both not competent and not understanding.

      > By the time Searle learned Chinese in this way, he would have already been conversing in Chinese for thousands if not millions of sentences.

      Really? How? If the first such sentence was “how many fingers am I holding up?” then he cannot answer it until he has linked the Chinese phonemes for numbers to the concept of numbers. Again, competence and understanding really are the same thing here.

    4. Disagreeable Me (@Disagreeable_I)

      Hi Coel,

      > Suppose someone hands him Chinese squiggles that amount to “how many fingers am I holding up?”, while holding up three fingers? Can Searle answer?

      Well, the answer would be something like “How would I know, I can’t see you” in Chinese, because Searle doesn’t conceive of the Chinese Room as having input apart from language.

      > According to the thought experiment, he is giving answers indistinguishable from those of a native speaker.

      Yeah, but only in a Turing Test kind of scenario, where the native speaker is being interviewed through a text interface.

      > He knows about linkages *internal* to the Chinese grammar, but does not know how these squiggles or phonemes link to the real world.

      Right. Searle doesn’t understand how the squiggles relate to the real world, but the CR does. You can ask it any question you like about the real world, and as long as you ask in Chinese the system will be able to answer as a human would through a text interface. My view is that this understanding is not impoverished at all, but is just as rich as my own. It can’t answer how many fingers you are holding up only because it can’t see you, not because it has a limited understanding. You can, for instance, tell it that all the fingers from your pinky to your ring finger (inclusive) are up and then ask it how many fingers are up. But this rich understanding is hidden from Searle. He can only see symbols.

      > He can ask that about every “CPU cycle” or equivalent. Indeed, he *must* be able to answer such a question, because that is exactly the process he is undertaking when cranking the handle on the virtual-mind.

      Right, he can crank the handle with any input, or given any state of the machine. But I don’t think that’s the same as having insight into any particular sub-part of the brain. I think we’re possibly just speaking at cross-purposes here so it’s probably a dead end.

      > This is no different from an iPhone either knowing or not knowing the linkages between consulting a clock and Latvian phonemes for asking the time.

      But all the linkages are there (the system understands the real world perfectly well), it’s just that they’re hidden from Searle (because the system is only communicating with a text interface at the moment).

      > Then blatantly he fails the test of giving answers indistinguishable from those of a native speaker, and blatantly he does indeed not understand.

      No, because nowhere does Searle say indistinguishable from a native speaker who is currently looking at somebody holding fingers up. Again, think native speaker in a Turing Test scenario. Searle is talking about identical textual conversations, not identical physical setups. One other obvious difference would be that it would take the CR thousands of years (presumably) to produce answers to questions if all calculations were done manually. But the transcript of the conversation would pass the Turing Test.

      > I’d say that such understanding would come at exactly the same rate as the competence.

      Nope. Because right from the word go, he is competent. He can answer questions in Chinese immediately, just by following his algorithm. But he only figures out what it all means much later, by observing what was said in response to what sensory stimulus.

      > Really? How? If the first such sentence was “how many fingers am I holding up?” then he cannot answer it until he has linked the Chinese phonemes for numbers to the concept of numbers.

      He answers it by following the algorithm! He has no idea that the question was even about numbers. He only knows the question involved these phonemes and this visual stimulus and resulted in these other phonemes being output by the algorithm (which does indeed understand numbers and the linkages to phonemes and, in your version of the thought experiment, how to interpret the visual stimulus). Searle may guess that the question was “How many fingers am I holding up?”, but at this stage, that’s all it is: a guess. Over time, he might indeed come to understand the meaning of the symbols, but at the outset he is in the position of a foreigner thrown into a fully Chinese environment trying to figure out what is being said around him.

    5. Coel Post author

      Hi DM,

      > Well, the answer would be something like “How would I know, I can’t see you” in Chinese, because Searle doesn’t conceive of the Chinese Room as having input apart from language.

      In that case Searle would not be a competent speaker of Chinese, since there would be vast swathes of conversation in which he could not participate competently.

      But, if the above is envisaged then: (1) yes, that person lacks vast swathes of understanding, and (2) the thing that is missing is not anything mysterious, but merely the linkages to real-world information.

      > Yeah, but only in a Turing Test kind of scenario, where the native speaker is being interviewed through a text interface.

      So, he memorised the CR last week. Then an interlocutor asks something that requires new information. Say: “How do you think Putin will respond to being accused of murder by a British judge?”, or anything else in the news. He then cannot answer in the same way that a native speaker could. He would lack both competence and understanding. In such ways, the whole idea of competence without such understanding is incoherent.

      > Right. Searle doesn’t understand how the squiggles relate to the real world, but the CR does. You can ask it any question you like about the real world, and as long as you ask in Chinese the system will be able to answer as a human would through a text interface.

      So the CR has a real-world database? OK, but it then cannot respond to anything requiring new or real-time information. Yes, there is indeed less understanding because of that; there is also less competence because of that.

      > But all the linkages are there (the system understands the real world perfectly well), it’s just that they’re hidden from Searle (because the system is only communicating with a text interface at the moment).

      So make the linkages unhidden, and bingo! Yes, the lack of linkages reduces understanding. That’s fully in accord with my view here. Again, there is nothing mysterious about “understanding”, it is just the access to those linkages.

      > Nope. Because right from the word go, he is competent. He can answer questions in Chinese immediately, just by following his algorithm.

      No he can’t. You’ve admitted that he cannot answer the “How many fingers am I holding up?” question with the same competence as a native speaker. The “I can’t see you” answer would be distinctly weird from a person who had walked out of the CR and was walking around.

      But, again, if you restrict the situation to remove those linkages then, yes, he then lacks both competence and understanding.

    6. Disagreeable Me (@Disagreeable_I)

      Hi Coel,

      > He then cannot answer in the same way that a native speaker could.

      Of course it could. It could answer “Oh, really? I hadn’t heard that. You’re talking about that Litvinenko thing, I’m guessing. So that inquiry is finished, eh?”.

      You’re right that the information in the CR algorithm will be getting more and more out of date, and so in that sense the CR will not be indistinguishable from a well-informed native speaker who follows recent developments. It will only be indistinguishable from a native speaker contemporary with its design era.

      But I really think this is grasping at straws in what seems to be a desperate attempt to raise irrelevant problems. If you restrict yourself to topics of conversation that are not out of date, or add the caveat that it is only supposed to be indistinguishable from a native speaker of a certain specific era, then Searle’s thought experiment stands.

      > He would lack both competence and understanding.

      Searle is competent enough in Chinese to answer the question, in this case to say “I hadn’t heard that.” He doesn’t understand what the question is about though. However, the System does. It understands the concepts in the question and may even know about Putin and the whole Litvinenko affair. It understands that it is being asked to report an attitude on recent developments. It may even have such an attitude. It will update its knowledge base with this news and will know about it in future interactions.

      However, if Searle had not heard about the result of the inquiry, he would still be ignorant, because he didn’t understand the exchange. Searle may be competent to respond, but he has no understanding of what was said.

      > So the CR has a real-world database?

      As much as any of us do. I conceive of the CR as a brain simulation. Encoded within it will be information about the real world in much the same way as in a brain.

      > Yes, there is indeed less understanding because of that; there is also less competence because of that.

      OK, but not at all at the same level. Searle is as competent as a native speaker of a certain era to answer any questions; however, unlike a native speaker of a certain era, he has no idea what he is actually saying.

      > So make the linkages unhidden, and bingo!

      You’re missing the point. Sure, it is possible to make Searle understand Chinese. That’s trivial. We could just teach him Chinese. Searle’s point is that it is possible to be competent in Chinese without understanding Chinese. If he is right then you are wrong to conflate competence and understanding, and you can’t answer the point by changing the thought experiment to add information to help Searle understand what he is saying.

      > The “I can’t see you” answer would be distinctly weird from a person who had walked out of the CR and was walking around.

      No, no no! You’re not following the idea. You’re confusing the physical Searle with the native speaker the system is supposed to be indistinguishable from. The person we are comparing the CR to is not walking around and it is not Searle. It is a Chinese person sitting in a void (or simulated environment) communicating through a text interface. Again, think Turing Test. You can’t show the Chinese speaker your fingers because he or she is not physically present. From your perspective, Searle is just an interface through which you can communicate with him or her. The indistinguishability is about whether you can tell that Searle is using an algorithm to respond or engaging in some sort of telepathy with a remote Chinese person. (And we need to set aside the practical question of how long it takes him to answer or whether he screws up his face in concentration.)

    7. Coel Post author

      Hi DM,

      > You’re right that the information in the CR algorithm will be getting more and more out of date, and so in that sense the CR will not be indistinguishable from a well-informed native speaker who follows recent developments.

      Thus you’re accepting that, in order to remain competent in a *general* sense, a Chinese speaker needs an ongoing inflow of real-world information, and thus access to the senses?

      > But I really think this is grasping at straws in what seems to be a desperate attempt to raise irrelevant problems.

      Well no, I fully accept that, if you create a virtual mind with access only to its own internal database, and with no linkages between that and the outside world, then that virtual mind will lack “understanding”. It will lack understanding precisely because it has no access to any linkages between its internal information and the external world. (It does, though, have a limited understanding, in the sense of knowing about *internal* linkages.) The thing that is missing, then, is not anything mysterious, but simply the linkages to the external-world information.

      > Searle’s point is that it is possible to be competent in Chinese without understanding Chinese.

      In the above situation, the person is competent about linkages *internal* to the CR, and incompetent about linkages between that and the external world. He “understands” the linkages internal to the CR, but does not “understand” how these relate to the external world.

      Again, I don’t see a problem with my conception here; again, competence and understanding are the same thing.

    8. Disagreeable Me (@Disagreeable_I)

      Hi Coel,

      > Thus you’re accepting that, in order to remain competent in a *general* sense, a Chinese speaker needs an ongoing inflow of real-world information, and thus access to the senses?

      The only competence that is relevant is competence in speaking Chinese. We’re not talking about competence as a political pundit or whatever. In order to be generally competent in speaking Chinese, no, the system does not really need an ongoing flow of real-world information. The system presumably needs to have some understanding of *a* world in order to have something to talk about, but that’s about it. It needs to know about the real world only insofar as we want it to pass the Turing Test as a shorthand for saying it is about as intelligent and capable as a typical human being. It could just as easily be a simulation of a 14th-century peasant or a denizen of a fantasy world and that would be fine. 14th-century peasants presumably had genuine understanding, so that will do for Searle’s purposes, because he would say that he would not have even the understanding of a 14th-century peasant by virtue of running the algorithm.

      > The thing that is missing, then, is not anything mysterious, but simply the linkages to the external-world information.

      But these linkages are not missing, they’re just not active right now. The simulated CR guy can’t see you, because he’s talking to you through a text interface, but that doesn’t mean he has no understanding of what fingers look like or what the question means. If you asked him the question and then we hacked the algorithm to allow an image of you to be sent in to the simulated terminal afterwards, the guy could then answer your question, even though when it was asked those linkages were dormant. They’re there, they’re just hidden from Searle because they’re not active, just as you know what an elephant is even when there is no elephant anywhere near you and so no way for a non-native English speaker to guess you are talking about large African mammals when you say “elephant”.

      Furthermore, he is not precluded from having current causal linkages with the real world just because he is limited to a text interface. If he is told about the Litvinenko inquiry, for instance, current affairs in the real world are causing his mental model to be updated accordingly. He can come back with more questions about this and so on, learning more and perhaps even sharing knowledge the CR has on the subject. There can be a real causal interaction there such that the symbols in the CR brain are about as connected with the real world as yours and mine are as we discuss it. But this exchange is happening in the medium of Chinese symbols so Searle has no idea what is being said.

      > and incompetent about linkages between that and the external world.

      He’s perfectly competent about the real world, at least the real world as he knows it (which will get out of date over time). It’s just that he is restricted to interactions via textual communication. Just as we can have a textual discussion about elephants even without interacting with elephants, and just as our knowledge of the real world can get out of date as events transpire. He knows about as much about the real world as a typical person (of a given era) and can be quizzed about it in as much detail as you like.

      The question is whether he understands what he is saying. Searle clearly doesn’t: he cannot tell you in his native English what the conversation was about. The conversation could even have been a plot to kill Searle and he would be clueless about what was to befall him. Searle doesn’t understand, but perhaps he is invoking something else that does.

    9. Coel Post author

      Hi DM,

      > But these linkages are not missing, they’re just not active right now.

      Which means that Searle has no access to them, and for *that* reason lacks understanding.

      The virtual-mind has access to linkages within the CR, and thus “understands” *within* the CR. The virtual mind has no access to linkages between that and the external world, so lacks understanding in that respect.

      “Searle”, running the CR-simulation, also has no access to linkages between the CR-simulation and the external world, and so also lacks that understanding. The thing missing is simply those linkages.

      Searle does, of course, have access to linkages between his English-language mind and the external world. So Searle understands: (1) within the English-speaking module; (2) linkages between the English-speaking module and the external world; (3) *within* the Chinese-speaking module; BUT NOT (4) between the CR module and the external world.

    10. richardwein

      Hi guys,

      I agree with DM on this particular point. Part of the problem is that, when we stipulate that the CR algorithm is a fully detailed simulation of a brain (of a Chinese person called Lee, say), we usually omit to specify how Lee’s sensory inputs are being simulated. The real Lee would be responding to real events in his environment. The simulated Lee must be responding to simulated events (real or imaginary). In addition to simulating Lee’s brain, we could stipulate that Lee-sim simulates enough of the brain’s environment (including Lee’s body) so that Lee-sim is getting just the same inputs as Lee. But then Lee-sim would be responding to simulations of the events in Lee’s environment (in China) and not to the CR questions. We have to imagine some kind of “wrapper” software that makes the CR questions available to Lee-sim.

      “It is a Chinese person sitting in a void (or simulated environment) communicating through a text interface.”

      Exactly. We could invent a variety of different scenarios. But the simplest is probably to imagine that Lee-sim is a simulation of Lee sitting in a dark room with nothing to see but a computer screen running a chat program. The real Lee knows nothing about the CR room and its questions. But in the case of Lee-sim, the wrapper software adds in information about the CR questions. Searle somehow inputs the questions into the program. (How this happens is something else that is never clearly specified.) And the wrapper software creates a simulation of those questions on the simulated screen that simulated Lee is looking at. The Chinese answers that Searle outputs are the same ones that the real Lee _would_ give if he were placed in that imaginary dark room.

      So the only information that Lee-sim gets from the CR is the questions. (More precisely, I would call it the input from the Chinese-speaking interlocutor outside the CR. It needn’t consist just of questions.)

  6. Disagreeable Me (@Disagreeable_I)

    Or, to take another tack…

    I think you’re conflating two different things. When Searlians talk of understanding, they mean more than competence. And there certainly is a distinction between understanding and competence.

    For example, person A is competent in the physics and dynamics of basketballs in earth gravity and atmosphere, while person B understands the same. A can reliably score baskets from a distance, but cannot give an account of the mathematics required to calculate the velocities, trajectories or spin of the initial impulse given to the ball. B, on the other hand, cannot score points but is extremely well versed in the mathematics.

    In English, Searle is both competent in conversation and understands what he is saying. In Chinese, he is only competent and has no understanding. So, Searle, if you ask him in English, cannot tell you what “Hen gaoxing renshi ni” means. He has no idea. But if you say “Hen gaoxing renshi ni”, he will, while running the algorithm, respond in an appropriate fashion. If there is understanding happening (and I think there is), it is not Searle that understands but something or someone else.

    1. Coel Post author

      > A can reliably score baskets from a distance, but cannot give an account of the mathematics required to calculate the velocities, trajectories or spin of the initial impulse given to the ball.

      I would not make a big distinction between “competence” and “understanding” there. Rather, I’d say that there are different aspects to that scenario, and that one can understand some aspects while not understanding others. Thus, A would indeed have a good understanding of trajectories and spin, but not of the mathematical formalism that could be applied to the situation.

      I think that a lot of “problems” surrounding the Chinese Room are created by the idea that one must apply one of the two binary labels “understands” or “doesn’t understand”. Once one allows “partially understands” or “understands some aspects but not others”, then the problems resolve.

    2. richardwein

      > I think that a lot of “problems” surrounding the Chinese Room are created by the idea that one must apply one of the two binary labels “understands” or “doesn’t understand”.

      I quite agree. I’d also add that such words are context-dependent. We need to ask just what distinction is being made when someone asks “does he understand?”, and it may be a different distinction in different contexts. If we ask of Siri (the AI program), “Does she understand that question?”, what we’re interested in is whether she can respond to that question in an appropriate way. We’re not asking the sort of philosophical question that Searle is asking. We’re not asking whether she/it is conscious. Searle thinks that if we’re not talking about consciousness we must be speaking “metaphorically”, and not talking about “real” understanding. He doesn’t see that words can be used in different ways. Or that his own way of using the word is an unhelpful one. If he wants to ask about consciousness, he should say “is it conscious?”, not “does it understand?”.

  7. Liam Ubert

    Hi Coel,

    “Did the robot understand the instructions?” No, since the robot is not human, it cannot perform human acts, only simulations. We anthropomorphize what we observe.

    I would think that the sine qua non for human understanding is human consciousness. We assume others are conscious by projecting onto them our subjective experiences of what it feels like to be conscious. We come to this fairly reasonable conclusion because when we get together we talk to each other and get the distinct impression that we understand each other.

    All the definitions of understanding in my OED assume that only human beings are involved. However, the new gizmos are so ‘smart’ that we have projected things like intelligence or understanding onto them. I do think, however, that very few adults are confused about the difference between dead machines and living human beings.

    No one is seriously suggesting here that an abacus or slide rule exhibits understanding, nor an electronic calculator. If these human intellectual aids have any ‘intelligence’ it is because we built it into the instrument. At this point computers are nothing more than powerful calculators, ‘running’ programs that ‘process information’. We are anthropomorphizing our phones!

    The unwary are being seduced into thinking that human features are emerging from ‘calculating’ machines. I am open to the idea of machine intelligence and machine consciousness, but these should never be confused with their human counterparts. We need a distinct vocabulary that properly identifies this new domain.

    1. Coel Post author

      Hi Liam,

      I would think that the sine qua non for human understanding is human consciousness.

      That is an entirely reasonable position to take. It also might explain why there is such ongoing disagreement over the Chinese Room: some people are assuming a conception of “understanding” that requires consciousness, while others are taking a different conception that does not. The disagreement is then that we mean radically different things by “understanding”.

      It would be helpful and clarifying, though, if Searle and his supporters were to actually state the requirement for consciousness explicitly. The original paper does not make this clear. Further, I wrote this after discussing the video by Massimo Pigliucci and Daniel Kaufman on the Chinese Room, both of whom support Searle’s position, and at no point did they mention that they regarded first-person reflective consciousness as a necessary aspect of “understanding”.

  8. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    > Which means that Searle has no access to them, and for *that* reason lacks understanding.

    This is wrong or missing the point on two fronts.

    Firstly, because even if those linkages were put in place, that understanding would not be automatic, it would just become possible with a lot of ingenuity and effort and study. The linkages would act as a Rosetta stone, giving him a key, but he wouldn’t have understanding automatically any more than you would if you had been observing a Chinese person for five minutes. And yet he would still be competent.

    Secondly, the question is not why Searle doesn’t understand or how he can be made to understand, but whether it is possible to have competence without understanding. Even without the linkages, the system is competent in Chinese conversation, and yet Searle does not understand.

    > The virtual mind has no access to linkages between that and the external world, so lacks understanding in that respect.

    If so, that is the same respect in which you lack understanding of elephants whenever you are not physically interacting with an elephant! Just because there are no active physical linkages doesn’t mean there is no understanding.

    > The thing missing is simply those linkages.

    Not at all. The understanding of the CR and the understanding of Searle are on completely different levels. The CR has gained information and appears to understand what was said. It can describe the conversation (in Chinese of course) and give opinions about it. It can for instance be incredulous or pleased. But Searle cannot say (in English) any more than that symbols were exchanged and some processing done.

    Your approach seems to be to point to some minor inadequacy in the understanding of the CR (being out of date or out of physical contact with the real world) and to leap from this to Searle’s complete lack of any understanding whatsoever of what was said. There are orders of magnitude between the two. Searle comprehends nothing of what was said. The CR has near-perfect comprehension (despite being out of date and out of contact).

    In your last paragraph you are getting a little closer to my view, but where you talk of a Chinese speaking module I would speak of an entirely different mind, existing at a different level of abstraction. The Chinese “module” is not a peer of the English “module”, it sits on top of it, using the English “module” as a substrate. Furthermore, it’s not merely a language module, allowing Searle to communicate his thoughts in an additional language, it is rather a whole mind with an identity, knowledge and opinions of its own. But if you can get on board with that then you’re reasonably in line with my thinking. However it means accepting a conclusion that Searle regards as preposterous (so really the CR argument could be seen as a reductio ad absurdum). That conclusion is that two minds could exist within the same brain or that one mind could act as the substrate for another.

    That’s the rub of the issue, not the failure of Searle to see that competence and understanding are the same thing (which they aren’t, really).

    1. Coel Post author

      Hi DM,

      > Firstly, because even if those linkages were put in place, that understanding would not be automatic, it would just become possible with a lot of ingenuity and effort and study.

      No, those linkages would *be* the understanding. Thus the linkage between the Chinese phoneme-string for “three” and the external-world concept “three” would *be* the understanding, likewise with the linkage between the Chinese phoneme-string for “hamburger” and the external-world concept “hamburger”.

      > … but he wouldn’t have understanding automatically any more than you would if you had been observing a Chinese person for five minutes.

      Or, rather, the linkages would develop gradually, by observing the Chinese person, just as a child learns the linkages by observing the world and correlating it with speech.

      > Even without the linkages, the system is competent in Chinese conversation, …

      In other words, it is competent at everything within its CR virtual world; and it understands everything *within* that CR virtual world. The thing that is lacking is the linkages between that CR and the wider external-world understanding that Searle has.

      > If so, that is the same respect in which you lack understanding of elephants whenever you are not physically interacting with an elephant!

      I’m not thinking of the linkages as physical interactions. I’m merely thinking about them as things the mind knows (has access to). Examples of linkages are the linkage between the Chinese phoneme-string for “three” and the external-world concept “three”, and the linkage between the Chinese phoneme-string for “hamburger” and the external-world concept “hamburger”.
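      On this conception, the presence or absence of such linkages can be pictured as nothing more than a mapping from symbols to external-world concepts. A toy sketch (all entries below are invented for illustration):

```python
# Toy sketch of the "linkages" picture (all entries invented).  Internal
# linkages relate symbols only to other symbols; the grounding map is the
# extra thing that, on this view, *is* the understanding.

internal_linkages = {
    ("SAN", "SI"): "successor",  # the system knows one symbol follows another
}

# Linkages to external-world concepts, crudely modelled as Python objects.
grounding = {
    "SAN": 3,                  # phoneme-string for "three" -> the concept three
    "HANBAOBAO": "hamburger",  # phoneme-string -> the concept hamburger
}

def understood(symbol: str) -> bool:
    """On this picture, a symbol is understood iff it is grounded."""
    return symbol in grounding

assert understood("SAN")      # linked to the world: understood
assert not understood("GOU")  # internal-only symbol: not understood
```

      Nothing here settles the dispute; it only makes vivid what Coel means by saying the linkages themselves constitute the understanding.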

      > The understanding of the CR and the understanding of Searle are on completely different levels. The CR has gained information and appears to understand what was said. It can describe the conversation (in Chinese of course) and give opinions about it. It can for instance be incredulous or pleased. But Searle cannot say (in English) any more than that symbols were exchanged and some processing done.

      The understanding of the CR by the CR is actually exactly the same as the understanding of the CR by Searle. The CR can respond to Chinese-language queries about itself, in Chinese. But then Searle can respond to Chinese-language queries about the CR, in Chinese! Searle cannot, of course, translate those answers into English, but then neither can the CR.

      > There are orders of magnitude between the two. Searle comprehends nothing of what was said. The CR has near-perfect comprehension (despite being out of date and out of contact).

      What does the CR have near perfect comprehension of? Ex hypothesi, the CR has *zero* understanding of how anything it is saying links to anything in the external world. You have insisted on pruning all of those away! Suppose there were an error in the CR programming (suppose the phoneme-strings for “3” and “4” were erroneously switched). The CR would have no way of correcting it.

      Thus, the only thing the CR “understands” is linkages and relations *within* its own virtual-mind. Well, ex hypothesi, Searle also has access to all of those linkages and relations *within* the CR (he’s memorised them! He must have, in order to be able to crank the handle).

    2. Disagreeable Me (@Disagreeable_I)

      Hi Coel,

      > Thus the linkage between the Chinese phoneme-string for “three” and the external-world concept “three” would *be* the understanding,

      I don’t think so, although they may provide a basis for giving an account of semantic reference. But merely observing a Chinese person saying sān while holding three fingers up is not itself understanding that “sān” means three. If these linkages were present in the sense of being active, Searle would have more data he could use to make inferences, but he would not automatically understand the conversation.

      > Or, rather, the linkages would develop gradually, by observing the Chinese person

      OK, this seems to be a source of some confusion. I feel you are being a bit loose with whether you are talking about linkages between the real world and symbols within the simulated Chinese brain or the real world and symbols within Searle’s own native brain. The linkages in the simulated brain are there from the word go, even if they are not active in the sense of being physically connected right now. This is how I think the CR itself understands. Yes, Searle would develop his own linkages as he comes to understand Chinese, but he is competent long before these linkages develop and before he understands anything of what he says.

      > it is competent at everything within its CR virtual world

      What virtual world? If the topic of discussion is the real world, then it is competent at discussing the real world just as you and I are. The world is only virtual in the sense of being a mental model, but then the world I know about is virtual in this way too. I don’t have direct access to the real world, only to my model of it. The CR is no different. It is both competent in discussing and understanding the real world in precisely the same sense that I am. But Searle doesn’t understand the Chinese conversations even as he engages in them competently by virtue of running the CR algorithm.

      > Examples of linkages are the linkage between the Chinese phoneme-string for “three” and the external-world concept “three”, and the linkage between the Chinese phoneme-string for “hamburger” and the external-world concept “hamburger”.

      OK, so I’m saying the CR has all these linkages. Searle doesn’t. I can eat a hamburger and describe it in Chinese to Searle, who can relay it to the CR. The CR can reply in Chinese (via Searle) that now it wants to eat a hamburger too. This is really no different from a conversation I might have with you via email (especially if you don’t have immediate access to a physical hamburger). There is a real object (hamburger) which is being referred to by both of us. But the medium of exchange, Searle or an SMTP server, can relay the messages without having any understanding of what they are about. The difference is that Searle is not only the medium of exchange but a substrate for the mind that is doing the understanding. As such he is competent to carry out the conversation even though he himself doesn’t understand it.

      > Searle cannot, of course, translate those answers into English, but then neither can the CR.

      Sure. But “being able to translate into English” is not the criterion for understanding. “Being able to explain in your native language” is a little closer. The native language for Searle is English and the native language for the CR is Chinese. Anything Searle understands he can explain in English. If he can’t explain it in English, he doesn’t understand it. Furthermore there are facts the CR may know that he does not, and it may have a different personality and beliefs and opinions than he. It’s a different mind, in other words.

      > Ex hypothesi, the CR has *zero* understanding of how anything it is saying links to anything in the external world.

      No, that’s not the hypothesis at all. The hypothesis is instead simply that the CR is just communicating via text. It’s in the same position as a person sitting at a terminal in a locked room. It can still discuss the external world via that interface and understands that world perfectly well. It’s just not so easy for an observer who doesn’t speak Chinese to figure out what is being referred to because they are lacking the Rosetta stone of physical interaction to give them clues.

      > The CR would have no way of correcting it.

      Well, it could correct it by having it pointed out to it in conversation that it seems a little confused when it counts “1, 2, 4, 3”. The CR is not cut off from the external world, after all. It’s just limited to a text interface. And that’s really just because it makes the thought experiment clearer.

      > Well, ex hypothesi, Searle also has access to all of those linkages and relations *within* the CR

      No he doesn’t. All he has is an algorithm. Whatever linkages and relations exist in the Chinese room exist at many levels of abstraction above the algorithm. The algorithm is like having a map of a brain. Having a map of a brain doesn’t tell you how to interpret it and read someone’s mind. That may be possible in principle, but it’s nigh impossible in practice — even harder than just simulating the brain would be as in the CR thought experiment.

    3. Coel Post author

      Hi DM,

      > But merely observing a Chinese person saying sān while holding three fingers up is not itself understanding that “sān” means three.

      I think you’re misunderstanding what I mean by “linkages”. The linkage is not the observation that he said “sān” while holding up three fingers, the “linkage” is the knowledge that the Chinese phoneme-string “sān” refers to the concept “three”. Those linkages are *learned* from the sorts of observations you suggest (or they can be programmed into an AI device), but the linkages are the end-product of that learning.

      > The linkages in the simulated brain are there from the word go, even if they are not active in the sense of being physically connected right now.

      No, the linkages from the virtual mind to the external world are *not* there. If they were, “Searle” could access them.

      What the virtual mind has are linkages between its own internal concept “sān” and its own internal concept “three”. Ex hypothesi you have given no linkages between the virtual mind and the external world.

      Now, it might happen to be the case that the virtual-mind internal symbols happen to be the same as those that Chinese speakers use to refer to the external world, but that is merely a coincidence. That is merely because the thought experiment has been set up to trick people that way. But, those virtual-mind internal symbols actually don’t have linkages to the external world. The virtual-mind only has competence internally, and only has understanding internally.

      One can see this from my thought-experiment of programming in errors — differences between the programmed Chinese phoneme-strings and how they are actually used to refer to the external world. If every instance of the symbol “elephant” were switched with that of “lion”, the virtual mind would have no way of knowing about or correcting the switch, since it has no linkages to the external world.
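      The switch argument can be put in code. This is purely an illustrative sketch (the relation table and all concept names are invented for the example, not taken from the discussion): a consistent symbol swap leaves every internal relation exactly as coherent as before, so no purely internal check can detect it.

```python
# Illustrative sketch: a "programmed world" as a table of internal relations.
# All concept and relation names here are made up for the example.

relations = {
    ("elephant", "has_trunk"): True,
    ("lion", "has_trunk"): False,
    ("elephant", "is_feline"): False,
    ("lion", "is_feline"): True,
}

def swap(sym):
    # Consistently switch every instance of "elephant" with "lion".
    return {"elephant": "lion", "lion": "elephant"}.get(sym, sym)

# Apply the swap uniformly throughout the "programming":
swapped = {(swap(s), r): v for (s, r), v in relations.items()}

# Every internal query the swapped mind can pose of itself is still coherent:
assert swapped[("lion", "has_trunk")] is True
assert swapped[("elephant", "is_feline")] is True
# Nothing inside the table reveals that a swap ever happened.
```

      Only an external linkage, such as noticing which animal people point at when they say “elephant”, could expose the swap; and that is exactly the correction channel the CR has been denied.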

      > What virtual world? If the topic of discussion is the real world, then it is competent at discussing the real world just as you and I are.

      No actually it’s not. It can only discuss what is in its programming and database. It may be that — as it happens — the programming copies the external world, but that’s simply because someone has made it like that. It is a coincidence. That someone could have programmed it differently. Thus the virtual-mind only has competence about its own virtual-mind programming. It has no way of telling how that relates to the external world, since you’ve forbidden sensory devices. Any similarity with the external world is thus a coincidence, dependent on the whim of the programmer.

      > I don’t have direct access to the real world, only to my model of it. The CR is no different.

      You have your sensory organs. Your model is about that sensory data, and thus is about the external world. The CR’s model is merely about whatever the programmer has given it.

      You could notice what sort of animal people were pointing at when they said “elephant”. The CR could not.

      > It [the CR] is both competent in discussing and understanding the real world in precisely the same sense that I am.

      Absolutely not! It has competence and understanding only with regard to its programmed virtual world. Whether that is similar to anything else is coincidental.

      > Examples of linkages are the linkage between the Chinese phoneme-string for “three” and the external-world concept “three”, and the linkage between the Chinese phoneme-string for “hamburger” and the external-world concept “hamburger”.

      > OK, so I’m saying the CR has all these linkages.

      No you are not! You are saying that the CR has linkages between the Chinese phoneme-string for “three” and the VIRTUAL-WORLD concept “three”. You are saying that it does NOT have linkages to the EXTERNAL-WORLD concept “three”.

      It would have those linkages if you allowed it a video camera that could visualise the external world, and count the number of fingers held up, but you are forbidding such links.

      > But “being able to translate into English” is not the criterion for understanding. “Being able to explain in your native language” is a little closer.

      I’m dubious about that idea, since it makes “understanding” necessarily different between the CR and Searle, when that is the whole thing we’re arguing about.

      Given my stance that understanding is merely about linkages between bits of information, it wouldn’t matter which language it was in, or whether it was the first, second or third language you learned.

      > No he doesn’t. All he has is an algorithm. Whatever linkages and relations exist in the Chinese room exist at many levels of abstraction above the algorithm.

      He does have access in the sense that, for each sub-sub-sub-module of the CR he can ask himself: if squiggle were input to it, what would the output be? That allows him all the *within*-CR understanding that the CR itself has.

  9. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    > No, the linkages from the virtual mind to the external world are *not* there. If they were, “Searle” could access.

    How? What form would you expect these linkages to take, that Searle can just read them off with ease? If you had complete access to the workings of a real Chinese brain with real linkages, down to the molecule, would you be able to read off the linkages so easily?

    My view is that the linkages in either case (CR and actual brain) are the same. All any of us really has direct access to is our own world model, the “virtual world” you are talking about. There is no such thing as “true” reference. There is no physical or magical arrow pointing from brain states to objects in the world. There is only a vague heuristic link we appreciate in virtue of certain correlations arising because of a sort of approximate structural similarity between mental models and real things.

    > What the virtual mind has are linkages between its own internal concept “sān” and its own internal concept “three”

    Right. Just like a real person (in my view). If you think there is something more objectively physically real about the references in my brain or yours then you’re going to have to defend that a bit. You’re actually coming across as a bit of a Searlite now, because you’re basically saying that biological references are real whereas computational references are only internal linkages. That’s pretty much the biological naturalist (Searlite) view. Massimo would be pleased!

    Your defense of this view so far is just that our mental models arose out of interaction with the natural world and the CR’s are just the whim of a programmer. But that’s not necessarily so. It’s possible that a biological organism could be raised in a VR environment and so have references that are the whim of a designer. It’s possible that the CR algorithm was produced by scanning a real brain or by having a robot learn its way around an environment (the algorithm arising from this later being transcribed to paper and memorised by Searle). In these kinds of cases it’s not a coincidence that the virtual world corresponds to the real world and there is a real causal connection between objects in the real world and their eventual representation in the CR.

    > You are saying that the CR has linkages between the Chinese phoneme-string for “three” and the VIRTUAL-WORLD concept “three”. You are saying that it does NOT have linkages to the EXTERNAL-WORLD concept “three”.

      Honestly this makes no sense to me at all. “Three” is an abstract mathematical concept. It belongs neither to the virtual world nor to the external world. You might make your argument better by referring to a concrete object such as a tree (but even then I would disagree with you, because I disagree with your account of how semantics works).

    > It would have those linkages if you allowed it a video camera that could visualise the external world, and count the number of fingers held up, but you are forbidding such links.

    You could have that sort of stuff (video cameras and so on) involved in the production and training of the algorithm, but at the point that Searle is running it the algorithm is simulating a guy sitting at a terminal who doesn’t *currently* have access to multimedia about the external world. He is therefore in the same situation as an actual guy locked in a room. He is restricted to a text interface right now, but that doesn’t mean this data structure was not influenced by other sensory inputs in the past (or, if the wrapper to the algorithm is patched, in the future).

    But anyway, let’s review a minute, because you seem to me to be agreeing with Searle and mistaking his point. The CR is a simulation of a Chinese guy in the room. The simulated person appears to know about the real world (would pass the Turing Test), but you say it doesn’t *really* know about the real world. Any appearance that it does is a coincidence arising from the whims of the programmer. It can converse convincingly in Chinese but it doesn’t have any real understanding because it lacks the linkages to the physical world.

    Now, what, exactly, is the difference between this view and Searle’s? You seem to be pushed into the rather absurd position that even though the CR can converse ably in Chinese about a variety of subjects, it isn’t actually competent to speak Chinese. It just has the appearance of competence. This seems to be a desperate attempt to stick to your guns in insisting that competence and understanding are the same thing. If it doesn’t understand, it can’t be competent either. But this is absurd. Being competent in Chinese is just being able to converse in Chinese (i.e. pass the Turing Test). So if it doesn’t really understand the real world (due to not having linkages), then it doesn’t understand and understanding cannot be the same as competence.

    1. Coel Post author

      Hi DM,

      > My view is that the linkages in either case (CR and actual brain) are the same.

      Yes, to a large extent. An actual-brain has access to internal concepts of (say) “hamburger”. So does the CR. But, the actual-brain can link the internal concepts to sensory information. Thus the internal concept can be *about* sensory information and is thus anchored in the real world. The CR cannot do that, since you’ve forbidden it access to sensory information. Its linkages are solely to its own internal concepts, whatever it has been programmed with.

      > You’re actually coming across as a bit of a Searlite now, because you’re basically saying that biological references are real whereas computational references are only internal linkages.

      Does the above paragraph explain the difference as I see it? Both have internal linkages within their brain; in one case there are also external linkages to the external world. All of these are real linkages, but in the case with the external sensors the linkages are more extensive (and therefore, as I see it, the “understanding” is greater).

      > It’s possible that a biological organism could be raised in a VR environment and so have references that are the whim of a designer.

      If that were the case then I’d say that the organism would “understand” about the VR environment it had been raised in.

      > It’s possible that the CR algorithm was produced by scanning a real brain or by having a robot learn its way around an environment (the algorithm arising from this later being transcribed to paper and memorised by Searle).

      In which case I’d say that the CR would “understand” what it had been programmed with. But, again, I’d assert that Searle could not both memorise this and be competent at it unless he also knew the external-world references. Again, if someone held up a hamburger and asked what it was, Searle could not reply without those.

      > “Three” is an abstract mathematical concept. It belongs neither to the virtual world or the external world.

      You are a mathematical Platonist, so that makes sense to you. To me it doesn’t. To me a “concept” needs to be instantiated to “be” a concept. Thus the number “3” is either instantiated in the virtual mind as a neural pattern, or the number “3” is instantiated as a pattern in the external world (as, say, 3 seagulls sitting on a wire). The CR would have access to the internal-world concept, but would have no way of linking that to anything in the external world.

      > at the point that Searle is running it the algorithm is simulating a guy sitting at a terminal who doesn’t *currently* have access to multimedia about the external world.

      And if he is only competent at Chinese text characters, but not at the ensemble {Chinese characters plus sensory information} then I fully accept that his understanding is impoverished. So is his competence. Note that the virtual-mind he is running is also impoverished in both understanding and competence — it can only access its own internal self.

      > The CR is a simulation of a Chinese guy in the room. The simulated person appears to know about the real world (would pass the Turing Test), but you say it doesn’t *really* know about the real world.

      Correct. The CR only *appears* to know about the external world. But it doesn’t, it knows only about its internal world. Anything about the external world that was different from its own internal world, it could not cope with. It only passes the Turing Test in the restricted sense that you’re only allowed to ask it things about its internal world.

      > Now, what, exactly, is the difference between this view and Searle’s?

      Searle is (I think) asserting that the CR has full competence in Chinese but only limited (or zero) understanding. I think Searle is mistaken in that assertion of full competence in Chinese. He’s deliberately restricted so that it only has partial competence (e.g. it could not answer the “what is this?” question to someone holding up a hamburger). Since it only has partial competence, I can readily agree with Searle that it only has partial understanding (or, rather, Searle would assert that it has zero understanding, but all he’s shown is that it does not have full understanding).

      > You seem to be pushed into the rather absurd position that even though the CR can converse ably in Chinese about a variety of subjects, it isn’t actually competent to speak Chinese.

      It is not fully competent at the ensemble {Chinese characters plus real-world sensory information}. To the extent that it has competence it has understanding. To the extent that it lacks competence it lacks understanding.

  10. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    From this and from private correspondence it seems that we’ve got to the point where you agree with Searle in the specific case where the system is fed only Chinese symbols. Your only disagreement in this case is the semantic point of how we ought to define “understand”, but since even you don’t think the system really understands the real world, it’s perhaps not a point worth getting fussed about. Nevertheless, since the aim of the CR is primarily to argue against the naive computationalist view that a computer could ever really understand the world as we do simply by being programmed the right way, you should congratulate Searle on proving his point. You should no longer be mystified about why the CR is taken so seriously, and perhaps you should even count yourself amongst those who support it.

    But perhaps you think a computer could be conscious (or at least understand in the same sense that we do) if it had real “linkages” with sensory data. If that is your position, then I hate to break it to you, but you’re not endorsing the Systems Reply but another common (and I feel, incorrect) reply, the Robot Reply.

    https://en.wikipedia.org/wiki/Chinese_room#Robot_and_semantics_replies:_finding_the_meaning

    This reply is incorrect because, as Searle points out, some of the symbols Searle is manipulating could be encoded sensory information. Searle cannot see what the CR is perceiving, because he is only privy to data (akin to impulses along the optic nerve) and not sensory qualia themselves (the image of a red ball). Furthermore this sensory data need not be about some virtual simulated world, but could be real data coming from real sensors in the real world. Searle has no way of knowing how to interpret the data so as to render a picture he can understand. All he can do is run the algorithm. The CR would have the appearance of understanding the real world just as much as any robot, yet Searle himself understands nothing of what it is doing or perceiving.
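    DM’s point that encoded sensory data would look no different, to the rule-follower, from any other symbols can be sketched as follows. This is an invented illustration (the token encoding and the update rule are stand-ins of my own devising, not anything from Searle’s argument): the operator applies one purely syntactic rule to every token, and nothing in the stream marks which tokens encode an image.

```python
# Illustrative sketch: sensory input arriving as tokens that are
# indistinguishable, to the rule-follower, from language tokens.

def encode_pixels(pixels):
    # "Optic nerve" data rendered as opaque symbol strings.
    return [f"sym_{p:02x}" for p in pixels]

def rule_step(state, token):
    # Stand-in for the memorised rulebook: a deterministic, purely
    # syntactic update that treats every token identically.
    return (state * 31 + sum(ord(c) for c in token)) % 10007

# A mixed stream: two "language" tokens plus three encoded "pixels".
stream = ["sym_ni", "sym_hao"] + encode_pixels([255, 0, 128])

state = 0
for tok in stream:
    state = rule_step(state, tok)

# The operator can run this faithfully, yet nothing in the token stream
# tells him which symbols encoded a perceived image of a red ball.
print(state)
```

    The design point is that the rulebook has no "sensory" code path at all: perception enters only as more symbols, which is why the operator gains no qualia from relaying them.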

    I have also tried to show the problem with your view by going the other route — considering a biological mind that is currently cut off from the real object of its intentions (e.g. I have a reasonable understanding of elephants even while not in the presence of an elephant). We can go further and imagine someone in a sensory deprivation tank or blind and deaf and with locked in syndrome. I think such a person still understands the world. I wager you do too.

    You would probably justify this by appeal to past or potential future causal interactions. Even if I’m not looking at an elephant right now, my understanding of elephants (and the underlying brain state) has been shaped by past interactions where I saw either real elephants or faithful images of real elephants.

    Well, okay, but the same can be said for the original CR thought experiment. Though it is communicating with Chinese characters only right now, that doesn’t mean it couldn’t have past or potential future causal interactions with sensory data of the real world. It is no different from a biological brain in this respect. Allow me to elaborate with a thought experiment.

    I previously gave examples of how the CR algorithm could have been developed by having a real robot interacting with the real world (looking at elephants and so on), and said that the state of the resulting artificial neural network could be transcribed to an algorithm on paper and memorised by Searle. At this point, you think Searle’s CR no longer understands the real world, because those linkages are broken.

    But consider the following horrific thought experiment. Imagine with advances in neurology, we can tell which neurons are involved in encoding particular Chinese characters. Imagine we can attach electrodes to a brain such that we can stimulate those neurons and it seems to the subject that these characters just pop into mind. Also imagine that we can have the subject concentrate on these characters one after another and we can tell which characters he or she is thinking of. In this way it is possible to have what amounts to a telepathic conversation with the subject in the medium of Chinese characters. So far, this is not that far fetched, I feel.

    But now imagine we have the ability to cut all the nerves passing sensory information to the brain and sending motor output to the muscles while keeping the brain alive. Imagine even that we can filter the blood to remove any endocrine signals. Our poor subject is now completely cut off from all sensory data. The linkages are cut. Our subject is no different from the CR in this respect.

    I’m assuming all this can be done while preserving the ability of the brain to host conscious thought. Sure, this kind of cruel torture would probably lead to madness in the fullness of time, but not right away. It ought to be no different from being in a sensory deprivation tank, or being medically anaesthetised to feeling while remaining awake.

    Would such a brain not still understand the world even after its linkages have been cut? What’s the difference between this brain and a transcribed artificial neural network that learned by interacting with the real world? In each case there are past causal connections between the internal symbols and objects in the real world. In each case there are no present linkages between symbols and objects in the real world.

    Looking forward to your reply!

    1. Coel Post author

      Hi DM,

      > it seems that we’ve got to the point where you agree with Searle in the specific case where the system is fed only Chinese symbols.

      There is a big difference between Searle and myself. Searle is asserting that the CR lacks any sort of understanding. I assert that it does understand (my concept of understanding is merely linkages between information). I thus assert that the CR understands everything that it has been programmed with (that directly contradicts Searle). To the extent that the programming matches the external world, the CR understands the external world. To the extent that the programming differs from the external world, the CR does not understand the external world.

      I would also assert that the CR only has competence with its own internal programmed world. About anything to do with the external world that is not duplicated in the internal programmed world, the CR would lack competence. Thus I dispute Searle’s claim that the Chinese-text-input-only CR has general competence about the external world.

      > But perhaps you think a computer could be conscious (or at least understand in the same sense that we do) if it had real “linkages” with sensory data.

      Whether the CR would be conscious is, to me, a very different question from whether it understands. By hypothesis, the CR has been programmed with *competence* in replying to Chinese-text questions. Nothing about the hypothesis says that it has been programmed with self-reflective consciousness, and thus I’d presume it would not have that. (I could, though, envisage a variant in which the CR has been programmed with consciousness.)

      > This reply is incorrect because, as Searle points out, some of the symbols Searle is manipulating could be encoded sensory information.

      Really? How? By hypothesis, the CR is fed Chinese-text symbols and the CR is programmed to respond to them only as Chinese-text symbols. (Again, we’re not told it has any other form of competence.) Thus it could not be fed an image or any sensory information. (Again, we could envisage a variant in which it is fed sensory information.)

      > Searle has no way of knowing how to interpret the data so as to render a picture he can understand.

      And nor would the CR, if it had only been programmed to deal with Chinese text symbols.

      > We can go further and imagine someone in a sensory deprivation tank or blind and deaf and with locked in syndrome. I think such a person still understands the world. I wager you do too.

      I don’t think we should attribute a binary yes/no to “understanding the world”. Such understanding is surely a matter of degree. I could well envisage that such a person had less understanding.

      > Though [the CR] is communicating with Chinese characters only right now, that doesn’t mean it couldn’t have past or potential future causal interactions with sensory data of the real world.

      And, again, I’d reply that, to the extent that this internal model (the product of those past interactions) still matches the external world, the CR has competence and understanding about the external world.

      > At this point, you think Searle’s CR no longer understands the real world, because those linkages are broken.

      Well, what I’d say is that the CR has direct understanding of its internal-world model, and, to the extent that that still matches the external world, it has indirect competence and understanding about the external world.

      > Would such a brain not still understand the world even after its linkages have been cut?

      Again, I’d say that such a brain directly understands its own internal-world model, and then repeat the previous statement.

  11. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    Your position is slipping out of my grasp. Sometimes you seem to agree with Searle about whether the original CR really understands as a human does, differing with him only in whether you prefer to call what it is doing “understanding”. But sometimes you seem to disagree with him, believing that what the computer does when it understands is pretty much the same kind of thing humans do when we understand, albeit with a quantitative difference in degree. This makes it quite difficult to discuss, because when I push you on the apparent inconsistencies in one position, you seem to retreat to the other position and vice versa.

    For convenience, let’s use “understanding” for how Searle uses the term and let’s use “competence” for how you use the term, as there does seem to be a difference in how the term is used. You may feel that is a difference in degree, and so be it, but sometimes differences in degree are important.

    If you are competent in a certain field or task, you can function well in that field or task.

    If on the other hand you understand a certain field or task, you can discuss that field or task (in your usual language), extrapolate and invent and consider counterfactuals and reflect on what you are doing and all that stuff. You can potentially connect it to other fields of expertise. If you understand mathematical functions you are more likely to understand the concept of functions in programming. An understanding of economics might influence your understanding of politics and how you vote. If you understand Chinese and English you can translate from Chinese to English and vice versa. With understanding, we don’t generally have compartmentalised silos of expertise that can’t talk to each other. Understanding is a whole. Understanding is therefore competence plus the ability to reflect and connect that understanding to other areas.

    So a bacterium is competent at asexual reproduction without understanding it. A microbiologist understands asexual reproduction, while not being competent at it.

    I will agree that the borders between competence and understanding are of course blurry. I’m not sure whether I should say that a lion understands how to hunt or is just competent at hunting.

    Now, the problem is that in Searle’s original CR scenario, the CR is competent in Chinese while Searle only understands English. If he understood Chinese he would be able to reflect on what was said, translate from Chinese to English, use information learned in the Chinese conversation to his advantage and so on, but he can do none of these things. We’re nowhere near the blurry border between competence and understanding. Searle has nothing like understanding of Chinese even while he is perfectly competent.

    Now, perhaps the CR has a somewhat impoverished understanding simply because it has no sensory input, in the same way that a disembodied brain (with life support) would. But a disembodied brain which understood both Chinese and English would have no problem translating or reflecting or what have you. Even if impoverished, a disembodied brain could clearly understand to some degree, which is not true in the case of Searle with respect to Chinese. This simply isn’t the problem.

    > Again, we could envisage a variant in which it is fed sensory information.

    Indeed. This is a variant.

    > And nor would the CR, if it had only been programmed to deal with Chinese text symbols.

    In the variant it could deal with sensory data just as the brain does with nerve impulses. But with access only to a symbolic representation of nerve impulses Searle would not necessarily be able to discern images or audio or what have you.

    > I don’t think we should attribute a binary yes/no to “understanding the world”.

    We don’t need to. Even if it’s a continuum (and it is, I’ll grant), we can see that Searle is nowhere near the ballpark of what we could call understanding Chinese given how he conceives of understanding.

    1. Coel Post author

      Hi DM,

      For convenience, let’s use “understanding” for how Searle uses the term and let’s use “competence” for how you use the term, …

      OK, though I might refer to this as S-understanding for explicitness.

      Understanding is therefore competence plus the ability to reflect and connect that understanding to other areas.

      OK, or, for explicitness, that is S-understanding.

      I would then say that neither Searle nor the CR has S-understanding. Under that definition, I agree with him. I’ve been not agreeing with him simply because “S-understanding” is not what I would think of as “understanding”.

      Or, rather, S-understanding is a superset of minimal understanding, adding in additional competences beyond the minimum needed for mere “understanding”. Under the CR thought experiment, the CR has not been programmed with those additional competences.

  12. Disagreeable Me (@Disagreeable_I)

    The thing is, the CR has every appearance of having all the S-understanding of a native Chinese speaker (or at least a disembodied Chinese brain on life support). It can reflect on what was said in Chinese and connect the information learned with all the knowledge in its knowledge base about the world. It’s just that this stuff is all closed off from Searle’s native intelligence. It is as though there are two people living in his head, Searle and the CR. Searle doesn’t S-understand Chinese and the CR doesn’t S-understand English, but the system is competent in both.

    The computationalist view is that the CR actually does have this S-understanding (somehow). Searle’s view is that it does not. This is what the debate is actually about and what the CR argument is attempting to resolve, not the proper definition of the term “understand”.

    What is your view?

    1. Coel Post author

      Hi DM,

      The thing is, the CR has every appearance of having all the S-understanding of a native Chinese speaker …

      My problem is that I’m not clear what competences the CR is supposed to have. It can only pass the Turing Test through a text interface (as opposed to having the general functionality of a human being).

      This matters for S-understanding, given that that includes self-reflective consciousness. I can envisage a robot passing a text-only Turing Test without being conscious, and thus without having S-understanding. However, I would deny that a robot could emulate the fully general functions of a human being without having consciousness and thus without having S-understanding.

      I also admit that I don’t have a good conception of how consciousness works, and thus am unsure what competencies would demand consciousness and which would not; somewhere between the stripped down, text-interface-only, Chinese-characters-only competence, and the full functionality of a human, it would — I assert — need to have degrees of consciousness, but I admit that I’m unsure about anything more specific than that.

      Having said that, I am convinced that consciousness and thus S-understanding are things programmable in material neural networks, and thus (I would presume, though don’t know for sure) in other computer architectures also. Thus I’d simply reverse the question and ask: ok, you’ve told us that the CR has been programmed with text-only Turing-Test competence in Chinese, but has it also been programmed with self-reflection on that competence? I can envisage both yes and no variants to that question. I can envisage an iPhone 26 being CR-competent. Whether they also program in self-reflective consciousness, or the degree to which they do that, is another issue.

    2. richardwein

      Hi Coel,

      > My problem is that I’m not clear what competences the CR is supposed to have.

      That’s why it’s good to stipulate, as I’ve always done, that the program includes a full-brain simulation of a Chinese person. So, from a computational point of view, everything that goes on in a real person’s brain also goes on in the CR.

      To be more specific, imagine three large black boxes whose only connection with the outside world is a wire carrying a two-way Chinese-character-based chat. Inside Box A is a Chinese person called Lee, communicating in Chinese with an outside interlocutor using a chat device (any device with a screen and keyboard that can run a chat program).

      Inside Box B is a computer running a program that constitutes a detailed simulation of everything in Box A, including Lee’s brain. I call this simulation program (and the corresponding running system) “Lee-sim”. As each character of chat comes into Box B along the wire, Lee-sim simulates that character appearing on the screen of the chat device, and then simulates Lee observing the character and reacting to it. Each character that simulated Lee types in reply is output along the wire.

      Box C contains Searle and a chat device, so that Searle can chat with the same outside Chinese-speaking interlocutor. Searle has also memorised the entire Lee-sim program, and can run that program in his head. He inputs the incoming Chinese characters into the program, runs the program, and from time to time the program produces an output Chinese character, which Searle types on the chat device. Box C is the Chinese Room (although this version uses an electronic chat device in place of characters written on paper).
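      The Box C loop can be sketched in a few lines of code. This is purely illustrative: the names (`make_lee_sim`, `box_c_chat`) are my own, and the lookup table is a trivial stand-in for what would in fact be a full brain simulation. The point of the sketch is only the shape of the process, in which the person in the role of CPU feeds symbols in, runs the program, and passes symbols out, without ever interpreting them.

      ```python
      # Illustrative sketch of Box C: Searle as CPU running a memorised program.
      # The table below stands in for the (vastly more complex) Lee-sim
      # brain simulation; all names here are hypothetical.

      def make_lee_sim():
          """Return a stateful step function: Chinese symbol in, reply out.

          Here the 'program' is a trivial lookup table; in the thought
          experiment it would be a detailed simulation of Lee's brain
          and chat device."""
          history = []                              # the program keeps internal state
          table = {"你好": "你好！", "再见": "再见！"}

          def lee_sim_step(symbol_in):
              history.append(symbol_in)             # state updates as chat proceeds
              return table.get(symbol_in, "……")     # emit output symbols
          return lee_sim_step

      def box_c_chat(incoming):
          """Searle's role: feed each incoming symbol to the memorised program
          and type out whatever symbols it emits. At no point does this loop
          require him to interpret the symbols himself."""
          step = make_lee_sim()
          return [step(symbol) for symbol in incoming]
      ```

      Nothing in `box_c_chat` depends on what the symbols mean, which is exactly the feature the thought experiment trades on.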

  13. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    I think some of your problems with the CR thought experiment can be answered by consideration of what the thought experiment is trying to achieve. Searle is trying to present a knock-down argument against computationalism by showing that even the most sophisticated computer cannot be conscious or understand. A machine that can pass the Turing Test in Chinese is taken as an example of a very sophisticated computer. As such, keeping the point of the thought experiment in mind, if in doubt we must always lean towards the assumption that the algorithm is doing things in a sophisticated or human-analogous way. For instance, it is safe to regard the CR as a simulation of the disembodied Chinese biological brain I raised before — a brain that is conscious and has S-understanding but is currently in a state of sensory deprivation due to having its nerves cut or chemically disabled.

    Next, as to why the CR communicates only through a Chinese interface: this is purely to help people see the argument more clearly and is not really crucial to the idea. If Searle could himself speak Chinese, he would understand what he was passing in or out, but he doesn’t understand what he is passing in or out so he doesn’t understand Chinese. If this is really a sticking point for you it’s OK to alter the scenario (as Searle himself suggests) and have the inputs be sensory data from a remote robot and the outputs be motor commands to that robot. (Searle of course can’t see the robot as this would open a window for cheating by observing the robot in action and allowing inferences to be made about the meaning of the input and output).

    > I can envisage a robot passing a text-only Turing Test without being conscious,

    Well, I don’t think I can. Not a really rigorous Turing Test at least, and especially if the algorithm used to pass it is based on the human brain and not just using a clever set of tricks like ELIZA or what have you. Such a Turing Test would probe the machine’s ability to learn, reflect and introspect. I think (and most computationalists and functionalists think) that consciousness is simply what we call the aggregate of these abilities. If you disagree then you’re not really a functionalist/computationalist like Dennett and are instead more of an agnostic.

    I also don’t see that having sensory data right now is at all crucial for consciousness because people don’t lose consciousness when placed in sensory deprivation. If the CR is modeled on a human brain, it would for instance have the ability to recall or imagine sensory data (or at least believe it has this ability) so even if it is currently communicating only with Chinese characters that doesn’t mean it cannot understand what “red” means.

    > and thus am unsure what competencies would demand consciousness and which would not

    Right. So if in doubt, assume that the CR has all the competencies of a human brain. If it is restricted to characters only right now, that’s because its inputs and outputs have been cut off, not because the algorithm itself could not process sensory data. It’s like that Chinese brain that has had its nerves cut. It has competencies regarding sensory data that are dormant only because it is not currently receiving sensory data input.

    > ok, you’ve told us that the CR has been programmed with text-only Turing-Test competence in Chinese, but has it also been programmed with self-reflection on that competence?

    Yes. If in doubt, always err on the side of sophistication.

    So, given this maximally sophisticated algorithm which appears to have S-understanding of the Chinese language in any way we could possibly detect empirically, but also given that Searle himself manifestly does not have S-understanding of Chinese (since he cannot translate Chinese to English), does the apparent S-understanding of the CR constitute real S-understanding or just the illusion of it?

    (If you get hung up on the Chinese characters thing, consider the sensory/motor data variant.)

    Searle’s view is that it does not actually have S-understanding. Dennett’s view is that it does.

    (I should point out that I disagree with Dennett’s response to the CR, which is to say that the CR depends on pushing human intuitions far beyond their usefulness. We may think that Searle would not understand Chinese but if he were smart enough to internalise the algorithm then he is some sort of intellectual God and who are we to say what he can understand? I disagree because I think our intuitions are correct and Searle would not understand Chinese. I won’t get into my reasons for this just now.)

    You appear to be an agnostic, and if so then you should probably appreciate the value of Searle’s argument because it seems you would otherwise be naturally inclined to assume Dennett was right.

    1. Coel Post author

      I also don’t see that having sensory data right now is at all crucial for consciousness because people don’t lose consciousness when placed in sensory deprivation.

      Appreciation of sensory data does seem to be a large part of consciousness. Even when sensory deprived, we still retain some sensory information, even if just things like awareness of our own body. I can imagine stripping that down, perhaps with some Buddhist meditation, to very little sensory information, but I’d suggest that that is, at the least, a rather altered state of consciousness.

  14. richardwein

    Hi DM,

    > You appear to be an agnostic, and if so then you should probably appreciate the value of Searle’s argument because it seems you would otherwise be naturally inclined to assume Dennett was right.

    Surely not! You seem to be saying that Coel should appreciate the value of Searle’s argument just because he’s (partly) sympathetic to Searle’s conclusion. That’s not good critical thinking. The argument should be assessed on its own merits. And the argument is vacuous nonsense regardless of whether Searle’s conclusion happens to be true!

    1. Disagreeable Me (@Disagreeable_I)

      Hi Richard,

      > Surely not! You seem to be saying that Coel should appreciate the value of Searle’s argument just because he’s (partly) sympathetic to Searle’s conclusion.

      Not really. What I’m saying is that seeing as the CR has apparently persuaded Coel that there may be some issues with naive computationalism (he hasn’t admitted this, but this is how it looks to me), then he ought to acknowledge it as having value. Any argument that changes your thinking in this way has merit. Even if the argument is wrong, if it encourages you to refine your thinking then it deserves some respect.

      I personally don’t think the argument is vacuous because it shows something I regard as important, namely that it is incorrect to attribute consciousness or understanding to hardware. This is a trap a lot of physicalist computationalists fall into. Whatever it is that understands, it is not a physical artifact, because the physical entity known as Searle exhibits understanding of both English and Chinese but cannot translate between them. Within this one physical body there appear to be two minds, so it is wrong to equate mind with body or mind with brain. Similarly we could imagine two conscious AIs talking to each other but hosted on a single physical machine, or one AI with its processing distributed among multiple machines.

      What is conscious would instead appear to be some kind of pattern, essentially an abstract structure — abstract in the sense that although it is physically realised, its existence is independent of a specific physical realisation. We can for instance imagine Searle passing the processing of the algorithm back and forth with a colleague as each works in shifts — the CR is completely unaffected by which philosopher is currently acting as the CPU.

      I think this is as true of biological consciousness as of machine consciousness, which means that whatever I am, I am not simply my brain. This has implications for the possibility in principle of mind uploading or identity duplication. This is not a conclusion which is welcomed by people who think abstract structures are inventions or projections of the mind — after all the mind cannot be an abstract structure if an abstract structure only exists in the imagination of a mind.

      This is why I think that consistency demands that computationalism go hand in hand with Platonism. Searle rejects computationalism and apparently or presumably Platonism, so in one sense I think his position is more defensible than that of the anti-Platonist computationalists.

    2. Coel Post author

      Hi DM,

      I’ve been mulling over things since our Skype conversation. I admit that consciousness puzzles me and I don’t see any account that seems convincing to me (which is partly why, in the article, I was trying to take a stripped down account of “understanding”).

      … it is incorrect to attribute consciousness or understanding to hardware. This is a trap a lot of physicalist computationalists fall into.

      I would want to insist on a hardware implementation of anything that is conscious. (Sorry, but your Platonism is just too weird for me, though I acknowledge that that might be a limitation of my intuition.) But, I wouldn’t necessarily insist on one item of hardware mapping to one conscious mind. I can cope with a hardware device running disjoint consciousnesses, and I can cope with a single consciousness running on a distribution of more than one device.

    3. Coel Post author

      By the way, I can also cope with multiple different possible physical instantiations of the same consciousness.
