First Draft: Not for Quotation
Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.

MINDS, MACHINES AND SEARLE 2:
WHAT'S RIGHT AND WRONG ABOUT THE CHINESE ROOM ARGUMENT

Stevan Harnad
Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ UNITED KINGDOM
harnad@cogsci.soton.ac.uk
http://www.cogsci.soton.ac.uk/~harnad/
phone: +44 23-80 592-582

When, in 1979, Zenon Pylyshyn, associate editor of Behavioral and Brain Sciences (BBS, the peer commentary journal that I edit), informed me that he had secured a paper by John Searle with the unprepossessing title of [XXXX], I cannot say that I was especially impressed; nor did a quick reading of the brief manuscript -- which seemed to be yet another tedious "Granny Objection"[1] about why/how we are not computers -- do anything to upgrade that impression.

--- Footnote 1: See: http://cogsci.ecs.soton.ac.uk/~harnad/Hypermail/Explaining.Mind96/ ---

The paper pointed out that a "Chinese-Understanding" computer program would not really understand Chinese because someone who did not understand Chinese (e.g., Searle himself) could execute the same program while still not understanding Chinese; hence the computer executing the program would not be understanding Chinese either. The paper rebutted various prima facie counterarguments against this (mostly variants on the theme that it would not be Searle but "the system," of which Searle would only be a part, that would indeed be understanding Chinese when the program was being executed), but all of this seemed trivial to me: Yes, of course an inert program alone could not understand anything (so Searle was right about that), but surely an executing program might be part of what an understanding "system" like ourselves really does and is (so the "System Reply" was right too).

The paper was refereed (favorably), and was accepted under the revised title "Minds, Brains, and Programs," circulated to 100 potential commentators across disciplines and around the world, and then co-published in 1980 in BBS with 27 commentaries and Searle's Response. Across the ensuing years, further commentaries and responses continued to flow as, much to my surprise, Searle's paper became BBS's most influential target article (and still is, to the present day) as well as something of a classic in cognitive science [2]. (At the Rochester Conference on Cognitive Curricula (Lucas & Hayes 1982), Pat Hayes went so far as to define cognitive science as "the ongoing research program of showing Searle's Chinese Room Argument to be false" -- "and silly," I believe he added at the time).

--- Footnote 2: See: http://cogsci.umn.edu/millennium/final.html ---

As the arguments and counterarguments kept surging across the years I chafed at being the only one on the planet not entitled (ex officio, being the umpire) to have a go, even though I felt that I could settle Searle's hash if I had the chance, and put an end to the rather repetitious and unresolved controversy. In the late 80's I was preparing a critique of my own, called "Minds, Machines and Searle" (after "Minds, Machines and Goedel" by Lucas [1961], another philosopher arguing that we are not computers), though I was not sure where to publish it (BBS being out of the question). One of the charges laid against Searle by his critics had been that his wrong-headed critique had squelched funding for Artificial Intelligence (AI), so the newly founded Journal of Experimental and Theoretical Artificial Intelligence (JETAI) seemed a reasonable locus for my own critique of Searle, which accordingly appeared there in 1989.

I never heard anything from Searle about my JETAI critique, even though we were still interacting regularly in connection with the unabating Continuing Commentary on his Chinese Room Argument (CRA) in BBS, as well as a brand new BBS target article (Searle 1990a) that he wrote specifically to mark the 10th anniversary of the CRA. This inability to enter the fray would have been a good deal more frustrating to me had not a radically new medium for Open Peer Commentary been opening up at the same time: It had been drawn to my attention that since the early 80's the CRA had been a prime topic on "comp.ai", a discussion group on Usenet. (That Global Graffiti Board for Trivial Pursuit was to have multiple influences on both me and BBS, and on the future course of Learned Inquiry and Learned Publication, but that is all another story [Harnad 1990a, 1991b; Hayes et al. 1992]; here we are only concerned with its influence on the Searle saga.)

Tuning in to comp.ai in the mid-to-late 80's with the intention of trying to resolve the debate with my own somewhat ecumenical critique of Searle (Searle's right that an executing program cannot be ALL there is to being an understanding system, but wrong that an executing program cannot be PART of an understanding system), I found, to my surprise, comp.ai so choked with a litany of unspeakably bad anti-Searle arguments that I had to spend all my air-time defending Searle against these non-starters instead of burying him, as I had intended to do. (Searle did take notice this time, for apparently he too tuned in to comp.ai in those days, encouraging me [off-line] to keep fighting the good fight -- which puzzled me, as I was convinced we were on opposite sides.)

I never did get around to burying Searle, for after months of never getting past trying to clear the air by rebutting the bad rebuttals to the CRA, I begged Searle [off-line] to read my "Minds, Machines and Searle" and see that we were adversaries rather than comrades-at-arms, despite contrary appearances on comp.ai. He wrote back to say that although my paper contained points on which reasonable men might agree to disagree, on the ESSENTIAL point, the one everyone else was busy disputing, I in fact agreed with him -- so why didn't I just come out and say so?

It was then that the token dropped. For there was something about the Chinese Room Argument that had just been OBVIOUSLY right to me all along, and hence I had quite taken that part for granted, focusing instead on where I thought Searle was wrong; yet that essential point of agreement was the very one that everybody was contesting! And make no mistake about it: if you take a poll -- in the first round of BBS Commentary, in the Continuing Commentary, on comp.ai, or in the secondary literature about the Chinese Room Argument that has been accumulating across both decades to the present day (and culminating in the present book) -- the overwhelming majority still think the Chinese Room Argument is dead wrong, even among those who agree that computers can't understand! In fact (I am open to correction on this), it is my impression that, apart from myself, the only ones who profess to accept the validity of the CRA seem to be those who are equally persuaded by what I called "Granny Objections" earlier -- the kind of soft-headed friends who do even more mischief to one's case than one's foes.

So what is this CRA then, and what is right and wrong about it? Searle is certainly partly to blame for the two decades of misunderstandings about his argument about understanding. He did not always state things in the most perspicuous fashion. To begin with, he baptized as his target a position that no one was quite ready to acknowledge as his own: "Strong AI."

What on earth is "Strong AI"? As distilled from various successive incarnations of the CRA (oral and written: Searle 1980b, 1982, 1987, 1990b), proponents of Strong AI are those who believe three propositions:

(1*) The mind is a computer program.

(2*) The brain is irrelevant.

(3*) The Turing Test is decisive.

It was this trio of tenets that the CRA was intended to refute. (But of course all it could refute was their conjunction; some of them could still be true even if the CRA was valid.) I will now reformulate (1*) - (3*) so that they are the recognizable tenets of COMPUTATIONALISM, a position (unlike "Strong AI") that is actually held by many thinkers, and hence one worth refuting, if it is wrong (Newell 1980; Pylyshyn 1984; Dietrich 1990).

Computationalism is the theory that cognition is computation, that mental states are just computational states. In fact, that is what tenet (1*) should have been:

(1) Mental states are just implementations of (the right) computer program(s). (Otherwise put: Mental states are just computational states.)

If (1*) had been formulated in this way in the first place, it would have pre-empted objections about inert code not being a mind: Of COURSE the symbols on a piece of paper or on a disk are not mental states. The code -- the RIGHT code (assuming it exists) -- has to be EXECUTED in the form of a dynamical system if it is to be a mental state.

The second tenet has led to even more misunderstanding. How can the brain be irrelevant to mental states (especially its own!)? Are we to believe that if we remove the brain, its mental states somehow perdure somewhere, like the Cheshire Cat's grin? What Searle meant, of course, was just the bog-standard hardware/software distinction: A computational state is implementation-independent. Have we just contradicted tenet (1)?

(2) Computational states are implementation-independent. (Software is hardware-independent.)

If we combine (1) and (2) we get: Mental states are just implementation-independent implementations of computer programs. This is not self-contradictory. The computer program has to be physically implemented as a dynamical system in order to become the corresponding computational state, but the physical details of the implementation are irrelevant to the computational state that they implement -- except that there has to be SOME form of physical implementation. Radically different physical systems can all be implementing one and the same computational system.
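To make tenet (2) concrete, here is a minimal sketch (mine, and purely illustrative -- it is no one's cognitive model, and all the names in it are invented): one and the same trivial computational system, a two-state machine that tracks the parity of the 1s it has read, realized in two deliberately different ways. At the computational level of description the two realizations are the same system, so whatever computational properties one of them has, both have.

    # Purely illustrative sketch of implementation-independence:
    # one abstract computational system, two different realizations.

    # Realization A: an explicit state-transition table.
    TABLE = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}

    def run_table(inputs: str) -> str:
        state = "even"
        for symbol in inputs:
            state = TABLE[(state, symbol)]
        return state

    # Realization B: mutually recursive functions; no table in sight.
    def run_functions(inputs: str) -> str:
        def even(rest: str) -> str:
            if not rest:
                return "even"
            return odd(rest[1:]) if rest[0] == "1" else even(rest[1:])
        def odd(rest: str) -> str:
            if not rest:
                return "odd"
            return even(rest[1:]) if rest[0] == "1" else odd(rest[1:])
        return even(inputs)

    # The two realizations are computationally indistinguishable:
    for tape in ["", "1", "1101", "0010"]:
        assert run_table(tape) == run_functions(tape)

The sketch carries no more weight than tenet (2) itself: the physical (here, programmatic) details of the realization add nothing, computationally speaking, over and above the computational system they realize.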

Implementation-independence is indeed a part of both the letter and the spirit of computationalism. There was even a time when computationalists thought that the hardware/software distinction cast some light on (if it did not outright solve) the mind/body problem: The reason we have that long-standing problem in understanding how on earth mental states could be just physical states is that they are NOT! Mental states are just computational states, and computational states are implementation-independent. They have to be physically implemented, to be sure, but don't look for the mentality in the matter (the hardware): it's the software (the computer program) that matters.

If Searle had formulated the second tenet of computationalism in this explicit way, not only would most computationalists of the day have had to recognise themselves as his rightful target, not only would it have fended off red herrings about the irrelevance of brains to their own mental states, or about there being no need for a physical implementation at all, but it would have exposed clearly the soft underbelly of computationalism, and hence the real target of Searle's CRA: For it is precisely on the strength of implementation-independence that computationalism will stand or fall.

The critical property is transitivity: If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do (and, by tenet (1), being in a mental state is just being in a computational state). We will return to this. It is what I have dubbed "Searle's Periscope" on the normally impenetrable "other-minds" barrier (Harnad 1991a); it is also that soft underbelly of computationalism. But first we must fix tenet (3*).

Actually, verbatim, tenet (3*) is not so much misleading (in the way (1*) and (2*) were misleading) as it is incomplete. It should have read:

(3) There is no stronger empirical test for the presence of mental states than Turing-Indistinguishability; hence the Turing Test is the decisive test for a computationalist theory of mental states.

This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantor of lacking one. It just means that we cannot do any BETTER than the TT, empirically speaking. Whatever cognition actually turns out to be -- whether just computation, or something more, or something else -- cognitive science can only ever be a form of "reverse engineering" (Harnad 1994a) and reverse-engineering has only two kinds of empirical data to go by: structure and function (the latter including all performance capacities). Because of tenet (2), computationalism has eschewed structure; that leaves only function. And the TT simply calls for functional equivalence (indeed, total functional indistinguishability) between the reverse-engineered candidate and the real thing.

Consider reverse-engineering a duck: A reverse-engineered duck would have to be indistinguishable from a real duck both structurally and functionally: It would not only have to walk, swim and quack (etc.) exactly like a duck, but it would also have to look exactly like a duck, both externally and internally. No one could quarrel with a successfully reverse-engineered candidate like that; no one could deny that a complete understanding of how that candidate works would also amount to a complete understanding of how a real duck works. Indeed, no one could ask for more.

But one could ask for LESS, and a functionalist might settle for only the walking, the swimming and the quacking (etc., including everything else that a duck can DO), ignoring the structure, i.e., what it looks like, on the inside or the outside, what material it is made of, etc. Let us call the first kind of reverse-engineered duck, the one that is completely indistinguishable from a real duck, both structurally and functionally, D4, and the one that is indistinguishable only functionally, D3.

Note, though, that even for D3 not all the structural details would be irrelevant: To walk like a duck, something roughly like two waddly appendages is needed, and to swim like one, they had better be something like webbed ones too. But even with these structure/function coupling constraints, aiming for functional equivalence alone still leaves a lot of structural degrees of freedom open.

(Those degrees of freedom would shrink still further if we became more minute about function -- moulting, mating, digestion, immunity, reproduction -- especially as we approached the level of cellular and subcellular function. So there is really a microfunctional continuum between D3 and D4; but let us leave that aside for now, and stick with D3 macrofunction, mostly in the form of performance capacities.)

Is the Turing Test just the human equivalent of D3? Actually, the "pen-pal" version of the TT, as Turing (1950) originally formulated it, was even more macrofunctional than that -- it was the equivalent of D2, requiring the duck only to quack. But in the human case, "quacking" is a rather more powerful and general performance capacity, and some consider its full expressive power to be equivalent to, or at least to draw upon, our full cognitive capacity (Fodor 1975; Harnad 1996a).

So let us call the pen-pal version of the Turing Test T2. To pass T2, a reverse-engineered candidate must be Turing-indistinguishable from a real pen-pal. Searle's tenet (3) for computationalism is again a bit equivocal here, for it states that TT is the decisive test, but does that mean T2?

This is the point where reasonable men could begin to disagree. But let us take it to be T2 for now, partly because that is the version that Turing described, and partly because it is the one that computationalists have proved ready to defend. Note that T2 covers all cognitive capacities that can be tested by paper/pencil tests (reasoning, problem-solving, etc.); only sensorimotor (i.e., robotic) capacities (T3) are left out. And the pen-pal capacities are both life-size and life-long: the candidate must be able to deploy them with anyone, indefinitely, just as a real pen-pal could; we are not talking about one-night party-tricks (Harnad 1992) here but real, human-scale performance capacity, indistinguishable from our own.

We now reformulate Searle's Chinese Room Argument in these new terms: SUPPOSE that computationalism is true, that is, that mental states, such as understanding, are really just implementation-independent implementations of computational states, and hence that a T2-passing computer would (among other things) understand.

Note that there are many ways to reject this premise, but resorting to any of them is tantamount to accepting Searle's conclusion, which is that a T2-passing computer would NOT understand. (His conclusion is actually stronger than that -- too strong, in fact -- but we will return to that as another of the points on which reasonable men can disagree.) So if one rejects the premise that a computer could ever pass T2, one plays into Searle's hands, as one does if one holds that T2 is not a strong enough test, or that implementational details DO matter.

So let us accept the premise and see how Searle arrives at his conclusion. This, after all, is where most of the heat of the past 20 years has been generated. Searle goes straight for computationalism's soft underbelly: implementation-independence (tenet (2)). Because of (2), any and every implementation of that T2-passing program must have the mental states in question, if they are truly just computational states. In particular, each of them must understand. Fair enough. But now Searle brings out his intuition pump, adding that we are to imagine this computer as passing T2 in Chinese; and we are asked to believe (because it is true) that Searle himself does not understand Chinese. It remains only to note that if Searle himself were executing the computer program, he would still not be understanding Chinese. Hence (by (2)) neither would the computer, executing the very same program. Q.E.D. Computationalism is false.
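For readers who want the intuition pump laid out mechanically, here is a toy caricature (mine, not Searle's, and of course nothing remotely like a T2-passing program, which nobody has): executing such rules -- whether the executor is a CPU or Searle working by hand -- is a matter of matching symbol shapes and emitting other symbol shapes; no step in the procedure requires knowing what any of the symbols mean.

    # Toy caricature of rule-following in the room (purely illustrative;
    # no real T2-passing rulebook exists). The executor consults only the
    # SHAPES of the symbols, never their meanings; "squiggles" and
    # "squoggles" stand in for the Chinese characters of the thought
    # experiment.

    RULEBOOK = {
        "squiggle squiggle": "squoggle squoggle",
        "squiggle squoggle": "squoggle squiggle",
    }

    def execute(incoming: str) -> str:
        # A purely syntactic lookup: match the incoming shape, emit the
        # paired outgoing shape. Nothing here depends on what (if
        # anything) the symbols are about.
        return RULEBOOK.get(incoming, "squoggle")

    print(execute("squiggle squiggle"))   # -> "squoggle squoggle"

Whether the lookup is done by silicon or by Searle with pencil and paper, the procedure is the same; by tenet (2), that is all that matters, and it is all the CRA needs.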

Now just as it is no refutation (but rather an affirmation) of the CRA to deny that T2 is a strong enough test, or to deny that a computer could ever pass it, it is merely special pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that implementational details DO matter after all, and that the computer's is the "right" kind of implementation, whereas Searle's is the "wrong" kind. This just amounts to conceding that tenet (2) is false after all.

By the same token, it is no use trying to save computationalism by holding that Searle would be too slow or inept to implement the T2-passing program. Speed is not a matter of principle, so it offers no escape-clause for computationalism. Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental (Churchland & Churchland 1990). It should be clear that this is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of "complexity").

On comp.ai (and even in the original 1980 commentary on Searle), some of these ad hoc counterarguments were faintly voiced, but by far the most dogged of the would-be rebuttals were variants on the System Reply, to the effect that it was unreasonable to suppose that Searle should be understanding under these conditions; he would be only a PART of the implementing system, whereas it would be the system as a WHOLE that would be doing the understanding.

Again, it is unfortunate that in the original formulation of the CRA Searle described implementing the T2-passing program in a room with the help of symbols and symbol-manipulation rules written all over the walls, for that opened the door to the System Reply. He did offer a pre-emptive rebuttal, suggesting to the Systematists that if they were really ready to believe that, whereas he alone would not be understanding under those conditions, the "room" as a whole -- Searle plus the symbol-strewn walls -- WOULD be understanding, then they should simply suppose that he had memorized all the symbols on the walls; then Searle himself would be all there was to the system.

This decisive variant did not stop some Systematists from resorting to the even more ad hoc counterargument that even inside Searle there would be a system, consisting of a different configuration of parts of Searle, and that that system would indeed be understanding. This was tantamount to conjecturing that, as a result of memorizing and manipulating very many meaningless symbols, Chinese-understanding would be induced either consciously in Searle, or, multiple-personality-style, in another, conscious Chinese-understanding entity inside his head of which Searle was unaware.

I will not dwell on any of these heroics; suffice it to say that even Creationism could be saved by ad hoc speculations of this order. (They show only that the CRA is not a proof; yet it remains the only plausible prediction based on what we know.) A more interesting gambit was to concede that no conscious understanding would be going on under these conditions, but that UNconscious understanding would be, in virtue of the computations.

This last is not an arbitrary speculation, but a revised notion of understanding. Searle really has no defense against it, because, as we shall see (although he does not explicitly admit it), the force of his CRA depends completely on understanding's being a CONSCIOUS mental state, one whose presence or absence one can consciously (and hence truthfully) ascertain and attest to (Searle's Periscope). But Searle also needs no defense against this revised notion of understanding, for it only makes sense to speak of unconscious MENTAL states (if it makes sense at all) in an otherwise conscious entity. (Searle was edging toward this position ten years later in 1990a.)

Unconscious states in nonconscious entities (like toasters) are no kind of MENTAL state at all. And even in conscious entities unconscious mental states had better be brief! We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking. But finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues (even the neurological syndrome of "automatic writing" is nothing like this; Luria 1972). It's definitely not what we mean by "understanding a language," which surely means CONSCIOUS understanding.

The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it -- and even if he obscured it by persistently using the weasel-word "intentional" in its place!): Normally, if someone claims that an entity -- any entity -- is in a mental state (has a mind), there is no way I can confirm or disconfirm it. This is the "other minds" problem. We "solve" it with one another and with animal species that are sufficiently like us through what has come to be called "mind-reading" (Heyes 1998) in the literature since it was first introduced in BBS two years before Searle's article (Premack & Woodruff 1978). But of course mind-reading is not really telepathy at all, but Turing-Testing -- biologically prepared inferences and empathy based on similarities to our own appearance, performance, and experiences. But the TT is of course no guarantee; it does not yield anything like the Cartesian certainty we have about our own mental states.

Can we ever experience another entity's mental states directly? Not unless we have a way of actually BECOMING that other entity, and that appears to be impossible -- with one very special exception, namely, that soft underbelly of computationalism: For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it. This is Searle's Periscope, and a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather NONconscious) state -- nothing to do with the mind.

Computationalism was very reluctant to give up on either of these; the first would have amounted to converting from computationalism to "implementationalism" to save the mental -- and that would simply be to rejoin the material world of dynamical systems, from which computationalism had hoped to abstract away. The second would have amounted to giving up on the mental altogether.

But there is also a sense in which the System Reply is right, for although the CRA shows that cognition cannot be ALL just computational, it certainly does not show that it cannot be computational AT ALL. Here Searle seems to have drawn stronger conclusions than the CRA warranted. (There was no need: showing that mental states cannot be just computational was strong enough!) But he thought he had shown more:

Searle thought that the CRA had invalidated the Turing Test as an indicator of mental states. But we always knew that the TT was fallible; like the CRA, it is not a proof. Moreover, it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate. The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle BE the entire system; Searle's Periscope would fail. Not that Systematists should take heart from this, for if cognition is hybrid, computationalism is still false.

Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function: There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4). So cognitive neuroscience cannot take heart from the CRA either. It is only one very narrow approach that has been discredited: pure computationalism.

Has Searle's contribution been only negative? In showing that the purely computational road would not lead to London, did he leave us as uncertain as before about where the RIGHT road to London might be? I think not, for his critique has helped open up the vistas that are now called "embodied cognition" and "situated robotics," and they have certainly impelled me toward the hybrid road of grounding symbol systems in the sensorimotor (T3) world with neural nets.

And Granny has been given a much harder-headed reason to believe what she has known all along: That we are not (just) computers.

REFERENCES

Cangelosi, A. & Harnad, S. (2000) The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories. Evolution of Communication (Special Issue on Grounding) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.language.html

Churchland, P.M. & Churchland, P.S. (1990) Could a machine think? Scientific American 262: 32 - 37.

Dietrich, E. (1990) Computationalism. Social Epistemology 4: 135 - 154.

Fodor, J. A. (1975) The language of thought New York: Thomas Y. Crowell

Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical appraisal. Cognition. 28: 3 - 71.

Greco, A., Cangelosi, A., & Harnad, S. (2000) A Connectionist Model for Categorical Perception and Symbol Grounding. Connection Science. ftp://gracco.irmkant.rm.cnr.it/pub/angelo/cangelosi-evocom.ps

Harnad, S. (1982a) Neoconstructivism: A unifying theme for the cognitive sciences. In: Language, mind and brain (T. Simon & R. Scholes, eds., Hillsdale NJ: Erlbaum), 1 - 11. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad82.neoconst.html

Harnad, S. (1982b) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad82.consciousness.html

Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad87.categorization.html

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25. ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad89.searle.html

Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346. ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad90.sgproblem.html

Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social Epistemology 4: 167-172. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.dietrich.crit.html

Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321 - 327. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.dyer.crit.html

Harnad, S. (1990d) Scholarly Skywriting and the Prepublication Continuum of Scientific Inquiry. Psychological Science 1: 342 - 343 (reprinted in Current Contents 45: 9-13, November 11 1991). http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.skywriting.html

Harnad, S. (1991a) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html

Harnad, S. (1991b) Post-Gutenberg Galaxy: The Fourth Revolution in the Means of Production of Knowledge. Public-Access Computer Systems Review 2 (1): 39 - 53 http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.postgutenberg.html

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October) 9 - 10. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

Harnad, S. (1993) Artificial Life: Synthetic Versus Virtual. Artificial Life III. Proceedings, Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad93.artlife.html

Harnad, S. (1994a) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301. Reprinted in: C.G. Langton (Ed.) Artificial Life: An Overview. MIT Press 1995. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.artlife2.html

Harnad, S. (1994b) Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't. Special Issue on "What Is Computation" Minds and Machines 4:379-390 ftp://ftp.princeton.edu/pub/harnad/Harnad/HTML/harnad94.computation.cognition.html

Harnad, S. (1995b) Why and How We Are Not Zombies. Journal of Consciousness Studies 1: 164-167. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.zombies.html

Harnad, S. (1996) The Origin of Words: A Psychophysical Hypothesis. In: Velichkovsky, B. & Rumbaugh, D. (Eds.) Communicating Meaning: Evolution and Development of Language. NJ: Erlbaum, pp. 27-44. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad96.word.origin.html

Harnad, S. (2000) Turing Indistinguishability and the Blind Watchmaker. In: Mulhauser, G. (ed.) "Evolving Consciousness" Amsterdam: John Benjamins (in press) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad98.turing.evol.html

Harnad, S. (2001) Turing on Reverse-Engineering the Mind. Journal of Logic, Language, and Information (JoLLI) special issue on "Alan Turing and Artificial Intelligence" (in press) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

Harnad, S. Hanson, S.J. & Lubin, J. (1995) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. In: V. Honavar & L. Uhr (eds) Symbol Processors and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration. Academic Press. pp. 191-206. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.cpnets.html

Harnad, S., Steklis, H. D. & Lancaster, J. B. (eds.) (1976) Origins and Evolution of Language and Speech. Annals of the New York Academy of Sciences 280.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2: 217-238. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.virtualmind.html

Heyes, C. M. (1998). Theory of mind in nonhuman primates. Behavioral and Brain Sciences 21 (1): 101-134. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.heyes.html

Hexter, J. H. (1979) Reappraisals in History. Chicago: University of Chicago Press.

Lucas, J. R. (1961) Minds, machines and Goedel. Philosophy 36: 112-117. http://cogprints.soton.ac.uk/abs/phil/199807022

Lucas, M.M. & Hayes, P.J. (Eds.) (1982) Proceedings of the Cognitive Curriculum Conference. University of Rochester.

Luria, A. R. (1972) The Man with a Shattered World. NY: Basic Books.

Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135 - 183.

Premack, D. & Woodruff, G. (1978) Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 1: 515-526.

Pylyshyn, Z. W. (1980) Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences 3: 111-169.

Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: MIT/Bradford

Pylyshyn, Z. W. (Ed.) (1987) The robot's dilemma: The frame problem in artificial intelligence. Norwood NJ: Ablex

Searle, J. R. (1980a) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

Searle, J. R. (1980b) Intrinsic Intentionality. Behavioral and Brain Sciences 3:

Searle, J. R. (1982) The Chinese Room Revisited. Behavioral and Brain Sciences 5:

Searle, J. R. (1984) Minds, brains, and science. Cambridge, Mass.: Harvard University Press.

Searle, J. R. (1987) Minds and Brains without Programs. In: C. Blakemore & S. Greenfield (eds.) Mindwaves. Oxford: Basil Blackwell.

Searle, J. R. (1990a) Explanatory inversion and cognitive science. Behavioral and Brain Sciences 13: 585-595.

Searle, J. R. (1990b) Is the Brain's Mind a Computer Program? Scientific American, January 1990.

Steklis, H.D. & Harnad, S. (1976) From hand to mouth: Some critical stages in the evolution of language. In: Harnad et al. 1976, 445 - 455.

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460. [Reprinted in: Minds and Machines, A. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall, 1964.] http://cogprints.soton.ac.uk/abs/comp/199807017

Wittgenstein, L. (1953) Philosophical investigations. New York: Macmillan