Fast Thinking

(Chapter 9 of Daniel Dennett's The Intentional Stance, Cambridge: MIT Press 1987, pages 323-337).

One last time let us reconsider John Searle's Chinese Room Argument (Searle 1980 and forthcoming). This argument purports to show the futility of "strong AI," the view that "the appropriately programmed digital computer with the right inputs and outputs would thereby have a mind in exactly the sense that human beings have minds." (Searle, forthcoming) His argument, he keeps insisting, is "very simple"; one gathers that only a fool or a fanatic could fail to be persuaded by it.1

I think it might be fruitful to approach these oft-debated claims from a substantially different point of view. There is no point in reviewing, yet another time, the story of the Chinese Room and the competing diagnoses of what is going on in it. (The uninitiated can find Searle's original article, reprinted correctly in its entirety, followed by what is still the definitive diagnosis of its workings, in Hofstadter and Dennett 1981, pp. 353-82.) The Chinese Room is not itself the argument, in any case, but rather just an intuition pump, as Searle acknowledges: "The point of the parable of the Chinese room is simply to remind us of the truth of this rather obvious point: the man in the room has all the syntax we can give him, but he does not thereby acquire the relevant semantics." (Searle, forthcoming)

Here is Searle's very simple argument, verbatim:

Proposition 1. Programs are purely formal (i.e., syntactical).

Proposition 2. Syntax is neither equivalent to nor sufficient by itself for semantics.

Proposition 3. Minds have mental contents (i.e., semantic contents).

Conclusion 1. Having a program - any program by itself - is neither sufficient for nor equivalent to having a mind.

Searle challenges his opponents to show explicitly what they think is wrong with the argument, and I will do just that, concentrating first on the conclusion, which, for all its apparent simplicity and straightforwardness, is subtly ambiguous. I start with the conclusion because I have learned that many of Searle's supporters are much surer of his conclusion than they are of the path by which he arrives at it, so they tend to view criticisms of the steps as mere academic caviling. Once we have seen what is wrong with the conclusion, we can go back to diagnose the missteps that led Searle there.

Why are some people so sure of the conclusion? Perhaps partly, I gather, because they so intensely want it to be true. (One of the few aspects of the prolonged debate about Searle's thought experiment that has fascinated me is the intensity of feeling with which many - lay people, scientists, philosophers - embrace Searle's conclusion.) But also, perhaps, because they are mistaking it for a much more defensible near neighbor, with which it is easily confused. One might well suppose the following two propositions came to much the same thing.

(S)     No computer program by itself could ever be sufficient to produce what an organic human brain, with its particular causal powers, demonstrably can produce: mental phenomena with intentional content.

(D)     There is no way an electronic digital computer could be programmed so that it could produce what an organic human brain, with its particular causal powers, demonstrably can produce: control of the swift, intelligent, intentional activity exhibited by normal human beings.

As the initials suggest, Searle has endorsed proposition (S), as a version of his conclusion, and I am about to present an argument for proposition (D), which will be sharply distinguishable from Searle's version only after we see how the argument runs. I think that proposition (S), given what Searle means by it, is incoherent-for reasons I will explain in due course. I am not convinced that proposition (D) is true, but I take it to be a coherent empirical claim for which there is something interesting to be said. I am certain, moreover, that (D) is not at all what Searle is claiming in (S)-and this Searle has confirmed to me in personal correspondence-and that my defense of (D) is consistent with my defense of strong AI.

So anyone who thinks that no believer in strong AI could accept (D), or who thinks (S) and (D) are equivalent, or who thinks that (D) follows from (S) (or vice versa), should be interested to see how one can argue for one without the other. The crucial difference is that while both Searle and I are impressed by the causal powers of the human brain, we disagree completely about which causal powers matter and why. So my task is ultimately to isolate Searle's supposed causal powers of the brain and to show how strange-how ultimately incoherent-they are.

First we must clear up a minor confusion about what Searle means by a "computer program by itself." There is a sense in which it is perfectly obvious that no program by itself can produce either of the effects mentioned in (S) and (D): no computer program lying unimplemented on the shelf, a mere abstract sequence of symbols, can cause anything. By itself (in this sense) no computer program can even add 2 and 2 and get 4; in this sense, no computer program by itself can cause word processing to occur, let alone produce mental phenomena with intentional content.

Perhaps some of the conviction that Searle has generated to the effect that it is just obvious that no computer program "by itself" could "produce intentionality" actually derives from confusing this obvious (and irrelevant) claim with something more substantive-and dubious: that no concretely implemented, running computer program could "produce intentionality." But only the latter claim is a challenge to Al, so let us assume that Searle, at least, is utterly unconfused about this and thinks that he has shown that no running, material embodiment of a "formal" computer program could "produce intentionality" or be capable of "causing mental phenomena" (Searle 1982) purely in virtue of its being an embodiment of such a formal program.

Searle's view, then, comes to this: take a material object (any material object) that does not have the power of causing mental phenomena; you cannot turn it into an object that does have the power of producing mental phenomena simply by programming it - reorganizing the conditional dependencies of the transitions between its states. The crucial causal powers of brains have nothing to do with the programs they might be said to be running, so "giving something the right program" could not be a way of giving it a mind.

My view, to the contrary, is that such programming, such redesign of an object's state transition regularities, is precisely what could give something a mind (in the only sense that makes sense)-but that, in fact, it is empirically unlikely that the right sorts of programs can be run on anything but organic, human brains! To see why this might be so, let us consider a series of inconclusive arguments, each of which gives ground (while extracting a price).

Edwin A. Abbott's amusing fantasy Flatland: A Romance of Many Dimensions (1884) tells the story of intelligent beings living in a two-dimensional world. Some spoilsport whose name I have fortunately forgotten once objected that the Flatland story could not be true (who ever thought otherwise?) because there could not be an intelligent being in only two dimensions. In order to be intelligent, this skeptic argued, one needs a richly interconnected brain (or nervous system or some kind of complex, highly interconnected control system) and in only two dimensions you cannot wire together even so few as five things each to each other-at least one wire must cross another wire, which will require a third dimension.

This is plausible, but false. John von Neumann proved years ago that a universal Turing machine could be realized in two dimensions, and Conway has actually constructed a universal Turing machine in his two-dimensional Life world. Crossovers are indeed desirable, but there are several ways of doing without them in a computer or in a brain (Dewdney 1984). For instance, there is the way crossovers are often eliminated in highway systems: by "stoplight" intersections, where isolated parcels of information (or whatever) can take turns crossing each other's path. The price one pays, here as on the highway, is speed of transaction. But in principle (that is, if time were no object) an entity with a nervous system as interconnected as you please can be realized in two dimensions.
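
The trade can be made concrete with a minimal sketch (my own toy illustration, not Dewdney's or Conway's construction; the class and wire names are invented): two planar "wires" share a single crossing point and take turns using it, so no third dimension is needed, but each path gives up throughput to the other.

```python
from collections import deque

class StoplightJunction:
    """A planar 'crossover' shared by two wires: only one parcel may
    pass per tick, so the paths take turns instead of overlapping."""

    def __init__(self):
        self.queues = {"A": deque(), "B": deque()}  # parcels waiting on each wire
        self.turn = "A"                             # whose light is green this tick
        self.delivered = {"A": [], "B": []}

    def arrive(self, wire, parcel):
        self.queues[wire].append(parcel)

    def tick(self):
        # At most one parcel crosses per tick: the wire whose light is green.
        if self.queues[self.turn]:
            self.delivered[self.turn].append(self.queues[self.turn].popleft())
        self.turn = "B" if self.turn == "A" else "A"  # switch the light

junction = StoplightJunction()
for i in range(3):
    junction.arrive("A", f"a{i}")
    junction.arrive("B", f"b{i}")

ticks = 0
while junction.queues["A"] or junction.queues["B"]:
    junction.tick()
    ticks += 1

print(ticks, junction.delivered)  # 6 ticks for traffic a true crossover moves in 3
```

With three parcels queued on each wire, the shared junction needs six ticks to clear traffic that a genuine crossover could move in three: computation survives the flattening, but it slows down.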

Speed, however, is "of the essence" for intelligence. If you can't figure out the relevant portions of the changing environment fast enough to fend for yourself, you are not practically intelligent, however complex you are. Of course all this shows is that relative speed is important. In a universe in which the environmental events that mattered unfolded a hundred times slower, an intelligence could slow down by the same factor without any loss; but transported back to our universe it would be worse than moronic. (Are victims of Parkinson's disease with "orthostatic hypotension" demented, or are their brains-as some have suggested-just terribly slowed down but otherwise normal? The condition is none the less crippling if it is "merely" a change of pace.)

It is thus no accident that our brains make use of all three spatial dimensions. This gives us a modest but well-supported empirical conclusion: nothing that wasn't three-dimensional could produce control of the swift, intelligent, intentional activity exhibited by normal human beings.

Digital computers are three-dimensional, but they are-almost all of them-fundamentally linear in a certain way. They are von Neumann machines: serial, not parallel, in their architecture and thus capable of doing just one thing at a time. It has become a commonplace these days that although a von Neumann machine, like the universal Turing machine it is descended from, can in principle compute anything any computer can compute, many interesting computations-especially in such important cognitive areas as pattern recognition and memory searching-cannot be done in reasonably short lengths of time by them, even if the hardware runs at the absolute speed limit: the speed of light, with microscopic distances between elements. The only way of accomplishing these computations in realistic amounts of real time is to use massively parallel processing hardware. That indeed is why such hardware is now being designed and built in many AI laboratories.
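
To put rough numbers on the point (the figures in this sketch are invented round numbers, chosen only to show the shape of the arithmetic, not drawn from the text or from any benchmark): a task that needs a trillion elementary steps is hopeless in real time for a strictly serial machine, but comfortable if a million channels work at once.

```python
# Illustrative arithmetic only: why a serial architecture runs out of time
# even at high clock rates. All figures below are hypothetical round numbers.
ops_needed = 1e12          # elementary operations for one recognition task
serial_rate = 1e9          # a serial machine doing one operation at a time (ops/sec)
parallel_width = 1e6       # hypothetical number of simultaneous channels

serial_time = ops_needed / serial_rate                       # about 1000 seconds
parallel_time = ops_needed / (serial_rate * parallel_width)  # about 1 millisecond

print(f"serial:   {serial_time:.0f} s")
print(f"parallel: {parallel_time * 1000:.0f} ms")
```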

It is no news that the brain gives every evidence of having a massively parallel architecture-millions if not billions of channels wide, all capable of simultaneous activity. This too is no accident, presumably. So the causal powers required to control the swift, intelligent, intentional activity exhibited by normal human beings can be achieved only in a massive parallel processor-such as a human brain. (Note that I have not attempted an a priori proof of this; I am content to settle for scientific likelihood.) Still, it may well seem, there is no reason why one's massive parallel processor must be made of organic materials. In fact, transmission speeds in electronic systems are orders of magnitude greater than transmission speeds in nerve fibres, so an electronic parallel system could be thousands of times faster (and more reliable) than any organic system. Perhaps, but then again perhaps not. An illuminating discussion of the relative speed of human brains and existing and projected hardware computers is given by Sejnowski (forthcoming), who calculates the average processing rate of the brain at 10^15 operations per second, which is five orders of magnitude faster than even the current crop of electronic massively parallel processors. Since the ratio of computation speed to cost has increased by an order of magnitude five times in the last thirty-five years, one might extrapolate that in a few decades we will have affordable, buildable hardware that can match the brain in speed, but, Sejnowski surmises, this goal cannot be achieved with existing electronic technology. Perhaps, he thinks, a shift to optical computing will provide the breakthrough.
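
Sejnowski's extrapolation, as quoted here, is back-of-envelope arithmetic on those figures; the sketch below simply replays it (the numbers are the ones quoted above, namely his estimates, not independent ones).

```python
import math

# Back-of-envelope replay of the extrapolation in the text (illustrative only;
# the figures are the ones quoted from Sejnowski, not fresh estimates).
brain_ops_per_sec = 1e15      # Sejnowski's estimate for the whole brain
current_parallel_hw = 1e10    # "five orders of magnitude" slower than the brain
gap = math.log10(brain_ops_per_sec / current_parallel_hw)

# Historical trend quoted in the text: one order-of-magnitude improvement
# in speed-per-cost roughly every 35 / 5 = 7 years.
years_per_order = 35 / 5
years_to_parity = gap * years_per_order

print(f"gap: {gap:.0f} orders of magnitude")
print(f"naive extrapolation to parity: about {years_to_parity:.0f} years")
```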

Even if optical computing can provide 10^15 operations per second for a reasonable cost, that may not be nearly enough speed. Sejnowski's calculations could prove to underestimate the requirements by orders of magnitude if the brain makes maximal use of its materials. We might need to switch to organic computing to get the necessary speed. (Here is where the argument for proposition (D) turns highly speculative and controversial.) Suppose-and this is not very likely, but hardly disproved-that the information-processing prowess of any single neuron (its relevant input-output function) depends on features or activities in subcellular organic molecules. Suppose, that is, that information processed at the enzyme level (say) played a critical role in modulating the switching or information processing of individual neurons-each neuron a tiny computer using its macromolecular innards to compute its very elaborate or "compute-intensive" input-output function. Then it might in fact not be possible to make a model or simulation of a neuron's behavior that could duplicate the neuron's information-processing feats in real time.

This would be because your computer model would indeed be tiny and swift, but not so tiny (and hence not so swift) as the individual molecules being modeled. Even with the speed advantage of electronics (or optics) over electrochemical transmission in axonal branches, it might thus turn out that microchips were unable to keep pace with neuronal intracellular operations on the task of determining just how to modulate those ponderously slow output spikings.

A version of this idea is presented by Jacques Monod, who speaks of the "'cybernetic' (i.e., teleonomic) power at the disposal of a cell equipped with hundreds or thousands of these microscopic entities, all far more clever than the Maxwell-Szilard-Brillouin demon." (Monod 1971, p.69) It is an intriguing idea, but on the other hand the complexity of molecular activity in neuron cell bodies may very well have only local significance, unconnected to the information-processing tasks of the neurons, in which case the point lapses.

Some think there are more decisive grounds for dismissing Monod's possibility. Rodolfo Llinas has claimed to me in conversation that there is no way for a neuron to harness the lightning speed and "cybernetic power" of its molecules. Although the individual molecules can perform swift information processing, they cannot be made to propagate and amplify these effects swiftly. The spiking events in neural axons that they would have to modulate are orders of magnitude larger and more powerful than their own "output" state-changes, and the slow process of diffusion and amplification of their "signal" would squander all the time gained through miniaturization. Other neuroscientists with whom I have talked have been less confident that relatively slow diffusion is the only mechanism available for intracellular communication, but they have offered no positive models of alternatives. This Monod-inspired line of argument for the inescapable biologicality of mental powers, then, is inconclusive at best, and very likely forlorn.

Still, it is mildly instructive. It is far from established that the nodes in one's massively parallel system must be neurons with the right stuff inside them, but nevertheless this might be the case. There are other ways in which it might prove to be the case that the inorganic reproduction of the cognitively essential information-processing functions of the human brain would have to run slower than their real-time inspirations. After all, we have discovered many complex processes-such as the weather-that cannot be accurately simulated in real time (in time for useful weather prediction, for instance) by even the fastest, largest supercomputers currently in existence. (It is not that the equations governing the transitions are not understood; it is that even using what we do know, real-time computation is impossible.) Brainstorms may well prove just as hard to simulate and hence predict. If they are, then since speed of operation really is critical to intelligence, merely having the right program is not enough, unless by "right program" we understand a program which can run at the right speed to deal with the inputs and outputs as they present themselves. (Imagine someone who defended the feasibility of Reagan's SDI project by insisting that the requisite control software-the "right program"-could definitely be written but would run a thousand times too slow to be of any use in intercepting missiles!)

So if the information processing in the brain does in fact fully avail itself of the speed of molecular-level computational activity, then the price (in speed) paid by any substitutions of material or architecture will prove too costly. Consider again, in this light, proposition (D):

(D)     There is no way an electronic digital computer could be programmed so that it could produce what an organic human brain, with its particular causal powers, demonstrably can produce: control of the swift, intelligent, intentional activity exhibited by normal human beings.

(D) might be true for the entirely nonmysterious reason that no such electronic digital computer could run the "right program" fast enough to reproduce the brain's real-time virtuosity. Hardly a knock-down argument, but the important point is that it would be foolish to bet against it, since it might turn out to be true.

But isn't this just the wager that AI has made? Not quite, although some AI enthusiasts have no doubt committed themselves to it. First of all, insofar as we consider AI to be science, concerned to develop and confirm theories about the nature of intelligence or the mind, the prospect that actual digital computers might not run fast enough to be usable in our world as genuine intelligences would be of minor and indirect importance, a serious limitation on investigators' ability to conduct realistic experiments testing their theories, but nothing that would undercut their claim that they had uncovered the essence of mentality. Insofar as we consider AI to be practical engineering, on the other hand, this prospect would be crushing to those who have their hearts set on actually creating a digital-computer-controlled humanoid intelligence, but that feat is as theoretically irrelevant as the stunt of constructing a gall bladder out of atoms. Our inability to achieve these technological goals is scientifically and philosophically uninteresting.

But this quite legitimate way for AI to shrug off the prospect that (D) might be true glosses over a more interesting reason people in AI might reasonably hope that my biochemical speculations are resoundingly falsified. Like any effort at scientific modeling, AI modeling has been attempted in a spirit of opportunistic oversimplification. Things that are horribly complicated may be usefully and revealingly approximated by partitionings, averagings, idealizations, and other deliberate oversimplifications, in the hope that some molar behavior of the complex phenomenon will prove to be relatively independent of all the myriad micro-details, and hence will be reproduced in a model that glosses over those micro-details. For instance, suppose an AI model of, say, action planning requires at some point that a vision subsystem be consulted for information about the layout of the environment. Rather than attempt to model the entire visual system, whose operation is no doubt massively parallel and whose outputs are no doubt voluminously informative, the system designers insert a sort of cheap stand-in: a vision "oracle" that can provide the supersystem with, say, any one of only 256 different "reports" on the relevant layout of the environment. The designers are betting that they can design an action-planning system that will approximate the target competence (perhaps the competence of a five-year-old or a dog, not a mature adult) while availing itself of only eight bits of visual information on environmental layout. Is this a good bet? Perhaps and perhaps not. There is plenty of evidence that human beings simplify their information-handling tasks and avail themselves of only a tiny fraction of the information obtainable by their senses; if this particular oversimplification turns out to be a bad bet, it will only mean that we should search for some other oversimplification.
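
A toy sketch of the "cheap stand-in" strategy may help (this is my own illustration, not a description of any actual AI system; every name and code in it is invented): the action planner consults a vision "oracle" whose whole vocabulary is at most 256 coarse reports, that is, eight bits of layout information.

```python
import random
from enum import IntEnum

class LayoutReport(IntEnum):
    """A toy 8-bit 'vision oracle' vocabulary: the planner never sees raw
    pixels, only one of (up to) 256 coarse codes about the layout ahead.
    Only a few codes are spelled out here; the rest are placeholders."""
    CLEAR_AHEAD = 0
    OBSTACLE_LEFT = 1
    OBSTACLE_RIGHT = 2
    OBSTACLE_AHEAD = 3
    # codes 4-255 reserved for other coarse layout descriptions

def vision_oracle() -> LayoutReport:
    # Stand-in for a massively parallel visual system: here it just guesses.
    return random.choice(list(LayoutReport))

def plan_step(report: LayoutReport) -> str:
    # An action planner that needs only eight bits of visual information.
    if report == LayoutReport.CLEAR_AHEAD:
        return "move forward"
    if report == LayoutReport.OBSTACLE_AHEAD:
        return "stop and re-plan"
    return "veer away from the obstacle"

for _ in range(3):
    r = vision_oracle()
    print(r.name, "->", plan_step(r))
```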

It is by no means obvious that any united combination of the sorts of simplified models and subsystems developed so far in AI can approximate the perspicuous behavior of a normal human being-in real time or even orders of magnitude slower-but that still does not impeach the research methodology of AI, any more than their incapacity to predict real-world weather accurately impeaches all meteorological oversimplifications as scientific models. If AI models have to model "all the way down" to the neuronal or subneuronal level to achieve good results, this will be a serious blow to some of the traditional AI aspirations to steal a march in the campaign to understand how the mind works; but other schools in AI, such as the New Connectionists or Parallel Distributed Processing groups, themselves suggest that such low-level detail will be required in order to produce significant practical intelligence in artificial minds. This division of opinion within AI is radical and important. The New Connectionists, for instance, fall so clearly outside the boundaries of the traditional school that Haugeland, in Artificial Intelligence: The Very Idea (1985), is obliged to invent an acronym, GOFAI (Good Old Fashioned Artificial Intelligence), for the traditional view, with which his book is largely concerned.

Have I now quietly switched in effect to a defense of "weak AI"-the mere modeling or simulation of psychological or mental phenomena by computer, as opposed to the creation of genuine (but artificial) mental phenomena by computer? Searle has no brief against what he calls weak AI: "Perhaps this is a good place to express my enthusiasm for the prospects of weak AI, the use of the computer as a tool in the study of the mind." (Searle 1982, p.57) What he is opposed to is the "strong AI belief that the appropriately programmed computer literally has a mind, and its antibiological claim that the specific neurophysiology of the brain is irrelevant to the study of the mind." (p.57) There are several ways to interpret this characterization of strong AI. I think the following version would meet the approval of most of the partisans.

The only relevance of "the specific neurophysiology of the brain" is in providing the right sort of hardware engineering for real-time intelligence. If it turns out that we can get enough speed out of parallel silicon microchip architectures, then neurophysiology will be truly inessential, though certainly valuable for the hints it can provide about architecture.

Consider two different implementations of the same program-that is, consider two different physical systems, the transitions of each of which are accurately and appropriately describable in the terms of a single "formal" program, but one of which runs six orders of magnitude (about a million times) slower than the other. (Borrowing Searle's favorite example, we can imagine the slow one is made of beer cans tied together with string.) In one sense both implementations have the same capabilities-they both "compute the same function"-but in virtue of nothing but its greater speed, one of them will have "causal powers" the other lacks: namely the causal control powers to guide a locomoting body through the real world. We may for this very reason claim that the fast one was "literally a mind" while withholding that honorific from its slow twin. It is not that sheer speed ("intrinsic" speed?) above some critical level creates some mysterious emergent effect, but that relative speed is crucial in enabling the right sorts of environment-organism sequences of interaction to occur. The same effect could be produced by "slowing down the outside world" sufficiently-if that made sense. An appropriately programmed computer-provided only that it is fast enough to interface with the sensory transducers and motor effectors of a "body" (robot or organic)-literally has a mind, whatever its material instantiation, organic or inorganic.

This, I claim, is all that strong AI is committed to, and Searle has offered no reason to doubt it. We can see how it might still turn out to be true, as proposition (D) proclaims, that there is only one way to skin the mental cat after all, and that is with real, organic neural tissue. It might seem, then, that the issue separating Searle from strong AI and its defenders is a rather trifling difference of opinion about the precise role to be played by details of neurophysiology, but this is not so.
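
The relative-speed point in that formulation can be made vivid with a deliberately trivial sketch (the numbers and names below are purely illustrative, not a model of any real controller): the fast and slow "twins" run the very same decision procedure, and only the ratio of decision time to the environment's deadlines determines whether a body gets guided in time.

```python
def controlled_body(decision_time, event_deadline):
    """Same 'program' in both cases: decide, then act. Whether the act
    counts as timely depends only on speed relative to the environment."""
    return decision_time <= event_deadline  # did the action land in time?

# One implementation is a million times slower than the other
# (the beer-cans-and-string twin from the text).
fast, slow = 1e-3, 1e-3 * 1_000_000

deadline = 1.0  # seconds available in our world to respond to an event
print(controlled_body(fast, deadline))   # True: the fast twin guides the body in time
print(controlled_body(slow, deadline))   # False: the slow twin misses the world

# "Slowing down the outside world" by the same factor restores the parity:
print(controlled_body(slow, deadline * 1_000_000))  # True again
```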

A dramatic difference in implication between propositions (S) and (D) is revealed in a pair of concessions Searle has often made. First, he grants that "just about any system has a level of description where you can describe it as a digital computer. You can describe it as instantiating a formal program. So in that sense, I suppose, all of our brains are digital computers." (Searle 1984, p.153) Second, he has often conceded that one could, for all he knows, create a brain-like device out of silicon chips (or other AI-approved hardware) that perfectly mimics the real-time input-output behavior of a human brain. (We have just given reasons for doubting what Searle concedes here.)

But even if such a device had exactly the same description at the program level or digital-computer level as the brain whose input-output behavior it mimicked (in real time), this would give us no reason-according to Searle-to suppose that it, like the organic brain, could actually "produce intentionality." If that perfect mimicry of the brain's control functions didn't establish that the hardware device was (or "caused" or "produced") a mind, what could shed light on the issue, in Searle's view? He says it is an empirical question, but he doesn't say how, even in principle, he would go about investigating it.

This is a puzzling claim. Although many (both critics and supporters) have misinterpreted him, Searle insists that he has never claimed to show that an organic brain is essential for intentionality. You know (by some sort of immediate acquaintance, apparently) that your brain "produces intentionality," whatever it is made of. Nothing in your direct experience of intentionality could tell you your brain is not made of silicon chips, for "simply imagine right now that your head is opened up and inside is found not neurons but something else, say, silicon chips. There are no purely logical constraints that exclude any particular type of substance in advance." (forthcoming, ms. p.1) It is an empirical question, Searle insists, whether silicon chips produce your intentionality, and such a surgical discovery would settle for you that silicon chips could indeed produce intentionality, but it wouldn't settle it for anyone else. What if we opened up some third party's head and found silicon chips in it? The fact that they perfectly mimicked the real-time control powers of a human brain would give us no reason at all, on Searle's view, to suppose that the third party had a mind, since "control powers by themselves are irrelevant." (personal communication)

It is just a very obvious empirical fact, Searle insists, that organic brains can produce intentionality. But one wonders how he can have established this empirical fact. Perhaps only some organic brains produce intentionality! Perhaps left-handers' brains, for instance, only mimic the control powers of brains that produce genuine intentionality! (Cf. Hofstadter and Dennett 1981, p.377.) Asking the left-handers if they have minds is no help, of course, since their brains may just be Chinese rooms.

Surely it is a strange kind of empirical question that is systematically bereft of all intersubjective empirical evidence. So Searle's position on the importance of neurophysiology is that although it is important, indeed all-important, its crucial contribution might be entirely undetectable from the outside. A human body without a real mind, without genuine intentionality, could fend for itself in the real world just as well as a human body with a real mind.

My position, on the other hand, as a supporter of proposition (D), is that neurophysiology is (probably) so important that if ever I see any entity gadding about in the world with the real-time cleverness of, say, C3PO in Star Wars, I will be prepared to wager a considerable sum that it is controlled-locally or remotely-by an organic brain. Nothing else (I bet) can control such clever behavior in real time.

That makes me a "behaviorist" in Searle's eyes, and this sort of behaviorism lies at the heart of the disagreement between AI and Searle. But this is the bland "behaviorism" of the physical sciences in general, not any narrow Skinnerian or Watsonian (or Rylean) dogma. Behavior, in this bland sense, includes all intersubjectively observable internal processes and events (such as the behavior of your gut or your RNA). No one complains that models in science only account for the "behavior" of hurricanes or gall bladders or solar systems. What else is there about these phenomena for science to account for? This is what makes the causal powers Searle imagines so mysterious: they have, by his own admission, no telltale effect on behavior (internal or external)-unlike the causal powers I take so seriously: the powers required to guide a body through life, seeing, hearing, acting, talking, deciding, investigating, and so on. It is at least misleading to call such a thoroughly cognitivist and (for example) anti-Skinnerian doctrine as mine behaviorism, but Searle insists on using the term in this way.

Let us review the situation. Searle criticizes AI for not taking neurophysiology and biochemistry seriously. I have suggested a way in which the biochemistry of the brain might indeed play a critical role: by providing the operating speed for fast thinking. But this is not the kind of biochemical causal power Searle has in mind. He supposes there is a "clear distinction between the causal powers of the brain to produce mental states and the causal powers of the brain (together with the rest of the nervous system) to produce input-output relations." (forthcoming, ms. p.4) The former he calls "bottom-up causal powers of brains," and it takes "but a moment's reflection" to see the falsehood in the idea that the latter are what matters to mentality: "The presence of input-output causation that would enable a robot to function in the world implies nothing whatever about bottom-up causation that would produce mental states." The successful robot "might be a total zombie." (forthcoming, ms. p.5)

So that is the crux for Searle: consciousness, not "semantics." His view rests not, as he says, on "the modern conception of computation and indeed our modern scientific world view" but on the idea, which he thinks is confirmable by anyone with a spare moment in which to reflect, that strong AI would fail to distinguish between a "total zombie" and a being with real, intrinsic intentionality. Introspective consciousness, what it is like to be you (and to understand Chinese), is Searle's real topic. In spite of his insistence that his very simple argument is the centerpiece of his view, and that the Chinese Room "parable" is just a vivid reminder of the truth of his second premise, his case actually depends on the "first-person point of view" of the fellow in the Chinese room.

Searle has apparently confused a claim about the underivability of semantics from syntax with a claim about the underivability of the consciousness of semantics from syntax. For Searle, the idea of genuine understanding, genuine "semanticity" as he often calls it, is inextricable from the idea of consciousness. He does not so much as consider the possibility of unconscious semanticity.

The problems of consciousness are serious and perplexing, for AI and for everyone else. The question of whether a machine could be conscious is one I have addressed at length before (Brainstorms, chapters 8-11; Hofstadter and Dennett 1981; Dennett 1982b, 1985a, forthcoming e) and will address in more detail in the future. This is not the time or place for a full-scale discussion. For the moment, let us just note that Searle's case, such as it is, does not hinge at all on the very simple argument about the formality of programs and the underivability of semantics from syntax but on deep-seated intuitions most people have about consciousness and its apparent unrealizability in machines.

Searle's treatment of that case, moreover, invites us to regress to a Cartesian vantage point. (Searle's fury is never fiercer than when a critic calls him a dualist, for he insists that he is a thoroughly modern materialist; but among his chief supporters, who take themselves to be agreeing with him, are modern-day Cartesians such as Eccles and Puccetti.) Searle proclaims that somehow-and he has nothing to say about the details-the biochemistry of the human brain ensures that no human beings are zombies. This is reassuring, but mystifying. How does the biochemistry create such a happy effect? By a wondrous causal power indeed; it is the very same causal power Descartes imputed to immaterial souls, and Searle has made it no less wondrous or mysterious-or incoherent in the end-by assuring us that it is all somehow just a matter of biochemistry.

Finally, to respond for the record to Searle's challenge: What do I think is wrong with Searle's very simple argument, aside from its being a red herring? Consider once more his

Proposition 2. Syntax is neither equivalent to nor sufficient by itself for semantics.

This may still be held true, if we make the simple mistake of talking about syntax on the shelf, an unimplemented program. But embodied, running syntax-the "right program" on a suitably fast machine-is sufficient for derived intentionality, and that is the only kind of semantics there is, as I argued in chapter 8 (see also the discussion of syntactic and semantic engines in chapter 3). So I reject, with arguments, Searle's proposition 2.

In fact, the same considerations show that there is also something amiss with his proposition 1: Programs are purely formal (i.e., syntactic). Whether a program is to be identified by its purely formal characteristics is a hotly contested issue in the law these days. Can you patent a program, or merely copyright it? A host of interesting lawsuits swarm around the question of whether programs that do the same things in the world count as the same program even though they are, at some level of description, syntactically different. If details of "embodiment" are included in the specification of a program, and are considered essential to it, then the program is not a purely formal object at all (and is arguably eligible for patent protection), and without some details of embodiment being fixed-by the internal semantics of the machine language in which the program is ultimately written-a program is not even a syntactic object, but just a pattern of marks as inert as wallpaper.

Finally, an implication of the arguments in chapter 8 is that Searle's proposition 3 is false, given what he means by "minds have mental contents." There is no such thing as intrinsic intentionality- especially if this is viewed, as Searle can now be seen to require, as a property to which the subject has conscious, privileged access.

FOOTNOTES

Earlier versions of ideas in this chapter appeared in "The Role of the Computer Metaphor in Understanding the Mind" (1984e) and portions are drawn from "The Myth of Original Intentionality," in W. Newton-Smith and R. Viale, eds., Modelling the Mind (Oxford: Oxford University Press, forthcoming) and reprinted with permission.

1. "It can no longer be doubted that the classical conception of AI, the view that I have called strong AI, is pretty much obviously false and rests on very simple mistakes." (Searle, forthcoming, ms. p.5)
