Tuesday, October 28, 2008

Return of the Zombie

Please see my previous post for a little background on the urgent philosophical question of whether zombies can beat zoombies and shombies in a ping pong match. At least we know that they can all beat Sarah Palin in a debate.

I readily acknowledge both my tardiness and my wordiness (the two not being unrelated) in replying to Richard Brown. The world, or at least my path through it, is unfortunately so configured that blogging often has to take a back seat to things that I consider mundane and relatively dull. Oh well. The present issue came to life when Richard, on his blog, offered some ideas about creatures
(zoombies) that are complete non-physical duplicates of normal law-abiding citizens like you and me, but fail to be conscious; and those that are physical duplicates, have no non-physical properties, and yet are conscious (shombies). Both of these beings are conceivable, according to Richard, or at least as conceivable as zombies, which are physical duplicates of ourselves that lack consciousness. The conceivability of zombies is supposed to support the argument that physicalism is wrong, because if we can conceive of a creature exactly like us but not conscious, it follows that it is not logically necessary that physical systems like ours be conscious; and from this it follows that we cannot reduce consciousness to some equivalent physical description. So if zombies are conceivable, materialism is wrong. But according to Richard, the conceivability of his two new creatures equally suggests that dualism is wrong. And according to me, the proliferation of these things suggests that we had all better run.
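
For those who like the argument laid out schematically, here is a rough rendering (the shorthand is mine, not Chalmers': P stands for the complete physical description of a person, C for that person's being conscious):

\[
\begin{aligned}
&1.\ \mathrm{Conceivable}(P \wedge \neg C) && \text{(zombies can be conceived)}\\
&2.\ \mathrm{Conceivable}(X) \rightarrow \Diamond X && \text{(what is conceivable is logically possible)}\\
&3.\ \Diamond(P \wedge \neg C) \rightarrow \neg\Box(P \rightarrow C) && \text{(a possible zombie world defeats logical supervenience)}\\
&\therefore\ \neg\Box(P \rightarrow C) && \text{(the physical facts do not logically entail consciousness)}
\end{aligned}
\]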

Richard eventually put his thoughts into a form appropriate to the hallowed environment of a philosophy conference (that of the Long Island Philosophical Society), and I responded in similarly civilized fashion. And now that we've got that over with we can proceed to thrash about and flame each other on the Internet. (Just kidding - I think.) I will take up as many of Richard's responses to my reply as I can, while conceding in advance that he will probably outlast me (if not outwit me) in any blog debate. And given that Brown is the name he chose for his online identity I shall now revert to that appellation, while wondering aloud how a name like "one more Brown" gets to be a rigid designator.

Brown's response to my critique begins with my defense of the idea that zombies are indeed conceivable. I suggested that I can imagine a being that is physically identical to me but unaware of the blue tint of the light in the room, and I can expand on that concept to conceive of a zombie (who is unaware not only of the bluish tint but of everything else). Brown's response is:

"What we need is to imagine me being in the very same brain state and not being conscious of the blueish tint. This is exactly what is in question –that is, whether this is something that can be imagined– and so this is at best question begging."
David Chalmers, you will recall, was said to be begging questions by ruling out the possibility that "mind" is just a popular term for a physical system; if so, according to Brown, the nonexistence of zombies is a necessary truth and zombies are therefore unimaginable. Now I am allegedly begging questions by assuming that I can imagine being in the same brain state whether aware or unaware of a bluish tint. But I think this is a misuse of the term "question-begging". Brown seems to think the (hidden) form of the argument is,
1. Let's assume physicalism is wrong.
2. If physicalism is wrong, then I can imagine that we have physical duplicates that are not mental duplicates.
3. If I can imagine that we have physical duplicates that are not mental duplicates, then the mental does not logically supervene on the physical.
4. Therefore physicalism is wrong.
But the second premise does not depend on the assumption that physicalism is wrong. It is an appeal to intuition, pure and simple. According to Brown, Kripkean semantics prohibits the assumption that this intuition is possible until we have first checked to see if physicalism might be correct. I am actually tempted to hand him this point because it would be the proverbial Pyrrhic victory. For if I give him that, he equally has to give me the point that he cannot assume that zombies are not conceivable until we have already established what we are currently attempting to discuss. And with this stalemate at hand, we can proceed to lose our ticket to any intelligent discussion of issues which might eventually be decided by some empirical discovery. So it will be question-begging, for example, to say that the following worlds are conceivable: that in which there is no being who gave Moses the ten commandments; the one where the large manlike creatures called 'bigfoot' are nothing but a hoax; and the imaginary space in which Loch Ness is devoid of living creatures larger than a lake trout. These are question-begging in roughly the same sense that it is "question-begging" to say that a world in which there is no physicalist reduction of consciousness is conceivable, and thus that I can conceive of a world in which there is a being physically identical to myself but lacking consciousness. In all these cases, it may, as far as science is concerned, turn out that these names or definite descriptions ("god", "bigfoot", "Loch Ness monster" and "the physical facts that constitute consciousness") identify actual entities, and if we allow that, we cannot say we conceive of the worlds in question.

If this isn't a spurious argument I'll eat my copy of Naming and Necessity. Does Kripke say that we can't conceive of the mind as non-physical? Quite the opposite. Does Putnam say I can't conceive of water as XYZ? Quite the opposite. Here's Putnam: "My concept of an elm tree is exactly the same as my concept of a beech tree... (This shows that the identification of meaning 'in the sense of intension' with concept cannot be correct...)" (Mind, Language and Reality, Phil. Papers V.2, p.226) What's the point? I can conceive of things that are necessarily false, e.g., "Beeches are just like elms". Not "I believe [falsely] that I can conceive of a world in which beeches are just like elms" but I conceive of such a world, plain and simple. (Or I imagine it if you like, but conceiving does not have to include mental imagery.)

Brown should get off this begging-the-question kick. Nothing about what I can or can't conceive today depends on what science discovers tomorrow. If I can't conceive of zombies once I have studied the physical reduction of consciousness (which has been added to Psych 101 texts in the year 2525) then fine, I can't do it. But to bring in a posteriori necessity to show that I can't conceive today what might turn out to be false tomorrow is really cuckoo, a curious technical trick at best. If that were really the implication of the theory, it would be a reductio of Kripkean semantics. But that is not what the theory implies.

There is another problem with Brown's methodology, which is captured in his statement that "This is exactly what is in question –that is, whether this is something that can be imagined." Look, an artist covers a canvas in black paint and says, "This depicts a zombie". You are confused, no doubt, but what exactly can you say? "How? Why can't I see the zombie's shape? Is there anything else in the picture? Were you on drugs when you painted it?" These might be legitimate questions; what is not legitimate is to say, "No it isn't; I'm looking right at it and there is no zombie there." Does the artist even need to reply to this? She can laugh, because the statement is nonsense in this context; or she can say, "When you learn to see the world the way an artist sees it, you will perhaps see a zombie there; and if you don't, I can't help you." (In Goodman's terms, not every picture that represents a zombie is a zombie-picture.) The same holds true for mental pictures, conceptions, imaginings, etc. I know what a zombie is, I am not a hallucinating schizophrenic, I am an honest guy and I believe I am conceiving of a zombie. So I am conceiving of a zombie. Once the basic psychosocial background is given, my claim goes through automatically. It's not corrigible. It doesn't depend on facts or on Kripke. And it especially does not depend on some inspection (per impossibile) of my conception to compare it in fine detail with the putative physical correlate that will be discovered some time hence. The details of a conception are stipulated, not set in place like clockwork. Otherwise it has to be said that I cannot really conceive of an automobile, since I haven't the foggiest idea what goes on inside a transmission (though I doubt it is little men turning cranks).

Last point, which came up in a discussion session at the conference: the point of the zombie argument is to deny the claim of logical supervenience, the idea that the mental logically supervenes on the physical. "Logical" here is the same as conceptual; the point is to show that the mental is not conceptually identical to some physical substratum (see Chalmers, p.35). Brown, as far as I can tell, seems to think "logical supervenience" is just materialism, but I doubt that. The target is not the brand of materialism that says that once the physical facts are known, the facts about consciousness can be scientifically deduced; the target is the brand that says that once the physical facts are known, the facts about consciousness are logically entailed; they simply fall out of a correct description of the brain. As Kripke says, a consistent materialist would have to hold that a complete physical description of the world is a complete description tout court; once we have it, it should just be obvious where consciousness lies in it, though it might not be called by that name. That is a logical supervenience position, and it is quite different from physicalism in general. Chalmers and I are both physicalists of a sort; we think that at some level, in the world as it is, consciousness is dependent on brain chemistry and structure. The zombie argument is not directed against this belief, and would not be effective against it. It is meant to show that we need not believe that consciousness is going to just "be there" when we announce the result of the ultimate brain scan. Scan all you want; at the end of the day you will still have to have some other kind of explanation for consciousness. The situation is (not coincidentally) somewhat like Kripke's view of rule-following: state every empirical fact you can find about the system, you will not find the rule there. Nor consciousness, if you proceed in that manner. So there is no entailment of consciousness by physical facts, and that is what logical supervenience is, and what the zombie argument is meant to cast doubt on.

The next point in Brown's response refers to my comment that in cases of aspect-change no physical difference takes place, although a mental difference does:

Alterman goes on to cite, as evidence, his convixtion (sic) that he has no reason to think that there is a microphysical change in his brain when he is looking at an ambiguous stimulus (like the duck-rabbit, or the Necker cube), but this is rather naive. There is evidence in both humans and primates that there are changes in brain activation that correlate to the change in perception in these kinds of cases.
Let's keep in mind what we are talking about here. I used the duck-rabbit example to support the point that we can conceive of a zombie by enlarging on the intuitive idea that changes in mental state can occur without a change in the physical description of the system. When I observe the duck and then notice the rabbit it seems that no change takes place in the physical description of the system. Brown is arguing that this is an illusion, for brain scans show some "brain activation that correlate to the change in perception". I think there is less here than meets the eye. It stands to reason that some stimulation occurs when anything like perception, recognition, concentration, etc. takes place. Nobody disputes that, so it can't be the issue. The issue is whether it is conceivable that a being physically identical to myself could exist without conscious activity. And since it is certainly conceivable that no change takes place when I switch from one to the other, it is by enlargement conceivable that some being never undergoes such changes.

But I am not inclined to leave it at that. For the "change" that Brown points to is nothing more than an indication of an increase in blood flow (or possibly electrical activity) to some area involved with perception. (Roughly the same areas are often involved in both external perception and recognition of mental images.) So what does that show? It certainly is a long way from suggesting that some brain activity is identical with the percept "there's a rabbit in this picture"! In fact, though I do not know which particular bit of research Richard has in mind, I would be willing to bet him lunch that it shows only that the act of searching in the picture for the new image (like the achievement of stereoscopic vision, to take another example) involves some brain activity; no way it can show that there is any difference in the organism while it perceives a duck vs. a rabbit.
But I am even willing to grant that such a difference might be found; for example, it might be shown that certain vectors activated in one case have a historical (causal) relation to vectors activated in the perception of actual ducks, and those in the other case to vectors activated in the perception of actual rabbits (or of realistic duck or rabbit pictures - it doesn't really matter which). So let it be the case that for every individual, nerve cell activation occurs in the duck-rabbit picture specifically in relation to the history for that individual of previous perceptions of the appropriate form. Unfortunately, the physicalist is still in need of an identity much stronger than this. The burden on the physicalist is to give a brain specification that just is the cognition of rabbit-shape (or blue-tintedness) or a strong reason why it is likely that such a specification will be found. The burden on the anti-physicalist is just to give an intuitive reason why that is unlikely to happen. Which I did, but I am more than willing to go a step further, and put it like this: there is no reason to think anyone will ever find a neurological specification that is, so to speak, the transcendental condition guaranteeing the truth of the utterance "he sees a rabbit-picture" or "he sees a duck-picture". And if that won't happen, the fact that some blood flows to the area that manages changes in perception is of little interest.

Brown next takes on another example I used to demonstrate the conceivability of zombies, that of sleepwalkers and blindsight. These people, he insists, are in states "which obviously include a physical difference" from ordinary conscious states. Once again, that is not really relevant to the point of the example. We are talking about conceivability; the example is meant to bolster the plausibility of the claim that zombies are conceivable (to provide "evidence" for conceivability, in the only intelligible sense of Brown's demand for it), and if it does that, it has the effect it is intended to have. It is in no way intended to show that people in such states are in physically identical brain states to non-sleeping, non-brain-damaged individuals who might perform the same actions. To show that might be sufficient to prove the conceivability of zombies, but it is far from necessary. I don't think I need to belabor this any more.

I will have to skip over Brown's next few responses because I think they amount to sticking by the line that Kripkean semantics require us to not assume zombies are conceivable just because we think we can conceive them, and I have already responded to this in sufficient detail. So I move on to his response to what he calls my "stunning claim" that no theory of consciousness has even begun to offer a reductive program for phenomenal experience, such as color vision. Actually I was under the impression that no one would find this even interesting, much less "stunning", because it seems that even materialists have practically written off the effort, generally claiming that qualia are mere illusion and beneath the dignity of a physical theory to explain, while anti-materialists have been saying it consistently since Nagel (whose seminal article is almost entirely an exposition of this very point). So what is Brown's answer to my "stunning claim"? HOT! Yes, of all things, he points to David Rosenthal's (or someone's, in any case) "higher-order thought" theory of consciousness as a program for the physicalist reduction of phenomenal consciousness! Talk about stunning - I thought the very reason that HOT has not attracted many followers is precisely that it offers no hope of explaining phenomenal consciousness. But maybe Brown has been having private sessions with POMAL types who think otherwise.

So what is the response of HOT to my request for
"a program for explaining conscious experience, or even the function of consciousness, as an outcome of... biophysical research"? According to Rosenthal, at least, a conscious thought has a qualitative character because the HOT that accompanies it is in some quality-space. That not being very enlightening (even compared with the outright abandonment of attempts to deal with qualia in more hardnosed materialist theories like those of Churchland, Dennett, or Crick) Rosenthal goes on to explain why the HOT has the qualitative it has: it tracks the "similarities and difference" in perceptual space. That's it, the putative program in a nutshell. As for the function of consciousness, Rosenthal's view is that it doesn't really have one; we could get along quite well without it. (Apparently Rosenthal can conceive of zombies; indeed, one could interpret what he says about the function of consciousness to suggest that it is no more than an evolutionary accident that we are not zombies.) In spite of a great deal more verbiage (see Rosenthal's "Sensory Qualities, Consciousness and Perception" in his book, Consciousness and Mind) there is not a whole lot more to this response to what I said was missing.
As Brown characterizes the HOT view of why red objects appear red and not green,
"they do so because we are conscious of ourselves as seeing red not green. You may not like this answer but it certainly does what Alterman says we we don’t have a clue about doing."

Actually, it is not so much a matter of whether one likes the answer as whether one finds it to be an "answer" to anything. It seems to me that this is as far from materialist dreams of a perfect theory as one is going to get. In spite of Rosenthal's often expressed sympathy for materialist analyses of non-conscious thoughts, what he is doing is, broadly speaking, traditional philosophy of mind and language. He offers something like a conceptual analysis of conscious awareness, and gives a defense of it in terms of performance conditions and other standard POMAL ideas. Quite a distance from anything that is going on in the reductive programs that comprise the materialist discourse. I stand by my "stunning claim" - there ain't nothin' happening, in any branch of philosophy or cognitive science, that begins to shed light on how or why we experience reality largely as a succession of qualitative states.

Brown states that he never questioned that conceivability entails possibility, as I said he did in my response. But he presents the main line on which his paper is based, the Kripkean semantics of natural kinds, as being "the typical argument that conceivability doesn't entail possibility".
I grant that he never explicitly says that he agrees with this use of Kripkean semantics; he employs it in another way, to question whether zombies are conceivable. On the other hand, he never disputes the first use; indeed he says a number of things which suggest it, e.g., "it cannot be the case that intuitions about zombies are evidence for or against any theory of consciousness". I was reading this as implying that we could grant the possibility of zombies without the dualist gaining any ground. But I am happy to let Brown be the final arbiter of his own intentions, and leave that portion of my reply as a side-issue directed to those who use the Kripke line in the first way. (It does strike me as ironic that there would be two separate arguments against dualism based on a theory of Kripke's which he employs against materialism, but never mind. Since I don't agree with much that Kripke says about Wittgenstein I am not going to appeal to his authority in this case.)

Brown's next point is that Chalmers, contrary to me, is indeed
"claiming that there is a necessary link between our non-physical qualities and consciousness". I am not going to go through Chalmers' book to verify that this claim is never made, but it seems to me that the basis for Richard's statement is once again the Kripkean view that if "water" refers to H2O in this world, it does so in all worlds; so if "consciousness" refers to a non-physical property in this world, it does so in all worlds, and its non-physicality is therefore a necessary truth. There are various ways of responding to this. The simplest is to say that Chalmers' argument only leads to the point that it could be a necessary truth that consciousness is a non-physical property. Another is that Chalmers simply does not think that consciousness is a non-physical property in every possible world; he thinks that it is contingently non-physical in this world. A more technical response would involve Chalmers' two-dimensional semantics and the "primary" versus "secondary" intensions of natural kind terms, but I can tell from Brown's latest post that this is only going to lead to a brand new debate. I would rather just refer readers to parenthetical remark which constitutes the last paragraph of p.59 in Chapter 2 of The Conscious Mind, which to my mind offers an adequate reply to the basic premise of Richard's paper. (The reason it is adequate is because it spells out in the technical terms of two-dimensional semantics what I have been saying in more straightforward language throughout my comments: that it simply cannot be the case that we can't conceive of certain possibilities until someone has determined whether some empirical fact about the actual world is true.)

A not terribly important side-issue regarding Brown's view is whether it makes any sense to postulate beings that are similar to me with respect to "all non-physical qualities", or beings that are "completely physical" and are conscious. Suffice it to say that I cannot find a way to allow either of these examples without thinking that the answer to whether physicalism is correct is already built into the description. Brown seems to think that that doesn't matter, because it is just parallel to what the zombie theorist does. But I think it is not parallel, because the zombie example makes no theoretical assumptions and simply depends on intuition, while Brown's claim that it is question-begging is theory-driven, and the theory is used in a counterintuitive way that most of the disputants do not agree with.

At the end of his remarks, Brown says that he can live with the limited goal I attribute to the zombie argument, that of establishing that there is no conceptual link between physics and consciousness. Hmmmm, I thought that that was what the whole debate was about. Chalmers himself believes that consciousness physically supervenes on brain states, and only argues that it is not the case in all logically possible worlds that this is so. In his book, he presents not only the zombie argument but four other arguments (none of which, I believe, are original, though the presentation is) to the same effect. Why should we be so concerned with this? I am concerned with it because I don't think reductive programs are the way to go. I think a lot will be found out about how consciousness is connected with the biological structures of the brain - 40 Hz waves or whatever - but if the relationship between any particular physical instantiation and consciousness is contingent, we will learn more about consciousness through other methods - perhaps what we might call traditional philosophical analysis, perhaps some of what goes by the name of clinical psychology, perhaps aesthetics. Consciousness, in my view if not in Chalmers', has been most usefully explored in the work of Kant, Wittgenstein, Husserl, James, Freud, Jung, Kohler, and other writers of that nature, as well as in literature of great merit from Homer to Joyce. The whole tradition of cognitive science is at this point nothing but a footnote to those insights. In my opinion, it never will be much more than that as far as this question is concerned.



Sunday, October 19, 2008

Zombie, Schmombie - Richard Brown's Efforts to Resurrect Materialism

The indefatigable POMAL blogger Richard Brown has posted a reply to comments on his Zoombies and Shombies paper, "The Reverse-Zombie Argument Against Dualism" (find a link here), made by a certain "Alderman". Unfortunately, I must object to the egregious act of plagiarism that said Alderman has performed on the comments I sent to Prof. Brown only a few days ago, copying them more or less word for word (how he got hold of them I can only imagine). Should I sue? Actually you can't sue for plagiarism, and I'm not sure what the copyright value of my comments would be, so I have a better solution: Dr. Brown should simply change the "d's" in "Alderman" to "t's" and everything will be alright.

Brown (whose name is quite difficult to misspell, though I tried) certainly outdoes me by a country mile in posting to his blog, an admirable quality that is underrated in the philosophical community. Blogging is I think more in the spirit of philosophy in the Socratic tradition than the institutional control exercised by professional journals and presses. (Anybody who has received the typically biased and ignorant comments
on a rejected article from journal reviewers will probably agree wholeheartedly with the title of Brown's blog, Philosophy Sucks!) In the future, I will try to do better than the, hmmmm... 10-month gap between this and my last post. (Which is a bit less than the gap in my arts blog. Yikes.) In any case, kudos to Dr. Brown for his blogging efforts - not to mention his Cel-ray tonic. (Jeez, names really do get confusing, don't they? Maybe someone should do some philosophical work on this topic.)

What follows is the complete text of my comments on Brown's paper, delivered yesterday (10/18/08) at the conference of the
Long Island Philosophical Society. The papers and replies will eventually be published in Calipso, the LIPS online journal, at which point I may remove it from here and put in a link. In the next post I will reply to Brown's replies to my reply to his paper. (And perhaps to some of the replies to his replies to my reply to his reply to Chalmers - which can be found on his blog.)

Zombies, Schmombies... Full Text from the Original Author

The materialist position about consciousness consists in the view that consciousness can be fully explained once we understand the physical materials and processes in the brain. Consciousness will emerge as a supervenient property that can ultimately be reduced to some underlying physical basis. For materialism to go through, it is not sufficient that consciousness be somehow related to or dependent on the brain; it must be nothing more than a brain function, whose supervenience is obscured by some unique aspectual or descriptive stance that stands in the way of our seeing the connection intuitively. In some versions, such obscurities will eventually disappear, and we will be able to eliminate the introspective illusion of an inner self. Others see the aspectual stance as inherent in the situation. On either view, there is nothing in reality that can either be explained, except as a dependent phenomenon, or do any explaining, other than the physical world.

Most opponents of the materialist view rely heavily on one or more intuition pumps that allegedly bring out a gap between the knowledge and understanding of physical facts and an explanation of consciousness. The "zombie" argument is one such effort. Imagine a creature that has all the physical properties that we would expect a human being to have, and behaves in the ordinary way that human beings would in similar situations, but lacks any hint of consciousness. If this is conceivable (so the argument goes) then physical facts cannot be the logical, or conceptual, foundation of consciousness.

In "The Reverse-Zombie Argument Against Dualism" Richard Brown suggests that the zombie thought experiment provides no compelling evidence that physicalism is wrong. There appear to be at least three tracks to his argument, which I will try to bring out.

The first idea is the contention that zombies, as described by David Chalmers and others, may not actually be conceivable at all. It is easy to miss the logic of Brown's argument here, because at the end he leads us somewhat astray, in my opinion, with suggestions that point in a different direction. One is that proponents of zombieism ought to offer some "evidence" for the conceivability of zombies. A second, related one occurs when Brown says that he himself cannot conceive of a zombie; and again, when he demands "some reason to think that we are really conceiving of a zombie world as opposed to a world that is very similar to ours but not microphysically identical". These points all seem a bit odd, to say the least. Conceptual arguments involve the logic of concepts; any "evidence" for them would surely not be of the empirical sort, and plenty of support has been offered on the conceptual side. The arguments do not depend on the strength of any one person's imagination, but on whether anyone can find a logical contradiction in their use of concepts. And though gross imaginative errors may be to some degree corrigible (I might say I'm imagining a duck but in fact be imagining a chicken), it makes no sense to say that someone who claims to be imagining a microphysical duplicate of me might "really" be imagining something that differs in some small way. (What does "really" really mean here?) But let me try to respond with a defense of the zombie imaginer before we move on to Brown's main argument. My "evidence" will consist in conceptual support for the point that conceiving of a zombie requires nothing more than adding and subtracting properties, something any normal person can do. So first, I can imagine someone physically identical to myself who is in the same room but is not aware of the slightly bluish tint of the late afternoon light, or the background humming of the air conditioning, while I am aware of all that. For I can imagine myself not having been aware of any of them, and yet being physically identical to my actual self; just as when I see the duck and then see the rabbit in the same drawing, I have no reason to believe that a microphysical change took place, and even less reason to think that a determinate, repeatable microphysical change took place. Similar arguments could be brought for memory, imagination, and other components of consciousness. Therefore I can imagine a being that is physically identical to myself but lacks consciousness. Second, we can arrive at the concept of a zombie by expanding on concepts like blindsight or sleepwalking. These documented empirical states involve acting and behaving in certain situations like a normal human being but completely lacking awareness of one's behavior or surroundings. A being who is always in such states would be a zombie.

This should suffice for evidence of the conceivability of zombies. It is always possible to submerge one's conceptual abilities by becoming enmeshed in a theory. If one believes that all properties are directly reducible to underlying physical characteristics, it becomes difficult to conceive of anything that is not so reducible. In this way, entities lacking substance in the Aristotelian sense were inconceivable prior to 18th-century empiricism. If someone finds it impossible in theory to separate physical structure from any higher-order property whatsoever, then they might react to the notion of a zombie as "inconceivable" in the sense of "beyond the capabilities of imagination". But imagination tied down by theory is not the relevant power for assessing the viability of zombie conceptions.

The more important aspect of Brown's position does not rely on imaginative prowess. His point is that we ought to grant the physicalist at least the possibility that consciousness is nothing more than a high-level effect of the biophysics of the brain. If we do that, then we grant the possibility that "consciousness" is a natural kind term for some complex configuration of physical parts and processes. On a Kripkean theory of reference, a natural kind term refers to a natural kind by means of some property that constitutes its identity. "Water" refers to all and only substances that are actually H2O. Once we know that that is the case, we realize that it is necessarily the case, and that the statement "it's water, alright, but it's not H2O" contains a conceptual confusion. "Consciousness" may similarly refer to whatever the underlying physical basis of consciousness turns out to be. We may not know that identity now, but when we do we will realize that zombies - physical duplicates of ourselves but without consciousness - never really were conceivable in the first place. According to Brown, if we insist that zombies are conceivable, we simply beg the question against this argument.
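
Schematically, the Kripkean move runs like this (my shorthand again; B is whatever brain-state type the reduction would identify consciousness with):

\[
\begin{aligned}
&(\mathrm{water} = \mathrm{H_2O}) \rightarrow \Box(\mathrm{water} = \mathrm{H_2O})\\
&(C = B) \rightarrow \Box(C = B), \quad \text{and hence} \quad (C = B) \rightarrow \neg\Diamond(B \wedge \neg C)
\end{aligned}
\]

If the identity holds in the actual world, the zombie world was never genuinely possible - and, on Brown's reading, never genuinely conceivable either.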

The question I have about this argument is, who is really begging the question? The logic of Brown's argument is that dualists cannot force the issue against materialism by stating a priori that zombies are conceivable, since it may turn out a posteriori that the connection between brains and consciousness is a necessary one. By the same token, one could have argued in the 19th century that a thought experiment designed to show that light is not a substance but a wave begs the question against the a posteriori necessary truth that light is the propagation of photons. The form of the objection seems wrong, because we cannot say in advance that discovering a physical basis for consciousness will make zombies inconceivable. Consciousness could be more like the terms "evolution" or "radiation" than like "water" or "heat". The former are natural kind terms, but neither has an essence that can be expressed in an identity statement. I fail to see any reason why thought experiments should be constrained by the combined demands of a controversial theory of reference for natural kind terms and the empirical possibility that reductionist programs will be successful. To focus on the latter for a moment, after two centuries of psychophysical experiments we still have no reason to believe that consciousness can be reduced to biophysical properties. As Chalmers carefully explains, none of the popular reduction programs have brought us any closer to bridging consciousness with the physical world. Take our current, fairly sophisticated understanding of color vision; how does it even come close to explaining why red objects appear red and not green? No physicalist story even gets off the ground on this kind of question. The same holds for consciousness in general: in spite of having mapped and experimented with dozens of brain areas, having sophisticated biochemical analyses of brain activity, and even manipulating some basic motor functions with digitally simulated brain signals, we don't have so much as a program for explaining conscious experience, or even the function of consciousness, as an outcome of any of this biophysical research. I think it is quite a leap to say that dualists beg the question by ignoring the possibility that the holy grail of materialism will someday be found.

A second point Brown makes is that conceivability does not entail possibility. The zombie argument depends on the following kind of reasoning. Suppose it were the case that the mental logically supervenes on the physical. Then it would be a metaphysical fact about the universe that whenever you have mind, you have a material foundation. But logical supervenience is an identity relation, so whenever you have the appropriate physical foundation, you must also have mind. Then the concept of a physical foundation without mind ought to be a contradiction of some sort, like the concept of space without distance or consciousness without thought. But the zombie argument is designed to show that this is not the case. Let it be granted, then, that the zombie argument demonstrates the conceivability of zombies. We can conceive of life without death, too, and many other things that may not in fact be physically possible. In the end, then, the zombie argument demonstrates nothing of interest to anyone except philosophers, and the search for a materialist explanation of consciousness can proceed.

I think Brown can reasonably object that while zombies may be metaphysically possible, this kind of conclusion may not establish anything very useful in the debate on consciousness. It establishes that one can be a dualist without violating any rules of metaphysics. But that is an achievement of very limited scope. For no modern dualist wants to be a dualist about substances; we all begin from essentially the same scientific conception of the universe. We believe there is nothing added to the biological substrate of consciousness in the sense in which some god or unknown force disperses some ethereal quasi-matter which, combining with our brains, creates consciousness. On the contrary, we all agree that there is no substrate except matter, and the question is how, from matter, you get the qualitative view that is awkwardly expressed by the phrase "what it is like to be" a human, raptor, etc.

But the logical possibility may, on the other hand, be sufficient for what the modern dualist really wants to establish. The point is to argue against the program in which, by assembling enough information about the mechanics of brain processes, and relating that through tomography and other techniques to certain mental phenomena, we will eventually be able to reduce consciousness to brain processes. Someone who believes that there is no matter or force except the ones described by modern physics does not have to purchase that program. They can hold that it is the wrong level of explanation for mental processes. They can believe that mental predicates collect the phenomena that physically supervene on biological entities at too high a level to ever be reduced. They can hold that enormous differences in the underlying structures can accommodate the same mental phenomena, described by the same psychological terms and following the same psychological laws. On this view, the correct kinds of programs for understanding consciousness could be those of William James, Husserl, and Wittgenstein, and not those of Smart, Churchland and Dennett.

I turn finally to the "zoombie" and "shombie" examples Brown offers. As he describes them, a "zoombie" is "a creature which is identical to me in every non-physical respect but which lacks any (non-physical) conscious experience". The idea seems to be that just as my zombie twin is identical to me in every physical respect but lacks qualitative consciousness, my "zoombie" twin is identical to me in every non-physical respect but lacks qualitative consciousness. If the former suggests that consciousness is not a physical property, the latter suggests that it is not a non-physical property.

A "shombie" is "a creature that is microphysically identical to me, has conscious experience, and is completely physical". If shombies are conceivable, then dualists are at best guilty of rejecting the principle of inference to the simplest explanation that accounts for all the known facts. For why should we go about imagining exotic explanations for consciousness when it is perfectly conceivable that physics can explain it all?

According to Brown, these two thought experiments constitute something like a parity of reasoning argument against the zombie argument, and therefore against this particular kind of objection to physicalism. The zombie argument says that it is conceptually possible to disassociate the human body and behavior from conscious experience, and that therefore it is not incumbent on those who hold a naturalistic view of the universe to believe that consciousness is identical to some set of physical processes in the brain. The zoombie argument says that it is conceptually possible to dissociate all non-physical human qualities from conscious experience, and the shombie argument says that it is possible to associate all conscious experience with physical systems like the one in which our minds are embodied. Both thought experiments attempt to show that the zombie argument does not produce any conclusion against physicalism that cannot be produced against dualism by parity of reasoning. So either the zombie argument fails against physicalism, or the zoombie and shombie arguments are equally conclusive against dualism.
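
Laid side by side, the three creatures come to this (once more my shorthand: P for my complete physical nature, N for my complete non-physical nature, C for conscious experience):

\[
\begin{aligned}
&\text{Zombie:} && \Diamond(P \wedge \neg C) && \text{consciousness is not necessitated by the physical}\\
&\text{Zoombie:} && \Diamond(N \wedge \neg C) && \text{consciousness is not necessitated by the non-physical}\\
&\text{Shombie:} && \Diamond(P \wedge C \wedge \text{nothing non-physical}) && \text{a wholly physical conscious being is possible}
\end{aligned}
\]

Either the first inference goes through and the other two go through with it, or none of them does; that is the parity.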

I agree that the zombie argument is not a conclusive argument against physicalism; but what it purports to show, at least, is that we are not forced to choose between a materialist theory of consciousness and a spooky view of the universe. If we can conceptually dissociate consciousness from the particular forms in which it is embodied, we can imagine a universe in which it is realized in other ways; and if we can do that, we can give up the idea that there must be a reductive, biophysical explanation of consciousness. I fail to see what parallel objective is achieved by positing "zoombies", since no one is claiming that there is a necessary link between our "non-physical" qualities and consciousness. Brown gives no indication of what he means by such qualities, but it cannot be things like mental or emotional states, because to assume those are non-physical would surely beg the question about consciousness. Perhaps we are talking about relational properties, value-bearing predicates, multiplicity and the like. But we can agree that there is no conceptual link between those properties and consciousness without inventing any new creatures. Since the basis for the sort of property dualism that people like Chalmers propose is not parallel to the metaphysical claims of the materialists, I don't see that this argument has a target.

"Shombies" allegedly show that we can imagine a creature that is "completely physical" having conscious experience. Brown again avoids unpacking the notion of "completely physical", but one thing we cannot say here is that no predicates other than physical ones apply to such creatures, since there is no such thing as an entity to which relational predicates, for instance, do not apply. It appears, then, that the idea of a "shombie" must be roughly that of a machine that has conscious experience. This sort of thought experiment has been tried many times, and I'm not sure what is added by calling it a "shombie". But it does bring out the foolishness of depending on either zombies or robots to prove anything about consciousness. One side says "I can imagine a conscious machine, so consciousness must be reducible to physics"; the other side says "I can imagine a non-conscious twin, so consciousness must not be reducible to physics". Personally I can imagine a talking cloud; am I entitled to the conclusion that we are in cloud-cuckoo land?

Thought experiments, as Wittgenstein pointed out, are not analogous to real experiments merely carried out with thought-materials. They are devices to make us think about what we would say in a very unusual situation; and this can give us insights into how our concepts are organized and how our language works. If we conceive of the mind-body problem along these lines, thought experiments might help us solve it. The zombie idea is therefore somewhat effective in refuting the idea of a conceptual link between matter and mental phenomena; not a small accomplishment in light of the very strong pull that our basic scientific convictions have on our thinking as a whole. But they cannot answer any naturalistic questions, such as whether the notion of conscious experience will eventually fall out of a detailed description of the operation of brain cells. This is a matter for scientific research, and the only reasonable answer we can give right now is that it is far from doing so at this stage of the game. The materialists want to press on because they are convinced there is no other way. The zombie argument suggests that they are wrong about that, but it does not prove that success is conceptually impossible. Brown's thought experiments are helpful in suggesting this corrective to anyone who uses a zombie to scare the materialists away from their research projects.

Anton Alterman

LIPS Conference, St. John's University, Queens, New York, October 18, 2008



Saturday, December 15, 2007

Churchland Again: How to Duck Some Objections

Other minds have been debating my Churchland post over at DuckRabbit, attributing to a certain H.A. Monk (a name I have assiduously but unsuccessfully tried to excise from this blog, since it is internally related to my identity on my other blog, The Parrot's Lamppost) various assertions that concede a bit too much to both materialist and Cartesian views on the mind-body problem. Though the discussion seems to have ended up in a debate on ducks and rabbits (which I thought would have been settled long ago on that site; in any case, see my "Aspects, Objects and Representations" - in Carol C. Gould, ed. Constructivism and Practice: Toward a Historical Epistemology, Rowman and Littlefield, 2003 - for yet another contribution to the debate) Duck's original post offers a number of points worth considering. (Have a look also at N.N.'s contribution at Methods of Projection. N.N. picked the right moniker, too, maybe because there are also two n's in "Anton".) Here is a version of what I take to be Duck's central criticism of what I said about Churchland:
It's true that the materialist answer "leaves something out" conceptually; but the reply cannot be that we can bring this out by separating the third-personal and first-personal aspects of coffee-smelling, and then (by "turn[ing] off a switch in his brain") give him only the former and see if he notices anything missing. That the two are separable in this way just is the Cartesian assumption common to both parties. (Why, for example, should we expect that if he simply "recognize[s] the coffee smell intellectually" his EEG wouldn't be completely different from, well, actually smelling it?) I think we should instead resist the idea that registering the "coffee smell" is one thing (say, happening over here in the brain) and "having [a] phenomenological version of the sensation" is a distinct thing, one that might happen somewhere else, such that I could "turn off the switch" that allows the latter, without thereby affecting the former. That sounds like the "Cartesian Theater" model I would have thought we were trying to get away from.
While I appreciate the spirit of this comment, I must say that I think it does not merely concede something to Churchland, it is more or less exactly what Churchland is saying, though you might want to add "seen through an inverting lens". Churchland indeed wants to deny that "the two are separable in this way"; in fact he takes an imaginary interlocutor sharply to task for asking him to provide a "substantive explanation of the 'correlations' [between "a given qualia" and "a given activation vector"]" because this "is just to beg the question against the strict identities proposed. And to find any dark significance in the 'absence' of such an explanation is to have missed the point of our explicitly reductive undertaking" (Philosophical Psychology 18, Oct. 2005, p.557). In other words: if what we have here is really an identity relation - two modes of presentation of things that are exactly, numerically the same - how dare you insist that I should explain how they are related. They are related by being the same thing, Q.E.D.!

My post was largely directed at fishy moves like this. The problem is that we have two things that we can - and lacking any evidence to the contrary, must - identify (pick out, refer to) by two completely different procedures; yet Churchland wants to assert that they are identical. What notion of identity is at work here is hard to say.
Since Churchland rejects the notion of metaphysical necessity it cannot be "same in all PW's". But it must be more than "one only happens when the other happens" since that is a mere correlation. Even "one happens if and only if the other happens" could mean nothing more than that some natural law binds the occurrence of the two things together, which does not give us numerical identity. He wants to say "blue qualia are identical to such-and-such coding vectors", and we have to take this as meaning more than that there is evidence for their regular coinstantiation. But to make it theoretically sound, or even plausible, in light of the fact that we recognize the two ideas in totally different ways, he must offer two things, at least: (1) an explanation of why these apparently distinct facts (qualia/coding vectors) are actually one and the same phenomenon (what makes the one thing manifest itself in such dissimilar ways); and (2) experimental evidence of an empirical correlation between them. Yet he also tells us that we are "begging the question" if we ask for an explanation! And as for the empirical correlation, it is not just that no one has sat down and examined a subject's cone cell "vectors" and asked them, "Now what color do you see?"; the fact is that the whole idea of "coding vectors" is a mathematical abstraction from a biological process that almost certainly only approximates this mathematical ideal, even before we get to the question of how regularly the outputs of the process end up as the particular color qualia that are supposed to have been encoded.
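
The grades of connection at issue can be kept apart schematically (a rough sketch, with Q for a quale type and V for the corresponding activation-vector type):

\[
\begin{aligned}
&\text{Mere correlation:} && Q \text{ occurs when } V \text{ occurs} && \text{(two things, an observed regularity)}\\
&\text{Nomic equivalence:} && \Box_{\mathrm{nat}}(Q \leftrightarrow V) && \text{(two things, bound by natural law)}\\
&\text{Identity:} && Q = V && \text{(one thing, two modes of presentation)}
\end{aligned}
\]

Churchland needs the third; as I say below, we do not yet have evidence that secures even the first.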

I am not saying there is no evidence at all for the analysis Churchland offers (based on the so-called "Hurvich-Jameson net" at the retinal level and Munsell's reconstruction of possible color experiences at the phenomenological level), but that there is not even evidence of a strict correlation. Some of the things that Churchland discusses - for example, the fact that this analysis of color vision is consistent with the stabilization of color experience under different ambient lighting conditions (p.539) - strongly suggest that something about the analysis is right, but do not constitute direct empirical evidence for it. What we are really being offered is a notion of identity that has as its basis neither metaphysics, nor scientific explanation, nor sufficient quantitative evidence to establish a strict correlation. We can be excused for saying "no thanks" to this libation.

And if this unanalyzed notion of the identity of phenomenological and biological facts is also being proffered in the name of some other philosophical position - say, Wittgenstein's - we should be no less skeptical. Merely proclaiming the lack of distinction between phenomenology and physiology, inner and outer, mind and world, something and nothing, etc. does not establish anything as a viable philosophical position on consciousness. Even adding the observation that one gets rid of philosophical problems this way does not establish it as a viable position. One gets rid of problems also by saying that god established an original harmony of thought and matter. If you can just swallow this apple whole, you'll find that the core goes down very easily.

Whoops, what happened to my erstwhile Wittgenstein sympathies? Well, maybe the apple I don't want to swallow is really this interpretation of Wittgenstein. Duck and I agree that being sympathetic to Wittgenstein does not require dismissing all scientific investigation of the brain (or the world in general) as irrelevant. But I don't think we agree on why. Duck quotes the following passage from the PI:
'"Just now I looked at the shape rather than at the colour." Do not let such phrases confuse you. [So far so good; but now:] Above all, don't wonder "What can be going on in the eyes or brain?"' (PI p.211)
What is Duck's view of this recommendation? He is not quite sure, but finally decides that philosophers' conceptual investigations will keep scientists honest, so they avoid causing problems for us philosophers:
In a way this is right... Don't wonder that... you thought that was going to provide the answer to our conceptual problem. But surely there is something going on in the brain! Would you tell the neuroscientist to stop investigating vision? Or even think of him/her as simply dotting the i's and crossing the t's on a story already written by philosophy? That gets things backwards. Philosophy doesn't provide answers by itself, to conceptual problems or scientific ones. It untangles you when you run into them; but when you're done, you still have neuroscience to do. Neuroscience isn't going to answer free-standing philosophical problems; but that doesn't mean we should react to the attempt by holding those problems up out of reach. Instead, we should get the scientist to tell the story properly, so that the problems don't come up in the first place.
For my part I don't think this is the point of Wittgenstein's various proclamations about the independence of philosophy from science. Wittgenstein was concerned that physicalistic grammar intrudes into our conceptual or phenomenological investigations, making it impossible to untangle and lay out perspicuously the grammar of phenomena. This is the root of what we call "philosophical problems". It is not the scientist who we have to get to "tell the story properly", it is the philosopher. The scientist does not have a fundamental problem with importing the grammar of phenomenology, thereby tying her physical investigations into knots. It is the other way around: the magnetic pull of physical concepts constantly threatens to affect conceptual investigation. To take a slightly oversimplified example, we say we can "grasp" a thought, but it is an imperceptible step further along the path of this metaphor that allows us to think we can capture it concretely - say, in a proposition, or a sentence of "mentalese" - in a sense that depends quite subtly on our ability to "grasp" a hammer or the rung of a ladder (picking it out as a unique object, self-identical through time, involved in a nexus of cause-effect relations, etc.). True, it takes quite a leap before you are ready to say, "The thought 'the cat is on the mat' just is this neuronal activation vector'", but that is one logical result of this sort of thinking. That we are ready to call this the solution to a philosophical problem just puts the icing on the cake; it is the dismissal of philosophy per se, in more or less the way we can dismiss morality by pointing out that we are all just physical objects made of atoms anyway, and who could care what happens to that?

When Wittgenstein says, "don't wonder, 'What can be going on in the eyes or the brain?'" he is using duck-rabbit-type phenomena to show that conceptual or psychological problems may not be tracked by any physical difference at all. In fact, there is a passage just after the one cited by Duck in which Wittgenstein lays it out as clearly as anyone could ask. He suggests a physical explanation of aspectual change via some theory of eye tracking movements, and then immediately moves to say,
"You have now introduced a new, physiological criterion for seeing. And this can screen the old problem from view, but not solve it". And again, he says, "what happens when a physiological explanation is offered" is that "the psychological concept hangs out of reach of this explanation" (p.212).
The point is very straightforward, and it is certainly compatible with what I have been saying about Churchland. The physical level of explanation just flies past the psychological concepts without recognizing or accounting for them. But in Duck's view, I am guilty of reintroducing the bogey of dualism and the "Cartesian theater" (I'm planning a post on Dennett soon so I'll avoid this bait right now):

So what's the moral? Maybe it's this. In situations like this, it will always seem like there's a natural way to bring out "what's missing" from a reductive account of some phenomenon. We grant the conceptual possibility of separating out (the referent of) the reducing account from (that of) the (supposedly) reduced phenomenon; but then rub in the reducer's face the manifest inability of such an account to encompass what we feel is "missing." But to do this we have presented the latter as a conceptually distinct thing (so the issue is not substance dualism, which Block rejects as well) – and this is the very assumption we should be protesting. On the other hand, what we should say – the place we should end up – seems in contrast to be less pointed, and thus less satisfying, than the "explanatory gap" rhetoric we use to make the point clear to sophomores, who may very well miss the subtler point and take the well-deserved smackdown of materialism to constitute an implicit (or explicit!) acceptance of the dualistic picture.
Absolutely, a physical explanation or description of consciousness is "conceptually distinct" from a phenomenological one. I can see no other possible interpretation of the passage about the eye-movement explanation of "seeing-as" phenomena. Does this make Wittgenstein a "dualist"? Certainly not in the Cartesian sense. True, Wittgenstein not only studied architecture and engineering and cited Hertz and Boltzmann in his early work; he also read (and failed to cite) Schopenhauer and James and had a deep appreciation of "the mystical", which he further identifies with "the causal nexus"; he says in the TLP that philosophy should state only facts, and that this shows how much is left out when all the facts have been stated. But is he now going so far as to suggest that there are different worlds, of scientific and mental reality? I seriously doubt it; and I am not suggesting any such thing either. There are different levels of explanation, or in his own terminology, different language games. This is not a Cartesian dualism but a point about the structure of thought. It is the same point that much of the Blue Book is based on.

I have not said much about my view of consciousness in this blog. But we're only just getting started; I've got time. I will say this, though: the resolution of the mind-body problem cannot be as simple as, for example, the New Realist (or "neutral monist") school hoped it would be. There, various aspects of reality were said to consist of a single "stuff" (read "substance", with various proposals for what this would be circulating at the time) which took on physical or psychological "aspects" depending on our interest, point of view, or whatever. This is a nice, compact view, but it does not do justice to the issue. There is a brain without which there is nothing in the world called "thinking", and a world without which nothing in a brain can count as "thought". There is every reason to believe that every event that ever counted as a thought took place in a brain, and that something was going on in the brain without which that thought would not have happened. This all has to be accounted for, and it is not sufficient to say that there are different aspects to some general substance or process. Sure, there are different aspects to everything, but this won't get us very far with the mind-body problem. How did an "aspect" of something that is also matter end up as consciousness? The problem is only pushed back. How can an "aspect" of whatever be self-aware, control its own actions, or compose a piano sonata? These are very peculiar aspects. If we could put them under an electron microscope we would not find out what we want to know about them.

I suspect that something like the following is the case: the various phenomena we call "the mind" are asymmetrically dependent on the brain, but the relationship is so loose that there is never anything like the "identity" relationship Churchland wants, nor a mere difference in points of view between the physical and phenomenological "aspects". We recognize certain psychological phenomena and talk about them and analyze them, and there is no such thing as a specifiable set of neural events that are necessary and sufficient for the instantiation of these phenomena - perhaps not even as types, and certainly not as specific thoughts, volitions, etc. There may be some wave oscillations in the brain that correspond to conscious states, but they are not those conscious states. There are particular portions of the brain that are primarily involved in certain aspects of our intellectual activity - emotions, language, memory, etc. - but there is not a specifiable neural "vector" that is "identical" to Proust's sensation of the taste of his mother's "sweet madeleines", much less to the flood of memories it evokes. Perhaps in Churchland's utopia we can replace Swann's Way with some mathematical specifications of its underlying neural activity without any particular loss, but I am not holding my breath.

Why do I think this, or even have a right to hold it out as a reasonable objection? Just because I think psychological concepts are not the rigid, well-articulated concepts that you find in much analytic philosophy. There is a way you can talk about things that are not uniquely or cleanly definable (Wittgenstein: "You are not saying nothing when you say 'stand roughly there...'"; a quote that is roughly accurate!). Talking about them is intellectually interesting in philosophy, important in clinical psychology and ethics, satisfying in the arts. It has been recognized by some neuroscientists and philosophers (Varela and others) that unless you have some kind of scientific phenomenology to begin with, you can't hope to reduce anything to neurology. But that position presupposes that there is something like a science of folk psychological concepts, on something like the lines that Husserl, Sartre and others tried to give us. And Wittgenstein too, in a certain sense: only his phenomenology of mind is imbued with the understanding that part of the "science" we are looking for involves the recognition of the vagueness or circumstantial relativity of concepts.

So how about a vague specification of cone cell coding vectors? "There is a 95% correlation between this coding vector and observed reports of red sensations." I could live with that. But it still doesn't give us a claim to "identity", nor does it justify saying that these are different "aspects" of the same event. They are different things that generally must happen in order for us to recognize something as red. But I can say I dreamed of a red balloon and no one will say, "Oh, but there were no cone cell vectors, you couldn't have." And of course even my memory of a red balloon is a memory of something viscerally red, with no cone cell activity to show for it.
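Just to show how cheap a correlation statistic is, here is a toy simulation, entirely my own invention and owing nothing to the vision literature: stipulate a 95% agreement between a make-believe "coding vector" and reports of red, and the impressive-sounding number falls right out, with "identity" nowhere in sight.

# A toy simulation (mine alone): build the 95% agreement into the setup
# and then "discover" it. A correlation, however tight, is just that.
import random

random.seed(0)

def trial():
    vector_active = random.random() < 0.5   # the hypothetical coding vector fires, or not
    # the verbal report tracks the vector 95% of the time, by stipulation
    report_red = vector_active if random.random() < 0.95 else not vector_active
    return vector_active, report_red

trials = [trial() for _ in range(10_000)]
agreement = sum(v == r for v, r in trials) / len(trials)
print(f"vector/report agreement: {agreement:.2%}")   # ~95%, because we put it there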

Wednesday, December 5, 2007

Brain Freeze, or Churchland on Color Qualia

It's been two months since I posted anything here, which is not how it was supposed to go. I have some excuses: replies to three papers at two recent philosophy conferences, a lack of breaking news on the cog sci front, and some personal stuff that I won't get into. Anyway, the last of the conference papers was concerned with a relatively recent paper by Paul Churchland, in which he argues for the "identity" of color "qualia" (an obnoxious Latinate neologism that philosophers use to refer to our mental experience of colors) with "cone cell coding triplets" or "vectors" - an analytic description of how the eye reacts on the cellular level to light of various wavelengths. Churchland further asserts that based on this analysis he can make certain predictions about our color experience in unusual cases, a feat that, according to him, is usually assumed to be beyond the power of materialist identity theories. That is the main point here; the identity of (a) the experience, and (b) the biochemical basis of the reaction, is said to not only account for ordinary experiences like seeing red, but for experiences which most people have not had. Churchland describes how to produce such experiences and provides various full-color diagrams to assist. The predictive power of the theory allegedly shows that the qualia-coding vector relationship is not a mere correlation but an actual identity.
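For anyone who wants to see what a "coding triplet" even looks like, here is a back-of-the-envelope sketch. The Gaussian sensitivity curves and peak wavelengths below are rough stand-ins of my own, not Churchland's numbers and nothing you should cite in a vision journal; the point is only the shape of the idea: one stimulus in, three cone activation levels out.

# A sketch of a "coding triplet" with made-up sensitivity curves:
# map a monochromatic stimulus to three cone activation levels.
from math import exp

# Hypothetical (peak wavelength nm, width) for L, M, S cones - rough stand-ins only
CONES = {"L": (565, 60), "M": (540, 55), "S": (445, 40)}

def coding_triplet(wavelength_nm):
    """Return {cone: activation in [0, 1]} for a monochromatic stimulus."""
    return {name: round(exp(-((wavelength_nm - peak) / width) ** 2), 3)
            for name, (peak, width) in CONES.items()}

print(coding_triplet(610))   # L-dominated triplet: the stimulus we report as "red"
print(coding_triplet(470))   # S-dominated triplet: the stimulus we report as "blue"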

It is not impossible that some philosophers have carelessly suggested that materialism cannot be true because it cannot make predictions about experience. But to rest the case against materialism on this narrow basis is a very bad idea, for the simple reason that there are straightforward and well known areas in which knowledge of the physical structure of the body allows you to make specific phenomenological predictions. For example, recently it was discovered that glial cells, which make up much of the central nervous system, contribute to severe or chronic pain by stimulating the pain-transmitting neurons. Prediction: find a drug that deactivates the glial cells, and with or without more traditional pain-relief methodologies (e.g., those which interfere with the transmission of signals across nerve synapses or attempt to freeze the nerve itself) the patient will feel less pain. There is a perfectly good phenomenological prediction from neurological facts.

And there are even easier cases. We know that the lens of the eye delivers an inverted image, which is subsequently righted by the brain. This suggests that our brains, without our conscious effort, favor a perspective that places our heads above our feet. (It is also possible that the brain is simply hard-wired to invert the image 180 degrees, but for various reasons that theory does not hold water.) Prediction: make someone wear inverting glasses, and they will see an upside-down image at first (the brain inverts it out of habit), but eventually the brain will turn it right side up. It works!

And it gets even easier. After all, there were times long ago when we did not know anything about the internal structure of sense organs. Our auditory capabilities rest on the action of thousands of tiny receptors lodged in hair cells in the organ of Corti, part of the cochlea of the inner ear. Prediction: dull the function of these receptors and the subject will experience a loss of hearing. Wow, another phenomenological prediction. I'm sure you could go hog wild with this. Poke your left eye out and you will see in diminished perspective, an amazing prediction in itself. Practice seeing through one eye for a long time and your sense of perspective should increase. Such predictions differ a lot from an example that Churchland presents in another context, that trained musicians "hear" a piece differently than average audiences. That is also a predictable phenomenological fact, but it involves a change in the mental software, through habituation and training, and does not obviously involve any sensual change. To see a new color or to have fewer distinct sounds reach the brain from the cochlea are sensual changes; to hear more deeply those sounds that do reach the ear, to organize them more efficiently and recognize more relationships between them, is not a sensual change but an intellectual one that we might metaphorically characterize as "hearing more than others". In fact musicians hear the same thing others hear but understand what they hear in a more lucid way. The sensual phenomena I have mentioned are actual changes in what reaches the brain for processing or in processing at a subliminal level, and do not depend on how we train ourselves to organize the information we receive.

I admit that my predictions are not very interesting; they operate at a more macro level than Churchland's strange color qualia, though not as macro as the following: cut out someone's tongue and they won't taste much. That's about like: cut out someone's brain and they won't think much. That may sound pretty obvious, but it wasn't always. Churchland is playing on the fact that intimate knowledge of how vision works is a relatively recent and still growing science. Thus it sounds like quite an amazing feat that he should be able to "predict" color "qualia".

But actually, although his predictions are more refined than mine, digging deeper into more subtle properties of the visual system, they are no more predictions of "qualia" than the general statement: interfere with some physical property of a sensory apparatus and you will change the sensations experienced by the subject. Refining this down to a specific phenomenological experience does not get closer to predicting "qualia"; it merely makes a more specific prediction based on a fairly well fleshed-out physical theory. It is roughly at the level of first discovering certain facts about the eye and then discovering that those facts are consistent with seeing a green after-image when exposed to a flashbulb. "I predict a green qual!" Okay, that's a little better than "I predict the stock market will crash - some time..." But it doesn't really do much for materialism. (And I'm not even talking about "eliminative" materialism here, which I said I'd refuse to take seriously, just the more typical materialist identification of experience with physical facts.)

Why? We could gloss Churchland's prediction as follows: "I predict that if you look at this in the right way you will have that experience that is commonly understood to be going on when a person utters the words, 'I see green'". And what is that? Just the very thing that non-materialists bring up as an "explanatory gap". Churchland can't predict we will have particular qualia because he doesn't have even so much as a theory as to what the relationship is between qualia and their scientific background. He seems to think that a correlation which has predictive accuracy is eo ipso an identity relation. But this is just another brain scam. One might say: qualia are a suspect kind of entity anyway, so why should I need a theory to account for them? Fine, but what you can't say is: these qualia you talk about, they just are these coding vectors, and then act like you've explained qualia. For example, suppose you were to say: these UFO's you talk about, they just are marsh gas. Okay, you've explained away UFO's. But you surely haven't explained UFO's. You've submitted the thought: until and unless you give me some specific physical evidence that there are these things, "UFO's", that cannot be explained by any other consistent set of physical facts except that secret aircraft controlled by animate beings are navigating our skies, I deny that UFO's exist as a category of object requiring independent explanation. Similarly, one can say: I can explain everything there is to explain about sensation without reference to "qualia", so why should I be obliged to give you a separate explanation of them? But that is not what is being offered. Rather, we are told, color qualia exist; they are cone cell coding vectors.

"Laughter exists; it is... [insert physical description of lung contractions and facial expressions]"
"Orgasm exists; it is... [insert physical descriptions of male or female anatomical changes during orgasm]"
"Aesthetic appreciation exists; it is... [insert data from brain scans of people listening to Mozart]"
"Religious rapture exists; it is... [insert data from brain scans of people talking in tongues]" (this has actually been studied, by the way)

When is Churchland going to wake up and smell the coffee? I'm not sure, but I don't think we should test it by asking him whether he's awake or not; better check his brain scan and let him know. Then do an EEG and see if he's smelling the coffee. With sufficient training he could be taught to look at the EEG and say, "Why, I was smelling coffee!" (This is the flip side of Churchland's utopia, in which we are all so well-informed about cognitive facts that introspection itself becomes a recognition of coding vectors and the like.) Now for the tricky part: turn off the switch in his brain that produces the coffee-smelling qual, and tell him that every morning, rather than having that phenomenological version of the sensation, he will recognize the coffee smell intellectually and be shown a copy of his EEG. And similarly, one by one, for all his other qualia.

Don't say: well, he doesn't deny these qualia exist, after all; he just thinks they are identical to blah-blah-blah... If he thinks they are identical to blah-blah-blah then he should not object in the least if we can produce blah-blah-blah without those illusory folk-psychological phenomena we think are the essence of the matter. So, on with the experiment. Where do you think he will balk? When we offer to substitute a table of coding vectors for the visual quals of his garden in springtime? An EEG for the taste of grilled tuna? Maybe a CAT scan of soft tissue changes rather than the experience of orgasm? I'd really like to know just how far he is willing to go with this. Would he wear one of those virtual reality visors, having in the program only charts and graphs and other indicators of brain and body function? Maybe Churchland is the only one among us who really understands how to have fun. Personally, I'll keep my red roses, my grilled tuna taste, and... the other stuff, thanks.


Saturday, October 6, 2007

AI, Cog Sci, Pie in the Sky

So I've been working my way through this long article on robotics that appeared in the July 29 edition of the Sunday Times, and I'm thinking the author, Robin Marantz Henig, is being very measured and balanced in dealing with these nasty questions, like "Can robots have feelings?" and "Can they learn?" etc. And yet I can't avoid the nagging suspicion that in spite of her good intentions, she just doesn't get it.

Get what? Get what cognitive scientists really want. Get the idea of what Andy Clark, quoting computer scientist Marvin Minsky, calls a "meat machine". Artificial intelligence/meat machine: two sides of the same coin. Robots think; brains compute. It's a bit confusing, because it sounds like we're talking about two different things, but they are logically identical. Nobody said that wires can't look like neurons, or neurons can't look like wires; if we just used gooey wires inside robots, people who opened them up might say, "Oh, of course they have feelings, Madge, what do you think?" Maybe when we start creating real-life Darth Vaders, with some PVC-coated copper inside the skull (how long can it be until this happens?) people won't jump quite so quickly on the train going the other way: "Oh, of course all we are is elaborate computers, Jim, what do you think?" But the seed will have been planted, at least. With a little help from the connectionist folks we might begin one of those epistemological shifts to a new way of thinking, sort of like when people began to accept evolution as a natural way of looking at species. This is the picture that cognitive scientists really want.

Ms. Henig describes her encounters with a series of robots at the M.I.T. lab of Rodney Brooks: Mertz, whose only performance was to malfunction for the author; Cog, a stationary robot that was "programmed to learn new things based on its sensory and motor inputs" (p.32); Kismet, which was designed to produce emotionally appropriate "facial" expressions; Leo, which was allegedly supposed to understand the beliefs of others, i.e., it had a "theory of mind"; Domo, equipped with a certain amount of "manual" dexterity; Autom, linguistically enabled with 1,000 phrases; and Nico, which could recognize its "self" in a mirror. (You can get more intimately acquainted with some of these critters by going to the Personal Robots Group at the MIT Media Lab web site. Before they try to create consciousness in a can, the roboticists should try fixing their Back button, which always leads back to the MIT site rather than their own page.) Throughout her discussion, Henig expresses both wonder at the tendency of people to interact with some robots as if they were conscious beings (a result of cues that set off our own hard-wired circuitry, it is surmised) and disillusionment with the essentially computational and mechanical processes responsible for their "humanoid" behavior. It is the latter that I am referring to when I say I don't think she's quite clued in to the AI mindset.

The first hint at disillusionment comes when she describes robots as "hunks of metal tethered to computers, which need their human designers to get them going and smooth the hiccups along the way" (p.30). This might be the end product of one of my diatribes, but how does it figure just 5 paragraphs into an article called "The Real Transformers", which carries the blurb: "Researchers are programming robots to learn in humanlike ways and show humanlike traits. Could this be the beginning of robot consciousness - and of a better understanding of ourselves?" Is Henig deconstructing her own article? She certainly seems to be saying: hunks of metal could only look like they're conscious, they can't really be so! Whereas I take it that computationalists suggest a different picture, of a slippery slope from machine to human consciousness, or at least a fairly accurate modeling of consciousness by way of the combined sciences of computer science, mechanics, neuropsychology, and evolutionary biology. (Sounds awfully compelling, I must admit.)

Henig does say that the potential for merging all these individual robot capacities into a super-humanoid robot suggests that "a robot with true intelligence - and with perhaps other human qualities, too, like emotions and autonomy - is at least a theoretical possibility." (p.31) Kant's doctrine of autonomy would have to be updated a bit... And can we add "meaning" to that list of qualities? (I'd like to set up a poll on this, but it seems pointless until I attract a few thousand more readers...) The author seems inclined to wish that there were something to talk about in the area of AC (Artificial Consciousness :-) but then to express disappointment that "today's humanoids are not the sophisticated machines we might have expected by now" (p.30). Should we be disappointed? Did anybody here see AI? (According to the article Cynthia Breazeal, the inventor of Kismet and Leo, consulted for the effects studio on AI - though not on the boy, who was just a human playing a robot playing a human, but on the Teddy bear.)

Cog, says Henig, "was designed to learn like a child" (p.32). Now here comes a series of statements that deserve our attention. "I am so careful about saying that any of our robots 'can learn'", Brooks is quoted as saying. But check out the qualifiers: "They can only learn certain things..." (that's not too careful already) "...just like a rat can only learn certain things..." (a rat can learn how to survive on its own in the NYC subways; how about Cog?) "...and even [you] can only learn certain things" (like how to build robots, for example). It seems to be inherent in the process of AI looking at itself to imagine a bright future of robotic "intelligence", take stock of the rather dismal present, and then fall back on a variety of analogies to suggest that this is no reason to lose hope. Remember when a Univac that took up an entire room had less capability than the chip in your cell phone? So there you go.

Here we go again: "Robots are not human, but humans aren't the only things that have emotions", Breazeal is quoted as saying. "Dogs don't have human emotions either, but we all agree they have genuine emotions." (Obviously she hasn't read Descartes; which may count in her favor, come to think of it.) "The question is, What are the emotions that are genuine for the robot?" (p.33) Hmmm... er, maybe we should ask the Wizard of Oz? After reading this statement I can't help thinking of Antonio Damasio's highly representational account of emotions. For Damasio, having an emotion involves having a representation of the self and of some external fact that impacts (or potentially impacts) the self; the emotion consists, roughly, in this feedback mechanism, whereas actually feeling the emotion depends on consciousness, i.e., on recognition that the feedback loop is represented. On this model, why not talk about emotions appropriate to a robot? Give it some RAM, give it some CAD software that allows it to model its "self" and environs, and some light and touch sensors that permit it to sense objects and landscapes. Now program a basic set of attraction/avoidance responses. Bingo, you've got robot emotions. Now the feeling of an emotion, as Damasio puts it - that will be a little harder. But is it inconceivable? It depends, because this HOT stuff (Higher-Order Thought, for those socially well-adjusted souls out there who don't spend their lives reading philosophy of mind lit) can get very slippery. Does the feeling require another feeling in order to be felt? And that require another feeling, etc.? I suppose not, or no one would pause for 2 seconds thinking about this theory. One HOT feeling is enough, then. Great. RAM 2 solves the problem; the robot now has a chip whose function is to recognize what's being represented on the other chip. This is the C-chip (not to be confused with C-fibers) where Consciousness resides, and it produces the real feelings that we (mistakenly, if Damasio is right) call "emotions". So, we're done - consciousness, feelings at least, are represented in the C-chip, and therefore felt. Now we know what it's like to be a robot: it's like having a second-order representation of your emotions in a C-chip. And now we can end this blog...
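In case the architecture I have just been teasing seems too abstract, here is how crude it really is once you write it down. Every name below is my own invention; Damasio is innocent of all of it, and no claim is made that this is what any actual robot runs.

# A deliberately crude sketch of the two-level architecture lampooned above.
# First-order appraisal loop = the "emotion"; higher-order monitor
# (the "C-chip") = the "feeling". Allegedly.
from dataclasses import dataclass

@dataclass
class Appraisal:
    """First-order state: a stimulus and its bearing on the 'self'."""
    stimulus: str
    impact_on_self: float   # negative = threat, positive = benefit

def first_order_emotion(stimulus, impact):
    """The 'emotion': a representation of how a stimulus impacts the self."""
    return Appraisal(stimulus, impact)

def c_chip(state):
    """Higher-order monitor: re-represents the first-order state."""
    tone = "aversive" if state.impact_on_self < 0 else "appetitive"
    return f"I am in an {tone} state about {state.stimulus!r}"

emotion = first_order_emotion("cliff edge", impact=-0.9)
print(c_chip(emotion))   # a second-order report; whether anything is felt is the whole question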

Unless we are concerned, with Henig, that still all we have are hunks of metal tethered to computers. Let's move on to Leo, the "theory of mind" Bot, which M.I.T. calls "the Stradivarius of expressive robots". Leo looks a bit like a Pekingese with Yoda ears. If you look at the demo on the web site you can see why Henig was excited about seeing Leo. A researcher instructs Leo to turn on buttons of different colors, and then to turn them "all" on. Leo appears to learn what "all" means, and responds to the researcher with apparently appropriate nods and facial expressions. Leo also seemed capable of "helping" another robot locate an object by demonstrating that the Bot had a false belief about its location. Thus, Leo appears to have a theory of mind. (This is a silly way of putting it, but it's not Henig's fault; it's our fault, for tolerating this kind of talk for so long. Leo has apparently inferred that another object is not aware of a fact that Leo is aware of; is this a "theory of mind"?) But, says Henig, when she got there it turned out that the researchers would have to bring up the right application before Leo would do a darned thing. Was this some kind of surprise? "This was my first clue that maybe Leo wasn't going to turn out to be quite as clever as I thought." (p.34) If I were an AI person I would wonder what sort of a worry this was supposed to be. I would say something like: "Look, Robin, do you wake up in the morning and solve calculus problems before you get out of bed? Or do you stumble into the kitchen not quite sure what day it is and make some coffee to help boot up your brain, like the rest of us? Why would you expect Leo to do anything before he's had his java?" Well, complains the disappointed Henig, once Leo was started up she could see on computer monitors "what Leo's cameras were actually seeing" and "the architecture of Leo's brain. I could see that this wasn't a literal demonstration of a human 'theory of mind' at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects." (p.34) Leo was not even recognizing objects per se, but magnetic strips - Leo was in part an elaborate RFID reader, like the things Wal-Mart uses to distinguish a skid of candy from a skid of bath towels. Even the notion that Leo "helped" the other Bot turns out to have been highly "metaphoric" - Leo just has a built-in group of instruction sets called "task models" that can be searched, compared to a recognizable configuration of RFID strips, and initiated based on some criteria of comparison, something like the sketch below.
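To make the demystification concrete, here is my guess at the general flavor of a "task model" lookup. The task names and the matching rule are invented, but if something even roughly of this shape is what the "helping" amounts to, you can see how little mind-reading is involved.

# A guess at the flavor of Leo's "task models" (all names and the matching
# rule invented): compare the RFID tags in view against stored tag
# configurations and initiate the most specific full match. No mind-reading.
TASK_MODELS = {
    "press_all_buttons": {"btn_red", "btn_blue", "btn_green"},
    "press_one_button":  {"btn_red"},
}

def select_task(visible_tags):
    """Return the most specific task whose tags are all present in the scene."""
    matches = [(len(tags), name)
               for name, tags in TASK_MODELS.items() if tags <= visible_tags]
    return max(matches)[1] if matches else None

print(select_task({"btn_red", "btn_blue", "btn_green"}))   # -> press_all_buttons
print(select_task({"btn_yellow"}))                         # -> None: no model, no "help"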

And what exactly do humans do that's so different? You know what the AI person, and many a cognitive scientist, is going to say: after tens of millions of years of evolution from the first remotely "conscious" living thing to the brain of Thales and beyond, the adaptive mechanisms in our own wiring have become incredibly sophisticated and complex. (So how do you explain Bush, you ask? Some questions even science can't answer.) But fundamentally what is going on with us is just a highly evolved version of the simple programming (simple in principle, though I wouldn't want to have to write it!) that runs Leo and Cog and Kismet. What conceivable basis could we have for thinking otherwise?

Henig goes on to talk mainly about human-robot interaction, and why the illusion of interacting with a conscious being is so difficult to overcome. Here, as you might expect, the much-ballyhooed "mirror neurons" are hauled out, along with brain scans and other paraphernalia. I don't have too much to say about this. There are certainly hard-wired reactions in our brains. One could argue that what makes humans different from all possible androids is that we can override those reactions. A computer can be programmed to override a reaction too, but this merely amounts to taking a different path on the decision tree. It overrides what it is programmed to override, and overrides that if it is programmed to do so, etc. But someone will say that that is true of us too; we merely have the illusion of overriding, but it is just another bit of hard-wired circuitry kicking in. Since this spirals directly into a discussion of free will I'm going to circumvent it. I think evolved, genetically transmitted reaction mechanisms may well play a part in our social interactions, and if some key cues are reproduced in robots they may trigger real emotions and other reactions. What happens once that button is clicked is a matter that can be debated.
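Since I am dodging the free-will spiral, I will just leave the regress sitting on the table in a few lines of code. This is a caricature of my own, not anyone's actual control architecture: each "override" is simply another branch, written in advance like everything else.

# A caricature of the override regress: every "override" is itself
# just another programmed branch, all the way down (or up).
def reflex(stimulus):
    return "flinch" if stimulus == "loud noise" else "ignore"

def override_1(stimulus):
    # "overrides" the reflex - but only because it was written to
    return "stay calm" if stimulus == "loud noise" else reflex(stimulus)

def override_2(stimulus):
    # overrides the override, again exactly as programmed
    return "flinch after all" if stimulus == "loud noise" else override_1(stimulus)

print(override_2("loud noise"))   # the "top-level" choice is still just a branch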

The article concludes with a variety of surmises on consciousness, citing Dennett, philosophy's own superstar of consciousness studies, and Sidney Perkowitz, an Emory University physicist who has written a book on the human-robot question. Consciousness, says Henig, is related to learning and emotion, both of which may have occurred already at the M.I.T. lab, though only Brooks seems to think the robots actually "experienced" emotions in the sense that Damasio requires. Dennett says that a robot that is conscious in the way we are conscious is "unlikely"; John Haugeland said the same thing in 1979; robots "just don't care", he says (see "Understanding Natural Language"). And these are some of the people who are most inclined to describe the mind as in some sense a computational mechanism located in the structure of the brain. But people who would go much further are not hard to find. "We're all machines", Brooks is quoted as saying. "Robots are made of different sorts of components than we are... but in principle, even human emotions are mechanistic". (p.55) He goes on: "It's all mechanistic. Humans are made up of biomolecules that interact according to the laws of physics and chemistry." (I'm glad he didn't say "the laws of biology".) "We like to think we're in control, but we're not." You see, it's all about free will. These cog sci guys want to drag us into a debate about free will. No, I take that back, they have solved the problem of free will and they want us to see that. Or possibly, they have been reading Hobbes and want to share the good news with us. Whatever.

Henig's elusive, ambivalent position on robotic consciousness is easy to sympathize with, and as anyone who has read this post thoughtfully can tell, my ultimate point is not to take her to task for being naive or ambivalent. It is that perspectives like the one coming from Brooks have insinuated themselves into our culture - into the media, philosophy, and cocktail parties - and legitimized the notion that whatever is left of the mind-body problem will just be taken care of by the accumulated baby steps of Kismets and Leos and Automs. Statements like the ones Brooks makes are tokens of the inability of people to think outside their own intellectual boxes. There is plenty of scientific evidence for the fact that mental processes go on below the level of consciousness (blindsight, etc.); there is not the remotest shred of evidence that these processes are mainly computational, or that computations, however complex, can yield outputs that have more than a superficial similarity to any kind of animal consciousness. There is every reason to believe that every fact and event in the universe has a scientific explanation; there is not the slightest reason to believe that the explanation of consciousness is more like the Cartesian-Newtonian mechanisms behind the motion of mid-sized objects at slow speeds than it is like the probabilistic fields of quantum electrodynamics. We don't have a clue how consciousness works; not at the neural level, and certainly not at the computational level. We are in the same position Mill occupied in the 19th century, when he said that whatever progress we might hope for in the area of brain research, we are nowhere near knowing even whether such a research program will produce the results it seeks, much less what those results might be. We very likely do not even have two psychologists, neurologists or philosophers who agree with one another on what an emotion is, much less whether a robot could have one.

What's more, at present we have no philosophical or other justification for the notion that when we are trying to solve the mind-body problem, or talk about the mind or consciousness at all, what we are looking for should be thought of at the level of explanation of basic science or computation rather than traditional philosophy or psychology. People have brought all sorts of tools to the study of literature - lately, even "evolutionary literary studies" have gained a foothold, to say nothing of Freudian, Marxian, linguistic, deconstructionist or anthropological approaches. Does any of this demonstrate that the best understanding of literature we can obtain will be through these approaches, which subvert the level of literary analysis that studies the author's intentions, rather than through traditional literary criticism or philosophical approaches to fictionality? I don't know that philosophers or literary critics are in general ready to concede this point, though obviously various practitioners of postmodernism and other such trends would like to have it that way. Then why would we concede that the best approach to the mind-body problem is through AI, IT, CS, or other two-letter words? We might be better off reading William James (who was hardly averse to scientific study of the mind) than reading Daniel Dennett. Or reading Husserl than reading Damasio. We'd certainly be better off reading Wittgenstein on private language than Steven Pinker on the evolutionary basis of cursing.

Put all the C-chips you want into Leo or Nico. Putting in a million of them, at 10 petabytes each, wouldn't be that hard to do these days; what will that do? Get them closer to consciousness? They're still hunks of metal tethered to computers, and for all we can tell, nothing that any AI lab director says is going to make them anything more.