Thursday, March 18, 2010

Pains in the Brain: On Liberating Animals from Feeling

Before I say anything about my year-plus hiatus from publishing here, let me just throw in a practical note. Due to a blitzkrieg of spam on this and other blogs I operate on Blogger, I have had to change the policy so that all comments must be moderated. For all this obnoxious garbage coming my way, I had no intention of blocking even the most ridiculous comments, so long as they were intended as contributions. But after spending time every day, for weeks in a row, logging in and deleting spam comments one by one, I finally had no choice. Anyone who wishes to comment on this or any other post is strongly encouraged to do so. I still intend to publish every appropriate comment, regardless of how much I may disagree with it. The only exceptions I can think of would be extreme ad hominem attacks, racial insults, threats and the like. Please be aware that it may take me a little while to get notified of a comment and publish it. Thanks for your understanding.

So, as I was saying, here it is well over a year since I've published anything on this blog. Not that I haven't been thinking about it. Not that I haven't wished to throw my two cents in about this or that earthshaking advance in neurophilosophy. Not that I've been any less pissed off at watching the philosophers of mind, and many others, become shameless sycophants of neurobiologists, philosophical thought pander to laboratory experiments, and philosophical nihilism creep in on the hind paws of physicalism. It's just that I've spent a lot of my energy doing other things. Did you know, for example, that there are certain fields in which it is actually the practice to pay authors for the articles they publish? Yes, this unusual ritual prevails, for example, in fiction, essay, and many other kinds of writing. Imagine: you receive a letter from the Philosophical Review in which you are not only told that your essay has been accepted, but that you will receive actual money for the two years of hard work it took you to write the piece. Incredible, I know, but outside the benighted realm of purely theoretical academic discourse, it's the norm. I mean, thanks for the Socratic tradition of offering your words of wisdom for nothing, in contrast to those shallow, relativistic Sophists; but just the same, if my essay is worth publishing, it's worth paying for.

Indeed, I suspect that the author of the piece that has drawn me back to this blog for a moment was not too badly compensated for filling a small part of the New York Times Op-Ed page. And I not only do not have a problem with that, as you may have figured out - I actually applaud the author for obtaining (I presume) an honorarium for the service of providing a philosophical perspective to the readers of the Times. For the author, Adam Shriver, identifies himself as "a doctoral student in the philosophy-neuroscience-psychology program at Washington University". The fact that being a philosopher these days means being able to attach "neuroscience" to your discipline, title, major, article, view, affiliation, or whatever you happen to have on offer, is regrettable, but let's just say, for the moment - he's a philosopher. And there is no question that what Shriver offers us is philosophical to the bone. Or at least, the meat.

The title of Shriver's piece, possibly contributed by some Times editor, is "Not Grass-Fed, but at Least Pain-Free". The former hyphenated adjective refers to the practice of allowing farm animals to graze freely rather than be starved, force-fed, or fed artificial or unnatural products; it is sort of Livestock 101 for organic farming advocates. I guess the latter ("pain-free") is a gesture in the direction of "cruelty-free", a popular label on products which purport to observe some ethical norms in the treatment of animals, especially in the matter of cosmetics testing. The title suggests that the general thrust of the piece will be to argue that although it may not be practical to observe every last principle of humane livestock farming, one should at least avoid practices that cause the animals pain.

That's what I would have guessed, anyway. And if you are an extraordinarily literal person, you might say that the title is accurate. I, however, am not particularly literal; perhaps that's another reason I've turned some of my attention to writing fiction lately, aside from the purely pecuniary one. So I find the title disturbing for what it fails to convey; just as I find the piece disturbing as much for what it fails to say as for what it does. Indeed, I find it repugnant, morally and otherwise. But it will take a little work to tell you why.

Soon-to-be Dr. Shriver's perspective may be summed up as follows:
(1) "We are most likely stuck with factory farms..." since they produce most of our red meat.
(2) But animals in factory farms suffer a lot of discomfort.
(3) It is bad to feel pain. (Take that phrase as literally as you possibly can for the moment.)
(4) "It is still possible to reduce the animals' discomfort - through neuroscience." Cited in this regard are studies which show that it is possible to genetically engineer animals to block certain pain pathways (including studies underway in Shriver's own department). We will expand on this shortly.
(5) The meat from such genetically engineered animals would be safe to eat.
(6) Since new research shows that the suffering from pain can be dissociated from pain-avoidance behavior, the animals could be so engineered as not to casually injure themselves.
(7) In light of all this, we are morally obligated to use these genetic engineering techniques (once they are commercially available): "If we cannot avoid factory farms altogether, the least we can do is eliminate the unpleasantness of pain in the animals that must live and die on them. It would be far better than doing nothing at all."

Now, before I begin to apply my forceps to this view, allow me to note something which is, unfortunately, quite pertinent. I began, not without reason, by talking about money, and the limited opportunity to make some by writing philosophy. One creative way around this is to write an Op-Ed piece for the NY Times. But how much more creative might it be if one could gang up with some neuroscientists, help make some commercial applications of their genetic engineering experiments socially acceptable, and maybe scoop up a few bucks as the new methods come to market? Well, before anyone gets bent out of shape, I'm not suggesting that Adam Shriver is in it for the money. Zhou-Feng Chen, to whose research Shriver alludes, is at his school, but not in his department. (Who knows who's on his committee, though? There are few accidents in the sickening mire of academic politics.) Moreover, in a news article and interview that covers much the same ground as the Times piece, Shriver has let on to the magazine New Scientist that he is "a long-time vegetarian" and thinks that "eliminating factory farms would be the best option". According to the article, he says "I would be happy to jettison my idea" on the (implausible) condition that "someone can prove that we really are on the verge of moving to that kind of society". Not only that, but Shriver has published an article in Philosophical Psychology (V.19 #4 Aug 2006) in which he defends the idea that animals are sentient and "the belief that nonhuman animals experience pain in a morally relevant way is reasonable, though not certain" (from the Abstract; you are welcome to purchase the article for yourself for a cool $36.48 plus tax). Of course, if they are not sentient there might not be a very good ethical foundation for breeding them to be pain-free; but who knows which view is leading which? Anyway, it seems that Shriver is in some sense to be counted among the cow-huggers of the world, genuinely wants to do right by our bovine friends, and wouldn't sink his teeth into a veal cutlet even if the poor little things were treated like King Tut (who didn't live much longer than they do). Nevertheless, I feel compelled to point out that, whether it is Shriver or some colleagues for whom he is doing a completely pro bono service, there are big bucks to be made from patents, consulting fees and what-have-you if any such pain-eliminating genetic therapy were to become commercially viable. So without making any undue assumptions about philosophers, suffice it to say that eliminating the social resistance to such techniques could certainly facilitate the accumulation of fortunes, which will only be obtained if public outrage does not make it too costly to market this therapy.

In any case, many a ship has foundered on the shoals of good intentions, and I'm sorry to say that this boat is well on its way to the bottom. New Scientist refers to the "yuck factor" in explaining why we react negatively to the idea of pain-free animals, and opines in an editorial that "logically speaking, pain-free animals make sense. But only in a world that has already devalued animal lives to the point where factory farming is acceptable. Our visceral reaction to pain-free animals is actually a displaced reaction against the system that makes them necessary." That's a catchy line; only it sort of sidesteps the issue by suggesting that we have to value the "lives" of animals to avoid the logic of pain-free meat. Similarly, the magazine quotes Marc Bekoff (UC Boulder) to the effect that "The fact that they are alive, even if not sentient, warrants against using them in ways that result in their death." In my view, any argument that depends on assigning such value to animal lives is weak, because though they are alive, they don't necessarily have "lives" in (say) a Kantian or Aristotelian sense, or value in (say) a Millian sense. Life and value as it relates to humans in the classic ethical theories may be quite different from what we call life and value in animals, because, e.g., they may not have a sense of themselves as having value in the way that we do, or exercise free will in the sense that we do, or perceive themselves as temporally continuous as we do, or see themselves as the subject of rights and responsibilities as we do, etc. And our perceiving them as being similar to us in these ways may be nothing more than a projection. I think we have to live with the possibility that this is the case, and figure out why, anyway, it is a terrible idea to treat animals in certain ways that would be extremely objectionable if they were applied to humans.

For even if animals do not have "lives" and value in our sense, we do; we have lives which are diminished in some way by treating animals as a mere means to an end.
That is, what we have to ask ourselves is why we value our own lives so little as to reason that by altering a basic feature of another creature's genetic makeup we can somehow make ourselves morally better. When you think about it, the idea is really quite incoherent. It suggests the following principle: a practice that is morally repugnant with respect to some living creature can be made less repugnant by changing the creature's mental state such that it does not find the practice unpleasant. But wasn't the problem in the first place that we (some of us at least) found the practice repugnant? Yes, of course - but wasn't the reason we found it repugnant just that we believed the creature found it unpleasant? No, emphatically not; we may not have had a thought about the creature's suffering, but found it repugnant nonetheless.

Suppose I see a fur trapper clubbing some baby seals to death. I run over and demand that he stop. "But don't you realize," he says, "that I am only clubbing those seals who have the genetic mutation depriving them of pain sensation." "Oh, well then", I say with a broad smile, "that's much better. I see you truly care about our flippered friends after all. Go right ahead then, just don't make a mistake and club one that feels pain!" Of course almost no one who reacts badly to the initial situation is going to be moved to change their feeling about it after being informed that a clubbing on the head is just like playing with a beach ball for some baby seals. This shows just how absurd it is to think that what is at stake here is merely the pain of animals. It is true that undue suffering should be alleviated in animals; but far from true that artificially removing the sensation of pain from animals we are intentionally harming puts us on a higher moral platform. It is just as likely that it makes us worse; after my conversation with the trapper I might well feel doubly sorry for the seals that could not even react appropriately to being critically injured.

The principle, "don't change our practice towards a subject; change the subject so it doesn't mind our practice" can lead down some even more bizarre paths. Suppose I could make it fine for a duck, indeed even beneficial, to have its bill cut off, by introducing a genetic mutation that makes ducks hop around like rabbits and nibble on lettuce instead of fishing. I am thinking the world would not exactly applaud this innovation, but rather deplore it twice, as a double insult to the animal. The truth is, we have turned a corner in which we can play fast and loose with ontology through genetic engineering, and our defective moral consciences will likely permit us to take what we can from this in order to diminish our sense of impropriety at otherwise heinous acts. Let us then clone thousands of humpbacks and blue whales with their pain sensors nicely removed so we may once again train our sonarscopes and exploding harpoons on them! Whaling is so fine a tradition, after all, and now we can practice it without our sea mammal friends suffering any pain! Rhino horns? Rip em out! - those rhinos were just pain-free clones anyway. What's a steel jaw trap to a bear that can't feel pain? Just another day in the forest, my friends!

Yes, bring on the neuroscientists, with their solutions to the ethical problems of mankind. Indeed, on a clear day you can see beyond the horizon to that earthly paradise where we can do just about anything we please without a tinge of moral uneasiness. Consider the following suggestion: suppose we had a nerve gas (some bodywide form of lidocaine, perhaps) that we could spray over opposing soldiers in battle, leaving them unchanged except to take away their ability to feel pain - wouldn't it be morally incumbent on us to use it? I mean, let's just agree that we can't get rid of war, okay? But we can at least kill the pain. People are known to suffer a lot from bullet wounds, flying shrapnel and incendiary devices, after all. Spray 'em with the gas, then spray them with bullets and feel much better about it.

Well, I can see the objection: if we did that, then they'd just keep coming at us and we'd never win. (Sounds like Night of the Living Dead? So, zombies are a respectable topic of philosophical discussion these days; and perhaps now they are a respectable goal of philosophical neuroscience.) A bullet through the kneecap is generally a very effective deterrent to a soldier's further advance, but if they don't even feel it they might just keep hopping along on their one good leg until they perhaps kill us. Okay, then, let's change the strategy: why not spray our own soldiers with pain-killing gas? Then we would not only benefit directly but win the war! Indeed, I'd be shocked if the Pentagon has not experimented with this sort of thing already. (Shutter Island?) If there is any downside to it, it would be that the pain-free soldier might not care that he is walking into machine gun fire, and would therefore take unnecessary risks. But the new techniques get around that problem by not suppressing the harm-avoidance genes, just the pain-feeling genes. The more capabilities we acquire with this technology, the more we can obtain designer creatures that have just those qualities we want them to have and lack just the ones we want them to lack.

If you are okay with this experiment so far, you are to my mind too demented to carry on a meaningful conversation with. For the first question one should have is: Doesn't this somehow give moral legitimacy to war, such that even wars of aggression could be fought without the cost in human suffering that is one of the great historical motivations to stop wars from happening? Aren't we, in the name of eliminating pain, actually making it easier to continue practices that are normally thought to be wrong partly because they cause a lot of it? Am I being unfair, suggesting that Shriver's well-intentioned defense of pain-free cows leads down a slippery slope, from cattle to the battlefield, and even sanctions war as a means to a political end? I don't think so. But in case you are still not convinced...

Suppose I am a serial killer who likes to mutilate my victims. Perhaps I could get off with a lighter sentence if I tell the judge: in order to minimize the suffering of my victims I administered morphine before mutilating them. Fine, I guess that has a kind of impeccable logic to it. If I ever happen to fall into the hands of such a demon I hope it's one with a large supply of morphine. But suppose, now, I find a doctor who tells me that he distributes morphine on request to anyone who identifies himself as a possible serial killer. "If you can't eliminate serial killers," reasons the doctor, "at least you can help the victims by making morphine available to them." That's not quite so impeccable. "Doctors" have been employed in all sorts of heinous circumstances to medicate torture victims and other unfortunates. Do we thank them for their humane services - or deplore their participation in evil schemes, regardless of what their role is? If a neuroscientist delivers gene therapy technology to a poultry farm that shackles geese for the production of pate de foie gras, is this person a humanist - or an accomplice to a crime?

Of course, there are times when you want to eliminate pain artificially. Surgery is one. What benefit did people ever receive from feeling the pain of the surgical knife? I'm sure most of us reel in horror at stories of 19th century surgeries for which a shot of booze was the only anaesthetic. Besides, many modern types of operations are so long and invasive that no one could bear them; we would prefer to die instead. Eliminating surgical pain is an absolute good, because doing so makes it possible, or easier, to advance our personal agenda of being cured of some malady. But eliminating the pain of being shot with a bullet in a war does not typically advance the personal agenda of the one whose pain is eliminated. This could be the case if, say, highly motivated revolutionaries could get the gas; or soldiers fighting for a cause they are willing to die for, of their own free will. But typically, a soldier is a recruit, a draftee, a mercenary, a person seeking a way out of poverty - someone who either had no choice, or simply hoped they could get some benefit from military service without suffering greatly. Sending these people into battle under the influence of morphine, or whatever, advances the agenda of someone else who wants to use their bodies to achieve a political goal. Something similar clearly applies to animals used for meat. It is not as if the practice, as a whole, of slaughtering animals is for the animals' benefit. It is for the benefit of our appetites and the pockets of agribusiness. To make it painless for the animal to undergo this slaughter is to make ourselves immune to the thought that something may be wrong with our practices. This is like a meta-wrong that does not merely outweigh the utilitarian benefit it promotes but surrounds it like a dark cloud. Lobotomies had their benefits too. Come to think of it, the sponsors of pain-free livestock might just be consistent enough to think it's a perfectly reasonable option today.

Something is rotten in Denmark, and it may be a piece of painlessly produced meat. I suggest the following principle: elimination of pain is good relative to a situation in which a reasonable and rationally chosen goal of the subject is advanced by it; otherwise, it is either morally neutral, or a further harm in addition to whatever caused the pain. I think this saves most of our intuitions about pain. Pain elimination for medical reasons is generally good; the one in pain wants to recover and has good reason to want it. Pain relief for the suicide bomber is probably an additional evil, as the goal is not reasonable and pain relief may encourage the subject to pursue it. Pain relief in most situations where such relief advances no goal of the subject but makes the subject more compliant to undergo potentially harmful experiences is an evil, as it makes pain zombies out of formerly sentient subjects, and only advances goals to which the subject ought to rationally object. Pain relief is perhaps morally neutral when it neither advances any goals of the subject nor deprives the subject of any rationally selected good.

This is all quite apart from: (a) side effects that come with pain relief, either through gene therapy or medication, which may increase the potential harm of such treatments; (b) the fact that deprivation of pain sensations can lead to further harm due to the subject's inability to recognize internal or external danger signs and avoid them (this problem is by no means completely eliminated by the claim that animals could be engineered to want to avoid harm without having pain, since there is pain they can't avoid but will not complain about even though it might signal a serious problem); and (c) the use of unethical, harmful practices in experimentation on subjects in the pursuit of pain relief therapies. Each of these could require another essay, but I don't want to write a book about Shriver's proposal. I do, however, want to briefly address something I alluded to earlier, closely related to the second (b) of these points.

To support his view, Shriver selects a couple of specific forms of discomfort that animals are forced to undergo in order to provide gustatory delights for the human race. One is the confinement of calves to produce veal; another is "severe gastric distress" caused by "unnatural high-grain diets". Keep in mind that the ability of the genetically engineered animals to "recognize and avoid, when possible, situations where they might be bruised or otherwise injured" is supposed to be a key advantage of the new method. It is well known that people with the rare medical condition that deprives them of pain sensations (CIPA, Congenital Insensitivity to Pain with Anhidrosis) often end up losing limbs and sustaining other very serious injuries. No one would think such a condition would be beneficial to animals without the added claim that they can be engineered to avoid harm even without the motivation of having to avoid pain. But look at the conditions Shriver himself uses as examples. Unnatural confinement is not something the animal can avoid even if it does wish to avoid harm. Veal calves confined so tightly that they can't sit or lie down; geese shackled to prevent almost any movement whatsoever; pigs attached by a snout ring to a wall or fence; these kinds of barbaric practices produce distress that cannot be avoided by leaving the animal with the ability to recognize potential harm. Nor are they made any kinder by removing pain sensations. Diets of grain, injections of hormones and antibiotics, all sorts of practices that create internal conditions the animal cannot possibly avoid even with all the wonders of modern neuroscience: how is saving the harm-avoidance instinct supposed to help in the least with these? What it comes down to is really this: by having the animal take care of avoiding bruises, self-inflicted wounds, and the like, this technique saves the livestock farmer and the slaughterhouse from having to deal with thousands of needlessly injured animals who would thereby end up in the debit column on their balance sheets. The underlying point is not to give the animal a more normal life than a pain zombie would be expected to have, but to cut losses for the owner. That pretty much guts the moral argument for this technique even without all the bizarre consequences it entails.

Shriver is a vegetarian; I'm not. He has a kind of moral lead on that one. I was a fairly strict vegan for about two decades, but now I eat poultry often enough, organic or free range and antibiotic free when I can get it; I eat fish, and I feed my kids red meat when they ask for it. I do not know of a convincing moral argument against killing animals for food. But I find the practices of the meat industry as a whole disturbing, morally repugnant and environmentally destructive. Of particular concern are the specialty foods that require the mistreatment of animals through extraordinarily strict confinement. But slaughterhouses and livestock farms are not alone in mistreating other species. Thoroughbred race horses and circus elephants don't fare much better. Numerous acts of terror are committed against animals by poachers and people seeking mythical cures for all sorts of ailments - Asian tigers and African rhinos being well-known examples. Add to that the frequent abuse of domestic animals, the use of now illegal painful traps in the wild, and perhaps we should just start a breeding program to replace all existing animal species with pain-free substitutes. Or we can continue building the pressure for the abusive industries and individuals to change their practices. That's what we do with abusive practices towards humans, right? Let's not start developing pain-free women so they can be burned to death over a dowry or have their sexual parts surgically removed without causing great ethical dilemmas; let's start treating animals in a more humane way and put pressure on the veal and foie gras producers and the others engaged in abusive practices.

Tuesday, October 28, 2008

Return of the Zombie

Please see my previous post for a little background on the urgent philosophical question of whether zombies can beat zoombies and shombies in a ping pong match. At least we know that they can all beat Sarah Palin in a debate.

I readily acknowledge both my tardiness and my wordiness (the two not being unrelated) in replying to Richard Brown. The world, or at least my path through it, is unfortunately so configured that blogging often has to take a back seat to things that I consider mundane and relatively dull. Oh well. The present issue came to life when Richard, on his blog, offered some ideas about creatures (zoombies) that are complete non-physical duplicates of normal law-abiding citizens like you and me, but fail to be conscious; and those that are physical duplicates, have no non-physical properties, and yet are conscious (shombies). Both of these beings are conceivable, according to Richard, or at least as conceivable as zombies, which are physical duplicates of ourselves that lack consciousness. The conceivability of zombies is supposed to support the argument that physicalism is wrong, because if we can conceive of a creature exactly like us but not conscious, it follows from this that it is not logically necessary that physical systems like ours must be conscious; and from this it follows that we cannot reduce consciousness to some equivalent physical description. So if zombies are conceivable, materialism is wrong. But according to Richard, the conceivability of his two new creatures equally suggests that dualism is wrong. And according to me, the proliferation of these things suggests that we had all better run.

Richard eventually put his thoughts into a form appropriate to the hallowed environment of a philosophy conference (that of the Long Island Philosophical Society), and I responded in similarly civilized fashion. And now that we've got that over with we can proceed to thrash about and flame each other on the Internet. (Just kidding - I think.) I will take up as many of Richard's responses to my reply as I can, while conceding in advance that he will probably outlast me (if not outwit me) in any blog debate. And given that Brown is the name he chose for his online identity I shall now revert to that appellation, while wondering aloud how a name like "one more Brown" gets to be a rigid designator.

Brown's response to my critique begins with my defense of the idea that zombies are indeed conceivable. I suggested that I can imagine a being that is physically identical to me but unaware of the blue tint of the light in the room, and I can expand on that concept to conceive of a zombie (who is unaware not only of the bluish tint but of everything else). Brown's response is:

"What we need is to imagine me being in the very same brain state and not being conscious of the blueish tint. This is exactly what is in question –that is, whether this is something that can be imagined– and so this is at best question begging."
David Chalmers, you will recall, was said to be begging questions by ruling out the possibility that "mind" is just a popular term for a physical system; if so, according to Brown, the nonexistence of zombies is a necessary truth and zombies are therefore unimaginable. Now I am allegedly begging questions by assuming that I can imagine being in the same brain state whether aware or unaware of a bluish tint. But I think this is a misuse of the term "question-begging". Brown seems to think the (hidden) form of the argument is,
1. Let's assume physicalism is wrong.
2. If physicalism is wrong, then I can imagine that we have physical duplicates that are not mental duplicates.
3. If I can imagine that we have physical duplicates that are not mental duplicates then the mental does not logically supervene on the physical.
4. Therefore physicalism is wrong.
But the second premise does not depend on the assumption that physicalism is wrong. It is an appeal to intuition, pure and simple. According to Brown, Kripkean semantics prohibit the assumption that this intuition is possible until we have first checked to see if physicalism might be correct. I am actually tempted to hand him this point because it would be the proverbial Pyrrhic victory. For if I give him that, he equally has to give me the point that he cannot assume that zombies are not conceivable until we have already established what we are currently attempting to discuss. And with this stalemate at hand, we can proceed to lose our ticket to any intelligent discussion of issues which might eventually be decided by some empirical discovery. So it will be question-begging, for example, to say that the following worlds are conceivable: that in which there is no being who gave Moses the ten commandments; the one where the large manlike creatures called 'bigfoot' are nothing but a hoax; and the imaginary space in which Loch Ness is devoid of living creatures larger than a lake trout. These are question-begging in roughly the same sense that it is "question-begging" to say that a world in which there is no physicalist reduction of consciousness is conceivable, and thus that I can conceive of a world in which there is a being physically identical to myself but lacking consciousness. In all these cases, it may, as far as science is concerned, turn out that these names or definite descriptions ("god", "bigfoot", "Loch Ness monster" and "the physical facts that constitute consciousness") identify actual entities, and if we allow that, we cannot say we conceive of the worlds in question.

If this isn't a spurious argument I'll eat my copy of Naming and Necessity. Does Kripke say that we can't conceive of the mind as non-physical? Quite the opposite. Does Putnam say I can't conceive of water as XYZ? Quite the opposite. Here's Putnam: "My concept of an elm tree is exactly the same as my concept of a beech tree... (This shows that the identification of meaning 'in the sense of intension' with concept cannot be correct...)" (Mind, Language and Reality, Phil. Papers V.2, p.226) What's the point? I can conceive of things that are necessarily false, e.g., "Beeches are just like elms". Not "I believe [falsely] that I can conceive of a world in which beeches are just like elms" but I conceive of such a world, plain and simple. (Or I imagine it if you like, but conceiving does not have to include mental imagery.)

Brown should get off this begging-the-question kick. Nothing about what I can or can't conceive today depends on what science discovers tomorrow. If I can't conceive of zombies once I have studied the physical reduction of consciousness (which has been added to Psych 101 texts in the year 2525) then fine, I can't do it. But to bring in a posteriori necessity to show that I can't conceive today what might turn out to be false tomorrow is really cuckoo, a curious technical trick at best. If that were really the implication of the theory, it would be a reductio of Kripkean semantics. But that is not what the theory implies.

There is another problem with Brown's methodology, which is captured in his statement that "This is exactly what is in question –that is, whether this is something that can be imagined." Look, an artist covers a canvas in black paint and says, "This depicts a zombie". You are confused, no doubt, but what exactly can you say? "How? Why can't I see the zombie's shape? Is there anything else in the picture? Were you on drugs when you painted it?" These might be legitimate questions; what is not legitimate is to say, "No it isn't; I'm looking right at it and there is no zombie there." Does the artist even need to reply to this? She can laugh, because the statement is nonsense in this context; or she can say, "When you learn to see the world the way an artist sees it, you will perhaps see a zombie there; and if you don't, I can't help you." (In Goodman's terms, not every picture that represents a zombie is a zombie-picture.) The same holds true for mental pictures, conceptions, imaginings, etc. I know what a zombie is, I am not a hallucinating schizophrenic, I am an honest guy and I believe I am conceiving of a zombie. So I am conceiving of a zombie. Once the basic psychosocial background is given, my claim goes through automatically. It's not corrigible. It doesn't depend on facts or on Kripke. And it especially does not depend on some inspection (per impossibile) of my conception to compare it in fine detail with the putative physical correlate that will be discovered some time hence. The details of a conception are stipulated, not set in place like clockwork. Otherwise it has to be said that I cannot really conceive of an automobile, since I haven't the foggiest idea what goes on inside a transmission (though I doubt it is little men turning cranks).

Last point, which came up in a discussion session at the conference: the point of the zombie argument is to deny the claim of logical supervenience, the idea that the mental logically supervenes on the physical. "Logical" here is the same as conceptual; the point is to show that the mental is not conceptually identical to some physical substratum (see Chalmers, p. 35). Brown, as far as I can tell, seems to think "logical supervenience" is just materialism, but I doubt that. The target is not the brand of materialism that says that once the physical facts are known, the facts about consciousness can be scientifically deduced; the target is the brand that says that once the physical facts are known, the facts about consciousness are logically entailed; they simply fall out of a correct description of the brain. As Kripke says, a consistent materialist would have to hold that a complete physical description of the world is a complete description tout court; once we have it, it should just be obvious where consciousness lies in it, though it might not be called by that name. That is a logical supervenience position, and it is quite different from physicalism in general. Chalmers and I are both physicalists of a sort; we think that at some level, in the world as it is, consciousness is dependent on brain chemistry and structure. The zombie argument is not directed against this belief, and would not be effective against it. It is meant to show that we need not believe that consciousness is going to just "be there" when we announce the result of the ultimate brain scan. Scan all you want; at the end of the day you will still have to have some other kind of explanation for consciousness. The situation is (not coincidentally) somewhat like Kripke's view of rule-following: state every empirical fact you can find about the system, you will not find the rule there. Nor consciousness, if you proceed in that manner. So there is no entailment of consciousness by physical facts; such entailment is just what logical supervenience is, and it is what the zombie argument is meant to cast doubt on.

The next point in Brown's response refers to my comment that in cases of aspect-change no physical difference takes place, although a mental difference does:

Alterman goes one to cite, as evidence, his convixtion (sic) that he has no reason tot hink that there is a microphysical change in his brain when he is looking at an ambiguous stimulus (like the duck-rabbit, or the Necker cube), but this is rather naive. There is evidence in both Humans and primates that there are changes in brain activation that correlate to the change in perception in these kinds of cases.
Let's keep in mind what we are talking about here. I used the duck-rabbit example to support the point that we can conceive of a zombie by enlarging on the intuitive idea that changes in mental state can occur without a change in the physical description of the system. When I observe the duck and then notice the rabbit it seems that no change takes place in the physical description of the system. Brown is arguing that this is an illusion, for brain scans show some "brain activation that correlate to the change in perception". I think there is less here than meets the eye. It stands to reason that some stimulation occurs when anything like perception, recognition, concentration, etc. takes place. Nobody disputes that, so it can't be the issue. The issue is whether it is conceivable that a being physically identical to myself could exist without conscious activity. And since it is certainly conceivable that no change takes place when I switch from one to the other, it is by enlargement conceivable that some being never undergoes such changes.

But I am not inclined to leave it at that. For the "change" that Brown points to is nothing more than an indication of an increase in blood flow (or possibly electrical activity) to some area involved with perception. (Roughly the same areas are often involved in both external perception and recognition of mental images.) So what does that show? It certainly is a long way from suggesting that some brain activity is identical with the percept "there's a rabbit in this picture"! In fact, though I do not know which particular bit of research Richard has in mind, I would be willing to bet him lunch that it shows only that the act of searching in the picture for the new image (like the achievement of stereoscopic vision, to take another example) involves some brain activity; no way it can show that there is any difference in the organism while it perceives a duck vs. a rabbit.
But I am even willing to grant that such a difference might be found; for example, it might be shown that certain vectors activated in one case have a historical (causal) relation to vectors activated in the perception of actual ducks, and the other in the perception of actual rabbits (or of realistic duck or rabbit pictures - it doesn't really matter which). So let it be the case that for every individual, nerve cell activation occurs in the duck-rabbit picture specifically in relation to the history for that individual of previous perceptions of the appropriate form. Unfortunately, the physicalist is still in need of an identity much stronger than this. The burden on the physicalist is to give a brain specification that just is the cognition of rabbit-shape (or blue-tintedness) or a strong reason why it is likely that such a specification will be found. The burden on the anti-physicalist is just to give an intuitive reason why that is unlikely to happen. Which I did, but I am more than willing to go a step further, and put it like this: there is no reason to think anyone will ever find a neurological specification that is, so to speak, the transcendental condition guaranteeing the truth of the utterance "he sees a rabbit-picture" or "he sees a duck-picture". And if that won't happen, the fact that some blood flows to the area that manages changes in perception is of little interest.

Brown next takes on another example I used to demonstrate the conceivability of zombies, that of sleepwalkers and blindsight patients. These people, he insists, are in states "which obviously include a physical difference" from ordinary conscious states. Once again, that is not really relevant to the point of the example. We are talking about conceivability; the example is meant to bolster the plausibility of the claim that zombies are conceivable (to provide "evidence" for conceivability, in the only intelligible sense of Brown's demand for it), and if it does that, it has the effect it is intended to have. It is in no way intended to show that people in such states are in physically identical brain states to non-sleeping, non-brain-damaged individuals who might perform the same actions. To show that might be sufficient to prove the conceivability of zombies, but it is far from necessary. I don't think I need to belabor this any more.

I will have to skip over Brown's next few responses because I think they amount to sticking by the line that Kripkean semantics require us to not assume zombies are conceivable just because we think we can conceive them, and I have already responded to this in sufficient detail. So I move on to his response to what he calls my "stunning claim" that no theory of consciousness has even begun to offer a reductive program for phenomenal experience, such as color vision. Actually I was under the impression that no one would find this even interesting, much less "stunning", because it seems that even materialists have practically written off the effort, generally claiming that qualia are mere illusion and beneath the dignity of a physical theory to explain, while anti-materialists have been saying it consistently since Nagel (whose seminal article is almost entirely an exposition of this very point). So what is Brown's answer to my "stunning claim"? HOT! Yes, of all things, he points to David Rosenthal's (or someone's, in any case) "higher-order thought" theory of consciousness as a program for the physicalist reduction of phenomenal consciousness! Talk about stunning - I thought the very reason that HOT has not attracted many followers is precisely that it offers no hope of explaining phenomenal consciousness. But maybe Brown has been having private sessions with POMAL types who think otherwise.

So what is the response of HOT to my request for "a program for explaining conscious experience, or even the function of consciousness, as an outcome of... biophysical research"? According to Rosenthal, at least, a conscious thought has a qualitative character because the HOT that accompanies it is in some quality-space. That not being very enlightening (even compared with the outright abandonment of attempts to deal with qualia in more hardnosed materialist theories like those of Churchland, Dennett, or Crick), Rosenthal goes on to explain why the HOT has the qualitative character it has: it tracks the "similarities and difference" in perceptual space. That's it, the putative program in a nutshell. As for the function of consciousness, Rosenthal's view is that it doesn't really have one; we could get along quite well without it. (Apparently Rosenthal can conceive of zombies; indeed, one could interpret what he says about the function of consciousness to suggest that it is no more than an evolutionary accident that we are not zombies.) In spite of a great deal more verbiage (see Rosenthal's "Sensory Qualities, Consciousness and Perception" in his book, Consciousness and Mind) there is not a whole lot more to this response to what I said was missing.
As Brown characterizes the HOT view of why red objects appear red and not green,
"they do so because we are conscious of ourselves as seeing red not green. You may not like this answer but it certainly does what Alterman says we we don’t have a clue about doing."

Actually, it is not so much a matter of whether one likes the answer as whether one finds it to be an "answer" to anything. It seems to me that this is as far from materialist dreams of a perfect theory as one is going to get. In spite of Rosenthal's often expressed sympathy for materialist analyses of non-conscious thoughts, what he is doing is, broadly speaking, traditional philosophy of mind and language. He offers something like a conceptual analysis of conscious awareness, and gives a defense of it in terms of performance conditions and other standard POMAL ideas. Quite a distance from anything that is going on in the reductive programs that comprise the materialist discourse. I stand by my "stunning claim" - there ain't nothin' happening, in any branch of philosophy or cognitive science, that begins to shed light on how or why we experience reality largely as a succession of qualitative states.

Brown states that he never questioned that conceivability entails possibility, as I said he did in my response. But he presents the main line on which his paper is based, the Kripkean semantics of natural kinds, as being "the typical argument that conceivability doesn't entail possibility".
I grant that he never explicitly says that he agrees with this use of Kripkean semantics; he employs it in another way, to question whether zombies are conceivable. On the other hand, he never disputes the first use; indeed he says a number of things which suggest it, e.g., "it cannot be the case that intuitions about zombies are evidence for or against any theory of consciousness". I was reading this as implying that we could grant the possibility of zombies without the dualist gaining any ground. But I am happy to let Brown be the final arbiter of his own intentions, and leave that portion of my reply as a side-issue directed to those who use the Kripke line in the first way. (It does strike me as ironic that there would be two separate arguments against dualism based on a theory of Kripke's which he employs against materialism, but never mind. Since I don't agree with much that Kripke says about Wittgenstein I am not going to appeal to his authority in this case.)

Brown's next point is that Chalmers, contrary to me, is indeed "claiming that there is a necessary link between our non-physical qualities and consciousness". I am not going to go through Chalmers' book to verify that this claim is never made, but it seems to me that the basis for Richard's statement is once again the Kripkean view that if "water" refers to H2O in this world, it does so in all worlds; so if "consciousness" refers to a non-physical property in this world, it does so in all worlds, and its non-physicality is therefore a necessary truth. There are various ways of responding to this. The simplest is to say that Chalmers' argument only leads to the point that it could be a necessary truth that consciousness is a non-physical property. Another is that Chalmers simply does not think that consciousness is a non-physical property in every possible world; he thinks that it is contingently non-physical in this world. A more technical response would involve Chalmers' two-dimensional semantics and the "primary" versus "secondary" intensions of natural kind terms, but I can tell from Brown's latest post that this is only going to lead to a brand new debate. I would rather just refer readers to the parenthetical remark that constitutes the last paragraph of p. 59 in Chapter 2 of The Conscious Mind, which to my mind offers an adequate reply to the basic premise of Richard's paper. (The reason it is adequate is that it spells out in the technical terms of two-dimensional semantics what I have been saying in more straightforward language throughout my comments: that it simply cannot be the case that we can't conceive of certain possibilities until someone has determined whether some empirical fact about the actual world is true.)

A not terribly important side-issue regarding Brown's view is whether it makes any sense to postulate beings that are similar to me with respect to "all non-physical qualities", or beings that are "completely physical" and are conscious. Suffice it to say that I cannot find a way to allow either of these examples without thinking that the answer to whether physicalism is correct is already built in to the description. Brown seems to think that that doesn't matter, because it is just parallel to what the zombie theorist does. But I think it is not parallel, because the zombie example makes no theoretical assumptions and simply depends on intuition, while Brown's claim that it is question-begging is theory-driven, and the theory is used in a counterintuitive way that most of the disputants do not agree with.

At the end of his remarks, Brown says that he can live with the limited goal I attribute to the zombie argument, that of establishing that there is no conceptual link between physics and consciousness. Hmmmm, I thought that that was what the whole debate was about. Chalmers himself believes that consciousness physically supervenes on brain states, and only argues that it is not the case in all logically possible worlds that this is so. In his book, he presents not only the zombie argument but four other arguments (none of which, I believe, are original, though the presentation is) to the same effect. Why should we be so concerned with this? I am concerned with it because I don't think reductive programs are the way to go. I think a lot will be found out about how consciousness is connected with the biological structures of the brain - 40 Hz waves or whatever - but if the relationship between any particular physical instantiation and consciousness is contingent, we will learn more about consciousness through other methods - perhaps what we might call traditional philosophical analysis, perhaps some of what goes by the name of clinical psychology, perhaps aesthetics. Consciousness, in my view if not in Chalmers', has been most usefully explored in the work of Kant, Wittgenstein, Husserl, James, Freud, Jung, Kohler, and other writers of that nature, as well as in literature of great merit from Homer to Joyce. The whole tradition of cognitive science is at this point nothing but a footnote to those insights. In my opinion, it never will be much more than that as far as this question is concerned.



Sunday, October 19, 2008

Zombie, Schmombie - Richard Brown's Efforts to Resurrect Materialism

The indefatigable POMAL blogger Richard Brown has posted a reply to comments on his Zoombies and Shombies paper, "The Reverse-Zombie Argument Against Dualism" (find a link here), made by a certain "Alderman". Unfortunately, I must object to the egregious act of plagiarism that said Alderman has performed on the comments I sent to Prof. Brown only a few days ago, copying them more or less word for word (how he got hold of them I can only imagine). Should I sue? Actually you can't sue for plagiarism, and I'm not sure what the copyright value of my comments would be, so I have a better solution: Dr. Brown should simply change the "d" in "Alderman" to a "t" and everything will be alright.

Brown (whose name is quite difficult to misspell, though I tried) certainly outdoes me by a country mile in posting to his blog, an admirable quality that is underrated in the philosophical community. Blogging is I think more in the spirit of philosophy in the Socratic tradition than the institutional control exercised by professional journals and presses. (Anybody who has received the typically biased and ignorant comments on a rejected article from journal reviewers will probably agree wholeheartedly with the title of Brown's blog, Philosophy Sucks!) In the future, I will try to do better than the, hmmm... 10-month gap between this and my last post. (Which is a bit less than the gap in my arts blog. Yikes.) In any case, kudos to Dr. Brown for his blogging efforts - not to mention his Cel-Ray tonic. (Jeez, names really do get confusing, don't they? Maybe someone should do some philosophical work on this topic.)

What follows is the complete text of my comments on Brown's paper, delivered yesterday (10/18/08) at the conference of the Long Island Philosophical Society. The papers and replies will eventually be published in Calipso, the LIPS online journal, at which point I may remove it from here and put in a link. In the next post I will reply to Brown's replies to my reply to his paper. (And perhaps to some of the replies to his replies to my reply to his reply to Chalmers - which can be found on his blog.)

Zombies, Schmombies... Full Text from the Original Author

The materialist position about consciousness consists in the view that consciousness can be fully explained once we understand the physical materials and processes in the brain. Consciousness will emerge as a supervenient property that can ultimately be reduced to some underlying physical basis. For materialism to go through, it is not sufficient that consciousness be somehow related to or dependent on the brain; it must be nothing more than a brain function, whose supervenience is obscured by some unique aspectual or descriptive stance that stands in the way of our seeing the connection intuitively. In some versions, such obscurities will eventually disappear, and we will be able to eliminate the introspective illusion of an inner self. Others see the aspectual stance as inherent in the situation. On either view, there is nothing in reality that can either be explained, except as a dependent phenomenon, or do any explaining, other than the physical world.

Most opponents of the materialist view rely heavily on one or more intuition pumps that allegedly bring out a gap between the knowledge and understanding of physical facts and an explanation of consciousness. The "zombie" argument is one such effort. Imagine a creature that has all the physical properties that we would expect a human being to have, and behaves in the ordinary way that human beings would in similar situations, but lacks any hint of consciousness. If this is conceivable (so the argument goes) then physical facts cannot be the logical, or conceptual, foundation of consciousness.

In "The Reverse-Zombie Argument Against Dualism" Richard Brown suggests that the zombie thought experiment provides no compelling evidence that physicalism is wrong. There appear to be at least three tracks to his argument, which I will try to bring out.

The first idea is the contention that zombies, as described by David Chalmers and others, may not actually be conceivable at all. It is easy to miss the logic of Brown's argument here, because at the end he leads us somewhat astray, in my opinion, with suggestions that point in a different direction. One is that proponents of zombieism ought to offer some "evidence" for the conceivability of zombies. A second, related one occurs when Brown says that he himself cannot conceive of a zombie; and again, when he demands "some reason to think that we are really conceiving of a zombie world as opposed to a world that is very similar to ours but not microphysically identical". These points all seem a bit odd, to say the least. Conceptual arguments involve the logic of concepts; any "evidence" for them would surely not be of the empirical sort, and plenty of support has been offered on the conceptual side. The arguments do not depend on the strength of any one person's imagination, but on whether anyone can find a logical contradiction in their use of concepts. And though gross imaginative errors may be to some degree corrigible (I might say I'm imagining a duck but in fact be imagining a chicken), it makes no sense to say that someone who claims to be imagining a microphysical duplicate of me might "really" be imagining something that differs in some small way. (What does "really" really mean here?) But let me try to respond with a defense of the zombie imaginer before we move on to Brown's main argument. My "evidence" will consist in conceptual support for the point that conceiving of a zombie requires nothing more than adding and subtracting properties, something any normal person can do. So first, I can imagine someone physically identical to myself who is in the same room but is not aware of the slightly bluish tint of the late afternoon light, or the background humming of the air conditioning, while I am aware of all that. For I can imagine myself not having been aware of any of them, and yet being physically identical to my actual self; just as when I see the duck and then see the rabbit in the same drawing, I have no reason to believe that a microphysical change took place, and even less reason to think that a determinate, repeatable microphysical change took place. Similar arguments could be brought for memory, imagination, and other components of consciousness. Therefore I can imagine a being that is physically identical to myself but lacks consciousness. Second, we can arrive at the concept of a zombie by expanding on concepts like blindsight or sleepwalking. These documented empirical states involve acting and behaving in certain situations like a normal human being but completely lacking awareness of one's behavior or surroundings. A being who is always in such states would be a zombie.

This should suffice for evidence of the conceivability of zombies. It is always possible to submerge one's conceptual abilities by becoming enmeshed in a theory. If one believes that all properties are directly reducible to underlying physical characteristics, it becomes difficult to conceive of anything that is not so reducible. In this way, entities lacking substance in the Aristotelian sense were inconceivable prior to 18th-century empiricism. If someone finds it impossible in theory to separate physical structure from any higher-order property whatsoever, then they might react to the notion of a zombie as "inconceivable" in the sense of "beyond the capabilities of imagination". But imagination tied down by theory is not the relevant power for assessing the viability of zombie conceptions.

The more important aspect of Brown's position does not rely on imaginative prowess. His point is that we ought to grant the physicalist at least the possibility that consciousness is nothing more than a high-level effect of the biophysics of the brain. If we do that, then we grant the possibility that "consciousness" is a natural kind term for some complex configuration of physical parts and processes. On a Kripkean theory of reference, a natural kind term refers to a natural kind by means of some property that constitutes its identity. "Water" refers to all and only substances that are actually H2O. Once we know that that is the case, we realize that it is necessarily the case, and that the statement "it's water, alright, but it's not H2O" contains a conceptual confusion. "Consciousness" may similarly refer to whatever the underlying physical basis of consciousness turns out to be. We may not know that identity now, but when we do we will realize that zombies - physical duplicates of ourselves but without consciousness - never really were conceivable in the first place. According to Brown, if we insist that zombies are conceivable, we simply beg the question against this argument.

The question I have about this argument is, who is really begging the question? The logic of Brown's argument is that dualists cannot force the issue against materialism by stating a priori that zombies are conceivable, since it may turn out a posteriori that the connection between brains and consciousness is a necessary one. By the same token, one could have argued in the 19th century that a thought experiment designed to show that light is not a substance but a wave begs the question against the a posteriori necessary truth that light is the propagation of photons. The form of the objection seems wrong, because we cannot say in advance that discovering a physical basis for consciousness will make zombies inconceivable. Consciousness could be more like the terms "evolution" or "radiation" than like "water" or "heat". The former are natural kind terms, but neither has an essence that can be expressed in an identity statement. I fail to see any reason why thought experiments should be constrained by the combined demands of a controversial theory of reference for natural kind terms and the empirical possibility that reductionist programs will be successful. To focus on the latter for a moment, after two centuries of psychophysical experiments we still have no reason to believe that consciousness can be reduced to biophysical properties. As Chalmers carefully explains, none of the popular reduction programs have brought us any closer to bridging consciousness with the physical world. Take our current, fairly sophisticated understanding of color vision; how does it even come close to explaining why red objects appear red and not green? No physicalist story even gets off the ground on this kind of question. The same holds for consciousness in general: in spite of having mapped and experimented with dozens of brain areas, having sophisticated biochemical analyses of brain activity, and even manipulating some basic motor functions with digitally simulated brain signals, we don't have so much as a program for explaining conscious experience, or even the function of consciousness, as an outcome of any of this biophysical research. I think it is quite a leap to say that dualists beg the question by ignoring the possibility that the holy grail of materialism will someday be found.

A second point Brown makes is that conceivability does not entail possibility. The zombie argument depends on the following kind of reasoning. Suppose it were the case that the mental logically supervenes on the physical. Then it would be a metaphysical fact about the universe that whenever you have mind, you have a material foundation. But logical supervenience is an identity relation, so whenever you have the appropriate physical foundation, you must also have mind. Then the concept of a physical foundation without mind ought to be a contradiction of some sort, like the concept of space without distance or consciousness without thought. But the zombie argument is designed to show that this is not the case. Let it be granted, then, that the zombie argument demonstrates the conceivability of zombies. We can conceive of life without death, too, and many other things that may not in fact be physically possible. In the end, then, the zombie argument demonstrates nothing of interest to anyone except philosophers, and the search for a materialist explanation of consciousness can proceed.

I think Brown can reasonably object that while zombies may be metaphysically possible, this kind of conclusion may not establish anything very useful in the debate on consciousness. It establishes that one can be a dualist without violating any rules of metaphysics. But that is an achievement of very limited scope. For no modern dualist wants to be a dualist about substances; we all begin from essentially the same scientific conception of the universe. We believe there is nothing added to the biological substrate of consciousness in the sense in which some god or unknown force disperses some ethereal quasi-matter which, combining with our brains, creates consciousness. On the contrary, we all agree that there is no substrate except matter, and the question is how, from matter, you get the qualitative view that is awkwardly expressed by the phrase "what it is like to be" a human, raptor, etc.

But the logical possibility may, on the other hand, be sufficient for what the modern dualist really wants to establish. The point is to argue against the program in which, by assembling enough information about the mechanics of brain processes, and relating that through tomography and other techniques to certain mental phenomena, we will eventually be able to reduce consciousness to brain processes. Someone who believes that there is no matter or force except the ones described by modern physics does not have to purchase that program. They can hold that it is the wrong level of explanation for mental processes. They can believe that mental predicates collect the phenomena that physically supervene on biological entities at too high a level to ever be reduced. They can hold that enormous differences in the underlying structures can accommodate the same mental phenomena, described by the same psychological terms and following the same psychological laws. On this view, the correct kinds of programs for understanding consciousness could be those of William James, Husserl, and Wittgenstein, and not those of Smart, Churchland and Dennett.

I turn finally to the "zoombie" and "shombie" examples Brown offers. As he describes them, a "zoombie" is "a creature which is identical to me in every non-physical respect but which lacks any (non-physical) conscious experience". The idea seems to be that just as my zombie twin is identical to me in every physical respect but lacks qualitative consciousness, my "zoombie" twin is identical to me in every non-physical respect but lacks qualitative consciousness. If the former suggests that consciousness is not a physical property, the latter suggests that it is not a non-physical property.

A "shombie" is "a creature that is microphysically identical to me, has conscious experience, and is completely physical". If shombies are conceivable, then dualists are at best guilty of rejecting the principle of inference to the simplest explanation that accounts for all the known facts. For why should we go about imagining exotic explanations for consciousness when it is perfectly conceivable that physics can explain it all?

According to Brown, these two thought experiments constitute something like a parity of reasoning argument against the zombie argument, and therefore against this particular kind of objection to physicalism. The zombie argument says that it is conceptually possible to dissociate the human body and behavior from conscious experience, and that therefore it is not incumbent on those who hold a naturalistic view of the universe to believe that consciousness is identical to some set of physical processes in the brain. The zoombie argument says that it is conceptually possible to dissociate all non-physical human qualities from conscious experience, and the shombie argument says that it is possible to associate all conscious experience with physical systems like the one in which our minds are embodied. Both thought experiments attempt to show that the zombie argument does not produce any conclusion against physicalism that cannot be produced against dualism by parity of reasoning. So either the zombie argument fails against physicalism, or the zoombie and shombie arguments are equally conclusive against dualism.

I agree that the zombie argument is not a conclusive argument against physicalism; but what it purports to show, at least, is that we are not forced to choose between a materialist theory of consciousness and a spooky view of the universe. If we can conceptually dissociate consciousness from the particular forms in which it is embodied, we can imagine a universe in which it is realized in other ways; and if we can do that, we can give up the idea that there must be a reductive, biophysical explanation of consciousness. I fail to see what parallel objective is achieved by positing "zoombies", since no one is claiming that there is a necessary link between our "non-physical" qualities and consciousness. Brown gives no indication of what he means by such qualities, but it cannot be things like mental or emotional states, because to assume those are non-physical would surely beg the question about consciousness. Perhaps we are talking about relational properties, value-bearing predicates, multiplicity and the like. But we can agree that there is no conceptual link between those properties and consciousness without inventing any new creatures. Since the basis for the sort of property dualism that people like Chalmers propose is not parallel to the metaphysical claims of the materialists, I don't see that this argument has a target.

"Shombies" allegedly show that we can imagine a creature that is "completely physical" having conscious experience. Brown again avoids unpacking the notion of "completely physical", but one thing we cannot say here is that no predicates other than physical ones apply to such creatures, since there is no such thing as an entity to which relational predicates, for instance, do not apply. It appears, then, that the idea of a "shombie" must be roughly that of a machine that has conscious experience. This sort of thought experiment has been tried many times, and I'm not sure what is added by calling it a "shombie". But it does bring out the foolishness of depending on either zombies or robots to prove anything about consciousness. One side says "I can imagine a conscious machine, so consciousness must be reducible to physics"; the other side says "I can imagine a non-conscious twin, so consciousness must not be reducible to physics". Personally I can imagine a talking cloud; am I entitled to the conclusion that we are in cloud-cuckoo land?

Thought experiments, as Wittgenstein pointed out, are not analogous to real experiments, only with thought-materials. They are devices to make us think about what we would say in a very unusual situation; and this can give us insights into how our concepts are organized and how our language works. If we conceive of the mind-body problem along these lines, thought experiments might help us solve it. The zombie idea is therefore somewhat effective in refuting the idea of a conceptual link between matter and mental phenomena; not a small accomplishment in light of the very strong pull that our basic scientific convictions have on our thinking as a whole. But thought experiments cannot answer any naturalistic questions, such as whether the notion of conscious experience will eventually fall out of a detailed description of the operation of brain cells. That is a matter for scientific research, and the only reasonable answer we can give right now is that such research is far from doing so at this stage of the game. The materialists want to press on because they are convinced there is no other way. The zombie argument suggests that they are wrong about that, but it does not prove that success is conceptually impossible. Brown's thought experiments are helpful in suggesting this corrective to anyone who uses a zombie to scare the materialists away from their research projects.

Anton Alterman

LIPS Conference, St. John's University, Queens, New York, October 18, 2008



Saturday, December 15, 2007

Churchland Again: How to Duck Some Objections

Other minds have been debating my Churchland post over at DuckRabbit, attributing to a certain H.A. Monk (a name I have assiduously but unsuccessfully tried to excise from this blog, since it is internally related to my identity on my other blog, The Parrot's Lamppost) various assertions that concede a bit too much to both materialist and Cartesian views on the mind-body problem. Though the discussion seems to have ended up in a debate on ducks and rabbits (which I thought would have been settled long ago on that site; in any case, see my "Aspects, Objects and Representations" - in Carol C. Gould, ed., Constructivism and Practice: Toward a Historical Epistemology, Rowman and Littlefield, 2003 - for yet another contribution to the debate) Duck's original post offers a number of points worth considering. (Have a look also at N.N.'s contribution at Methods of Projection. N.N. picked the right moniker, too, maybe because there are also two n's in "Anton".) Here is a version of what I take to be Duck's central criticism of what I said about Churchland:
It's true that the materialist answer "leaves something out" conceptually; but the reply cannot be that we can bring this out by separating the third-personal and first-personal aspects of coffee-smelling, and then (by "turn[ing] off a switch in his brain") give him only the former and see if he notices anything missing. That the two are separable in this way just is the Cartesian assumption common to both parties. (Why, for example, should we expect that if he simply "recognize[s] the coffee smell intellectually" his EEG wouldn't be completely different from, well, actually smelling it?) I think we should instead resist the idea that registering the "coffee smell" is one thing (say, happening over here in the brain) and "having [a] phenomenological version of the sensation" is a distinct thing, one that might happen somewhere else, such that I could "turn off the switch" that allows the latter, without thereby affecting the former. That sounds like the "Cartesian Theater" model I would have thought we were trying to get away from.
While I appreciate the spirit of this comment, I must say that I think it does not merely concede something to Churchland, it is more or less exactly what Churchland is saying, though you might want to add "seen through an inverting lens". Churchland indeed wants to deny that "the two are separable in this way"; in fact he takes an imaginary interlocutor sharply to task for asking him to provide a "substantive explanation of the 'correlations' [between "a given qualia" and "a given activation vector"]" because this "is just to beg the question against the strict identities proposed. And to find any dark significance in the 'absence' of such an explanation is to have missed the point of our explicitly reductive undertaking" (Philosophical Psychology 18, Oct. 2005, p. 557). In other words: if what we have here is really an identity relation - two modes of presentation of things that are exactly, numerically the same - how dare you insist that I should explain how they are related. They are related by being the same thing, Q.E.D.!

My post was largely directed at fishy moves like this. The problem is that we have two things that we can - and lacking any evidence to the contrary, must - identify (pick out, refer to) by two completely different procedures; yet Churchland wants to assert that they are identical. What notion of identity is at work here is hard to say.
Since Churchland rejects the notion of metaphysical necessity, it cannot be "the same in all possible worlds". But it must be more than "one only happens when the other happens" since that is a mere correlation. Even "one happens if and only if the other happens" could mean nothing more than that some natural law binds the occurrence of the two things together, which does not give us numerical identity. He wants to say "blue qualia are identical to such-and-such coding vectors", and we have to take this as meaning more than that there is evidence for their regular coinstantiation. But to make it theoretically sound, or even plausible, in light of the fact that we recognize the two ideas in totally different ways, he must offer two things, at least: (1) an explanation of why these apparently distinct facts (qualia/coding vectors) are actually one and the same phenomenon (what makes the one thing manifest itself in such dissimilar ways); and (2) experimental evidence of an empirical correlation between them. Yet he also tells us that we are "begging the question" if we ask for an explanation! And as for the empirical correlation, it is not just that no one has sat down and examined a subject's cone cell "vectors" and asked them, "Now what color do you see?"; the fact is that the whole idea of "coding vectors" is a mathematical abstraction from a biological process that almost certainly only approximates this mathematical ideal, even before we get to the question of how regularly the outputs of the process end up as the particular color qualia that are supposed to have been encoded.

I am not saying there is no evidence at all for the analysis Churchland offers (based on the so-called "Hurvich-Jameson net" at the retinal level and Munsell's reconstruction of possible color experiences at the phenomenological level), but that there is not even evidence of a strict correlation. Some of the things that Churchland discusses - for example, the fact that this analysis of color vision is consistent with the stabilization of color experience under different ambient lighting conditions (p.539) - strongly suggest that something about the analysis is right, but do not constitute direct empirical evidence for it. What we are really being offered is a notion of identity that has as its basis neither metaphysics, nor scientific explanation, nor sufficient quantitative evidence to establish a strict correlation. We can be excused for saying "no thanks" to this libation.

And if this unanalyzed notion of the identity of phenomenological and biological facts is also being proffered in the name of some other philosophical position - say, Wittgenstein's - we should be no less skeptical. Merely proclaiming the lack of distinction between phenomenology and physiology, inner and outer, mind and world, something and nothing, etc. does not establish anything as a viable philosophical position on consciousness. Even adding the observation that one gets rid of philosophical problems this way does not establish it as a viable position. One gets rid of problems also by saying that god established an original harmony of thought and matter. If you can just swallow this apple whole, you'll find that the core goes down very easily.

Whoops, what happened to my erstwhile Wittgenstein sympathies? Well, maybe the apple I don't want to swallow is really this interpretation of Wittgenstein. Duck and I agree that being sympathetic to Wittgenstein does not require dismissing all scientific investigation of the brain (or the world in general) as irrelevant. But I don't think we agree on why. Duck quotes the following passage from the PI:
'"Just now I looked at the shape rather than at the colour." Do not let such phrases confuse you. [So far so good; but now:] Above all, don't wonder "What can be going on in the eyes or brain?"' (PI p.211)
What is Duck's view of this recommendation? He is not quite sure, but finally decides that philosophers' conceptual investigations will keep scientists honest, so they avoid causing problems for us philosophers:
In a way this is right... Don't wonder that... you thought that was going to provide the answer to our conceptual problem. But surely there is something going on in the brain! Would you tell the neuroscientist to stop investigating vision? Or even think of him/her as simply dotting the i's and crossing the t's on a story already written by philosophy? That gets things backwards. Philosophy doesn't provide answers by itself, to conceptual problems or scientific ones. It untangles you when you run into them; but when you're done, you still have neuroscience to do. Neuroscience isn't going to answer free-standing philosophical problems; but that doesn't mean we should react to the attempt by holding those problems up out of reach. Instead, we should get the scientist to tell the story properly, so that the problems don't come up in the first place.
For my part I don't think this is the point of Wittgenstein's various proclamations about the independence of philosophy from science. Wittgenstein was concerned that physicalistic grammar intrudes into our conceptual or phenomenological investigations, making it impossible to untangle and lay out perspicuously the grammar of phenomena. This is the root of what we call "philosophical problems". It is not the scientist who we have to get to "tell the story properly", it is the philosopher. The scientist does not have a fundamental problem with importing the grammar of phenomenology, thereby tying her physical investigations into knots. It is the other way around: the magnetic pull of physical concepts constantly threatens to affect conceptual investigation. To take a slightly oversimplified example, we say we can "grasp" a thought, but it is an imperceptible step further along the path of this metaphor that allows us to think we can capture it concretely - say, in a proposition, or a sentence of "mentalese" - in a sense that depends quite subtly on our ability to "grasp" a hammer or the rung of a ladder (picking it out as a unique object, self-identical through time, involved in a nexus of cause-effect relations, etc.). True, it takes quite a leap before you are ready to say, "The thought 'the cat is on the mat' just is this neuronal activation vector'", but that is one logical result of this sort of thinking. That we are ready to call this the solution to a philosophical problem just puts the icing on the cake; it is the dismissal of philosophy per se, in more or less the way we can dismiss morality by pointing out that we are all just physical objects made of atoms anyway, and who could care what happens to that?

When Wittgenstein says, "don't wonder, 'What can be going on in the eyes or the brain?'" he is using duck-rabbit-type phenomena to show that conceptual or psychological problems may not be tracked by any physical difference at all. In fact, there is a passage just after the one cited by Duck in which Wittgenstein lays it out as clearly as anyone could ask. He suggests a physical explanation of aspectual change via some theory of eye tracking movements, and then immediately goes on to say,
"You have now introduced a new, physiological criterion for seeing. And this can screen the old problem from view, but not solve it". And again, he says, "what happens when a physiological explanation is offered" is that "the psychological concept hangs out of reach of this explanation" (p.212).
The point is very straightforward, and it is certainly compatible with what I have been saying about Churchland. The physical level of explanation just flies past the psychological concepts without recognizing or accounting for them. But in Duck's view, I am guilty of reintroducing the bogey of dualism and the "Cartesian theater" (I'm planning a post on Dennett soon so I'll avoid this bait right now):

So what's the moral? Maybe it's this. In situations like this, it will always seem like there's a natural way to bring out "what's missing" from a reductive account of some phenomenon. We grant the conceptual possibility of separating out (the referent of) the reducing account from (that of) the (supposedly) reduced phenomenon; but then rub in the reducer's face the manifest inability of such an account to encompass what we feel is "missing." But to do this we have presented the latter as a conceptually distinct thing (so the issue is not substance dualism, which Block rejects as well) – and this is the very assumption we should be protesting. On the other hand, what we should say – the place we should end up – seems in contrast to be less pointed, and thus less satisfying, than the "explanatory gap" rhetoric we use to make the point clear to sophomores, who may very well miss the subtler point and take the well-deserved smackdown of materialism to constitute an implicit (or explicit!) acceptance of the dualistic picture.
Absolutely, a physical explanation or description of consciousness is "conceptually distinct" from a phenomenological one. I can see no other possible interpretation of the passage about the eye-movement explanation of "seeing-as" phenomena. Does this make Wittgenstein a "dualist"? Certainly not in the Cartesian sense. True, Wittgenstein not only studied architecture and engineering and cited Hertz and Boltzmann in his early work; he also read (and failed to cite) Schopenhauer and James and had a deep appreciation of "the mystical", which he further identifies with "the causal nexus"; he says in the TLP that philosophy should state only facts, and that this shows how much is left out when all the facts have been stated. But is he now going so far as to suggest that there are different worlds, of scientific and mental reality? I seriously doubt it; and neither am I. There are different levels of explanation, or in his own terminology, different language games. This is not a Cartesian dualism but a point about the structure of thought. It is the same point that much of the Blue Book is based on.

I have not said much about my view of consciousness in this blog. But we're only just getting started, I've got time. I will say this, though: the resolution of the mind-body problem cannot be as simple as, for example, the New Realist (or "neutral monist") school hoped it would be. There, various aspects of reality were said to consist of a single "stuff" (read "substance", with various proposals for what this would be circulating at the time) which took on physical or psychological "aspects" depending on our interest, point of view, or whatever. This is a nice, compact view, but it does not do justice to the issue. There is a brain without which there is nothing in the world called "thinking", and a world without which nothing in a brain can count as "thought". There is every reason to believe that every event that ever counted as a thought took place in a brain, and that something was going on in the brain without which that thought would not have happened. This all has to be accounted for, and it is not sufficient to say that there are different aspects to some general substance or process. Sure, there are different aspects to everything, but this won't get us very far with the mind-body problem. How did an "aspect" of something that is also matter end up as consciousness? The problem is only pushed back. How can an "aspect" of whatever be self-aware, control its own actions, or compose a piano sonata? These are very peculiar aspects. If we could put them under an electron microscope we would not find out what we want to know about them.

I suspect that something like the following is the case: the various phenomena we call "the mind" are asymmetrically dependent on the brain, but the relationship is so loose that there is never anything like the "identity" relationship Churchland wants, nor a mere difference in points of view between the physical and phenomenological "aspects". We recognize certain psychological phenomena and talk about them and analyze them, and there is no such thing as a specifiable set of neural events that are necessary and sufficient for the instantiation of these phenomena - perhaps not even as types, and certainly not as specific thoughts, volitions, etc. There may be some wave oscillations in the brain that correspond to conscious states, but they are not those conscious states. There are particular portions of the brain that are primarily involved in certain aspects of our intellectual activity - emotions, language, memory, etc. - but there is not a specifiable neural "vector" that is "identical" to Proust's sensation of the taste of his mother's "sweet madeleines", much less to the flood of memories it evokes. Perhaps in Churchland's utopia we can replace Swann's Way with some mathematical specifications of its underlying neural activity without any particular loss, but I am not holding my breath.

Why do I think this, or even have a right to hold it out as a reasonable objection? Just because I think psychological concepts are not the rigid, well-articulated concepts that you find in much analytic philosophy. There is a way you can talk about things that are not uniquely or cleanly definable (Wittgenstein: "You are not saying nothing when you say 'stand roughly there...'"; a quote that is roughly accurate!). Talking about them is intellectually interesting in philosophy, important in clinical psychology and ethics, satisfying in the arts. It has been recognized by some neuroscientists and philosophers (Varela and others) that unless you have some kind of scientific phenomenology to begin with, you can't hope to reduce anything to neurology. But that position presupposes that there is something like a science of folk psychological concepts, on something like the lines that Husserl, Sartre and others tried to give us. And Wittgenstein too, in a certain sense: only his phenomenology of mind is imbued with the understanding that part of the "science" we are looking for involves the recognition of the vagueness or circumstantial relativity of concepts.

So how about a vague specification of cone cell coding vectors? "There is a 95% correlation between this coding vector and observed reports of red sensations." I could live with that. But it still doesn't give us a claim to "identity", nor does it justify saying that these are different "aspects" of the same event. They are different things that generally must happen in order to recognize something as red. But I can say I dreamed of a red balloon and no one will say, "Oh, but there were no cone cell vectors, you couldn't have." And of course even my memory of a red balloon is a memory of something viscerally red, with no cone cell activity to show for it.