Brain Scam: A rampart against naive materialist views of consciousness. Tony Alterman.

Pains in the Brain: On Liberating Animals from Feeling (March 18, 2010)

<div><span style="font-family:trebuchet ms;">Before I say anything about my year-plus hiatus on publishing here, let me just throw in a practical note. Due to a blitzkrieg of spam comments on this and other blogs I operate on Blogger, I have had to change the policy so that all comments must be moderated. For all this obnoxious garbage coming my way, I had no intention of preventing even the most ridiculous comments, so long as they were intended as contributions. But after spending time every day, for weeks in a row, logging in and deleting spam comments one by one, I finally had no choice. Anyone who wishes to comment on this or any other post is strongly encouraged to do so. I still intend to publish every appropriate comment, regardless of how much I may disagree with it. The only exceptions I can think of would be extreme ad hominem attacks, racial insults, threats and the like. Please be aware that it may take me a little while to get notified of a comment and publish it. Thanks for your understanding.<br /><br />So, as I was saying, here it is well over a year since I've published anything on this blog. Not that I haven't been thinking about it. Not that I haven't wished to throw my two cents in about this or that earthshaking advance in neurophilosophy. Not that I've been any less pissed off at watching the philosophers of mind, and many others, become shameless sycophants of neurobiologists, watching philosophical thought pander to laboratory experiments, and philosophical nihilism creep in on the hind paws of physicalism. 
It's just that I've spent a lot of my energy doing other things. Did you know, for example, that there are certain fields in which it is actually the practice to <span style="font-style: italic;">pay</span> authors for the articles they publish? Yes, this unusual ritual prevails, for example, in fiction, essay, and many other kinds of writing. Imagine: you receive a letter from the <span style="font-style: italic;">Philosophical Review</span> in which you are not only told that your essay has been accepted, but that you will receive actual <span style="font-style: italic;">money</span> for the two years of hard work it took you to write the piece. Incredible, I know, but outside the benighted realm of purely theoretical academic discourse, it's the norm. I mean, thanks for the Socratic tradition of offering your words of wisdom for nothing, in contrast to those shallow, relativistic Sophists; but just the same, if my essay is worth publishing, it's worth paying for.<br /><br />Indeed, I suspect that the author of the piece that has drawn me back to this blog for a moment was not too badly compensated for filling a small part of the <span style="font-style: italic;">New York Times</span> Op-Ed page. And I not only do not have a problem with that, as you may have figured out - I actually applaud the author for obtaining (I presume) an honorarium for the service of providing a philosophical perspective to the readers of the <span style="font-style: italic;">Times</span>. For the author, Adam Shriver, identifies himself as "a doctoral student in the philosophy-neuroscience-psychology program at Washington University". The fact that being a philosopher these days means being able to attach "neuroscience" to your discipline, title, major, article, view, affiliation, or whatever you happen to have on offer, is regrettable, but let's just say, for the moment - he's a philosopher. 
And there is no question that what Shriver offers us is philosophical to the bone. Or at least, the meat.<br /><br />The title of Shriver's piece, possibly contributed by some <span style="font-style: italic;">Times</span> editor, is "Not Grass-Fed, but at Least Pain-Free". The former hyphenated adjective refers to the practice of allowing farm animals to graze freely rather than be starved, force-fed, or fed artificial or unnatural products; it is sort of Livestock 101 for organic farming advocates. I guess the latter ("pain-free") is a gesture in the direction of "cruelty-free", a popular label on products which purport to observe some ethical norms in the treatment of animals, especially in the matter of cosmetics testing. The title suggests that the general thrust of the piece will be to argue that although it may not be practical to observe every last principle of humane livestock farming, one should at least avoid practices that cause the animals pain.<br /><br />That's what I would have guessed, anyway. And if you are an extraordinarily literal person, you might say that the title is accurate. I, however, am not particularly literal; perhaps that's another reason I've turned some of my attention to writing fiction lately, aside from the purely pecuniary one. So I find the title disturbing for what it fails to convey; just as I find the piece disturbing as much for what it fails to say as for what it does. Indeed, I find it repugnant, morally and otherwise. But it will take a little work to tell you why.<br /><br />Soon-to-be Dr. Shriver's perspective may be summed up as follows:<br />(1) "We are most likely stuck with factory farms..." since they produce most of our red meat.<br />(2) But animals in factory farms suffer a lot of discomfort.<br />(3) It is bad to feel pain. (Take that phrase as literally as you possibly can for the moment.)<br />(4) "It is still possible to reduce the animals' discomfort - through neuroscience." 
Cited in this regard are studies which show that it is possible to genetically engineer animals to block certain pain pathways (including studies underway in Shriver's own department). We will expand on this shortly.<br />(5) The meat from such genetically engineered animals would be safe to eat.<br />(6) Since new research shows that the suffering from pain can be dissociated from pain-avoidance behavior, the animals could be so engineered as not to casually injure themselves.<br />(7) In light of all this, we are morally obligated to use these genetic engineering techniques (once they are commercially available): "If we cannot avoid factory farms altogether, the least we can do is eliminate the unpleasantness of pain in the animals that must live and die on them. It would be far better than doing nothing at all."<br /><br />Now, before I begin to apply my forceps to this view, allow me to note something which is, unfortunately, quite pertinent. I began, not without reason, by talking about money, and the limited opportunity to make some by writing philosophy. One creative way around this is to write an Op-Ed piece for the <span style="font-style: italic;">NY Times</span>. But how much more creative might it be if one could gang up with some neuroscientists, help make some commercial applications of their genetic engineering experiments socially acceptable, and maybe scoop up a few bucks as the new methods come to market? Well, before anyone gets bent out of shape, I'm not suggesting that Adam Shriver is in it for the money. Zhou-Feng Chen, to whose research Shriver alludes, is at his school, but not in his department. (Who knows who's on his committee, though? There are few accidents in the sickening mire of academic politics.) 
Moreover, in a <a href="http://www.newscientist.com/article/mg20327243.400-painfree-animals-could-take-suffering-out-of-farming.html?page=2">news article</a> and interview that covers much the same ground as the <span style="font-style: italic;">Times</span> piece, Shriver has let on to the magazine <span style="font-style: italic;">New Scientist</span> that he is "a long-time vegetarian" and thinks that "eliminating factory farms would be the best option". According to the article, he says "I would be happy to jettison my idea" on the (implausible) condition that "someone can prove that we really are on the verge of moving to that kind of society". Not only that, but Shriver has published <a href="http://www.ingentaconnect.com/content/routledg/cphp/2006/00000019/00000004/art00002">an article</a> in <span style="font-style: italic;">Philosophical Psychology</span> (V.19 #4 Aug 2006) in which he defends the idea that animals are sentient and "the belief that nonhuman animals experience pain in a morally relevant way is reasonable, though not certain" (from the Abstract; you are welcome to purchase the article for yourself for a cool $36.48 plus tax). Of course, if they are not sentient there might not be a very good ethical foundation for breeding them to be pain-free; but who knows which view is leading which? Anyway, it seems that Shriver is in some sense to be counted among the cow-huggers of the world, genuinely wants to do right by our bovine friends, and wouldn't sink his teeth into a veal cutlet even if the poor little things were treated like King Tut (who didn't live much longer than they do). 
Nevertheless, I feel compelled to point out that, whether it is Shriver or some colleagues for whom he is doing a completely pro bono service, there are big bucks to be made from patents, consulting fees and what-have-you if any such pain-eliminating genetic therapy were to become commercially viable. So without making any undue assumptions about philosophers, suffice it to say that eliminating the social resistance to such techniques could certainly facilitate the accumulation of fortunes, which will only be obtained if public outrage does not make it too costly to market this therapy.<br /><br />In any case, many a ship has foundered on the shoals of good intentions, and I'm sorry to say that this boat is well on its way to the bottom. <span style="font-style: italic;">New Scientist</span> refers to the "yuck factor" in explaining why we react negatively to the idea of pain-free animals, and opines in <a href="http://www.newscientist.com/article/mg20327242.300-painfree-animals-would-not-be-guiltfree.html">an editorial</a> that "logically speaking, pain-free animals make sense. But only in a world that has already devalued animal lives to the point where factory farming is acceptable. Our visceral reaction to pain-free animals is actually a displaced reaction against the system that makes them necessary." That's a catchy line; only it sort of sidesteps the issue by suggesting that we have to value the "lives" of animals to avoid the logic of pain-free meat. Similarly, the magazine quotes Marc Bekoff (UC Boulder) to the effect that "The fact that they are alive, even if not sentient, warrants against using them in ways that result in their death." 
In my view, any argument that depends on assigning such value to animal <span style="font-style: italic;">lives</span> is weak, because though they are <span style="font-style: italic;">alive</span>, they don't necessarily have "lives" in (say) a Kantian or Aristotelian sense, or value in (say) a Millian sense. Life and value as it relates to humans in the classic ethical theories may be quite different from what we call life and value in animals, because, e.g., they may not have a sense of themselves as having value in the way that we do, or exercise free will in the sense that we do, or perceive themselves as temporally continuous as we do, or see themselves as the subject of rights and responsibilities as we do, etc. And our perceiving <span style="font-style: italic;">them</span> as being similar to us in these ways may be nothing more than a projection. I think we have to live with the possibility that this is the case, and figure out why, <span style="font-style: italic;">anyway</span>, it is a terrible idea to treat animals in certain ways that would be extremely objectionable if they were applied to humans.<br /><br />For even if animals do not have "lives" and value in our sense, <span style="font-style: italic;">we</span> do; <span style="font-style: italic;">we</span> have lives which are diminished in some way by treating animals as a mere means to an end. </span><span style="font-family:trebuchet ms;">That is, what we have to ask ourselves is why we value <span style="font-style: italic;">our own</span> lives so little as to reason that by altering a basic feature of another creature's genetic makeup we can somehow make ourselves morally better. When you think about it, the idea is really quite incoherent. 
It suggests the following principle: a practice that is morally repugnant with respect to some living creature can be made less repugnant by changing the creature's mental state such that <span style="font-style: italic;">it</span> does not find the practice unpleasant. But wasn't the problem in the first place that <span style="font-style: italic;">we</span> (some of us at least) found the practice repugnant? Yes, of course - but wasn't the reason we found it repugnant just that we believed the creature found it unpleasant? No, emphatically not; we may not have had a thought about the creature's suffering, but found it repugnant nonetheless.<br /><br />Suppose I see a fur trapper clubbing some baby seals to death. I run over and demand that he stop. "But don't you realize," he says, "that I am only clubbing those seals who have the genetic mutation depriving them of pain sensation." "Oh, well then", I say with a broad smile, "that's much better. I see you truly care about our flippered friends after all. Go right ahead then, just don't make a mistake and club one that feels pain!" Of course almost no one who reacts badly to the initial situation is going to be moved to change their feeling about it after being informed that a clubbing on the head is just like playing with a beach ball for some baby seals. This shows just how absurd it is to think that what is at stake here is merely the pain of animals. It is true that undue suffering should be alleviated in animals; but far from true that artificially removing the sensation of pain from animals we are intentionally harming puts us on a higher moral platform. 
It is just as likely that it makes us worse; after my conversation with the trapper I would feel doubly sorry for the seals that could not even react appropriately to being critically injured.<br /><br />The principle, "don't change our practice towards a subject; change the subject so it doesn't mind our practice" can lead down some even more bizarre paths. Suppose I could make it fine for a duck, indeed even beneficial, to have its bill cut off, by introducing a genetic mutation that makes ducks hop around like rabbits and nibble on lettuce instead of fishing. I am thinking the world would not exactly applaud this innovation, but rather deplore it twice, as a double insult to the animal. The truth is, we have turned a corner past which we can play fast and loose with ontology through genetic engineering, and our defective moral consciences will likely permit us to take what we can from this in order to diminish our sense of impropriety at otherwise heinous acts. Let us then clone thousands of humpbacks and blue whales with their pain sensors nicely removed so we may once again train our sonarscopes and exploding harpoons on them! Whaling is so fine a tradition, after all, and now we can practice it without our sea mammal friends suffering any pain! Rhino horns? Rip 'em out! - those rhinos were just pain-free clones anyway. What's a steel jaw trap to a bear that can't feel pain? Just another day in the forest, my friends!<br /><br />Yes, bring on the neuroscientists, with their solutions to the ethical problems of mankind. Indeed, on a clear day you can see beyond the horizon to that earthly paradise where we can do just about anything we please without a twinge of moral uneasiness. 
Consider the following suggestion: suppose we had a nerve gas (some bodywide form of lidocaine, perhaps) that we could spray over opposing soldiers in battle, leaving them unchanged except to take away their ability to feel pain - wouldn't it be morally incumbent on us to use it? I mean, let's just agree that we can't get rid of war, okay? But we can at least kill the pain. People are known to suffer a lot from bullet wounds, flying shrapnel and incendiary devices, after all. Spray 'em with the gas, then spray them with bullets and feel much better about it.<br /><br />Well, I can see the objection: if we did that, then they'd just keep coming at us and we'd never win. (Sounds like <span style="font-style: italic;">Night of the Living Dead</span>? So, zombies are a respectable topic of philosophical discussion these days; and perhaps now they are a respectable goal of philosophical neuroscience.) A bullet through the kneecap is generally a very effective deterrent to a soldier's further advance, but if they don't even feel it they might just keep hopping along on their one good leg until they perhaps kill us. Okay, then, let's change the strategy: why not spray <span style="font-style: italic;">our own</span> soldiers with pain-killing gas? Then we would not only benefit directly but win the war! Indeed, I'd be shocked if the Pentagon has not experimented with this sort of thing already. (<em>Shutter Island</em>?) If there is any downside to it, it would be that the pain-free soldier might not care that he is walking into machine gun fire, and would therefore take unnecessary risks. But the new techniques get around that problem by not suppressing the harm-avoidance genes, just the pain-feeling genes. 
The more capabilities we acquire with this technology, the more we can obtain designer creatures that have just those qualities we want them to have and lack just the ones we want them to lack.<br /><br />If you are okay with this experiment so far, you are to my mind too demented to carry on a meaningful conversation with. For the first question one should have is: Doesn't this somehow give moral legitimacy to war, such that even wars of aggression could be fought without the cost in human suffering that is one of the great historical motivations to stop wars from happening? Aren't we, in the name of eliminating pain, actually making it easier to continue practices that are normally thought to be wrong partly because they cause a lot of it? Am I being unfair, suggesting that Shriver's well-intentioned defense of pain-free cows leads down a slippery slope, from cattle to the battlefield, and even sanctions war as a means to a political end? I don't think so. But in case you are still not convinced...<br /><br />Suppose I am a serial killer who likes to mutilate my victims. Perhaps I could get off with a lighter sentence if I tell the judge: in order to minimize the suffering of my victims I administered morphine before mutilating them. Fine, I guess that has a kind of impeccable logic to it. If I ever happen to fall into the hands of such a demon I hope it's one with a large supply of morphine. But suppose, now, I find a doctor who tells me that he distributes morphine on request to anyone who identifies himself as a possible serial killer. "If you can't eliminate serial killers," reasons the doctor, "at least you can help the victims by making morphine available to them." That's not quite so impeccable. "Doctors" have been employed in all sorts of heinous circumstances to medicate torture victims and other unfortunates. 
Do we thank them for their humane services - or deplore their participation in evil schemes, regardless of what their role is? If a neuroscientist delivers gene therapy technology to a poultry farm that shackles geese for the production of <span style="font-style: italic;">pate de foie gras</span>, is this person a humanist - or an accomplice to a crime?<br /><br />Of course, there are times when you want to eliminate pain artificially. Surgery is one. What benefit did people ever receive from feeling the pain of the surgical knife? I'm sure most of us reel in horror at stories of 19th century surgeries for which a shot of booze was the only anaesthetic. Besides, many modern types of operations are so long and invasive that no one could even bear them, and we would prefer to die instead. Eliminating surgical pain is an absolute good, because doing so makes it possible, or easier, to advance our personal agenda of being cured of some malady. But eliminating the pain of being shot with a bullet in a war does not typically advance the personal agenda of the one whose pain is eliminated. This <span style="font-style: italic;">could</span> be the case if, say, highly motivated revolutionaries could get the gas; or soldiers fighting for a cause they are willing to die for, of their own free will. But typically, a soldier is a recruit, a draftee, a mercenary, a person seeking a way out of poverty - someone who either had no choice, or simply hoped they could get some benefit from military service without suffering greatly. Sending these people into battle under the influence of morphine, or whatever, advances the agenda of someone else who wants to use their bodies to achieve a political goal. Something similar clearly applies to animals used for meat. It is not as if the practice, as a whole, of slaughtering animals is for the animals' benefit. It is for the benefit of our appetites and the pockets of agribusiness. 
To make it painless for the animal to undergo this slaughter is to make ourselves immune to the thought that something may be wrong with our practices. This is like a meta-wrong that does not merely outweigh the utilitarian benefit it promotes but surrounds it like a dark cloud. Lobotomies had their benefits too. Come to think of it, the sponsors of pain-free livestock might just be consistent enough to think it's a perfectly reasonable option today.<br /><br />Something is rotten in Denmark, and it may be a piece of painlessly produced meat. I suggest the following principle: elimination of pain is good relative to a situation in which a reasonable and rationally chosen goal of the subject is advanced by it; otherwise, it is either morally neutral, or a further harm in addition to whatever caused the pain. I think this saves most of our intuitions about pain. Pain elimination for medical reasons is generally good; the one in pain wants to recover and has good reason to want it. Pain relief for the suicide bomber is probably an additional evil, as the goal is not reasonable and pain relief may encourage the subject to pursue it. Pain relief in most situations where such relief advances no goal of the subject but makes the subject more compliant to undergo potentially harmful experiences is an evil, as it makes pain zombies out of formerly sentient subjects, and only advances goals to which the subject ought to rationally object. 
Pain relief is perhaps morally neutral when it neither advances any goals of the subject nor deprives the subject of any rationally selected good.<br /><br />This is all quite apart from: (a) side effects that come with pain relief, either through gene therapy or medication, which may increase the potential harm of such treatments; (b) the fact that deprivation of pain sensations can lead to further harm due to the subject's inability to recognize internal or external danger signs and avoid them (this problem is not completely eliminated by the claim that animals could be engineered to want to avoid harm without having pain, since there is pain they can't avoid but will not complain about even though it might signal a serious problem); and (c) the use of unethical, harmful practices in experimentation on subjects in the pursuit of pain relief therapies. Each of these could require another essay, but I don't want to write a book about Shriver's proposal. I do, however, want to briefly address something I alluded to earlier, closely related to the second of these points, (b).<br /><br />To support his view, Shriver selects a couple of specific forms of discomfort that animals are forced to undergo in order to provide gustatory delights for the human race. One is the confinement of calves to produce veal; another is "severe gastric distress" caused by "unnatural high-grain diets". Keep in mind that the ability of the genetically engineered animals to "recognize and avoid, when possible, situations where they might be bruised or otherwise injured" is supposed to be a key advantage of the new method. It is well known that people with the rare medical condition that deprives them of pain sensations (CIPA, Congenital Insensitivity to Pain with Anhidrosis) often end up losing limbs and sustaining other very serious injuries. 
No one would think such a condition would be beneficial to animals without the added claim that they can be engineered to avoid harm even without the motivation of having to avoid pain. But look at the conditions Shriver himself uses as examples. Unnatural confinement is not something the animal can avoid even if it <em>does</em> wish to avoid harm. Veal calves confined so tightly that they can't sit or lie down; geese shackled to prevent almost any movement whatsoever; pigs attached by a snout ring to a wall or fence; these kinds of barbaric practices produce distress that <em>cannot</em> be avoided by leaving the animal with the ability to recognize potential harm. Nor are they made any kinder by removing pain sensations. Diets of grain, injections of hormones and antibiotics, all sorts of practices that create internal conditions the animal cannot possibly avoid even with all the wonders of modern neuroscience: how is saving the harm-avoidance instinct supposed to help in the least with these? What it comes down to is really this: by having the animal take care of avoiding bruises, self-inflicted wounds, and the like, this technique saves the livestock farmer and the slaughterhouse from having to deal with thousands of needlessly injured animals who would thereby end up in the debit column on their balance sheets. The underlying point is not to give the animal a more normal life than a pain zombie would be expected to have, but to cut losses for the owner. That pretty much guts the moral argument for this technique even without all the bizarre consequences it entails.<br /><br />Shriver is a vegetarian; I'm not. He has a kind of moral lead on that one. I was a fairly strict vegan for about two decades, but now I eat poultry often enough, organic or free range and antibiotic free when I can get it; I eat fish, and I feed my kids red meat when they ask for it. I do not know of a convincing moral argument against killing animals for food. 
But I find the practices of the meat industry as a whole disturbing, morally repugnant and environmentally destructive. Of particular concern are the specialty foods that <span style="font-style: italic;">require</span> the mistreatment of animals through extraordinarily strict confinement. But slaughterhouses and livestock farms are not alone in mistreating other species. Thoroughbred race horses and circus elephants don't fare much better. Numerous acts of terror are committed against animals by poachers and people seeking mythical cures for all sorts of ailments - Asian tigers and African rhinos being well-known examples. Add to that the frequent abuse of domestic animals, the use of now illegal painful traps in the wild, and perhaps we should just start a breeding program to replace all existing animal species with pain-free substitutes. Or we can continue building the pressure for the abusive industries and individuals to change their practices. That's what we do with abusive practices towards humans, right? Let's not start developing pain-free women so they can be burned to death over a dowry or have their sexual parts surgically removed without causing great ethical dilemmas; let's start treating animals in a more humane way and put the pressure on the veal and foie gras producers and the other abusive practices.<br /></span></div>

Return of the Zombie (October 28, 2008)

<span style="font-family:trebuchet ms;">Please see <a href="http://brainscam.blogspot.com/2008/10/zombie-schmombie-richard-browns-efforts.html">my previous post</a> for a little background on the urgent philosophical question of whether zombies can beat zoombies and shombies in a ping pong match. 
At least we know that they can all beat Sarah Palin in a debate.<br /><br />I readily acknowledge both my tardiness and my wordiness (the two not being unrelated) in replying to Richard Brown. The world, or at least my path through it, is unfortunately so configured that blogging often has to take a back seat to things that I consider mundane and relatively dull. Oh well. The present issue came to life when Richard, on <a href="http://onemorebrown.wordpress.com/">his blog</a>, offered some ideas about creatures (zoombies) that are complete non-physical duplicates of normal law-abiding citizens like you and me, but fail to be conscious; and those that are physical duplicates, have no non-physical properties, and yet are conscious (shombies). Both of these beings are conceivable, according to Richard, or at least as conceivable as zombies, which are physical duplicates of ourselves that lack consciousness. The conceivability of zombies is supposed to support the argument that physicalism is wrong, because if we can conceive of a creature exactly like us but not conscious, it follows from this that it is not logically necessary that physical systems like ours must be conscious; and from this it follows that we cannot <span style="FONT-STYLE: italic">reduce</span> consciousness to some equivalent physical description. So if zombies are conceivable, materialism is wrong. But according to Richard, the conceivability of his two new creatures equally suggests that dualism is wrong. And according to me, the proliferation of these things suggests that we had all better run.<br /><br />Richard eventually put his thoughts into a form appropriate to the hallowed environment of a philosophy conference (that of the Long Island Philosophical Society), and I responded in similarly civilized fashion. 
And now that we've got that over with we can proceed to thrash about and flame each other on the Internet. (Just kidding - I think.) I will take up as many of Richard's responses to my reply as I can, while conceding in advance that he will probably outlast me (if not outwit me) in any blog debate. And given that Brown is the name he chose for his online identity I shall now revert to that appellation, while wondering aloud how a name like "one more Brown" gets to be a rigid designator.<br /><br />Brown's response to my critique begins with my defense of the idea that zombies are indeed conceivable. I suggested that I can imagine a being that is physically identical to me but unaware of the blue tint of the light in the room, and I can expand on that concept to conceive of a zombie (who is unaware of not only the bluish tint but everything else). Brown's response is:<br /><br /><blockquote>"What we need is to imagine me being in the very same brain state and not being conscious of the blueish tint. This is exactly what is in question –that is, whether this is something that can be imagined– and so this is at best question begging."<br /></blockquote><div style="TEXT-ALIGN: left">David Chalmers, you will recall, was said to be begging questions by ruling out the possibility that "mind" is just a popular term for a physical system; if so, according to Brown, the nonexistence of zombies is a necessary truth and zombies are therefore unimaginable. Now I am allegedly begging questions by assuming that I can imagine being in the same brain state whether aware or unaware of a bluish tint. But I think this is a misuse of the term "question-begging". Brown seems to think the (hidden) form of the argument is,<br /></div><blockquote style="TEXT-ALIGN: left"><span style="font-family:trebuchet ms;">1. Let's assume physicalism is wrong.</span><br />2. 
<span style="font-family:trebuchet ms;">If physicalism is wrong, then I can imagine that we have physical duplicates that are not mental duplicates.</span> <span style="FONT-FAMILY: trebuchet ms;font-size:100%;" ><br />3. If I can imagine that we have physical duplicates that are not mental duplicates then the mental does not logically supervene on the physical.</span><br />4. <span style="font-family:trebuchet ms;">Therefore physicalism is wrong.</span><br /></blockquote><div style="TEXT-ALIGN: left">But the second premise does not depend on the assumption that physicalism is wrong. It is an appeal to intuition, pure and simple. According to Brown, Kripkean semantics prohibit the assumption that this intuition is possible until we have first checked to see if physicalism might be correct. I am actually tempted to hand him this point because it would be the proverbial Pyrrhic victory. For if I give him that, he equally has to give me the point that he cannot assume that zombies are not conceivable until we have already established what we are currently attempting to discuss. And with this stalemate at hand, we can proceed to lose our ticket to any intelligent discussion of issues which might eventually be decided by some empirical discovery. So it will be question-begging, for example, to say that the following worlds are conceivable: that in which there is no being who gave Moses the ten commandments; the one where the large manlike creatures called 'bigfoot' are nothing but a hoax; and the imaginary space in which Loch Ness is devoid of living creatures larger than a lake trout. These are question-begging in roughly the same sense that it is "question-begging" to say that a world in which there is no physicalist reduction of consciousness is conceivable, and thus that I can conceive of a world in which there is a being physically identical to myself but lacking consciousness. 
In all these cases, it may, as far as science is concerned, turn out that these names or definite descriptions ("god", "bigfoot", "Loch Ness monster" and "the physical facts that constitute consciousness") identify actual entities, and if we allow that, we cannot say we conceive of the worlds in question.<br /><br />If this isn't a spurious argument I'll eat my copy of <span style="FONT-STYLE: italic">Naming and Necessity</span>. Does Kripke say that we can't <span style="FONT-STYLE: italic">conceive</span> of the mind as non-physical? Quite the opposite. Does Putnam say I can't <span style="FONT-STYLE: italic">conceive</span> of water as XYZ? Quite the opposite. Here's Putnam: "My <span style="FONT-STYLE: italic">concept</span> of an elm tree is exactly the same as my concept of a beech tree... (This shows that the identification of meaning 'in the sense of intension' with <span style="FONT-STYLE: italic">concept</span> cannot be correct...)" (<span style="FONT-STYLE: italic">Mind, Language and Reality, Phil. Papers V.2</span>, p. 226) What's the point? I can conceive of things that are necessarily false, e.g., "Beeches are just like elms". Not "I believe [falsely] that I can conceive of a world in which beeches are just like elms" but I <span style="FONT-STYLE: italic">conceive</span> of such a world, plain and simple. (Or I <em>imagine</em> it if you like, but conceiving does not <em>have</em> to include mental imagery.)<br /><br />Brown should get off this begging-the-question kick. Nothing about what I can or can't conceive today depends on what science discovers tomorrow. If I can't conceive of zombies once I have studied the physical reduction of consciousness (which has been added to Psych 101 texts in the year 2525) then fine, I can't do it. But to bring in a posteriori necessity to show that I can't <span style="FONT-STYLE: italic">conceive</span> today what might turn out to be false tomorrow is really cuckoo, a curious technical trick at best. 
If that were really the implication of the theory, it would be a reductio of Kripkean semantics. But that is not what the theory implies.<br /><br />There is another problem with Brown's methodology, which is captured in his statement that <span style="font-size:100%;">"</span><span style="font-size:100%;"><span style="font-family:trebuchet ms;">This is exactly what is in question –that is, whether this is something that can be imagined." Look, an artist covers a canvas in black paint and says, "This depicts a zombie". You are confused, no doubt, but what exactly can you say? "How? Why can't I see the zombie's shape? Is there anything else in the picture? Were you on drugs when you painted it?" These might be legitimate questions; what is not legitimate is to say, "No it isn't; I'm looking right at it and there is no zombie there." Does the artist even need to reply to this? She can laugh, because the statement is nonsense in this context; or she can say, "When you learn to see the world the way an artist sees it, you will perhaps see a zombie there; and if you don't, I can't help you." (In Goodman's terms, not every picture that represents a zombie is a zombie-picture.) The same holds true for mental pictures, conceptions, imaginings, etc. I know what a zombie is, I am not a hallucinating schizophrenic, I am an honest guy and I believe I am conceiving of a zombie. So I am conceiving of a zombie. Once the basic psychosocial background is given, my claim goes through automatically. It's not corrigible. It doesn't depend on facts or on Kripke. And it <span style="FONT-STYLE: italic">especially</span> does not depend on some inspection (<span style="FONT-STYLE: italic">per impossibile</span>) of my conception to compare it in fine detail with the putative physical correlate that will be discovered some time hence. The details of a conception are <span style="FONT-STYLE: italic">stipulated</span>, not set in place like clockwork. 
Otherwise it has to be said that I cannot really conceive of an automobile, since I haven't the foggiest idea what goes on inside a transmission (though I doubt it is little men turning cranks).<br /><br />Last point, which came up in a discussion session at the conference: the point of the zombie argument is to deny the claim of <span style="FONT-STYLE: italic">logical supervenience</span>, the idea that the mental <span style="FONT-STYLE: italic">logically</span> supervenes on the physical. "Logical" here is the same as <span style="FONT-STYLE: italic">conceptual</span>; the point is to show that the mental is not conceptually identical to some physical substratum (see Chalmers, p. 35). Brown, as far as I can tell, seems to think "logical supervenience" is just materialism, but I doubt that. The target is not the brand of materialism that says that once the physical facts are known, the facts about consciousness can be scientifically deduced; the target is the brand that says that once the physical facts are known, the facts about consciousness are <span style="FONT-STYLE: italic">logically entailed</span>; they simply <span style="FONT-STYLE: italic">fall out</span> of a correct description of the brain. As Kripke says, a consistent materialist would have to hold that a complete physical description of the world is a complete description <span style="FONT-STYLE: italic">tout court</span>; once we have it, it should just be obvious where consciousness lies in it, though it might not be called by that name. That is a logical supervenience position, and it is quite different from physicalism in general. Chalmers and I are both physicalists of a sort; we think that at some level, in the world as it is, consciousness is dependent on brain chemistry and structure. The zombie argument is not directed against this belief, and would not be effective against it. 
It is meant to show that we need not believe that consciousness is going to just "be there" when we announce the result of the ultimate brain scan. Scan all you want; at the end of the day you will still have to have some other kind of explanation for consciousness. The situation is (not coincidentally) somewhat like Kripke's view of rule-following: state every empirical fact you can find about the system, you will not find the rule there. Nor consciousness, if you proceed in that manner. So there is no entailment of consciousness by physical facts, and that is what logical supervenience is, and what the zombie argument is meant to cast doubt on.<br /><br />The next point in Brown's response refers to my comment that in cases of aspect-change no physical difference takes place, although a mental difference does:<br /></span></span><br /></div><blockquote style="TEXT-ALIGN: left">Alterman goes one to cite, as evidence, his convixtion (sic) that he has no reason tot hink that there is a microphysical change in his brain when he is looking at an ambiguous stimulus (like the duck-rabbit, or the Necker cube), but this is rather naive. There is evidence in both Humans and primates that there are changes in brain activation that correlate to the change in perception in these kinds of cases.</blockquote><div style="TEXT-ALIGN: left">Let's keep in mind what we are talking about here. I used the duck-rabbit example to support the point that we can <span style="FONT-STYLE: italic">conceive</span> of a zombie by enlarging on the intuitive idea that changes in mental state can occur without a change in the physical description of the system. When I observe the duck and then notice the rabbit it seems that no change takes place in the physical description of the system. Brown is arguing that this is an illusion, for brain scans show some "brain activation that correlate to the change in perception". I think there is less here than meets the eye. 
It stands to reason that some stimulation occurs when anything like perception, recognition, concentration, etc. takes place. Nobody disputes that, so it can't be the issue. The issue is whether it is <span style="FONT-STYLE: italic">conceivable</span> that a being physically identical to myself could exist without conscious activity. And since it is certainly <span style="FONT-STYLE: italic">conceivable</span> that no change takes place when I switch from one to the other, it is by enlargement conceivable that some being never undergoes such changes.<br /><br />But I am not inclined to leave it at that. For the "change" that Brown points to is nothing more than an indication of an increase in blood flow (or possibly electrical activity) to some area involved with perception. (Roughly the same areas are often involved in both external perception and recognition of mental images.) So what does that show? It certainly is a long way from suggesting that some brain activity is identical with the percept "there's a rabbit in this picture"! In fact, though I do not know which particular bit of research Richard has in mind, I would be willing to bet him lunch that it shows only that the act of searching in the picture for the new image (like the achievement of stereoscopic vision, to take another example) involves some brain activity; no way it can show that there is any difference in the organism while it perceives a duck vs. a rabbit. </div><div style="TEXT-ALIGN: left"> </div><div style="TEXT-ALIGN: left">But I am even willing to grant that such a difference <span style="FONT-STYLE: italic">might</span> be found; for example, it might be shown that certain vectors activated in one case have a historical (causal) relation to vectors activated in the perception of actual ducks, and those in the other case to vectors activated in the perception of actual rabbits (or of realistic duck or rabbit pictures - it doesn't really matter which). 
So let it be the case that for every individual, nerve cell activation occurs in the duck-rabbit picture specifically in relation to the history for that individual of previous perceptions of the appropriate form. Unfortunately, the physicalist is <span style="FONT-STYLE: italic">still</span> in need of an identity much stronger than this. The burden on the physicalist is to give a brain specification that just <em>is</em> the cognition of rabbit-shape (or blue-tintedness) or a strong reason why it is likely that such a specification will be found. The burden on the anti-physicalist is just to give an intuitive reason why that is unlikely to happen. Which I did, but I am more than willing to go a step further, and put it like this: there is no reason to think anyone will ever find a neurological specification that is, so to speak, the transcendental condition guaranteeing the truth of the utterance "he sees a rabbit-picture" or "he sees a duck-picture". And if that won't happen, the fact that some blood flows to the area that manages changes in perception is of little interest. <br /><br />Brown next takes on another example I used to demonstrate the conceivability of zombies, that of sleepwalkers and blindsight. These people, he insists, are in states "</span><span style="font-family:trebuchet ms;">which obviously include a physical difference" from ordinary conscious states. Once again, that is not really relevant to the point of the example. We are talking about <span style="FONT-STYLE: italic">conceivability</span>; the example is meant to bolster the plausibility of the claim that zombies are conceivable (to provide "evidence" for conceivability, in the only intelligible sense of Brown's demand for it), and if it does that, it has the effect it is intended to have. It is in no way intended to show that people in such states are in physically identical brain states to non-sleeping, non-brain-damaged individuals who might perform the same actions. 
To show that might be sufficient to prove the conceivability of zombies, but it is far from necessary. I don't think I need to belabor this any more.<br /><br />I will have to skip over Brown's next few responses because I think they amount to sticking by the line that Kripkean semantics require us to not assume zombies are conceivable just because we think we can conceive them, and I have already responded to this in sufficient detail. So I move on to his response to what he calls my "stunning claim" that no theory of consciousness has even begun to offer a reductive program for phenomenal experience, such as color vision. Actually I was under the impression that no one would find this even interesting, much less "stunning", because it seems that even materialists have practically written off the effort, generally claiming that qualia are mere illusion and beneath the dignity of a physical theory to explain, while anti-materialists have been saying it consistently since Nagel (whose seminal article is almost entirely an exposition of this very point). So what is Brown's answer to my "stunning claim"? HOT! Yes, of all things, he points to David Rosenthal's (or someone's, in any case) "higher-order thought" theory of consciousness as a program for the physicalist reduction of phenomenal consciousness! Talk about stunning - I thought the very reason that HOT has not attracted many followers is precisely that it offers no hope of explaining phenomenal consciousness. But maybe Brown has been having private sessions with POMAL types who think otherwise.<br /><br />So what is the response of HOT to my request for </span><span style="font-family:trebuchet ms;">"a program for explaining conscious experience, or even the function of consciousness, as an outcome of... biophysical research"</span><span style="font-family:trebuchet ms;">? 
According to Rosenthal, at least, a conscious thought has a qualitative character because the HOT that accompanies it is in some quality-space. That not being very enlightening (even compared with the outright abandonment of attempts to deal with qualia in more hardnosed materialist theories like those of Churchland, Dennett, or Crick) Rosenthal goes on to explain why the HOT has the qualitative character it has: it tracks the "similarities and difference" in perceptual space. That's it, the putative program in a nutshell. As for the function of consciousness, Rosenthal's view is that it doesn't really have one; we could get along quite well without it. (Apparently Rosenthal <span style="FONT-STYLE: italic">can</span> conceive of zombies; indeed, one could interpret what he says about the function of consciousness to suggest that it is no more than an evolutionary accident that we are <span style="FONT-STYLE: italic">not</span> zombies.) In spite of a great deal more verbiage (see Rosenthal's "Sensory Qualities, Consciousness and Perception" in his book, <span style="FONT-STYLE: italic">Consciousness and Mind</span>) there is not a whole lot more to this response to what I said was missing. </span></div><div style="TEXT-ALIGN: left"><span style="font-family:trebuchet ms;"></span> </div><div style="TEXT-ALIGN: left"><span style="font-family:trebuchet ms;">As Brown characterizes the HOT view of why red objects appear red and not green, </span></div><blockquote><span style="font-family:trebuchet ms;">"</span><span style="font-family:trebuchet ms;">they do so because we are conscious of ourselves as seeing red not green. 
You may not like this answer but it certainly does what Alterman says we we don’t have a clue about doing."</span><br /></blockquote><span style="font-family:trebuchet ms;"></span><br /><span style="font-family:trebuchet ms;">Actually, it is not so much a matter of whether one likes the answer as whether one finds it to be an "answer" to anything.</span><span style="font-family:trebuchet ms;"> It seems to me that this is as far from materialist dreams of a perfect theory as one is going to get. In spite of Rosenthal's often expressed sympathy for materialist analyses of <span style="FONT-STYLE: italic">non</span>-conscious thoughts, what he is doing is, broadly speaking, traditional philosophy of mind and language. He offers something like a conceptual analysis of conscious awareness, and gives a defense of it in terms of performance conditions and other standard POMAL ideas. Quite a distance from anything that is going on in the reductive programs that comprise the materialist discourse. I stand by my "stunning claim" - there ain't nothin' happening, in any branch of philosophy or cognitive science, that begins to shed light on how or why we experience reality largely as a succession of qualitative states.<br /><br />Brown states that he never questioned that conceivability entails possibility, as I said he did in my response. But he presents the main line on which his paper is based, the Kripkean semantics of natural kinds, as being "the typical argument that conceivability doesn't entail possibility". </span><span style="font-family:trebuchet ms;">I grant that he never explicitly says that he agrees with this use of Kripkean semantics; he employs it in another way, to question whether zombies are conceivable. 
On the other hand, he never disputes the first use; </span><span style="font-family:trebuchet ms;">indeed he says a number of things which suggest it, e.g., "it cannot be the case that intuitions about zombies are evidence for or against any theory of consciousness". I was reading this as implying that we could grant the possibility of zombies without the dualist gaining any ground. But I am happy to let Brown be the final arbiter of his own intentions, and leave that portion of my reply as a side-issue directed to those who use the Kripke line in the first way. (It does strike me as ironic that there would be two separate arguments against dualism based on a theory of Kripke's which he employs against materialism, but never mind. Since I don't agree with much that Kripke says about Wittgenstein I am not going to appeal to his authority in this case.)<br /><br />Brown's next point is that Chalmers, contrary to me, is indeed</span> <span style="font-family:trebuchet ms;">"claiming that there is a necessary link between our non-physical qualities and consciousness". I am not going to go through Chalmers' book to verify that this claim is never made, but it seems to me that the basis for Richard's statement is once again the Kripkean view that if "water" refers to H2O in this world, it does so in all worlds; so if "consciousness" refers to a non-physical property in this world, it does so in all worlds, and its non-physicality is therefore a necessary truth. There are various ways of responding to this. The simplest is to say that Chalmers' argument only leads to the point that it <span style="FONT-STYLE: italic">could be</span> a necessary truth that consciousness is a non-physical property. Another is that Chalmers simply does not think that consciousness is a non-physical property in every possible world; he thinks that it is contingently non-physical in this world. 
A more technical response would involve Chalmers' two-dimensional semantics and the "primary" versus "secondary" intensions of natural kind terms, but I can tell from Brown's latest post that this is only going to lead to a brand new debate. I would rather just refer readers to the parenthetical remark which constitutes the last paragraph of p. 59 in Chapter 2 of <span style="FONT-STYLE: italic">The Conscious Mind</span>, which to my mind offers an adequate reply to the basic premise of Richard's paper. (The reason it is adequate is because it spells out in the technical terms of two-dimensional semantics what I have been saying in more straightforward language throughout my comments: that it simply cannot be the case that we can't conceive of certain possibilities until someone has determined whether some empirical fact about the actual world is true.)<br /><br />A not terribly important side-issue regarding Brown's view is whether it makes any sense to postulate beings that are similar to me with respect to "all non-physical qualities", or beings that are "completely physical" and are conscious. Suffice it to say that I cannot find a way to allow either of these examples without thinking that the answer to whether physicalism is correct is already built in to the description. Brown seems to think that that doesn't matter, because it is just parallel to what the zombie theorist does. But I think it is not parallel, because the zombie example makes no theoretical assumptions and simply depends on intuition, while Brown's claim that it is question-begging is theory-driven, and the theory is used in a counterintuitive way that most of the disputants do not agree with.<br /><br />At the end of his remarks, Brown says that he can live with the limited goal I attribute to the zombie argument, that of establishing that there is no conceptual link between physics and consciousness. Hmmmm, I thought that that was what the whole debate was about. 
Chalmers himself believes that consciousness <span style="FONT-STYLE: italic">physically supervenes</span> on brain states, and only argues that it is not the case in all logically possible worlds that this is so. In his book, he presents not only the zombie argument but four other arguments (none of which, I believe, are original, though the presentation is) to the same effect. Why should we be so concerned with this? I am concerned with it because I don't think reductive programs are the way to go. I think a lot will be found out about how consciousness is connected with the biological structures of the brain - 40 Hz waves or whatever - but if the relationship between any particular physical instantiation and consciousness is contingent, we will learn more about consciousness through other methods - perhaps what we might call traditional philosophical analysis, perhaps some of what goes by the name of clinical psychology, perhaps aesthetics. Consciousness, in my view if not in Chalmers', has been most usefully explored in the work of Kant, Wittgenstein, Husserl, James, Freud, Jung, Kohler, and other writers of that nature, as well as in literature of great merit from Homer to Joyce. The whole tradition of cognitive science is at this point nothing but a footnote to those insights. 
In my opinion, it never will be much more than that as far as this question is concerned.<br /><br /><br /><br /></span><span style="font-family:trebuchet ms;"></span>Tony Altermanhttp://www.blogger.com/profile/18136925406940818982noreply@blogger.com9tag:blogger.com,1999:blog-2489468916453210669.post-57029215116206829632008-10-19T09:44:00.014-04:002008-10-20T01:08:53.948-04:00Zombie, Schmombie - Richard Brown's Efforts to Resurrect Materialism<span style="font-family:trebuchet ms;">The indefatigable POMAL blogger <a href="http://faculty.lagcc.cuny.edu/rbrown/">Richard Brown</a> has posted <a href="http://onemorebrown.wordpress.com/2008/10/17/some-quick-thoughts-on-aldermans-comments/">a reply</a> to comments on his Zoombies and Shombies paper, "The Reverse-Zombie Argument Against Dualism" (find a link <a href="http://onemorebrown.wordpress.com/2008/08/23/nothing-new-under-the-sun/">here</a>), made by a certain "Alderman". Unfortunately, I must object to the egregious act of plagiarism that said Alderman has performed on the comments I sent to Prof. Brown only a few days ago, copying them more or less word for word (how he got hold of them I can only imagine). Should I sue? Actually you can't sue for plagiarism, and I'm not sure what the copyright value of my comments would be, so I have a better solution: Dr. Brown should simply change the "d's" in "Alderman" to "t's" and everything will be alright.<br /><br />Brown (whose name is quite difficult to misspell, though I tried) certainly outdoes me by a country mile in posting to his blog, an admirable quality that is underrated in the philosophical community. Blogging is I think more in the spirit of philosophy in the Socratic tradition than the institutional control exercised by professional journals and presses. 
(Anybody who has received the typically biased and ignorant comments</span><span style="font-family:trebuchet ms;"> on a rejected article</span><span style="font-family:trebuchet ms;"> from journal reviewers will probably agree wholeheartedly with the title of Brown's blog, <span style="font-weight: bold;">Philosophy Sucks!</span>) In the future, I will try to do better than the, hmmmm, 10-month gap between this and my last post. (Which is a bit less than the gap in my arts blog. Yikes.) In any case, kudos to Dr. Brown for his blogging efforts - not to mention his Cel-ray tonic. (Jeez, names really do get confusing, don't they? Maybe someone should do some philosophical work on this topic.)<br /><br />What follows is the complete text of my comments on Brown's paper, delivered yesterday (10/18/08) at the conference of the </span><span style="font-family:trebuchet ms;"> <a href="http://myweb.brooklyn.liu.edu/mcuonzo/lips.htm">Long Island Philosophical Society</a></span><span style="font-family:trebuchet ms;">. The papers and replies will eventually be published in <span style="font-style: italic;">Calipso</span>, the LIPS online journal, at which point I may remove it from here and put in a link. In the next post I will reply to Brown's replies to my reply to his paper. (And perhaps to some of the replies to his replies to my reply to his reply to Chalmers - which can be found on his blog.)</span><br /><br /><div style="text-align: center;"><span style="font-family:trebuchet ms;"><span style="font-weight: bold;">Zombies, Schmombies... Full Text from the Original Author</span></span><br /><br /><div style="text-align: left;"><span style="font-family:trebuchet ms;"> <span style="font-size:85%;">The materialist position about consciousness consists in the view that consciousness can be fully explained once we understand the physical materials and processes in the brain. 
Consciousness will emerge as a supervenient property that can ultimately be reduced to some underlying physical basis. For materialism to go through, it is not sufficient that consciousness be somehow related to or dependent on the brain; it must be nothing more than a brain function, whose supervenience is obscured by some unique aspectual or descriptive stance that stands in the way of our seeing the connection intuitively. In some versions, such obscurities will eventually disappear, and we will be able to eliminate the introspective illusion of an inner self. Others see the aspectual stance as inherent in the situation. On either view, there is nothing in reality that can either be explained, except as a dependent phenomenon, or do any explaining, other than the physical world.</span></span><br /><br /><p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style="font-size:85%;">Most opponents of the materialist view rely heavily on one or more intuition pumps that allegedly bring out a gap between the knowledge and understanding of physical facts and an explanation of consciousness. The "zombie" argument is one such effort. Imagine a creature that has all the physical properties that we would expect a human being to have, and behaves in the ordinary way that human beings would in similar situations, but lacks any hint of consciousness. If this is conceivable (so the argument goes) then physical facts cannot be the logical, or conceptual, foundation of consciousness.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">In "The Reverse-Zombie Argument Against Dualism" Richard Brown suggests that the zombie thought experiment provides no compelling evidence that physicalism is wrong. 
There appear to be at least three tracks to his argument, which I will try to bring out.<br /></span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">The first idea is the contention that zombies, as described by David Chalmers and others, may not actually be conceivable at all. It is easy to miss the logic of Brown's argument here, because at the end he leads us somewhat astray, in my opinion, with suggestions that point in a different direction. One is that proponents of zombieism ought to offer some "evidence" for the conceivability of zombies. A second, related one occurs when Brown says that he himself cannot conceive of a zombie; and again, when he demands "some reason to think that we are really conceiving of a zombie world as opposed to a world that is very similar to ours but not microphysically identical". These points all seem a bit odd, to say the least. Conceptual arguments involve the logic of concepts; any "evidence" for them would surely not be of the empirical sort, and plenty of support has been offered on the conceptual side. The arguments do not depend on the strength of any one person's imagination, but on whether anyone can find a logical contradiction in their use of concepts. And though gross imaginative errors may be</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">to some degree corrigible (I might say I'm imagining a duck but in fact be imagining a chicken), it makes no sense to say that someone who claims to be imagining a microphysical duplicate of me might "really" be imagining something that differs in some small way. (What does "really" really mean here?) But let me try to respond with a defense of the zombie imaginer before we move on to Brown's main argument. 
My "evidence" will consist in conceptual support for the point that conceiving of a zombie requires nothing more than adding and subtracting properties, something any normal person can do.</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">So first, I can imagine someone physically identical to myself who is in the same room but is not aware of the slightly bluish tint of the late afternoon light, or the background humming of the air conditioning, while I am aware of all that. For I can imagine myself not having been aware of any of them, and yet being</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">physically identical to my actual self; just as when I see the duck and then see the rabbit in the same drawing, I have no reason to believe that a microphysical change took place, and even less reason to think that a determinate, repeatable microphysical change took place. Similar arguments could be brought for memory, imagination, and other components of consciousness.</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">Therefore I can imagine a being that is physically identical to myself but lacks consciousness. Second, we can arrive at the concept of a zombie by expanding on concepts like blindsight or sleepwalking. These documented empirical states involve acting and behaving in certain situations like a normal human being but completely lacking awareness of one's behavior or surroundings. A being who is always in such states would be a zombie.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">This should suffice for evidence of the conceivability of zombies. It is always possible to submerge one's conceptual abilities by becoming enmeshed in a theory. 
If one believes that all properties are directly reducible to underlying physical characteristics, it becomes difficult to conceive of anything that is not so reducible. In this way, entities that lacked the Aristotelian notion of substance were inconceivable prior to 18th-century empiricism. If someone finds it impossible in theory to separate physical structure from any higher-order property whatsoever, then they might react to the notion of a zombie as "inconceivable" in the sense of "beyond the capabilities of imagination". But imagination tied down by theory is not the relevant power for assessing the viability of zombie conceptions.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">The more important aspect of Brown's position does not rely on imaginative prowess. His point is that we ought to grant the physicalist at least the possibility that consciousness is nothing more than a high-level effect of the biophysics of the brain. If we do that, then we grant the possibility that consciousness is a natural kind term for some complex configuration of physical parts and processes. On a Kripkean theory of reference, a natural kind term refers to a natural kind by means of some property that constitutes its identity. "Water" refers to all and only substances that are actually H2O. Once we know that that is the case, we realize that it is necessarily the case, and that the statement "it's water, alright, but it's not H2O" contains a conceptual confusion. "Consciousness" may similarly refer to whatever the underlying physical basis of consciousness turns out to be. We may not know that identity now, but when we do we will realize that zombies - physical duplicates of ourselves but without consciousness - never really were conceivable in the first place. 
According to Brown, if we insist that zombies are conceivable, we simply beg the question against this argument.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">The question I have about this argument is, who is really begging the question? The logic of Brown's argument is that dualists cannot force the issue against materialism by stating a priori that zombies are conceivable, since it may turn out a posteriori that the connection between brains and consciousness is a necessary one. By the same token, one could have argued in the 19th century that a thought experiment designed to show that light is not a substance but a wave begs the question against the a posteriori necessary truth that light is the propagation of photons. The form of the objection seems wrong, because we cannot say in advance that discovering a physical basis for consciousness will make zombies inconceivable. Consciousness could be more like the terms "evolution" or "radiation" than like "water" or "heat". The former are natural kind terms, but neither has an essence that can be expressed in an identity statement. I fail to see any reason why thought experiments should be constrained by the combined demands of a controversial theory of reference for natural kind terms and the empirical possibility that reductionist programs will be successful. To focus on the latter for a moment, after two centuries of psychophysical experiments we still have no reason to believe that consciousness can be reduced to biophysical</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">properties. As Chalmers carefully explains, none of the popular reduction programs have brought us any closer to bridging consciousness with the physical world. 
Take our current, fairly sophisticated understanding of color vision; how does it even come close to explaining why red objects appear red and not green? No physicalist story even gets off the ground on this kind of question. The same holds for consciousness in general: in spite of having mapped and experimented with dozens of brain areas, having sophisticated biochemical analyses of brain activity, and even manipulating some basic motor functions with digitally simulated brain signals, we don't have so much as a program for explaining conscious experience, or even the function of consciousness, as an outcome of any of this biophysical research. I think it is quite a leap to say that dualists beg the question by ignoring the possibility that the holy grail of materialism will someday be found.<br /></span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style=";font-size:85%;" > </span><span style="font-size:85%;">A second point Brown makes is that conceivability does not entail possibility.</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">The zombie argument depends on the following kind of reasoning. Suppose it were the case that the mental logically supervenes on the physical. Then it would be a metaphysical fact about the universe that whenever you have mind, you have a material foundation. But logical supervenience is an identity relation, so whenever you have the appropriate physical foundation, you must also have mind. Then the concept of a physical foundation without mind ought to be a contradiction of some sort, like the concept of space without distance or consciousness without thought. But the zombie argument is designed to show that this is not the case. Let it be granted, then, that the zombie argument demonstrates the conceivability of zombies. 
We can conceive of life without death, too, and many other things that may not in fact be physically possible. In the end, then, the zombie argument demonstrates nothing of interest to anyone except philosophers, and the search for a materialist explanation of consciousness can proceed.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">I think Brown can reasonably object that while zombies may be metaphysically possible, this kind of conclusion</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">may not establish anything very useful in the debate on consciousness.</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">It establishes that one can be a dualist without violating any rules of metaphysics. But that is an achievement of very limited scope. For no modern dualist wants to be a dualist about substances; we all begin from essentially the same scientific conception of the universe. We believe there is nothing added to the biological substrate of consciousness in the sense in which some god or unknown force disperses some ethereal quasi-matter which, combining with our brains, creates consciousness. On the contrary, we all agree that there is no substrate except matter, and the question is how, from matter, you get the qualitative view that is awkwardly expressed by the phrase "what it is like to be" a human, raptor, etc.<br /></span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">But the logical possibility may, on the other hand, be sufficient for what the modern dualist really wants to establish. 
The point is to argue against the program in which, by assembling enough information about the mechanics of brain processes, and relating that through tomography and other techniques to certain mental phenomena, we will eventually be able to reduce consciousness to brain processes. Someone who believes that there is no matter or force except the ones described by modern physics does not have to purchase that program. They can hold that it is the wrong level of explanation for mental processes. They can believe that mental predicates collect the phenomena that physically supervene on biological entities at too high a level to ever be reduced. They can hold that enormous differences in the underlying structures can accommodate the same mental phenomena, described by the same psychological terms and following the same psychological laws. On this view, the correct kinds of programs for understanding consciousness could be those of William James, Husserl, and Wittgenstein, and not those of Smart, Churchland and Dennett.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style="font-size:85%;"> I</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">turn finally to the "zoombie" and "shombie" examples Brown offers. As he describes them, a "zoombie" is "a creature which is identical to me in every non-physical respect but which lacks any (non-physical) conscious experience". The idea seems to be that just as my zombie twin is identical</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">to me in every physical respect but lacks qualitative consciousness, my "zoombie" twin is identical to me in every non-physical respect but lacks qualitative consciousness. 
If the former suggests that consciousness is not a physical property, the latter suggests that it is not a non-physical property.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">A "shombie" is "a creature that is microphysically identical to me, </span><span style=";font-size:85%;" > </span><span style="font-size:85%;">has conscious experience, and is completely physical". If shombies are conceivable, then dualists are at best guilty of rejecting the principle of inference to the simplest explanation that accounts for all the known facts. For why should we go about imagining exotic explanations for consciousness when it is perfectly conceivable that physics can explain it all?</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">According to Brown, these two thought experiments constitute something like a parity of reasoning argument against the zombie argument, and therefore against this particular kind of objection to physicalism. The zombie argument says that it is conceptually possible to disassociate the human body and behavior from conscious experience, and that therefore it is not incumbent on those who hold a naturalistic view of the universe to believe that consciousness is identical to some set of physical processes in the brain. The zoombie argument says that it is conceptually possible to dissociate all non-physical human qualities from conscious experience, and the shombie argument says that it is possible to associate all conscious experience with physical systems like the one in which our minds are embodied. 
Both thought experiments attempt to show that the zombie argument does not produce any conclusion against physicalism that cannot be produced against dualism by parity of reasoning.</span><span style=";font-size:85%;" > </span><span style="font-size:85%;">So either the zombie argument fails against physicalism, or the zoombie and shombie arguments are equally conclusive against dualism.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">I agree that the zombie argument is not a conclusive argument against physicalism; but what it purports to show, at least, is that we are not forced to choose between a materialist theory of consciousness and a spooky view of the universe. If we can conceptually dissociate consciousness from the particular forms in which it is embodied, we can imagine a universe in which it is realized in other ways; and if we can do that, we can give up the idea that there must be a reductive, biophysical explanation of consciousness. I fail to see what parallel objective is achieved by positing "zoombies", since no one is claiming that there is a necessary link between our "non-physical" qualities and consciousness. Brown gives no indication of what he means by such qualities, but it cannot be things like mental or emotional states, because to assume those are non-physical would surely beg the question about consciousness. Perhaps we are talking about relational properties, value-bearing predicates, multiplicity and the like. But we can agree that there is no conceptual link between those properties and consciousness without inventing any new creatures. 
Since the basis for the sort of property dualism that people like Chalmers propose is not parallel to the metaphysical claims of the materialists, I don't see that this argument has a target.</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">"Shombies" allegedly show that we can imagine a creature that is "completely physical" having conscious experience. Brown again avoids unpacking the notion of "completely physical", but one thing we cannot say here is that no predicates other than physical ones apply to such creatures, since there is no such thing as an entity to which relational predicates, for instance, do not apply. It appears, then, that the idea of a "shombie" must be roughly that of a machine that has conscious experience. This sort of thought experiment has been tried many times, and I'm not sure what is added by calling it a "shombie". But it does bring out the foolishness of depending on either zombies or robots to prove anything about consciousness. One side says "I can imagine a conscious machine, so consciousness must be reducible to physics"; the other side says "I can imagine a non-conscious twin, so consciousness must not be reducible to physics". Personally I can imagine a talking cloud; am I entitled to the conclusion that we are in cloud-cuckoo land?</span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">Thought experiments, as Wittgenstein pointed out, are not analogous to real experiments, only with thought-materials. They are devices to make us think about what we would say in a very unusual situation; and this can give us insights into how our concepts are organized and how our language works. If we conceive of the mind-body problem along these lines, thought experiments might help us solve it. 
The zombie idea is therefore somewhat effective in refuting the idea of a conceptual link between matter and mental phenomena; not a small accomplishment in light of the very strong pull that our basic scientific convictions have on our thinking as a whole. But they cannot answer any naturalistic questions, such as whether the notion of conscious experience will eventually fall out of a detailed description of the operation of brain cells. This is a matter for scientific research, and the only reasonable answer we can give right now is that it is far from doing so at this stage of the game. The materialists want to press on because they are convinced there is no other way. The zombie argument suggests that they are wrong about that, but it does not prove that success is conceptually impossible. Brown's thought experiments are helpful in suggesting this corrective to anyone who uses a zombie to scare the materialists away from their research projects.<o:p></o:p></span></p> <p style="font-family: trebuchet ms;font-family:trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;"><o:p></o:p></span></p> <p class="MsoPlainText" style="font-family:trebuchet ms;"><span style="font-family: trebuchet ms;font-size:85%;" > </span><span style="font-family: trebuchet ms;font-size:85%;" > </span><span style="font-size:85%;"><span style="font-family: trebuchet ms;">Anton Alterman</span><o:p></o:p></span></p> <p class="MsoPlainText" style="font-family:trebuchet ms;"><span style=";font-size:85%;" > </span><span style="font-size:85%;"><o:p></o:p></span></p> <p style="font-family: trebuchet ms;" class="MsoPlainText"><span style=";font-size:85%;" > </span><span style="font-size:85%;">LIPS Conference, St. 
John's University, Queens, New York, October 18, 2008</span><o:p></o:p></p><br /><span style="font-weight: bold;"><br /><span style="font-weight: bold;"></span></span></div><span style="font-weight: bold;"></span><span style="font-family:trebuchet ms;"></span></div>Tony Altermanhttp://www.blogger.com/profile/18136925406940818982noreply@blogger.com4tag:blogger.com,1999:blog-2489468916453210669.post-61896003063386755222007-12-15T11:25:00.000-05:002007-12-15T11:38:22.520-05:00Churchland Again: How to Duck Some Objections<span style="font-family:trebuchet ms;">Other minds have been <a href="http://duckrabbit.blogspot.com/2007/12/monk-in-land-of-churches.html">debating my Churchland post</a> </span><span style="font-family:trebuchet ms;">over at DuckRabbit</span><span style="font-family:trebuchet ms;">, attributing to a certain H.A. Monk (a name I have assiduously but unsuccessfully tried to excise from this blog, since it is internally related to my identity on my other blog, <a href="http://parrotslamppost.blogspot.com/">The Parrot's Lamppost</a>) various assertions that concede a bit too much to both materialist and Cartesian views on the mind-body problem. Though the discussion seems to have ended up in a debate on ducks and rabbits (which I thought would have been settled long ago on that site; in any case, see my "Aspects, Objects and Representations" - in Carol C. Gould, ed. <span style="font-style: italic;">Contructivism and Practice: Toward a Historical Epistemology</span>, Rowman and Littlefield, 2003 - for yet another contribution to the debate) Duck's original post offers a number of points worth considering. (Have a look also at N.N.'s contribution at <a href="http://methodsofprojection.blogspot.com/">Methods of Projection</a>. N.N. picked the right moniker, too, maybe because there are also two n's in "Anton".) 
Here is a version of what I take to be Duck's central criticism of what I said about Churchland:<br /></span><blockquote><span style="font-size:85%;"><span style="font-family:trebuchet ms;">It's true that the materialist answer "leaves something out" conceptually; but the reply cannot be that we can bring this out by separating the third-personal and first-personal aspects of coffee-smelling, and then (by "turn[ing] off a switch in his brain") give him only the former and see if he notices anything missing. That the two are separable in this way just is the Cartesian assumption common to both parties. (Why, for example, should we expect that if he simply "recognize[s] the coffee smell intellectually" his EEG wouldn't be completely different from, well, actually smelling it?) I think we should instead resist the idea that registering the "coffee smell" is one thing (say, happening over </span><i style="font-family: trebuchet ms;">here</i><span style="font-family:trebuchet ms;"> in the brain) and "having [a] phenomenological version of the sensation" is a distinct thing, one that might happen somewhere else, such that I could "turn off the switch" that allows the latter, without thereby affecting the former. That sounds like the "Cartesian Theater" model I would have thought we were trying to get away from.</span></span> </blockquote><span style="font-family:trebuchet ms;">While I appreciate the spirit of this comment, I must say that I think it does not merely <span style="font-style: italic;">concede</span> something to Churchland, it is more or less exactly what Churchland is saying, though you might want to add "seen through an inverting lens". 
Churchland indeed wants to deny that "the two are separable in this way"; in fact he takes an imaginary interlocutor sharply to task for asking him to provide a "substantive explanation of the 'correlations' [between "a given qualia" and "a given activation vector"]" because this "is just to beg the question against the strict identities proposed. And to find any dark significance in the 'absence' of such an explanation is to have missed the point of our explicitly <span style="font-style: italic;">reductive</span> undertaking" (<span style="font-style: italic;">Philosophical Psychology</span> 18, Oct. 2005, p.557). In other words: if what we have here is really an <span style="font-style: italic;">identity </span>relation - two modes of presentation of things that are exactly, numerically the same - how dare you insist that I should explain <span style="font-style: italic;">how</span> they are related. They are related by being the same thing, Q.E.D.!<br /><br />My post was largely directed at fishy moves like this. The problem is that we have two things that we can - and lacking any evidence to the contrary, <span style="font-style: italic;">must</span> - identify (pick out, refer to) by two completely different procedures; yet Churchland wants to assert that they are identical. What notion of identity is at work here is hard to say. </span><span style="font-family:trebuchet ms;">Since Churchland rejects the notion of metaphysical necessity</span><span style="font-family:trebuchet ms;">, it cannot be "same in all PW's". But it must be more than "one only happens when the other happens" since that is a mere correlation. 
Even "one happens if and only if the other happens" could mean nothing more than </span><span style="font-family:trebuchet ms;">that some natural law binds the occurrence of the two things together, which does not give us numerical identity.</span><span style="font-family:trebuchet ms;"> He wants to say "</span><span style="font-family:trebuchet ms;">blue qualia are <span style="font-style: italic;">identical to</span> such-and-such coding vectors</span><span style="font-family:trebuchet ms;">", and we have to take this as meaning more than that there is evidence for their regular coinstantiation. But to make it theoretically sound, or even plausible, in light of the fact that we recognize the two ideas in totally different ways, he must offer two things, at least: (1) an explanation of why these apparently distinct facts (qualia/coding vectors) are actually one and the same phenomenon (what makes the one thing manifest itself in such dissimilar ways); and (2) experimental evidence of an empirical correlation between them. Yet he also tells us that we are "begging the question" if we ask for an explanation! And as for the empirical correlation, it is not just that no one has sat down and examined a subject's cone cell "vectors" and asked them, "Now what color do you see?"; the fact is that the whole idea of "coding vectors" is a mathematical abstraction from a biological process that almost certainly only approximates this mathematical ideal, even before we get to the question of how regularly the outputs of the process end up as the particular color qualia that are supposed to have been encoded.<br /><br />I am not saying there is no evidence at all for the analysis Churchland offers (based on the so-called "Hurvich-Jameson net" at the retinal level and Munsell's reconstruction of possible color experiences at the phenomenological level), but that there is <span style="font-style: italic;">not even</span> evidence of a strict correlation. 
Some of the things that Churchland discusses - for example, the fact that this analysis of color vision is consistent with the stabilization of color experience under different ambient lighting conditions (p.539) - strongly suggest that <span style="font-style: italic;">something</span> about the analysis is right, but do not constitute direct empirical evidence for it. What we are really being offered is a notion of identity that has as its basis neither metaphysics, nor scientific explanation, nor sufficient quantitative evidence to establish a strict correlation. We can be excused for saying "no thanks" to this libation.<br /><br />And if this unanalyzed notion of the identity of phenomenological and biological facts is also being proffered in the name of some other philosophical position - say, Wittgenstein's - we should be no less skeptical. Merely proclaiming the lack of distinction between phenomenology and physiology, inner and outer, mind and world, something and nothing, etc. does not establish anything as a viable philosophical position on consciousness. Even adding the observation that one gets rid of philosophical problems this way does not establish it as a viable position. One gets rid of problems also by saying that god established an original harmony of thought and matter. If you can just swallow this apple whole, you'll find that the core goes down very easily.<br /><br />Whoops, what happened to my erstwhile Wittgenstein sympathies? Well, maybe the apple I don't want to swallow is really this interpretation of Wittgenstein. Duck and I agree that being sympathetic to Wittgenstein does not require dismissing all scientific investigation of the brain (or the world in general) as irrelevant. But I don't think we agree on why. 
Duck quotes the following passage from the <span style="font-style: italic;">PI</span>:<br /></span><blockquote><span style="font-size:85%;"><span style=";font-family:trebuchet ms;color:black;" >'"Just now I looked at the shape rather than at the colour." Do not let such phrases confuse you. [So far so good; but now:] Above all, don't wonder "What can be going on in the eyes or brain?" ' (<span style="font-style: italic;">PI</span> p.211)</span></span><br /><span style="font-size:85%;"><span style="font-family:trebuchet ms;"></span></span></blockquote><span style="font-family:trebuchet ms;">What is Duck's view of this recommendation? He is not quite sure, but finally decides that philosophers' conceptual investigations will keep scientists honest, so they avoid causing problems for us philosophers:</span><span style="font-size:85%;"><span style="font-family:trebuchet ms;"><br /></span><blockquote><span style="font-family:trebuchet ms;">In a way this is right... Don't wonder that... </span><i style="font-family: trebuchet ms;">you thought that was going to provide the answer to our conceptual problem.</i><span style="font-family:trebuchet ms;"> But surely there </span><i style="font-family: trebuchet ms;">is</i><span style="font-family:trebuchet ms;"> something going on in the brain! Would you tell the neuroscientist to stop investigating vision? Or even think of him/her as </span><i style="font-family: trebuchet ms;">simply</i><span style="font-family:trebuchet ms;"> dotting the i's and crossing the t's on a story already written by philosophy? That gets things backwards. Philosophy doesn't provide answers by itself, to conceptual problems or scientific ones. It untangles you when you run into them; but when you're done, you still have neuroscience to do. Neuroscience isn't going to answer free-standing philosophical problems; but that doesn't mean we should react to the attempt by holding those problems up out of reach. 
Instead, we should get the scientist to tell the story properly, so that the problems don't come up in the first place.</span></blockquote></span><span style="font-family:trebuchet ms;">For my part I don't think this is the point of Wittgenstein's various proclamations about the independence of philosophy from science. Wittgenstein was concerned that physicalistic grammar intrudes into our conceptual or phenomenological investigations, making it impossible to untangle and lay out perspicuously the grammar of phenomena. This is the root of what we call "philosophical problems". It is not the scientist whom we have to get to "tell the story properly", it is the philosopher. The scientist is not in fundamental danger of importing the grammar of phenomenology and thereby tying her physical investigations into knots. It is the other way around: the magnetic pull of physical concepts constantly threatens to affect conceptual investigation. To take a slightly oversimplified example, we say we can "grasp" a thought, but it is an imperceptible step further along the path of this metaphor that allows us to think we can capture it concretely - say, in a proposition, or a sentence of "mentalese" - in a sense that depends quite subtly on our ability to "grasp" a hammer or the rung of a ladder (picking it out as a unique object, self-identical through time, involved in a nexus of cause-effect relations, etc.). True, it takes quite a leap before you are ready to say, "The thought 'the cat is on the mat' just <span style="font-style: italic;">is</span> this neuronal activation vector", but that is one logical result of this sort of thinking. 
That we are ready to call this the solution to a philosophical problem just puts the icing on the cake; it is the dismissal of philosophy per se, in more or less the way we can dismiss morality by pointing out that we are all just physical objects made of atoms anyway, and who could care what happens to <span style="font-style: italic;">that</span>?<br /><br />When Wittgenstein says, "don't wonder, 'What can be going on in the eyes or the brain?'" he is using duck-rabbit-type phenomena to show that conceptual or psychological problems may not be tracked by any physical difference at all. In fact, there is a passage just after the one cited by Duck in which Wittgenstein lays it out as clearly as anyone could ask. He suggests a physical explanation of aspectual change via some theory of eye-tracking movements, and then immediately moves to say,<br /><blockquote><span style="font-size:85%;">"You have now introduced a new, physiological criterion for seeing. And this can screen the old problem from view, but not solve it". And again, he says, "what happens when a physiological explanation is offered" is that "the psychological concept hangs out of reach of this explanation" (p.212).<br /></span></blockquote>The point is very straightforward, and it is certainly compatible with what I have been saying about Churchland. The physical level of explanation just flies past the psychological concepts without recognizing or accounting for them. But in Duck's view, I am guilty of reintroducing the bogey of dualism and the "Cartesian theater" (I'm planning a post on Dennett soon so I'll avoid this bait right now):<br /></span><span><span style="font-size:85%;"><span style="font-family:trebuchet ms;"><br /></span><blockquote><span style="font-family:trebuchet ms;">So what's the moral? Maybe it's this. In situations like this, it will always seem like there's a natural way to bring out "what's missing" from a reductive account of some phenomenon. 
We grant the conceptual possibility of separating out (the referent of) the reducing account from (that of) the (supposedly) reduced phenomenon; but then rub in the reducer's face the manifest inability of such an account to encompass what we feel is "missing." But to do this we have presented the latter as a conceptually </span><i style="font-family: trebuchet ms;">distinct</i><span style="font-family:trebuchet ms;"> thing (so the issue is not </span><i style="font-family: trebuchet ms;">substance</i><span style="font-family:trebuchet ms;"> dualism, which Block rejects as well) – and this is the very assumption we should be protesting. On the other hand, what we </span><i style="font-family: trebuchet ms;">should</i><span style="font-family:trebuchet ms;"> say – the place we should end up – seems in contrast to be less pointed, and thus less satisfying, than the "explanatory gap" rhetoric we use to make the point clear to sophomores, who may very well miss the subtler point and take the well-deserved smackdown of materialism to constitute an implicit (or explicit!) acceptance of the dualistic picture.</span></blockquote></span></span><span style="font-family:trebuchet ms;">Absolutely, a physical explanation or description of consciousness is "conceptually distinct" from a phenomenological one. I can see no other possible interpretation of the passage about the eye-movement explanation of "seeing-as" phenomena. Does this make Wittgenstein a "dualist"? Certainly not in the Cartesian sense. True, Wittgenstein not only studied architecture and engineering and cited Hertz and Boltzmann in his early work; he also read (and failed to cite) Schopenhauer and James and had a deep appreciation of "the mystical", which he further identifies with "the causal nexus"; he says in the <span style="font-style: italic;">TLP</span> that philosophy should state only facts, and that this shows how much is left out when all the facts have been stated. 
But is he now going so far as to suggest that there are different worlds, of scientific and mental reality? I seriously doubt it; and neither am I. There are different levels of explanation, or in his own terminology, different language games. This is not a Cartesian dualism but a point about the structure of thought. It is the same point that much of the <span style="font-style: italic;">Blue Book</span> is based on.<br /><br />I have not said much about my view of consciousness in this blog. But we're only just getting started, I've got time. I will say this, though: the resolution of the mind-body problem cannot be as simple as, for example, the New Realist (or "neutral monist") school hoped it would be. There, various aspects of reality were said to consist of a single "stuff" (read "substance", with various proposals for what this would be circulating at the time) which took on physical or psychological "aspects" depending on our interest, point of view, or whatever. This is a nice, compact view, but it does not do justice to the issue. There is a brain without which there is nothing in the world called "thinking", and a world without which nothing in a brain can count as "thought". There is every reason to believe that every event that ever counted as a thought took place in a brain, and that something was going on in the brain without which that thought would not have happened. This all has to be accounted for, and it is not sufficient to say that there are different aspects to some general substance or process. Sure, there are different aspects to everything, but this won't get us very far with the mind-body problem. How did an "aspect" of something that is also matter end up as consciousness? The problem is only pushed back. How can an "aspect" of whatever be self-aware, control its own actions, or compose a piano sonata? These are very peculiar aspects. 
If we could put them under an electron microscope we would not find out what we want to know about them.<br /><br />I suspect that something like the following is the case: the various phenomena we call "the mind" are asymmetrically dependent on the brain, but the relationship is so loose that there is never anything like the "identity" relationship Churchland wants, nor a mere difference in points of view between the physical and phenomenological "aspects". We recognize certain psychological phenomena and talk about them and analyze them, and there is no such thing as a specifiable set of neural events that are necessary and sufficient for the instantiation of these phenomena - perhaps not even as types, and certainly not as specific thoughts, volitions, etc. There may be some wave oscillations in the brain that correspond to conscious states, but they are not those conscious states. There are particular portions of the brain that are primarily involved in certain aspects of our intellectual activity - emotions, language, memory, etc. - but there is not a specifiable neural "vector" that is "identical" to Proust's sensation of the taste of his mother's "sweet madeleines", much less to the flood of memories it evokes. Perhaps in Churchland's utopia we can replace <span style="font-style: italic;">Swann's Way</span> with some mathematical specifications of its underlying neural activity without any particular loss, but I am not holding my breath.<br /><br />Why do I think this, or even have a right to hold it out as a reasonable objection? Just because I think psychological concepts are not the rigid, well-articulated concepts that you find in much analytic philosophy.
There is a way you can talk about things that are not uniquely or cleanly definable (Wittgenstein: "You are not saying <span style="font-style: italic;">nothing</span> when you say 'stand roughly <span style="font-style: italic;">there</span>...'"; a quote that is <span style="font-style: italic;">roughly</span> accurate!). Talking about them is intellectually interesting in philosophy, important in clinical psychology and ethics, satisfying in the arts. It has been recognized by some neuroscientists and philosophers (Varela and others) that unless you have some kind of scientific phenomenology to begin with, you can't hope to reduce anything to neurology. But that position presupposes that there <span style="font-style: italic;">is</span> something like a science of folk psychological concepts, on something like the lines that Husserl, Sartre and others tried to give us. And Wittgenstein too, in a certain sense: only his phenomenology of mind is imbued with the understanding that part of the "science" we are looking for involves the recognition of the vagueness or circumstantial relativity of concepts.<br /><br />So how about a vague specification of cone cell coding vectors? "There is a 95% correlation between this coding vector and observed reports of red sensations." I could live with that. But it still doesn't give us a claim to "identity", nor does it justify saying that these are different "aspects" of the same event. They are different things that generally must happen in order for someone to recognize something as red. But I can say I dreamed of a red balloon and no one will say, "Oh, but there were no cone cell vectors, you <span style="font-style: italic;">couldn't</span> have."
And of course even my memory of a red balloon is a memory of something viscerally red, with no cone cell activity to show for it.<br /></span><br />Tony Alterman, December 5, 2007<br /><b>Brain Freeze, or Churchland on Color Qualia</b><br /><span style="font-family:trebuchet ms;">It's been two months since I posted anything here, which is not how it was supposed to go. I have some excuses: replies to three papers at two recent philosophy conferences, a lack of breaking news on the cog sci front, and some personal stuff that I won't get into. Anyway, the last of the conference papers was concerned with a relatively recent paper by Paul Churchland, in which he argues for the "identity" of color "qualia" (an obnoxious Latinate neologism that philosophers use to refer to our mental experience of colors) with "cone cell coding triplets" or "vectors" - an analytic description of how the eye reacts on the cellular level to light of various wavelengths. Churchland further asserts that based on this analysis he can make certain <span style="font-style: italic;">predictions</span> about our color <span style="font-style: italic;">experience</span> in unusual cases, a feat that, according to him, is usually assumed to be beyond the power of materialist identity theories. That is the main point here; the identity of (a) the experience, and (b) the biochemical basis of the reaction, is said to not only account for ordinary experiences like seeing red, but for experiences which most people have not had. Churchland describes how to produce such experiences and provides various full-color diagrams to assist.
The predictive power of the theory allegedly shows that the qualia-coding vector relationship is not a mere correlation but an actual identity.<br /><br />It is not impossible that some philosophers have carelessly suggested that materialism cannot be true because it cannot make predictions about experience. But to rest the case against materialism on this narrow basis is a very bad idea, for the simple reason that there are straightforward and well-known areas in which knowledge of the physical structure of the body allows you to make specific phenomenological predictions. For example, recently it was discovered that glial cells, which make up much of the central nervous system, contribute to severe or chronic pain by stimulating the pain-transmitting neurons. Prediction: find a drug that deactivates the glial cells, and with or without more traditional pain-relief methodologies (e.g., those which interfere with the transmission of signals across nerve synapses or attempt to freeze the nerve itself) the patient will feel less pain. There is a perfectly good phenomenological prediction from neurological facts.<br /><br />And there are even easier cases. We know that the lens of the eye delivers an inverted image, which is subsequently righted by the brain. This suggests that our brains, without our conscious effort, favor a perspective that places our heads above our feet. (It is also possible that it is simply hard-wired to invert the image 180 degrees, but for various reasons that theory does not hold water.) Prediction: make someone wear inverting glasses, and they will see an upside-down image at first (the brain inverts it out of habit), but eventually the brain will turn it right side up. It works!<br /><br />And it gets even easier. After all, there were times long ago when we did not know anything about the internal structure of sense organs.
Our auditory capabilities rest on the action of thousands of tiny receptors, the hair cells of the organ of Corti, part of the cochlea of the inner ear. Prediction: dull the function of these receptors and the subject will experience a loss of hearing. Wow, another phenomenological prediction. I'm sure you could go hog wild with this. Poke your left eye out and you will see in diminished perspective, an amazing prediction in itself. Practice seeing through one eye for a long time and your sense of perspective should increase. Such predictions differ a lot from an example that Churchland presents in another context, that trained musicians "hear" a piece differently than average audiences. That is also a predictable phenomenological fact, but it involves a change in the mental software, through habituation and training, and does not obviously involve any sensual change. To see a new color or to have fewer distinct sounds reach the brain from the cochlea are sensual changes; to hear more deeply those sounds that do reach the ear, to organize them more efficiently and recognize more relationships between them is not a sensual change but an intellectual one that we might metaphorically characterize as "hearing more than others". In fact musicians <span style="font-style: italic;">hear</span> the same thing others hear but <span style="font-style: italic;">understand</span> what they hear in a more lucid way. The sensual phenomena I have mentioned are actual changes in what reaches the brain for processing or in processing at a subliminal level, and do not depend on how we train ourselves to organize the information we receive.<br /><br />I admit that my predictions are not very interesting; they operate at a more macro level than Churchland's strange color qualia, though not as macro as the following: cut out someone's tongue and they won't taste much. That's about like: cut out someone's brain and they won't think much.
That may sound pretty obvious, but it wasn't always. Churchland is playing on the fact that the intimate study of how vision works is a relatively recent and still growing science. Thus it sounds like quite an amazing feat that he should be able to "predict" color "qualia".<br /><br />But actually, although his predictions are more refined than mine, digging deeper into more subtle properties of the visual system, they are no more predictions of "qualia" than the general statement: interfere with some physical property of a sensory apparatus and you will change the sensations experienced by the subject. Refining this down to a specific phenomenological experience does not get closer to predicting "qualia", it merely makes a more specific prediction based on a fairly well fleshed out physical theory. It is roughly at the level of first discovering certain facts about the eye and then discovering that those facts are consistent with seeing a green after-image when exposed to a flashbulb. "I predict a green qual!" Okay, that's a little better than "I predict the stock market will crash - some time..." But it doesn't really do much for materialism. (And I'm not even talking about "eliminative" materialism here, which I said I'd refuse to take seriously, just the more typical materialist identification of experience with physical facts.)<br /><br />Why? We could gloss Churchland's prediction as follows: "I predict that if you look at this in the right way you will have that experience that is commonly understood to be going on when a person utters the words, 'I see green'". And what is that? Just the very thing that non-materialists bring up as an "explanatory gap". Churchland can't predict we will have particular <span style="font-style: italic;">qualia</span> because he doesn't have even so much as a theory as to what the relationship is between <span style="font-style: italic;">qualia</span> and their scientific background.
He seems to think that a correlation which has predictive accuracy is <span style="font-style: italic;">eo ipso </span>an identity relation. But this is just another brain scam. One might say: qualia are a suspect kind of entity anyway, so why should I need a theory to account for them? Fine, but what you can't say is: these qualia you talk about, they just <span style="font-style: italic;">are</span> these coding vectors, and then act like you've explained <span style="font-style: italic;">qualia</span>. For example, suppose you were to say: these UFO's you talk about, they just <span style="font-style: italic;">are</span> marsh gas. Okay, you've explained <span style="font-style: italic;">away</span> UFO's. But you surely haven't <span style="font-style: italic;">explained</span> UFO's. You've submitted the thought: until and unless you give me some specific physical evidence that there are these things, "UFO's", that cannot be explained by <span style="font-style: italic;">any other consistent set of physical facts</span> except that secret aircraft controlled by animate beings are navigating our skies, I deny that UFO's exist as a category of object requiring independent explanation. Similarly, one can say: I can explain everything there is to explain about sensation without reference to "qualia", so why should I be obliged to give you a separate explanation of them? But that is not what is being offered. Rather, we are told, color qualia exist; they are cone cell coding vectors.<br /><br />"Laughter exists; it is... [insert physical description of lung contractions and facial expressions]"<br />"Orgasm exists; it is... [insert physical descriptions of male or female anatomical changes during orgasm]"<br />"Aesthetic appreciation exists; it is... [insert data from brain scans of people listening to Mozart]"<br />"Religious rapture exists; it is... 
[insert data from brain scans of people talking in tongues]" (this has actually been studied, by the way)<br /><br />When is Churchland going to wake up and smell the coffee? I'm not sure, but I don't think we should test it by asking him whether he's awake or not; better check his brain scan and let him know. Then do an EEG and see if he's smelling the coffee. With sufficient training he could be taught to look at the EEG and say, "Why, I was smelling coffee!" (This is the flip side of Churchland's utopia, in which we are all so well-informed about cognitive facts that introspection itself becomes a recognition of coding vectors and the like.) Now for the tricky part: turn off the switch in his brain that produces the coffee-smelling qual, and tell him that every morning, rather than having that phenomenological version of the sensation, he will recognize the coffee smell intellectually and be shown a copy of his EEG. And similarly, one by one, for all his other qualia.<br /><br /></span><span style="font-family:trebuchet ms;">Don't say: well, he doesn't deny these qualia exist, after all; he just thinks they are identical to blah-blah-blah</span><span style="font-family:trebuchet ms;">... If he thinks they are <span style="font-style: italic;">identical</span> to blah-blah-blah then he should not object in the least if we can produce blah-blah-blah without those illusory folk-psychological phenomena we think are the essence of the matter. So, on with the experiment. Where do you think he will balk? When we offer to substitute a table of coding vectors for the visual quals of his garden in springtime? An EEG for the taste of grilled tuna? Maybe a CAT scan of soft tissue changes rather than the experience of orgasm? I'd really like to know just how far he is willing to go with this. Would he wear one of those virtual reality visors, having in the program only charts and graphs and other indicators of brain and body function? 
Maybe Churchland is the only one among us who really understands how to have fun. Personally, I'll keep my red roses, my grilled tuna taste, and... the other stuff, thanks.<br /><br /></span>Tony Alterman, October 6, 2007<br /><b>AI, Cog Sci, Pie in the Sky</b><br /><span style="font-family:trebuchet ms;">So I've been working my way through this long article on robotics that appeared in the July 29 edition of the <span style="font-style: italic;">Sunday Times</span>, and I'm thinking the author, Robin Marantz Henig, is being very measured and balanced in dealing with these nasty questions, like "Can robots have feelings?" and "Can they learn?" etc. And yet I can't avoid the nagging conceit that in spite of her good intentions, she just doesn't get it.<br /><br />Get what? Get what cognitive scientists really want. Get the idea of what Andy Clark, quoting computer scientist Marvin Minsky, calls a "meat machine". Artificial intelligence/meat machine: two sides of the same coin. Robots think; brains compute. It's a bit confusing, because it sounds like we're talking about two different things, but they are logically identical. Nobody said that wires can't look like neurons, or neurons can't look like wires; if we just used gooey wires inside robots, people who opened them up might say, "Oh, of course they have feelings, Madge, what do you <span style="font-style: italic;">think</span>?" Maybe when we start creating real-life Darth Vaders, with some PVC-coated copper inside the skull (how long can it be until this happens?) people won't jump quite so quickly on the train going the other way: "Oh, of course all we are is elaborate computers, Jim, what do you <span style="font-style: italic;">think</span>?" But the seed will have been planted, at least.
With a little help from the connectionist folks we might begin one of those epistemological shifts to a new way of thinking, sort of like when people began to accept evolution as a natural way of looking at species. This is the picture that cognitive scientists really want.<br /><br />Ms. Henig describes her encounters with a series of robots at the M.I.T. lab of Rodney Brooks: Mertz, whose only performance was to malfunction for the author; Cog, a stationary robot that was "programmed to learn new things based on its sensory and motor inputs" (p.32); Kismet, which was designed to produce emotionally appropriate "facial" expressions; Leo, which was allegedly supposed to understand the beliefs of others, i.e. it had a "theory of mind"; Domo, equipped with a certain amount of "manual" dexterity; Autom, linguistically enabled with 1,000 phrases; and Nico, which could recognize its "self" in a mirror. (You can get more intimately acquainted with some of these critters by going to the <a href="http://robotic.media.mit.edu/">Personal Robots Group</a> at the <a href="http://www.media.mit.edu/">MIT Media Lab web site</a>. Before they try to create consciousness in a can, the roboticists should try fixing their Back button, which always leads back to the MIT site rather than their own page.) Throughout her discussion, Henig expresses both wonder at the tendency of people to interact with some robots as if they were conscious beings (a result of cues that set off our own hard-wired circuitry, it is surmised) as well as disillusionment with the essentially computational and mechanical processes responsible for their "humanoid" behavior.
It is the latter that I am referring to when I say I don't think she's quite clued in to the AI mindset.<br /><br />The first hint at disillusionment comes when she describes robots as "hunks of metal tethered to computers, which need their human designers to get them going and smooth the hiccups along the way" (p.30). This might be the end product of one of my diatribes, but how does it figure just 5 paragraphs into an article called "The Real Transformers", which carries the blurb: "Researchers are programming robots to learn in humanlike ways and show humanlike traits. Could this be the beginning of robot consciousness - and of a better understanding of ourselves?" Is Henig deconstructing her own article? She certainly seems to be saying: hunks of metal could only <span style="font-style: italic;">look</span> like they're conscious, they can't really be so! Whereas I take it that computationalists suggest a different picture, of a slippery slope from machine to human consciousness, or at least a fairly accurate modeling of consciousness by way of the combined sciences of computer science, mechanics, neuropsychology, and evolutionary biology. (Sounds awfully compelling, I must admit.)<br /><br />Henig does say that the potential for merging all these individual robot capacities into a super-humanoid robot suggests that "a robot with true intelligence - and with perhaps other human qualities, too, like emotions and autonomy - is at least a theoretical possibility." (p.31) Kant's doctrine of autonomy would have to be updated a bit... And can we add "meaning" to that list of qualities? (I'd like to set up a poll on this, but it seems pointless until I attract a few thousand more readers...)
The author seems inclined to wish that there were something to talk about in the area of AC (Artificial Consciousness :-) but then to express disappointment that "today's humanoids are not the sophisticated machines we might have expected by now" (p.30). Should we be disappointed? Did anybody here see <span style="font-style: italic;">AI</span>? (According to the article Cynthia Breazeal, the inventor of Kismet and Leo, consulted to the effects studio on <span style="font-style: italic;">AI</span> - though not on the boy, who was just a human playing a robot playing a human, but on the Teddy bear.)<br /><br />Cog, says Henig, "was designed to learn like a child" (p.32). Now here comes a series of statements that deserve our attention. "I am so careful about saying that any of our robots 'can learn'", Brooks is quoted as saying. But check out the qualifiers: "They can only learn certain things..." (that's not too careful already) "...just like a rat can only learn certain things..." (a rat can learn how to survive on its own in the NYC subways; how about Cog?) "...and even [you] can only learn certain things" (like how to build robots, for example). It seems to be inherent in the process of AI looking at itself to imagine a bright future of robotic "intelligence", take stock of the rather dismal present, and then fall back on a variety of analogies to suggest that this is no reason to lose hope. Remember when a Univac that took up an entire room had fewer capabilities than the chip in your cell phone? So there you go.<br /><br />Here we go again: "Robots are not human, but humans aren't the only things that have emotions", Breazeal is quoted as saying. "Dogs don't have human emotions either, but we all agree they have genuine emotions." (Obviously she hasn't read Descartes; which may count in her favor, come to think of it.) "The question is, What are the emotions that are genuine for the robot?" (p.33) Hmmm... er, maybe we should ask the Wizard of Oz?
After reading this statement I can't help thinking of Antonio Damasio's highly representational account of emotions. For Damasio, having an emotion involves having a representation of the self and of some external fact that impacts (or potentially impacts) the self; the emotion consists, roughly, in this feedback mechanism, whereas actually <span style="font-style: italic;">feeling</span> the emotion depends on consciousness, i.e., on recognition that the feedback loop is represented. On this model, why not talk about emotions appropriate to a robot? Give it some RAM, give it some CAD software that allows it to model its "self" and environs, and some light and touch sensors that permit it to sense objects and landscapes. Now program a basic set of attraction/avoidance responses. Bingo, you've got robot emotions. Now the <span style="font-style: italic;">feeling</span> of an emotion, as Damasio puts it - that will be a little harder. But is it inconceivable? It depends, because this HOT stuff (Higher-Order Thought, for those socially well-adjusted souls out there who don't spend your lives reading philosophy of mind lit) can get very slippery. Does the feeling require another feeling in order to be felt? And does that require another feeling, etc.? I suppose not, or no one would pause for 2 seconds thinking about this theory. One HOT feeling is enough, then. Great. RAM 2 solves the problem; the robot now has a chip whose function is to recognize what's being represented on the other chip. This is the C-chip (not to be confused with C-fibers) where Consciousness resides, and it produces the real feelings that we (mistakenly, if Damasio is right) call "emotions". So, we're done - consciousness, feelings at least, are represented in the C-chip, and therefore <span style="font-style: italic;">felt</span>. Now we know <span style="font-style: italic;">what it's like</span> to be a robot: it's like having second-order representation of your emotions in a C-chip.
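If you doubt how little it takes to satisfy the letter of that model, here is a deliberately toy sketch, in Python, of the two-level architecture I just described. Every name in it (Percept, EmotionLoop, CChip) is my own invention for the sake of illustration; it corresponds to nothing in Damasio and to no actual robotics code:

```python
# Toy sketch of the two-level "robot emotion" architecture described above.
# All names here are invented for illustration; this is nobody's real model.

from dataclasses import dataclass

@dataclass
class Percept:
    """A sensed external fact and its (positive or negative) impact on the 'self'."""
    label: str
    impact: float  # > 0 attractive, < 0 aversive

class EmotionLoop:
    """First-order feedback: a crude self-state plus approach/avoid responses."""
    def __init__(self):
        self.state = 0.0  # a one-number stand-in for the represented 'body state'

    def react(self, percept: Percept) -> str:
        self.state += percept.impact
        return "approach" if percept.impact > 0 else "avoid"

class CChip:
    """Second-order monitor: a HOT-style re-representation of the first-order loop.
    On the model sketched above, the emotion counts as *felt* only when it is
    re-represented here."""
    def __init__(self, loop: EmotionLoop):
        self.loop = loop

    def feel(self, percept: Percept) -> str:
        reaction = self.loop.react(percept)              # first-order "emotion"
        return f"I notice I {reaction} {percept.label}"  # second-order "feeling"

robot = CChip(EmotionLoop())
print(robot.feel(Percept("warm light", 1.0)))  # → I notice I approach warm light
print(robot.feel(Percept("sharp edge", -1.0)))  # → I notice I avoid sharp edge
```

Thirty-odd lines and we have "emotions" plus a monitor that "feels" them - which is exactly why satisfying the letter of such a model proves nothing about what it's like to be the robot.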
And now we can end this blog...<br /><br />Unless we are concerned, with Henig, that <span style="font-style: italic;">still</span> all we have are hunks of metal tethered to computers. Let's move on. Leo, the "theory of mind" Bot, M.I.T. calls "the Stradivarius of expressive robots". Leo looks a bit like a Pekingese with Yoda ears. If you look at the demo on the web site you can see why Henig was excited about seeing Leo. A researcher instructs Leo to turn on buttons of different colors, and then to turn them "all" on. Leo appears to learn what "all" means, and responds to the researcher with apparently appropriate nods and facial expressions. Leo also seemed capable of "helping" another robot locate an object by demonstrating that the Bot had a false belief about its location. Thus, Leo appears to have a theory of mind. (This is a silly way of putting it, but it's not Henig's fault; it's our fault, for tolerating this kind of talk for so long. Leo has apparently inferred that another object is not aware of a fact that Leo is aware of; is this a "theory of mind"?) But, says Henig, when she got there it turned out that the researchers would have to bring up the right application before Leo would do a darned thing. Was this some kind of surprise? "This was my first clue that maybe Leo wasn't going to turn out to be quite as clever as I thought." (p.34) If I were an AI person I would wonder what sort of a worry this was supposed to be. I would say something like: "Look, Robin, do you wake up in the morning and solve calculus problems before you get out of bed? Or do you stumble into the kitchen not quite sure what day it is and make some coffee to help boot up your brain, like the rest of us? Why would you expect Leo to do anything before he's had his java?"
Well, complains the disappointed Henig, once Leo was started up she could see on computer monitors "what Leo's cameras were actually seeing" and "the architecture of Leo's brain. I could see that this wasn't a literal demonstration of a human 'theory of mind' at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects." (p.34). Leo was not even recognizing objects per se, but magnetic strips - Leo was in part an elaborate RFID reader, like the things Wal-Mart uses to distinguish a skid of candy from a skid of bath towels. Even the notion that Leo "helped" the other Bot turns out to have been highly "metaphoric" - Leo just has a built-in group of instruction sets called "task models" that can be searched, compared to a recognizable configuration of RFID strips, and initiated based on some criteria of comparison.<br /><br /><span style="font-style: italic;">And what exactly do humans do that's so different?</span> You know what the AI person, and many a cognitive scientist, is going to say: after tens of millions of years of evolution from the first remotely "conscious" living thing to the brain of Thales and beyond, the adaptive mechanisms in our own wiring have become incredibly sophisticated and complex. (So how do you explain Bush, you ask? Some questions even science can't answer.) But fundamentally what is going on with us is just a highly evolved version of the simple programming (! - I wouldn't want to have to write it!) that runs Leo and Cog and Kismet. What conceivable basis could we have for thinking otherwise?<br /><br />Henig goes on to talk mainly about human-robot interaction, and why the illusion of interacting with a conscious being is so difficult to overcome. Here, as you might expect, the much-ballyhooed "mirror neurons" are hauled out, along with brain scans and other paraphernalia. I don't have too much to say about this.
There are certainly hard-wired reactions in our brains. One could argue that what makes humans different from all possible androids is that we can override those reactions. A computer can be programmed to override a reaction too, but this merely amounts to taking a different path on the decision tree. It overrides what it is programmed to override, and overrides that if it is programmed to do so, etc. But someone will say that that is true of us too; we merely have the illusion of overriding, but it is just another bit of hard-wired circuitry kicking in. Since this spirals directly into a discussion of free will I'm going to circumvent it. I think evolved, genetically transmitted reaction mechanisms may well play a part in our social interactions, and if some key cues are reproduced in robots it may trigger real emotions and other reactions. What happens once that button is clicked is a matter that can be debated.<br /><br />The article concludes with a variety of surmises on consciousness, citing Dennett, philosophy's own superstar of consciousness studies, and Sidney Perkowitz, an Emory University physicist who has written a book on the human-robot question. Consciousness, says Henig, is related to learning and emotion, both of which may have occurred already at the M.I.T. lab, though only Brooks seems to think the robots actually "experienced" emotions in the sense that Damasio requires. Dennett says that a robot that is conscious in the way we are conscious is "unlikely"; John Haugeland said the same thing in 1979; robots "just don't care", he says (see "Understanding Natural Language"). And these are some of the people who are most inclined to describe the mind as in some sense a computational mechanism located in the structure of the brain. But people who would go much further are not hard to find. "We're all machines", Brooks is quoted as saying. "Robots are made of different sorts of components than we are...
but in principle, even human emotions are mechanistic". (p.55) He goes on: "It's all mechanistic. Humans are made up of biomolecules that interact according to the laws of physics and chemistry." (I'm glad he didn't say "the laws of biology".) "We like to think we're in control, but we're not." You see, it's all about free will. These cog sci guys want to drag us into a debate about free will. No, I take that back, they have <span style="font-style: italic;">solved</span> the problem of free will and they want us to see that. Or possibly, they have been reading Hobbes and want to share the good news with us. Whatever.<br /><br />Henig's elusive, ambivalent position on robotic consciousness is easy to sympathize with, and as anyone who has read this post thoughtfully can tell, the ultimate point of my article is not to take her to task for being naive or ambivalent. It is that perspectives like the one coming from Brooks have insinuated themselves into our culture - into the media, philosophy, and cocktail parties - and legitimized the notion that whatever is left of the mind-body problem will just be taken care of by the accumulated baby steps of Kismets and Leos and Automs. Statements like the ones Brooks makes are tokens of the inability of people to think outside their own intellectual boxes. There is plenty of scientific evidence for the fact that mental processes go on below the level of consciousness (blindsight, etc.); there is not the remotest shred of evidence that these processes are mainly computational, or that computations, however complex, can yield outputs that have more than a superficial similarity to any kind of animal consciousness. 
There is every reason to believe that every fact and event in the universe has a scientific explanation; there is not the slightest reason to believe that the explanation of consciousness is more like the Cartesian-Newtonian mechanisms behind the motion of mid-sized objects at slow speeds than it is like the probabilistic fields of quantum electrodynamics. We don't have a clue how consciousness works; not at the neural level, and certainly not at the computational level. We are in the same position that Mill was in during the 19th century when he said that whatever progress we might hope for in the area of brain research, we are nowhere near knowing even whether such a research program will produce the results it seeks, much less what those results might be. We very likely do not even have two psychologists, neurologists or philosophers who agree with one another on what an emotion is, much less whether a robot could have one.<br /><br />What's more, at present we have no philosophical or other justification for the notion that when we are trying to solve the mind-body problem, or talk about the mind or consciousness at all, what we are looking for should be thought of at the level of explanation of basic science or computation rather than traditional philosophy or psychology. People have brought all sorts of tools to the study of literature - lately, even "evolutionary literary studies" have gained a foothold, to say nothing of Freudian, Marxian, linguistic, deconstructionist or anthropological approaches. Does any of this demonstrate that the best understanding of literature we can obtain will be through these approaches, which subvert the level of literary analysis that studies the author's intentions, rather than through traditional literary criticism or philosophical approaches to fictionality? 
I don't know that philosophers or literary critics are in general ready to concede this point, though obviously various practitioners of postmodernism and other such trends would like to have it that way. Then why would we concede that the best approach to the mind-body problem is through AI, IT, CS, or other two-letter words? We might be better off reading William James (who was hardly averse to scientific study of the mind) than reading Daniel Dennett. Or reading Husserl than reading Damasio. We'd certainly be better off reading Wittgenstein on private language than Stephen Pinker on the evolutionary basis of cursing.<br /><br />Put all the C-chips you want into Leo or Nico. Putting in a million of them wouldn't be that hard to do these days. Give them each 1,000,000 C-chips, 10 petabytes each; what will that do? Get them closer to consciousness? They're still hunks of metal tethered to computers, and for all we can tell, nothing that any AI lab director says is going to make them anything more.<br /></span>Tony Alterman (posted 2007-09-16)<br /><br />What Is It Like to Be a Parrot Named Alex?<span style="font-family:trebuchet ms;"><span style="font-family:trebuchet ms;">(9/27/07: Very minor changes made, and a number of potentially misleading typos corrected.)<br /><br />What to do for news? The war in Iraq goes numbingly on and on; the Presidential election is already old and still a year off; there are no national scandals, or perhaps so many that none stands out; and </span>most recent natural and <span class="blsp-spelling-error" id="SPELLING_ERROR_0">manmade</span> disasters pale in comparison with those of yesteryear (at least in their sensational aspect - famine and disease are not big headline-grabbers, though perhaps more devastating in reality). 
Perhaps someone important has died recently? There you go! By the lights of <span style="font-style: italic;">New York Times</span> editors, the untimely death of Alex the Parrot is definite breaking news. First, <a href="http://www.nytimes.com/2007/09/11/science/11parrot.html">an o-bird-<span class="blsp-spelling-error" id="SPELLING_ERROR_1">uary</span></a>, then an <a href="http://www.nytimes.com/2007/09/12/opinion/12wed4.html">editorial</a>, followed by numerous letters, and now - <span style="font-style: italic;">squawk</span> - a <a href="http://www.nytimes.com/2007/09/16/weekinreview/16john.html">"Week-in-Review" article</a> on the implications of Alex's famous efforts at learning, speaking, and conceptualizing. (Not to mention previous articles, like "A Thinking Bird? Or Just Another Bird-Brain?" (10/9/99), <a href="http://www.123compute.net/dreaming/knocking/alex.html">reproduced</a> on 123compute.net.) This is more space than they have devoted to most non-human individuals, as well as to about 99.999% of human individuals. Something big must be happening! Could it be - another opportunity to question whether consciousness really amounts to much more than rote responses? If you are willing to see that question as being implied by its reverse - Is rote learning a form of consciousness? - then the answer is Yes.<br /><br />I was planning to post the last of three introductory pieces for this blog under a title like, "What Is It Like to Be Anton <span class="blsp-spelling-error" id="SPELLING_ERROR_2">Alterman</span>?" or "What Is It Like to Be Or Not to Be?", and I somewhat regret not doing that (especially since I want to take credit for the latter title, which is rich in possibilities). Of course many such satirical titles have been used in professional journal articles in response to Thomas <span class="blsp-spelling-error" id="SPELLING_ERROR_3">Nagel's</span> (in)famous "What Is It Like to Be a Bat?", but one more wouldn't hurt. 
However, in the interest of being relevant and timely I opted to join the Alex debate, without changing very much what I intended to say.<br /><br />For those of you who have not followed it, Alex was a gray parrot who was being trained by Dr. Irene <span class="blsp-spelling-error" id="SPELLING_ERROR_4">Pepperberg</span> to use words to indicate concept recognition. Alex was reportedly able to identify (within limits) colors, numbers, shapes and materials, to combine these concepts in interesting ways, to understand simple, stripped-down English language syntax, to respond to some situations in a way that apparently mimicked human emotive responses, and to verbally indicate his expectations of reward for performing correctly ("Want a nut!"). In some cases Alex was able to formulate responses that bordered, or appeared to border, on combining known concept-words to form new concepts. Alex did not have a huge vocabulary - just over 100 words - but in applying these words selectively in response to questions, he gave the impression that he understood the references of not only object names but property words and emotional terms.<br /><br />There has been much scientific brouhaha about Alex. Typically, people involved in cognitive research have insisted that Alex's reactions are just sophisticated stimulus-response </span><span style="font-family:trebuchet ms;">behavior, whereas human language is rule-based and reflects some internal representation - a "compositional" module from which we produce the infinite transformations of basic syntax that we know as human language. </span><span style="font-family:trebuchet ms;">Even Alex's trainer, Dr. 
<span class="blsp-spelling-error" id="SPELLING_ERROR_5">Pepperberg</span>, denied that Alex was using language, describing it rather as complex communication of some other sort.</span><span style="font-family:trebuchet ms;"> By implication, Alex was not demonstrating that this important aspect of human consciousness is available to creatures with pea-sized brains. Consciousness of the human type is still safe from animals, just as Descartes wanted it to be; and apparently talking parrots are no exception.<br /><br />Why the resistance? One thing that occurs to me is that the huge debate over human consciousness would take on a very different shape if it could be identified in a more pure and simplified form in creatures who are perhaps more closely related to dinosaurs than to humans. What is it like to be Alex? Well, ask him! Of course, the answer you get would not be even as sophisticated as that of a 3-year-old child. But who wants sophisticated? That's only going to distort the unvarnished report of the nature of being so-and-so. Consider the idea that we train Alex, or his successor, to have just exactly enough grammar and vocabulary to answer the question: "What is it like to be you, Alex?" It is pretty clear what kind of answer we would get: "Want a nut!" Does this mean Alex just can't learn enough to answer the question? Why is that not a satisfactory answer?<br /><br />Someone (I hope) is reading this blog. Ask yourself: What is it like to be you? What kinds of answers are at your disposal? You can describe experiences and sensations you enjoy, desires and drives that you have, pains, emotions, things you know or believe, worries, creative impulses; in short, the things that go to make up the bulk of your cognitive life. Is that a better answer to the question "What is it like...?" Why so? Being Alex is pretty simple. "Want a nut" is a large part of it. But Alex would also say "I love you" to Dr. 
<span class="blsp-spelling-error" id="SPELLING_ERROR_6">Pepperberg</span> at the end of the day, or "I'm going away now" to indicate resistance to a training session. So, with some help, Alex might have been ready to add to the "What is it like...?" response: "Sometimes I don't want to train and I say the thing Irene says when she goes away, and sometimes I realize that I am going to be alone and I don't like that so I say the thing that seems to mean Irene will come back." Pretty basic; Alex has some likes/dislikes beyond cashews, and he knows (and possibly feels) the difference between company and no company.<br /><br />Okay, that's what it's like to be Alex. Have you figured out what it's like to be you? I mean, what it's <span style="font-style: italic;">really</span> like, not these more sophisticated Alex-type responses. You have, of course, direct access to your own phenomenal consciousness, and this, the story goes, has its intuitive feel, or style, or shape, or... <span style="font-style: italic;">quality</span>, that's it. As Michael Tye tells us </span><span style="font-family:trebuchet ms;"> <span style="font-style: italic;">ad <span class="blsp-spelling-error" id="SPELLING_ERROR_7">nauseam</span></span></span><span style="font-family:trebuchet ms;"> in the first chapter of his book <span style="font-style: italic;">Ten Problems of Consciousness</span>, "for feelings and perceptual experiences, there is always something it is <span style="font-style: italic;">like</span> to undergo them" (p.3). 
Or take Peter <span class="blsp-spelling-error" id="SPELLING_ERROR_8">Carruthers</span> (our token HOT theorist for this post) <a href="http://eprints.assc.caltech.edu/151/01/Animal_consciousness_might_not_matter.pdf">who writes</a>: "Phenomenally conscious states are states that are <span style="font-style: italic;">like something</span> to undergo; they are states with a subjective <span style="font-style: italic;">feel</span>, or phenomenology; and they are states that each of us can immediately recognize in ourselves..." ("Why the Question of Animal Consciousness Might Not Matter Very Much", <span style="font-style: italic;">Philosophical Psychology</span> <span style="font-weight: bold;">18</span> (2005) 83-102, p.84). Well, come on, now, you're an articulate and reflective sort, what the heck is it <span style="font-style: italic;">like</span> after all? Tom, Peter, Michael, you said it, it's <span style="font-style: italic;">like</span> something to be in your (directly accessible) state of consciousness. So, what's it <span style="font-style: italic;">like</span>? How does it <span style="font-style: italic;">feel</span>? Cat got your tongue?<br /><br />Bats, says <span class="blsp-spelling-error" id="SPELLING_ERROR_9">Nagel</span>, operate on sonar, and "this appears to create difficulties for the notion of what it is like to be a bat" (<span style="font-style: italic;">Mortal Questions</span>, p.168). Moreover, according to <span class="blsp-spelling-error" id="SPELLING_ERROR_10">Nagel</span>, we cannot even know what it is like for people who are deaf and blind from birth to be the way they are (p.170). Many of those people, nevertheless, can use language about as well as the rest of us, so clearly <span class="blsp-spelling-error" id="SPELLING_ERROR_11">Nagel</span> is writing off their ability to communicate linguistically as a conveyor of "what it's like". 
But no such difficulty attends knowing what your <span style="font-style: italic;">own</span> experience is like; the only difficulty, according to <span class="blsp-spelling-error" id="SPELLING_ERROR_12">Nagel</span>, is that we can't express it in "objective" language such that the next guy can grok it. (If you haven't read Robert Heinlein's <span style="font-style: italic;">Stranger in a Strange Land</span> please proceed to the nearest bookstore; and pick up a copy of Kurt Vonnegut's <span style="font-style: italic;">Cat's Cradle</span> too, and maybe Thomas Pynchon's <span style="font-style: italic;">The Crying of Lot 49</span> while you're at it, as I have no compunction about using neologisms of philosophical interest from popular literature. Anyway, grokking is kind of like allowing a meme into one of your mental ports. <span style="font-style: italic;">Capiche</span>?) So there you have it: let Alex talk all he wants; let you talk all you want; let even Tom <span class="blsp-spelling-error" id="SPELLING_ERROR_13">Nagel</span> talk all he wants, ain't nobody going to express <span style="font-style: italic;">what it's like to be them</span> in such a way that the next animate, linguistically enabled being can read it and <span style="font-style: italic;">know</span> what it is like to be them.<br /><br />But if that's the case, why deny that Alex was using language? It seems that the phenomenology behind the use of words remains forever hidden and indecipherable; the information is simply lost. Riemann discovered he could reconstruct mathematical landscapes from the zeros of the zeta function (see Marcus du Sautoy, <span style="font-style: italic;">The Music of the Primes</span>); but apparently, no one can reconstruct phenomenological landscapes from their base level expressions. The equations of language simply fail as the conveyors of information about the curves and lumps from which they originate. 
If our phenomenological reports are the equations of our system, their original coordinates are simply lost. We have a scrambled vista of individual reports, but this gives us no leg up on the original landscape that shows <span style="font-style: italic;">what it's truly like</span>. At best we can build our own landscape by association with the terms of someone <span class="blsp-spelling-error" id="SPELLING_ERROR_14">else's</span> report. With Alex's report, our ability is that much more limited. A bat's report, forget it: <span style="font-style: italic;"><span class="blsp-spelling-error" id="SPELLING_ERROR_15">screeeeeeyyyyeeeee</span></span> builds about as flat a landscape as Riemann's zeros, with no possibility of recovering the geographic information. But the bat idea is, as <span class="blsp-spelling-error" id="SPELLING_ERROR_16">Nagel</span> himself admits, kind of superfluous; all we <span style="font-style: italic;">ever</span> get through language - indeed through any form of communication - is, you might say, an instruction to translate these words into associations from your own experience. And the less you can associate - very little with Alex, just about zip with a bat - the less you can even do that. The landscape-in-itself, that untouchable "what it's like" of the other, remains all zeros, for all conscious creatures whatsoever.<br /><br />Now all I want to say is that this whole conception is incoherent (in the vernacular, rubbish - but we don't say that in nice philosophical discussions, even across the social tables at <span class="blsp-spelling-error" id="SPELLING_ERROR_17">APA</span> meetings. But let them try bringing the meeting to Brooklyn for a change...). You cannot describe a problem as a gap in the capabilities of objective language (nor as beyond the limit of our cognitive capabilities, but I'll deal with <span class="blsp-spelling-error" id="SPELLING_ERROR_18">McGinn</span> and his school (?) some other time). 
The phrase <span style="font-style: italic;">what it is like</span> does not describe generically some objective thing that can't be objectively described in any specific instance. It is not a placeholder for something that we await better forms of expression for. There is no such place. There is no "there" there, or rather no "what" there. The phrase "what it is like",</span><span style="font-family:trebuchet ms;"> to borrow another of Wittgenstein's analogies</span><span style="font-family:trebuchet ms;">, is a linguistic wheel that turns without moving any part of the mechanism. </span><br /><span style="font-family:trebuchet ms;"><br />To go deeper we have no choice but to consider how the expression "what it's like" is being used here. It seems that there is supposed to be some objective quality, <span style="font-style: italic;">likeness</span>, that inheres in conscious states, but which we lack the means to express in words; if only we could, we could say <span style="font-style: italic;">what</span> it's <span style="font-style: italic;">like</span>. But when Tom <span class="blsp-spelling-error" id="SPELLING_ERROR_19">Nagel</span> or Michael Tye reflect on their own consciousness, what exactly is the <span style="font-style: italic;">likeness</span> that they find there a likeness <span style="font-style: italic;">to</span>; what is that very thing that they recognize and that is objectively different from what a bat would find, if bats were self-reflective? "Well, that's the problem, you see; we can't say! Maybe you just don't understand the problem. Everyone else seems to understand it. How can we help you?" This is such a cop-out, it does nothing but extend the attempt to wrap the reader in nonsensical uses of common expressions. 
The assumption that there just <span style="font-style: italic;">is</span> something that it's <span style="font-style: italic;">like</span> to have this or that form of consciousness should be recognized as one of the most peculiar and frankly nutty ideas in philosophy; but it goes on and on as the eye of the hurricane in the consciousness debate. Examples and counterexamples fly through the air all around it, blowing inverted spectra and swamp men all over the place, and <span style="font-style: italic;">likeness</span> just calmly stands stock still in the middle of it, laughing at the maelstrom. But the phrase does not denote anything; not, of course, that there is no such thing as consciousness (fie on those who hope to turn this into an argument for <span class="blsp-spelling-error" id="SPELLING_ERROR_20">eliminativism</span> or behaviorism) but there is not some particular way that consciousness feels, seems, <span style="font-style: italic;">is</span>. We think: if only there were some brilliant enough mind to find the way to express it, some Shakespeare/Einstein of consciousness, it could be done, and everyone would say, "Oh yes, of course, <span style="font-style: italic;">that's</span> what it's like" and the problem would be solved. (Maybe we should start a committee?)<br /><br />The "hard" problem of consciousness is often conflated (on my reading) with the problem of expressing the nature of particular phenomenal states. What's it <span style="font-style: italic;">like</span> to see green? Well, for one thing, it's about the same thing to see green whether you are normally color-sighted and seeing a green light, or you have inverted spectra and are looking at what most people see as a red light. And, I think, this goes right to the bottom: it is the same for a llama to <span style="font-style: italic;">see green</span> as it is for you. And the same for a bat, if they can. 
(To my knowledge, bats are not actually blind, nor do they use sonar exclusively, but if I'm wrong, <span class="blsp-spelling-error" id="SPELLING_ERROR_21">NBD</span>.) So the difference in <span style="font-style: italic;">likeness</span> must lie somewhere else. You would not think so from reading Tye and others who have taken to ascribing a <span style="font-style: italic;">likeness</span> to each and every type of perception or experience; which gets us nowhere, and is highly misleading as to the original point of <span class="blsp-spelling-error" id="SPELLING_ERROR_22">Nagel's</span> article. There is supposed to be something it is <span style="font-style: italic;">like</span> to be in a general state of consciousness. That this leads to some communicative gap is more believable, at least, than that there is some sort of problem with seeing green or feeling pain. If the point of the whole discussion of phenomenal consciousness were that there is not some linguistic expression that just <span style="font-style: italic;">is</span> the glassy essence of an experience, to be passed from mind to mind like genes that can reproduce entire individuals, it would have been obvious very quickly that there is no such form of language, that there never will be, and that this is not a "problem" so much as a misunderstanding of how the language of sensations functions. 
(I can't resist putting in a little plug here: Wittgenstein dealt with this gap between phenomenon and expression at length in his 1929-30 <span class="blsp-spelling-corrected" id="SPELLING_ERROR_23">manuscripts</span>, and it played a crucial role in the transition to his later philosophy and the private language argument; this is the subject of my thesis, <span style="font-style: italic;">Wittgenstein and the Grammar of Physics</span>, <span class="blsp-spelling-error" id="SPELLING_ERROR_24">CUNY</span> Graduate Center 2000).<br /><br /></span><span style="font-family:trebuchet ms;">"But what about black and white Mary, doesn't she learn something new when she sees green grass, and isn't that "something" an objective bit of knowledge about how the universe is? And if so, doesn't that lead to the same problem? Because Mary can't say what the difference is, but she definitely learned <span style="font-style: italic;">something</span>, not <span style="font-style: italic;">nothing</span>." You are thinking: things seem very different to Mary, her world is suddenly phenomenologically richer in a very obvious way, and we can't deny that that is some real difference in the physical universe. So who wants to deny anything? :-) We did not need Mary to demonstrate this. We had the duck-rabbit to demonstrate it a long time ago. There is a real difference in the universe when we first saw only the duck and now see the rabbit; it is not <span style="font-style: italic;">nothing</span>, is it? 
We are in a different mental state (I am conceding, here, for the sake of argument, the customary notion of a "mental state", <span style="font-style: italic;">pace</span> Wittgenstein's objections to this use of the term), and on the assumption that the universe contains only natural laws, measurable forces and physical objects, that is either a definite material difference or it's an illusion (which is itself a <span class="blsp-spelling-corrected" id="SPELLING_ERROR_25">material</span> difference, etc.)<br /><br />We also have Alex to demonstrate it. Do you think that when Alex started to discriminate colors verbally his world became phenomenologically richer? I do, and not because words were added to his world. I don't believe that Mary actually "learns" anything when she is presented with an unexpected flood of color sensations. But when she later begins to conceptualize what she perceives, she does. But of course, she can also then <span style="font-style: italic;">say</span> what the difference is; if you can't <span style="font-style: italic;">say</span> a concept then I don't know what you can say. Alex was not very special just for squawking "red circle wool" and "green triangle metal". We have insufficient evidence to say that Alex really had concepts, but he gave enough of an impression that he did that we want to attribute to him a world richer than that of a feathered machine. Like color-concept-Mary, it appears, at least, that Alex became in some sense <span style="font-style: italic;">aware</span> of differences that were stored in the raw data of perception, just as connecting the dots on a line graph can bring out relationships that were not apparent before. Alex's discovery is real; so is Mary's. But neither is some willowy subjective <span style="font-style: italic;"><span class="blsp-spelling-error" id="SPELLING_ERROR_26">qual</span></span> that can't be expressed objectively. 
Both have added to their consciousness an awareness of the color spectrum. That's it. That's <span style="font-style: italic;">what it's like</span> in this case. Nothing like those <span class="blsp-spelling-error" id="SPELLING_ERROR_27">Tractarian</span> edicts about the logic of language that "can't be said". If "now I see colors" is not a direct expression of what it's like to be in the new perceptual state, then we have the wrong idea of what an "expression" can accomplish.<br /><br />"What is it <span style="font-style: italic;">like</span>?" is used to ask for something that can be said. Applied to something that <span style="font-style: italic;">by nature</span> can't be said, the question is nonsensical. "Where is Thursday?" Hmmmm... "Okay, what is Thursday <span style="font-style: italic;">like</span>?" Err, I wake up at 7:15, give the kids breakfast, drop them off at school, then I take a shower... Or rather, I have the foggy-head feeling, then I feel the cool sweet tangy taste of orange juice, then the anxiety-to-get-to-work-on-time feeling... Is that what you're looking for? Try asking a construction worker or cab driver, "What's it like to be you?" You'll get an answer of some sort. What's wrong with the answer? Nothing, it's the kind of answer you're <span style="font-style: italic;">supposed</span> to get when you ask a question like that. Even philosophers can answer the question. "Well, we all agree that there is <span style="font-style: italic;">something</span> that attends human conscious experience that is different from what attends avian conscious experience, if avian experience is conscious at all, and that <span style="font-style: italic;">should</span> have some expression, but it <span style="font-style: italic;">doesn't</span> (so far). <span style="font-style: italic;">That's</span> the problem." Well, we all know that the word "hot" does not feel hot! Is <span style="font-style: italic;">that</span> the problem? 
And the word "green" does not always look green (in fact it could look red). Is <span style="font-style: italic;">that</span> the problem?? And the words "normal, perceptually enabled, self-aware human waking consciousness" do not feel like conscious experience. Is <span style="font-style: italic;">that</span> the problem??? If not, <span style="font-style: italic;">what is the problem like</span>?<br /></span><br /><span style="font-family:trebuchet ms;">At the level of consciousness as a whole, there just <span style="font-style: italic;">is</span> nothing that it's <span style="font-style: italic;">like</span>. That is not to say consciousness is nothing; there is just nothing that it's <span style="font-style: italic;">like</span>. Not because there is nothing similar enough (that would be a normal use of the expression and would not lead to any philosophical problems). It's because being <span style="font-style: italic;">like</span> in the way it is used here is actually just a meaningless colloquial expression, a familiar linguistic crutch: "You know, it's like, I don't know what he wanted, but he was like, mad at me, so I like sat there and wondered what the hell am I supposed to say?" This is really how the expression is being used here! Not the proper use that we employed in putting the question to the cab driver. Or else we are imagining that </span><span style="font-family:trebuchet ms;">we wake up one day as a bat, </span><span style="font-family:trebuchet ms;">while somehow retaining our own consciousness as background information, and go "Oh, how weird, I'm not enjoying this at all, I want things to be <span style="font-style: italic;">like</span> my former conscious states!" (To think that philosophy glides along on such B-movie fantasies is sobering.) What role does "<span style="font-style: italic;">like</span>" have here? We actually have two things before us to compare, so there's nothing wrong with it. 
So we can also say, "Human consciousness is not <span style="font-style: italic;">like</span> bat consciousness." And now the grand conceit that throws everything off: "So <span style="font-style: italic;">what</span> is it <span style="font-style: italic;">like</span>?" And what could the answer possibly be? Only things of this sort: "It's like Martian consciousness; I know, because I was abducted by aliens and temporarily had a Martian brain implanted, into which was downloaded all the data of my own mind for comparison, and you know what? Martian consciousness was very much like our own." But if you want some different type of answer, where the "what" is a placeholder for some really brilliant analysis that describes for all to see just "what" it's like, you're a naughty person out to set a grammatical trap for unsuspecting philosophers. And almost everyone who has discussed consciousness in the last 25 years has landed in this trap.<br /><br />What is it like to be Alex? Or a bat? That we can't answer these questions is not an indication of a problem, philosophical or otherwise, and certainly does not point to a gap in materialism. Materialists generally dismiss the Nagel problem without a great deal of fanfare, wondering (if materialists wonder) what exactly the problem is. As well they should. Materialists have missed their chance to hand over one of the supreme ironies of contemporary philosophy: they could have quite profitably quoted the Wittgenstein remark you have all been waiting for me to produce: "The phenomenal quality of consciousness is not a <span style="font-style: italic;">something</span>, and <span style="font-style: italic;">not a nothing either</span>." Of course they would never quote Wittgenstein until they had tenure. But it would be perfect. Wittgenstein was a materialist. (Yes! Like you, and me...) 
He just was not a <span style="font-style: italic;">naive materialist</span>, the kind that believes we can eventually substitute talk about the brain, and its structures and processes, for talk about the mind. But he would surely have said that talk about "what it's like" is a grammatical error based on the false inference that there is "something" consciousness is like because there is not "nothing" it's like. The materialist should say: sure it's like something: it's like having this set of neurological processes in these kinds of neurological structures. And that's a perfectly good answer. It has virtually no philosophical import whatsoever for any question in philosophy of mind, epistemology, cognitive psychology, aesthetics, ethics or anything else; but it is one of the few answers that provide a sense to the question "What is it like?" But if you reject this, and the cab driver's answer, and all other reasonable answers, then of course you must say: no, there is not <span style="font-style: italic;">something</span> it's like, but it's not like <span style="font-style: italic;">nothing</span> either.<br /><br />"So I thought Alterman was out to persuade us that cog sci approaches to consciousness are hopeless. But among other things, he offers us a passionate defense of one of the great eliminativist propositions, that qualia don't exist. Then he tells us that the only <span style="font-style: italic;">something</span> that consciousness can be like is gray matter. Boy, is he confused." Well, I knew that was coming. But all I will say right now is this: it is a virtual certainty that if you posit a certain ontology as the very essence of consciousness, and that ontology is vacuous, the next thing that will happen is that someone will come along and say, "Hey, here's a much better ontology, it's called physical objects, which appear here as neurons, and it surely makes your wan and evanescent ontology of qualia otiose, not to mention boring and stupid". 
Just as the unacceptably spooky Cartesian immaterial substance gave way to phenomenologies of various sorts, inexpressible but somehow objectified qualia are an open door through which cognitive scientists can run at full speed, with wires, test tubes and forceps flashing, declaring that there are no such ghostly objects, and that "neurophilosophy" will save the day for consciousness studies. Nothing has done more damage to the effort to understand consciousness than the notion of a subjectively objective quality of "what it's like". This phrase should be banned from the language, except as a historical reminder of how the discussion of consciousness was distorted for three decades.<br /><br />Let me close with a few further thoughts about Alex. I think Alex was using language, in at least the sense that the person described by St. Augustine in the passage that opens the <span style="font-style: italic;">Philosophical Investigations</span> was using language. Wittgenstein's point, of course, is that <span style="font-style: italic;">human</span> language cannot be reduced to the simple game of ostensive reference, where the rest of the field "takes care of itself". Alex's accomplishments may have actually been slightly beyond those of Augustine's infant self (not his real infant self, which would have been 100 times more sophisticated than Alex, but the one he describes). But even if they were not, it is clear, as even the <span style="font-style: italic;">NY Times</span> writers brought out, that Alex's behavior prompts us to ask what we ourselves are doing when we use language. Indeed, I don't think I can say it better than Verlyn Klinkenborg does in the <span style="font-style: italic;">Times</span> editorial: "To wonder what Alex recognized when he recognized words is to wonder what we recognize when we recognize words." (George Johnson, BTW, ends his article with a brief musing on the Nagel problem.) "Using language" is not necessarily an on/off situation. 
Wittgenstein says that the Augustine picture represents "a language simpler than ours". That is not <span style="font-style: italic;">no</span> language, it is a language simpler than ours. So perhaps Alex was using a language <span style="font-style: italic;">much</span> simpler than ours. How much? Whales communicate through sounds, and I don't know that their sounds have no syntax (indeed it would make little sense if they had <span style="font-style: italic;">none</span>); but that is certainly a language <span style="font-style: italic;">much</span> simpler than ours. Whale talk does not involve concepts. What about dog and cat talk? "You're on my turf, pipsqueak, get out of here before I give you a lesson you'll remember!" That's us, awkwardly translating into complex grammar and concepts what canines and felines express with "grrrrr..." and "yeeeoooooow"! The economy of their language shouldn't prevent us from calling it language <span style="font-style: italic;">at all</span>. Alex surely one-upped these language users, being able to use human noises to express simple desires and indicate recognitions.<br /><br />But to say that Alex was using even a simple language is to say something that somewhat, though not completely, undermines the notion of an innate generative grammar. It is a far more radical idea that parrots have an innate ability to use human language, even to the extremely minimal degree that Alex did, than that humans do. But if Alex could be trained to use even a small subset of one human language, and could moreover demonstrate some of the combinatorial and syntactic capabilities that seem so peculiar to human verbal communication, the innateness hypothesis seems unnecessary. A few hours a day with a parrot doesn't even compare with the constant verbal coaxing we give to a young child, so if the prune-brained parrot can learn that much, surely we can account for human language cognition as rote learning. 
Now, the Chomskian view of language has a mixed relationship to the cog sci view of the mind. On the one hand, cog sci needs some machinery to explain human linguistic capabilities, and the notion of a highly evolved module that encodes these capabilities like a wet compiler is very appealing. But its appeal is more to computationalists like Jackendoff than to neuroscience types like the Churchlands. For example, Paul Churchland complains (see e.g., <span style="font-style: italic;">The Engine of Reason, the Seat of the Soul</span>) that Chomsky requires the rules to be <span style="font-style: italic;">represented</span> in the mind, and representation, we know, is a dirty word to Churchland. Neural nets are all we need, he says; a pox on your representational rules engine. So it is not clear whether Alex, if he challenges Chomsky, challenges cognitive science in general, though he may challenge some forms of computationalism.<br /><br />I suppose that innatists of any variety could get around this by saying that every creature of higher order than a clam may have evolved some minimal generic linguistic capability, which could be harnessed, through sufficient training, and assuming some innate vocal capabilities, to any human language. They would never get very far, but their lower order innate generative grammar would account for the possibility of an Alex. At some point, animal grammar would radically drop off, but the location of that point can be debated. But this whole reply seems a bit ad hoc, to me. It would be better to stick to your guns and deny that Alex could actually use language at all. That, however, seems to depend on an artificially rigid definition of what using language consists in, and is thus equally ad hoc. One could of course deny that language use has anything fundamental to do with consciousness, and insist it is therefore extraneous to the debate. 
This is a very dubious hypothesis, which I'm not even going to try to come up with a rationale for. Thus, any effort to erect an intellectual blockade between human and animal consciousness by virtue of a difference in linguistic capabilities is probably doomed to fail. Animals are conscious, or so I hold; and they use language. These two things may scale to one another, or they may not, since it has not been argued (by me, anyway) that the relationship between language and consciousness is necessary, directly proportional, or anything like that. But if we are talking about human consciousness in particular, we would probably do well to focus more on language use, and how it evolved, than on brain scans.<br /><br />I will leave the Alex discussion with the thought that the nature of a bat's consciousness may be far more accessible than we think. Perhaps Alex chose to leave his body to science. In which case his vocal apparatus would be available to others. There are plenty of people who love bats; and I'm sure they would like nothing better than to have one that talks. I am dreaming of Tom Nagel waking up one day to find, hanging upside down from his bookshelf, a bat, who greets him with the words: "It's like this..." and concludes: "And I want that published!"<br /></span>Tony Altermanhttp://www.blogger.com/profile/18136925406940818982noreply@blogger.com5tag:blogger.com,1999:blog-2489468916453210669.post-87930881127764993612007-08-27T12:59:00.000-04:002007-12-15T11:36:56.608-05:00Science, Philosophy and the Mind<span style="font-family:trebuchet ms;">I left for vacation (in Alaska) shortly after publishing my introductory post, and did not have access to the media I would normally look at to keep this blog current and relevant, nor to my reference materials on consciousness and cognitive science. 
But we're just getting started, and I have a few more preliminaries to add anyway, so perhaps it is just as well.<br /><br />It is pleasant to see that I have had a couple of readers already, and certain issues that clearly need to be addressed have already been raised. So the first thing I want to do here is discuss the relationship between philosophy and science in a very general way. This is not the place for an extended theoretical defense of my position; I merely state it so that readers have an idea where I'm coming from. I have referred to Wittgenstein and his position that there is a gap between the conceptual and linguistic tasks of philosophy and the factual and theoretical tasks of science. While my position on cognitive science and consciousness is partly informed by Wittgenstein's view, I do not subscribe to what might be a naive, or perhaps a correct interpretation of it. That is, I do not believe that science and philosophy are absolutely unrelated enterprises. My early college career was spent in scientific study, an interest I actively maintain, and I might note that Wittgenstein too had a lifelong interest in scientific developments (indeed the <span style="font-style: italic;">Tractatus</span> directly reflects some of Hertz's ideas). But perhaps he believed that concepts are more distinct from facts than I do. I think concepts are very liquid, and conceptual truths, though they are not factual truths, are informed by our changing knowledge of the natural world. The way I would put the relationship is this: <span style="font-style: italic;">science can narrow down the range of possible conceptual truths, alter the course of philosophical investigation by closing off some lines of thought, and sometimes suggest new philosophical strategies by analogy with physical strategies</span> (and this is not always a bad thing, though more on this later).<br /><br />A common example of a scientific truth can be used to show what I am talking about. 
"Heat is the motion of molecules" is an example of what is usually called a scientific reduction from the macro to the micro level. Heat is a macroscopic physical phenomenon that has scientific application and is subject to measurement and scientific study. It was discovered that heat occurs if and only if, and to the extent that, there is motion at the molecular level, so that one can equate greater molecular motion with a rise in temperature. Thus one physical phenomenon was "reduced" to another. In this manner, (a) certain scientific speculation about the physical concept of heat was cut off; (b) since the concept of physical heat now had a new physical basis, the phenomenological concept of heat could no longer have exactly the same meaning it did before, or play the same role in philosophical speculation, or be confused with the physical concept (and if you don't think of "heat" as a philosophical concept, the same could be said at some point for "energy", though the reasons are more complex than this simple "reduction"); (c) a strategy for the "reduction" of philosophical concepts was suggested. Thus a scientific finding had a direct and permanent impact on philosophical speculation. Similarly, the study of light, color, and the biology of vision could not but have an impact on the way we talk about color, light, vision, or perception in philosophy. It would be madness to speculate about the nature of "colors" and simply ignore the scientific facts. Such discoveries continually alter the scope and direction of philosophical speculation.<br /><br />This applies to consciousness too. For example, it is known that certain areas of the brain control certain mental functions, and that consciousness itself is not evenly distributed throughout the brain. It follows ineluctably that consciousness is not equally dependent on every mental function. 
People can lose significant functionality in the area of memory, recognition, sensory awareness, linguistic capability, and other critical forms of intelligence and still be "conscious" in the sense we normally mean it. On the other hand, people with some forms of epilepsy can apparently have most or all of these functions intact and not be entirely conscious (e.g., not respond to ordinary stimuli) for a period of time. It follows that these functions do not entirely depend on consciousness. These again are scientific results, the ignorance of which would simply lead philosophy down blind alleys.<br /><br />But in spite of all this, there is no reason to believe that these bits of knowledge we have acquired about the brain suggest that we are on the way - indeed, that there <span style="font-style: italic;">is</span> a way - to "reduce" consciousness to brain function. It is still far from clear that we will at some point be able to speak about physical entities and processes, eliminating, without remainder, all chatter about minds, intelligence, thoughts, ideas, beliefs, desires, motives, imaginings, and the like. It is the fervent hope of materialists of all sorts that this should be the case; that "folk" psychological concepts should be at most a shorthand for talking about what we know to be neural occurrences. The most sophisticated developments in cognitive psychology fall so far short of reducing anything that we don't even know what such a reduction would look like. For the most part, what they amount to is that when certain mental functions are performed, there is increased blood flow or electrical activity in certain parts of the brain. This is good for brain mapping, but not for figuring out what consciousness is. Extensions to these mappings are not much help either. 
For example, you can tell by mapping that some of the same regions light up when you imagine, remember, or dream of an object as when you encounter it first hand (have "knowledge by acquaintance" of it). We should hope that not too much money was expended on research that proves this, since most thoughtful people would have predicted something like it. But let it be granted that such discoveries are advances of some sort. Are they advances towards reducing the mind to brain functions? I don't see how. What is the path from this to eliminating the necessity of speaking of imagination when we talk of artistic creation or scientific theorizing, or even in theories of knowledge, language, or indeed consciousness? If we are to really believe in the cog sci program, we must think we are on a path which will eventually lead to the consignment of Kant's discussion of imagination, Peirce's discussion of belief, Locke's discussion of the will, or Wittgenstein's discussion of privacy, to the dustbin of quaint but terribly outmoded theories, whose truths (if any) can be better stated in terms of neural activity. As I said, some factual discoveries could sideline some avenues of discourse. But I see no reason to believe that a single important philosophical debate will be solved by cognitive science. The nature of consciousness as they are looking for it simply terminates in a physical or physiological description, never hooking up directly to any interesting philosophical theory or program. The scenario in which little by little we stop speaking of beliefs or conscious will, just as we (should have) stopped speaking of an anthropomorphic god, bodily humors, phlogiston, or the "elements" as air, fire and water, is a mere pipe dream of an overzealous scientific research program. There is neither scientific evidence nor philosophical reason to believe it. 
(I suppose it would be a cheap shot here to call it self-negating, since we would have to <span style="font-style: italic;">believe</span> there are no beliefs to justify the eliminativist program!)<br /><br />It seems that philosophers who support the cog sci program for consciousness are in the grip of an analogy like the following. Philosophers used to speculate about the physical world; little by little, philosophers themselves, and later on people who we identify as scientists, made discoveries that more or less replaced philosophical speculation with hard science. Similarly, philosophical speculation about consciousness will be replaced by some combination of neuroscience and computational theory, with perhaps some help from linguistics (a more scientifically credentialed enterprise than philosophy) and mathematics. But note that when someone asks, "how do earthquakes occur?" or "what are stars made of?", they are normally looking for one, and only one, kind of answer: a true description of a physical process. But when someone asks: "how can unconscious matter combine to create consciousness?", or "what is it to have the belief that tomorrow is Wednesday?", not to mention "what is artistic creativity?", they can be asking several different kinds of questions. Either they want a description of a chemical or neurological process, or a psychodynamic explanation as provided in contemporary post-Freudian psychology, or a philosophical discussion. Someone who is interested in one kind of explanation is going to feel cheated if they leave with another. Nor is this a sign of a primitive state of any of these disciplines. Any area of inquiry is in its infancy compared with some imagined state of it in the distant future, but it cannot be said that physics, psychology or philosophy are in their infancy in any absolute sense. "Folk" psychology and its philosophical development is not a poor stand-in for the knowledge we wish we had through neuroscience. 
I don't want to use the obvious phrase and call it a different "level of explanation", because that only sounds like grist for the Quinean mill, in which levels of explanation simply go away, or become "naturalized", as science develops. Think of it this way, instead: we already have, and have had for a long time, the ability to describe human action strictly in terms of mechanics and biochemistry. Yet we still describe it in terms of motivations, will, desire, belief and the like. Why did the level of "reduction" already available to us not replace the outmoded talk involving mental terms? Hmmmmm.... I'm sure the physicalists have an answer, but prima facie, there's no reason to think the Next Big Step will be any more "eliminative" than the last.<br /><br />It would be fair to ask at this point: Just what would you require, Mr. Alterman, before you would be ready to say that such a reduction is at hand, or at least conceivable in the ordinary progress of scientific investigation? Fair enough; here is one answer: <span style="font-style: italic;">I would like to see someone describe, in purely mathematical and physical terms, what it means for two people to have the same thought</span>. That is, take Fred and Freida, and say they each have a simple thought, like "I have to take out the trash", or "I believe my cat is bigger than your ocelot" or "Billy just learned how to do long division". These are not such complex thoughts. So what I want is to know what it would mean, or what sort of program could possibly explain, how to provide a physical-mathematical description of these thoughts such that by examining the brains of Fred and Freida we would discover an instantiation of exactly that unique, purely physical, and completely general description. (In the old lingo, I want a physical reduction of type-type identity.) 
In my opinion, we are not just far from having a program of this sort; we cannot even conceive what it would mean to have this kind of reduction. But without it, we do not have an eliminative materialist theory of consciousness; nor, to put it more bluntly, a physicalistic theory of consciousness of any sort. And it is not that we do not have it in the sense that we do not have a molecular transporter; we can at least conceive of what a molecular transporter would be and do, if not how it would accomplish its task. We cannot conceive of what a physical reduction of consciousness would be; what would a general neural correlate of "learned long division" be like? Where would we begin to look? The thought is just spooky, not even on the agenda of science. And my position is that it never will be, and that it involves deep misunderstandings.<br /><br />This is not an anti-scientific view; nor, as you might guess, do I subscribe to some post-Cartesian form of substance dualism. "Dualism" is a bad word as long as it is associated with substances, or processes, or any form of parallelism whereby the "mental" happenings are conceived as analogous to the "physical" happenings: the brain is doing its work, and the "mind" (mysteriously conceived) is doing its work, and the two are somehow doing it together, but are not one and the same thing. This rationalist program is way too tired, not to mention theistically inspired, for me to take seriously. (There are other forms of rationalism, such as the kind promoted by Llinas, and somewhat supported by research, that locates fixed structures and assumptions in the mind as a result of evolutionary choices. This is a different sort of discussion, which I will not pursue right now.) Playing around with the word "substance" to make it fit something that is not conceived of as being constituted by rocks, water, burning hydrogen, subatomic particles, or other recognized physical substances is just a path to confusion. 
Substance dualism is a non-issue; yet consciousness is real, and yet not "reducible" to physical objects and processes. This is the paradox we have to address.<br /><br />So why am I not a materialist? Is there a third way? Here I must revert to Wittgenstein, who dealt with this sort of confusing antinomy dozens of times, all to little avail, as evidenced by much of the writing on consciousness. Take, for example, his discussion of the "if-feeling", where he accepts the idea that there may be such a feeling, but rejects the notion that it somehow "accompanies" the word or thought. Then <span style="font-style: italic;">is it</span> the word or thought itself? No. Then it merely accompanies it, of course? No. Then it doesn't exist, it is a mere error? No. Well, <span style="font-style: italic;">what</span> then? Well, there is a feeling, but it is not an <span style="font-style: italic;">it</span>! In the same way, Wittgenstein denied that there are mental <span style="font-style: italic;">processes</span>. In the same sense that he said we should reserve the term mental "<span style="font-style: italic;">state</span>" for something like depression or anger, not the belief that today is Monday. In the same sense that he asked if there was a <span style="font-style: italic;">something</span> in the beetle box, and said no, there is not a <span style="font-style: italic;">something</span> there, and not a <span style="font-style: italic;">nothing</span> either! It seems that no matter how many times Wittgenstein discussed these kinds of confusions, no matter how many thousands of philosophers read them, the same inane dichotomy is posed again and again as if you could make some philosophical hay out of it. You're not a dualist? You must be a materialist! There are either two things there, or there are not two things there, and you say there are not two things there, so you must be a materialist, QED!<br /><br />What seems to be the problem here? 
I think it is "how high the seas of language run"; it is people trying to piece together a theory of consciousness and finding it is "like trying to repair a spider web with your bare hands". "Heat" is a much simpler concept than "thought" or "awareness" or "sensation". It has one very strong usage, and if there are others, they can be sidelined when we give a very strong reductive explanation of the central usage. When we talk about "heat" what we are normally, literally talking about can be fully described as "the motion of molecules". When we talk about "thought" or "attention" or "imagination", what is it that can be fully described by a very strong theory of the motion of neurons and fluids? Who has an answer as to what the "it" is here that can allegedly be so described? No one. This is why, perhaps, Varela and his followers focused so hard on having a phenomenology to reduce, before actually trying to do a reduction <span style="font-style: italic;">to</span> neurology. The point has almost completely escaped the Churchlands and most other cog sci types. But once we have the phenomenology - and Husserl is not a bad place to start, though not a complete program either - what do we have? An "it" that can be "reduced"? I don't think so. We have a phenomenology, and we have the scientifically motivated assumption that sensory facts have physical explanations, but we are far from having any valid reason for thinking that the "phenomenology" has a directly corresponding physical basis. This is where Varela and his school are wrong. Having a phenomenology (or a "phenomenological language", of the kind Wittgenstein once sought and others have actually developed) will provide interesting connections at the macro level between various neural processes and mental phenomena. They might be much richer than anything we have today. 
But again, the gap between that and a reduction of the mental to the physical is light years wide.<br /><br />At most, I think it will eventually be recognized that while the desired reductive theory of consciousness is a worthy goal, it is not a practical program and may never be. I myself am not ready to concede that it is a worthy goal, but even if one does that, it hardly justifies the collapse of philosophy of mind into cog sci programs, as described in my previous post. Nor does it mean that brain research programs should be defunded (except to the extent that they are morally obnoxious, as in their treatment of human or non-human subjects - a matter for a different sort of blog). It means that philosophy should finally put aside the Russellian and logical positivist paradigm of philosophy following "the model of science"; though Russell at least distinguished between scientific <span style="font-style: italic;">method</span> and <span style="font-style: italic;">results</span>, suggesting we follow the former. Today's philosophy programs, ever-conscious of trendy bandwagons that might attract funds and build national reputations, have attempted to follow, and indeed even produce, the <span style="font-style: italic;">results</span>. This is a rejection of philosophy itself, and an embarrassment to the profession. Once again, if this blog has even a small impact in altering this self-abnegation, I will consider it a success.<br /><br />I expect to have one more preliminary post before I get current and start examining some recent results. This will be on the position that is most identified with the opposition to physicalistic monism, the idea that there is "something it is like" to have a particular form of consciousness, that this is perspectival or subjective, and that it therefore cannot be stated in the objective language of materialism, or at least we have no idea how that would be done. 
If it were that easy to undermine the materialist line, the battle would have been won long ago. Unfortunately, this response is itself fundamentally flawed, for much the same reason that materialism itself is flawed. But I will get to that soon. Lastly, I will just mention that I expect to be reviewing the philosophical literature on consciousness and commenting on it as appropriate as long as I keep up this blog, so that hopefully, eventually, it will become clear where I stand not only on cog sci but on the philosophical debate as a whole.<br /></span>Tony Altermanhttp://www.blogger.com/profile/18136925406940818982noreply@blogger.com8tag:blogger.com,1999:blog-2489468916453210669.post-41382327395895248602007-08-16T09:06:00.000-04:002007-12-15T11:37:42.719-05:00Philosophy and the Brain Scam<span style="font-family:lucida grande;"><span style="font-family:arial;">Hello, folks. My name is Anton Alterman, and I am starting this blog to provide an ongoing commentary and forum for discussion on the issue of the mind-body problem, with particular focus on efforts to solve it by way of cognitive science. I have not yet worked out all the settings for this blog so let me just do a quick bio and then move on to some philosophical points to set the tone for what is coming up.<br /><br />My interest in philosophy and psychology goes way back, but my professional studies began in 1989 at the CUNY Graduate Center. I entered a Master's/Ph.D. program there, completed my course work in 3 years (while working full time as a computer professional, which I still do). Two years later I had completed my comprehensive and language exams, and a year after that finished my thesis proposal. I had been interested in Wittgenstein since taking an undergraduate philosophy course at Northwestern University in the early 1970's; Ed Sankowski was the professor. 
After the course I found a cheap copy of <span style="font-style: italic;">On Certainty</span> in a university bookstore, and though I can now say that I barely understood what he was up to at the time, it sparked a lifelong interest in Wittgenstein, the mind, and knowledge. I tried to read the <span style="font-style: italic;">Tractatus</span> and <span style="font-style: italic;">Philosophical Investigations</span> on my own, but without a background in the work of Russell, formal logic, or positivism, I found them daunting.<br /><br />At the GC I returned to the study of Wittgenstein through course work with Arthur Collins, Juliet Floyd, Charles Landesman and others. But my main influence at the time was Marx Wartofsky, who had little patience with Wittgenstein. He did, however, spark in me an almost equal interest in the pragmatists, particularly C.S. Peirce and William James. In a book by a contemporary Wittgenstein scholar I found a reference to a lengthy Wittgenstein manuscript (it was background material for what became the <span style="font-style: italic;">Remarks on the Philosophy of Psychology</span>) in which there was a discourse of more than 100 pages focusing on William James's ideas. I thought this presented a perfect opportunity to do some original scholarship and also interest Marx in directing my thesis, and wrote a proposal for a study of the connection between Wittgenstein and James.<br /><br />Unfortunately this proved to be both too large and too small a topic. There was a lot of work on it already, though not on that manuscript. It was probably a mistake not to have focused just on that; instead, I began to explore the entire relationship and all the secondary literature on it. Before I had completed much of this work, Marx passed away, and I began to work with Arthur Collins. 
I had become disenchanted with the pragmatism connection, and he had no interest in it; moreover, I had begun to think that I had some insight into Wittgenstein that could be the subject of a different sort of thesis. To make a very long journey into a trip to the candy store, in 2000 I successfully defended my thesis on Wittgenstein, which now focused on the 1929 manuscripts and his phenomenology. So far I have only tried to publish a rewrite of one central chapter, without success; but I expect it will eventually be published in some form. I have published some other pieces, which I won't go into right now.<br /><br />I should mention that before I completed my thesis I began teaching part time at Baruch College, CUNY. I started out in the usual manner teaching Intro and Ethics courses. Fortunately, in some ways at least, the faculty at the time recognized that I had a pretty robust background in a variety of areas, and eventually assigned me courses in aesthetics; modern, contemporary, and 19th century philosophy; and the philosophy of technology. I continued in this position for eight years, after which changes in the administration led to a parting of ways. At that point, issues in my personal life also encouraged me to take a break from teaching and pursue other tasks and interests. The break continues... how long, I do not know right now.<br /><br />While at the GC I studied not only Wittgenstein but the entire gamut of contemporary analytic philosophy. I took or audited courses with some of the leading lights in the philosophy of mind and language, including Jerry Fodor, David Rosenthal, Steve Schiffer and Jerrold Katz. In general I found myself in sharp disagreement with them on many issues, not least how to do philosophy in general. But my disputes with them did not prepare me for what I would face after completing my thesis and searching for a full-time teaching position. 
What I did not realize as I stepped into those turbulent waters was that in the 11 years since I had begun my graduate studies, a virtual tidal wave of cognitive science had swept through academia, swamping every traditional approach to the philosophy of mind and replacing it with a kind of discipline that was not, so far as I could tell, philosophy at all. There I was, with exceptional grades, excellent references, superior teaching evaluations, some minor publications, an impressive list of conference papers, and a thesis that I sincerely believed had corrected longstanding misconceptions about Wittgenstein and pointed to a comprehensive interpretation of his philosophy - and could not so much as obtain an <span style="font-style: italic;">interview</span>, much less a job! What was going on?<br /><br />After a number of comments from well-known and unknown philosophers, ranging from the subtle to the extremely blunt, what was going on became painfully clear. Jobs in the philosophy of mind and language were no longer available to anyone who did not profess to be doing cognitive science. At the same time, Wittgenstein had become so much a part of the philosophical tradition that hardly a school lacked a major figure who wrote about Wittgenstein or acknowledged his influence. Consider names like Hilary Putnam, John McDowell, Stanley Cavell, Kendall Walton, Paul Horwich, Michael Dummett, Jaakko Hintikka, Crispin Wright... even Daniel Dennett asserts that Wittgenstein was a major influence! Add to that a large number of people who direct much of their energy to Wittgenstein - people like Cora Diamond, David Stern, P.M.S. Hacker - and the tremendous number whose interests may lie elsewhere but still publish interpretive books and articles, apply Wittgenstein's ideas to literature or other fields, or write critiques of his work, and it is clear that academic philosophy is laced with Wittgensteinian discourse. 
Nevertheless, virtually <span style="font-style: italic;">no one</span> is being hired in the philosophy of mind or language if they profess to be Wittgensteinian or even have their primary field of expertise in Wittgenstein. (I believe there may be a partial exception with regard to the philosophy of mathematics, but since there are few scholars or hiring opportunities in this area it is hard to say.)<br /><br />Why is this the case? Why do philosophers across a very wide spectrum profess interest in Wittgenstein, but almost without exception refuse to hire Wittgenstein scholars to positions in the philosophy of mind and language? There may be a variety of answers, but one simple fact stands out boldly. Wittgenstein was perhaps the leading exponent of the view that philosophical problems cannot be solved through scientific investigation. The "cannot" is a <span style="font-style: italic;">logical</span> cannot: philosophical issues are <span style="font-style: italic;">conceptual</span> issues, not issues of factual knowledge or ignorance, and are therefore simply <span style="font-style: italic;">closed</span> to scientific investigation. This may seem a strange position for one who wrote early on that the correct thing to do in philosophy would be to state only "the propositions of natural science", not to mention one who later held that there are no specifically philosophical problems.<br /><br />We can clear up the misunderstandings generated by these positions another time; the point here is that whether because or in spite of them, the later Wittgenstein surely believed that no such thing as "cognitive science" was going to provide "solutions" to any "philosophical problems".<br /><br />Now, whether Wittgenstein would have agreed or not, I take it that <span style="font-style: italic;">if</span> there is such a thing as a philosophical problem, the mind-body problem certainly qualifies as the essential entry in this category. 
More effort has been expended on it, with fewer convincing results, than on virtually any other problem in intellectual history. It is with the hope of solving this (alleged) problem that universities are extending named chairs and large salaries to philosophers who profess to be doing cognitive science, many of them with their own theories to peddle, and an arm's-length list of publications. The hope, or chimera, of being the owner of the key research that finally tells us how the brain is related to the mind, whether we have free will, consciousness or intentions, how we implement the computational paradigm, and the like - this is what drives academic research in the philosophy of mind today.<br /><br />Strangely enough, it is not a new perspective. It is one that was already denigrated by J.S. Mill in the mid-19th century, and which popped up again and more or less pooped out in the 1940's and 1950's (not least due to the influence of the <span style="font-style: italic;">Philosophical Investigations</span> and Ryle's very Wittgensteinian book <span style="font-style: italic;">The Concept of Mind</span>). It has virtually no actual successes in terms of solving any identified philosophical problems; nor is there even the least agreement among those who profess to be engaging in this research as to how one would recognize a successful solution if it presented itself. If it is not a correct approach, it is arguably the most extensive and profound example of the confusion that Ryle called a "category mistake" in the history of Western intellectual life. If so, the fact that it has so far prevented me from obtaining a full-time academic position is really of very minor consequence compared with the destructive impulse it has engendered in our cultural moorings as a whole. 
For in my view, this perspective is continually foisted on us from many sides, be it the reporting in the <span style="font-style: italic;">New York Times</span> Science Section, or the claims of robotics and AI, or the legal arguments that now clog the courts in cases where murderers and rapists are said to have acted not of their own volition but from neural mechanisms beyond their control. (I am not here denying that there may be some such legitimate arguments in individual cases; I am pointing to a general trend of attempting to exonerate virtually any violent criminal by virtue of underlying neurophysiological mechanisms.) Thus, though a personal motivation to start this blog certainly exists, it would not be of interest to do so but for the much wider and perhaps insidious influence of the cognitive science viewpoint in our intellectual and cultural life.<br /><br />My intention, therefore, is to examine the claims and applications of cognitive science with respect to the mind primarily as they are represented in the press, the blogosphere, and other popular forums. I will occasionally engage with philosophical discourse in this area, but it will not be my main focus. Nevertheless, and almost needless to say, I expect that the arguments I take up in this forum will apply more or less directly to one or more philosophical perspectives in cognitive science (or rather, attempts of cognitive science to present itself as philosophy). In this regard, I should say up front that there are two points of view that I will not pretend to take very seriously, regardless of their popularity. One is the view that the notion of a "mind" or "consciousness" as it is popularly understood is simply a gross illusion, or erroneous folk "theory", that will be eliminated once a proper scientific understanding of the brain is obtained. 
That such positions are taken seriously is a very sad comment on contemporary analytic philosophy, to which many more sad comments could be added (and may be, in the course of this blog). Roughly, my response is that people who don't believe there <span style="font-style: italic;">is</span> such a thing as a "mind" have no business trying to "reduce" it (what "it"?) to brain functions, and ought to just go about their business studying axons and dendrites and neurotransmitters and leave us alone. The subject of the mind-body problem is the mind, just as the subject of the problem of gravity waves is gravity; not the "elimination" of gravity by studying real things like particles. If your solution to the mind-body problem is that there is no mind to be a problem, then thank you for your input, your job is done, and please report to the lab.<br /><br />The other position I will not take seriously - though perhaps slightly more so than the first - is that philosophy itself is simply an empty intellectual exercise, and even if cognitive science is wrong, it is better than mere mental masturbation. This perspective certainly won't be articulated by any working philosophers, but is a reaction of the student body and general public that sustains the more respectable academic opinion that science will answer many of the traditional questions of philosophy (yes, even ethics and aesthetics). The possible responses to this view are much more involved than I can get into now. If philosophy is a useless endeavor, why are scientists trying so hard to solve our problems for us? Why do scientists from Mach to Einstein to Steven Pinker feel such a compulsion to express their scientific views in philosophical terms (usually, though not always, quite naively, and with little thought to the problems and contradictions their philosophical views entail)? 
Why do artists, legal scholars, linguists and historians, people with very concrete tasks to accomplish, delve deeply into philosophical literature and attempt to contribute to it? The fact is that you can't take a significant step in any intellectual enterprise without encountering, and in some manner solving, philosophical problems. They are the rivers, the bogs, the deserts, the vines and the weeds that challenge us when we try to move forward; and until the planet of our intellect stops growing altogether, and every road is paved and straight, they will never go away, but present themselves in new guises and contexts. This is why it is worthwhile to pursue the cognitive scientists, meet them on their own ground, observe what they think they are doing, and respond for the benefit of our intellectual development. I hope to accomplish a small part of this here.<br /></span></span>Tony Alterman