Saturday, December 15, 2007

Churchland Again: How to Duck Some Objections

Other minds have been debating my Churchland post over at DuckRabbit, attributing to a certain H.A. Monk (a name I have assiduously but unsuccessfully tried to excise from this blog, since it is internally related to my identity on my other blog, The Parrot's Lamppost) various assertions that concede a bit too much to both materialist and Cartesian views on the mind-body problem. Though the discussion seems to have ended up in a debate on ducks and rabbits (which I thought would have been settled long ago on that site; in any case, see my "Aspects, Objects and Representations" - in Carol C. Gould, ed., Constructivism and Practice: Toward a Historical Epistemology, Rowman and Littlefield, 2003 - for yet another contribution to the debate), Duck's original post offers a number of points worth considering. (Have a look also at N.N.'s contribution at Methods of Projection. N.N. picked the right moniker, too, maybe because there are also two n's in "Anton".) Here is a version of what I take to be Duck's central criticism of what I said about Churchland:
It's true that the materialist answer "leaves something out" conceptually; but the reply cannot be that we can bring this out by separating the third-personal and first-personal aspects of coffee-smelling, and then (by "turn[ing] off a switch in his brain") give him only the former and see if he notices anything missing. That the two are separable in this way just is the Cartesian assumption common to both parties. (Why, for example, should we expect that if he simply "recognize[s] the coffee smell intellectually" his EEG wouldn't be completely different from, well, actually smelling it?) I think we should instead resist the idea that registering the "coffee smell" is one thing (say, happening over here in the brain) and "having [a] phenomenological version of the sensation" is a distinct thing, one that might happen somewhere else, such that I could "turn off the switch" that allows the latter, without thereby affecting the former. That sounds like the "Cartesian Theater" model I would have thought we were trying to get away from.
While I appreciate the spirit of this comment, I must say that I think it does not merely concede something to Churchland, it is more or less exactly what Churchland is saying, though you might want to add "seen through an inverting lens". Churchland indeed wants to deny that "the two are separable in this way"; in fact he takes an imaginary interlocutor sharply to task for asking him to provide a "substantive explanation of the 'correlations' [between "a given qualia" and "a given activation vector"]" because this "is just to beg the question against the strict identities proposed. And to find any dark significance in the 'absence' of such an explanation is to have missed the point of our explicitly reductive undertaking" (Philosophical Psychology 18, Oct. 2005, p. 557). In other words: if what we have here is really an identity relation - two modes of presentation of things that are exactly, numerically the same - how dare you insist that I should explain how they are related. They are related by being the same thing, Q.E.D.!

My post was largely directed at fishy moves like this. The problem is that we have two things that we can - and lacking any evidence to the contrary, must - identify (pick out, refer to) by two completely different procedures; yet Churchland wants to assert that they are identical. What notion of identity is at work here is hard to say.
Since Churchland rejects the notion of metaphysical necessity it cannot be "same in all PW's". But it must be more than "one only happens when the other happens" since that is a mere correlation. Even "one happens if and only if the other happens" could mean nothing more than that some natural law binds the occurrence of the two things together, which does not give us numerical identity. He wants to say "blue qualia are identical to such-and-such coding vectors", and we have to take this as meaning more than that there is evidence for their regular coinstantiation. But to make it theoretically sound, or even plausible, in light of the fact that we recognize the two ideas in totally different ways, he must offer two things, at least: (1) an explanation of why these apparently distinct facts (qualia/coding vectors) are actually one and the same phenomenon (what makes the one thing manifest itself in such dissimilar ways); and (2) experimental evidence of an empirical correlation between them. Yet he also tells us that we are "begging the question" if we ask for an explanation! And as for the empirical correlation, it is not just that no one has sat down and examined a subject's cone cell "vectors" and asked them, "Now what color do you see?"; the fact is that the whole idea of "coding vectors" is a mathematical abstraction from a biological process that almost certainly only approximates this mathematical ideal, even before we get to the question of how regularly the outputs of the process end up as the particular color qualia that are supposed to have been encoded.

I am not saying there is no evidence at all for the analysis Churchland offers (based on the so-called "Hurvich-Jameson net" at the retinal level and Munsell's reconstruction of possible color experiences at the phenomenological level), but that there is not even evidence of a strict correlation. Some of the things that Churchland discusses - for example, the fact that this analysis of color vision is consistent with the stabilization of color experience under different ambient lighting conditions (p.539) - strongly suggest that something about the analysis is right, but do not constitute direct empirical evidence for it. What we are really being offered is a notion of identity that has as its basis neither metaphysics, nor scientific explanation, nor sufficient quantitative evidence to establish a strict correlation. We can be excused for saying "no thanks" to this libation.

And if this unanalyzed notion of the identity of phenomenological and biological facts is also being proffered in the name of some other philosophical position - say, Wittgenstein's - we should be no less skeptical. Merely proclaiming the lack of distinction between phenomenology and physiology, inner and outer, mind and world, something and nothing, etc. does not establish anything as a viable philosophical position on consciousness. Even adding the observation that one gets rid of philosophical problems this way does not establish it as a viable position. One gets rid of problems also by saying that god established an original harmony of thought and matter. If you can just swallow this apple whole, you'll find that the core goes down very easily.

Whoops, what happened to my erstwhile Wittgenstein sympathies? Well, maybe the apple I don't want to swallow is really this interpretation of Wittgenstein. Duck and I agree that being sympathetic to Wittgenstein does not require dismissing all scientific investigation of the brain (or the world in general) as irrelevant. But I don't think we agree on why. Duck quotes the following passage from the PI:
'"Just now I looked at the shape rather than at the colour." Do not let such phrases confuse you. [So far so good; but now:] Above all, don't wonder "What can be going on in the eyes or brain?"' (PI, p. 211)
What is Duck's view of this recommendation? He is not quite sure, but finally decides that philosophers' conceptual investigations will keep scientists honest, so they avoid causing problems for us philosophers:
In a way this is right... Don't wonder that... you thought that was going to provide the answer to our conceptual problem. But surely there is something going on in the brain! Would you tell the neuroscientist to stop investigating vision? Or even think of him/her as simply dotting the i's and crossing the t's on a story already written by philosophy? That gets things backwards. Philosophy doesn't provide answers by itself, to conceptual problems or scientific ones. It untangles you when you run into them; but when you're done, you still have neuroscience to do. Neuroscience isn't going to answer free-standing philosophical problems; but that doesn't mean we should react to the attempt by holding those problems up out of reach. Instead, we should get the scientist to tell the story properly, so that the problems don't come up in the first place.
For my part I don't think this is the point of Wittgenstein's various proclamations about the independence of philosophy from science. Wittgenstein was concerned that physicalistic grammar intrudes into our conceptual or phenomenological investigations, making it impossible to untangle and lay out perspicuously the grammar of phenomena. This is the root of what we call "philosophical problems". It is not the scientist whom we have to get to "tell the story properly", it is the philosopher. The scientist does not have a fundamental problem with importing the grammar of phenomenology, thereby tying her physical investigations into knots. It is the other way around: the magnetic pull of physical concepts constantly threatens to affect conceptual investigation. To take a slightly oversimplified example, we say we can "grasp" a thought, but it is an imperceptible step further along the path of this metaphor that allows us to think we can capture it concretely - say, in a proposition, or a sentence of "mentalese" - in a sense that depends quite subtly on our ability to "grasp" a hammer or the rung of a ladder (picking it out as a unique object, self-identical through time, involved in a nexus of cause-effect relations, etc.). True, it takes quite a leap before you are ready to say, "The thought 'the cat is on the mat' just is this neuronal activation vector", but that is one logical result of this sort of thinking. That we are ready to call this the solution to a philosophical problem just puts the icing on the cake; it is the dismissal of philosophy per se, in more or less the way we can dismiss morality by pointing out that we are all just physical objects made of atoms anyway, and who could care what happens to that?

When Wittgenstein says, "don't wonder, 'What can be going on in the eyes or the brain?'" he is using duck-rabbit-type phenomena to show that conceptual or psychological problems may not be tracked by any physical difference at all. In fact, there is a passage just after the one cited by Duck in which Wittgenstein lays it out as clearly as anyone could ask. He suggests a physical explanation of aspectual change via some theory of eye tracking movements, and then immediately goes on to say,
"You have now introduced a new, physiological criterion for seeing. And this can screen the old problem from view, but not solve it". And again, he says, "what happens when a physiological explanation is offered" is that "the psychological concept hangs out of reach of this explanation" (p.212).
The point is very straightforward, and it is certainly compatible with what I have been saying about Churchland. The physical level of explanation just flies past the psychological concepts without recognizing or accounting for them. But in Duck's view, I am guilty of reintroducing the bogey of dualism and the "Cartesian theater" (I'm planning a post on Dennett soon so I'll avoid this bait right now):

So what's the moral? Maybe it's this. In situations like this, it will always seem like there's a natural way to bring out "what's missing" from a reductive account of some phenomenon. We grant the conceptual possibility of separating out (the referent of) the reducing account from (that of) the (supposedly) reduced phenomenon; but then rub in the reducer's face the manifest inability of such an account to encompass what we feel is "missing." But to do this we have presented the latter as a conceptually distinct thing (so the issue is not substance dualism, which Block rejects as well) – and this is the very assumption we should be protesting. On the other hand, what we should say – the place we should end up – seems in contrast to be less pointed, and thus less satisfying, than the "explanatory gap" rhetoric we use to make the point clear to sophomores, who may very well miss the subtler point and take the well-deserved smackdown of materialism to constitute an implicit (or explicit!) acceptance of the dualistic picture.
Absolutely, a physical explanation or description of consciousness is "conceptually distinct" from a phenomenological one. I can see no other possible interpretation of the passage about the eye-movement explanation of "seeing-as" phenomena. Does this make Wittgenstein a "dualist"? Certainly not in the Cartesian sense. True, Wittgenstein not only studied architecture and engineering and cited Hertz and Boltzmann in his early work; he also read (and failed to cite) Schopenhauer and James and had a deep appreciation of "the mystical", which he further identifies with "the causal nexus"; he says in the TLP that philosophy should state only facts, and that this shows how much is left out when all the facts have been stated. But is he now going so far as to suggest that there are different worlds, of scientific and mental reality? I seriously doubt it; and neither am I. There are different levels of explanation, or in his own terminology, different language games. This is not a Cartesian dualism but a point about the structure of thought. It is the same point that much of the Blue Book is based on.

I have not said much about my view of consciousness in this blog. But we're only just getting started, I've got time. I will say this, though: the resolution of the mind-body problem cannot be as simple as, for example, the New Realist (or "neutral monist") school hoped it would be. There, various aspects of reality were said to consist of a single "stuff" (read "substance", with various proposals for what this would be circulating at the time) which took on physical or psychological "aspects" depending on our interest, point of view, or whatever. This is a nice, compact view, but it does not do justice to the issue. There is a brain without which there is nothing in the world called "thinking", and a world without which nothing in a brain can count as "thought". There is every reason to believe that every event that ever counted as a thought took place in a brain, and that something was going on in the brain without which that thought would not have happened. This all has to be accounted for, and it is not sufficient to say that there are different aspects to some general substance or process. Sure, there are different aspects to everything, but this won't get us very far with the mind-body problem. How did an "aspect" of something that is also matter end up as consciousness? The problem is only pushed back. How can an "aspect" of whatever be self-aware, control its own actions, or compose a piano sonata? These are very peculiar aspects. If we could put them under an electron microscope we would not find out what we want to know about them.

I suspect that something like the following is the case: the various phenomena we call "the mind" are asymmetrically dependent on the brain, but the relationship is so loose that there is never anything like the "identity" relationship Churchland wants, nor a mere difference in points of view between the physical and phenomenological "aspects". We recognize certain psychological phenomena and talk about them and analyze them, and there is no such thing as a specifiable set of neural events that are necessary and sufficient for the instantiation of these phenomena - perhaps not even as types, and certainly not as specific thoughts, volitions, etc. There may be some wave oscillations in the brain that correspond to conscious states, but they are not those conscious states. There are particular portions of the brain that are primarily involved in certain aspects of our intellectual activity - emotions, language, memory, etc. - but there is not a specifiable neural "vector" that is "identical" to Proust's sensation of the taste of his mother's "sweet madeleines", much less to the flood of memories it evokes. Perhaps in Churchland's utopia we can replace Swann's Way with some mathematical specifications of its underlying neural activity without any particular loss, but I am not holding my breath.

Why do I think this, or even have a right to hold it out as a reasonable objection? Just because I think psychological concepts are not the rigid, well-articulated concepts that you find in much analytic philosophy. There is a way you can talk about things that are not uniquely or cleanly definable (Wittgenstein: "You are not saying nothing when you say 'stand roughly there...'"; a quote that is roughly accurate!). Talking about them is intellectually interesting in philosophy, important in clinical psychology and ethics, satisfying in the arts. It has been recognized by some neuroscientists and philosophers (Varela and others) that unless you have some kind of scientific phenomenology to begin with, you can't hope to reduce anything to neurology. But that position presupposes that there is something like a science of folk psychological concepts, on something like the lines that Husserl, Sartre and others tried to give us. And Wittgenstein too, in a certain sense: only his phenomenology of mind is imbued with the understanding that part of the "science" we are looking for involves the recognition of the vagueness or circumstantial relativity of concepts.

So how about a vague specification of cone cell coding vectors? "There is a 95% correlation between this coding vector and observed reports of red sensations." I could live with that. But it still doesn't give us a claim to "identity", nor does it justify saying that these are different "aspects" of the same event. They are different things that generally must happen in order for us to recognize something as red. But I can say I dreamed of a red balloon and no one will say, "Oh, but there were no cone cell vectors, you couldn't have." And of course even my memory of a red balloon is a memory of something viscerally red, with no cone cell activity to show for it.
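To make the contrast concrete, here is a minimal Python sketch of what a hedged, statistical claim of that kind looks like in practice. The trials, the predictor, and the numbers are invented placeholders of my own, not real measurements or anyone's published data:

```python
# A minimal sketch of the kind of hedged, statistical claim I could live with:
# paired observations of (measured coding vector, subject's color report), and
# a simple agreement rate. Everything here is an invented placeholder.

def agreement_rate(pairs, predict):
    """Fraction of trials where the predicted label matches the report."""
    hits = sum(1 for vec, report in pairs if predict(vec) == report)
    return hits / len(pairs)

def predict_label(vec):
    """Toy predictor over an opponent triplet (red-green, blue-yellow, luminance)."""
    return "red" if vec[0] > 0 and vec[1] < 0 else "green"

# Hypothetical trials: (opponent triplet, what the subject actually said).
trials = [
    ((0.70, -0.50, 0.55), "red"),
    ((0.60, -0.40, 0.50), "red"),
    ((-0.50, -0.30, 0.45), "green"),
    ((0.65, -0.45, 0.52), "orange"),   # the odd report out
]

print(agreement_rate(trials, predict_label))   # -> 0.75 on this made-up sample
```

An agreement rate, however high, is still just a correlation between two independently identified things; nothing in the arithmetic turns it into an identity.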

Wednesday, December 5, 2007

Brain Freeze, or Churchland on Color Qualia

It's been two months since I posted anything here, which is not how it was supposed to go. I have some excuses: replies to three papers at two recent philosophy conferences, a lack of breaking news on the cog sci front, and some personal stuff that I won't get into. Anyway, the last of the conference papers was concerned with a relatively recent paper by Paul Churchland, in which he argues for the "identity" of color "qualia" (an obnoxious Latinate neologism that philosophers use to refer to our mental experience of colors) with "cone cell coding triplets" or "vectors" - an analytic description of how the eye reacts on the cellular level to light of various wavelengths. Churchland further asserts that based on this analysis he can make certain predictions about our color experience in unusual cases, a feat that, according to him, is usually assumed to be beyond the power of materialist identity theories. That is the main point here; the identity of (a) the experience, and (b) the biochemical basis of the reaction, is said to account not only for ordinary experiences like seeing red, but also for experiences which most people have not had. Churchland describes how to produce such experiences and provides various full-color diagrams to assist. The predictive power of the theory allegedly shows that the qualia-coding vector relationship is not a mere correlation but an actual identity.
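For concreteness, here is a toy Python sketch of what a three-component "coding vector" amounts to. This is my own illustration, not Churchland's model: the cone activations, the opponent-style transform, and the crude labeling rule are all invented for the example, in the general spirit of an opponent-process (Hurvich-Jameson style) analysis:

```python
# A toy illustration (mine, not Churchland's actual model) of a "cone cell
# coding vector": three cone activations (S, M, L) in [0, 1] pushed through a
# crude opponent-process transform. All weights and labels are invented.

def coding_vector(s: float, m: float, l: float) -> tuple[float, float, float]:
    """Return a (red-green, blue-yellow, luminance) opponent triplet."""
    red_green = l - m                 # > 0 reads "reddish", < 0 "greenish"
    blue_yellow = s - (l + m) / 2     # > 0 reads "bluish", < 0 "yellowish"
    luminance = (l + m) / 2           # rough brightness signal
    return (red_green, blue_yellow, luminance)

def crude_label(vec: tuple[float, float, float]) -> str:
    """Map an opponent triplet to a coarse color word (purely illustrative)."""
    rg, by, lum = vec
    if lum < 0.1:
        return "black"
    if abs(rg) < 0.05 and abs(by) < 0.05:
        return "gray"
    if rg > 0:
        return "red" if by < 0 else "magenta"
    return "green" if by < 0 else "blue"

# A long-wavelength stimulus mostly excites the L cones:
print(crude_label(coding_vector(s=0.05, m=0.2, l=0.9)))   # -> "red"
```

The toy only fixes ideas: the "vector" is a mathematical redescription of cellular activity, and nothing about such a redescription settles whether the triplet is the experience - which is the question at issue.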

It is not impossible that some philosophers have carelessly suggested that materialism cannot be true because it cannot make predictions about experience. But to rest the case against materialism on this narrow basis is a very bad idea, for the simple reason that there are straightforward and well known areas in which knowledge of the physical structure of the body allows you to make specific phenomenological predictions. For example, recently it was discovered that glial cells, which make up much of the central nervous system, contribute to severe or chronic pain by stimulating the pain-transmitting neurons. Prediction: find a drug that deactivates the glial cells, and with or without more traditional pain-relief methodologies (e.g., those which interfere with the transmission of signals across nerve synapses or attempt to freeze the nerve itself) the patient will feel less pain. There is a perfectly good phenomenological prediction from neurological facts.

And there are even easier cases. We know that the lens of the eye delivers an inverted image, which is subsequently righted by the brain. This suggests that our brains, without our conscious effort, favor a perspective that places our heads above our feet. (It is also possible that it is simply hard-wired to invert the image 180 degrees, but for various reasons that theory does not hold water.) Prediction: make someone wear inverting glasses, and they will see an upside-down image at first (the brain inverts it out of habit), but eventually the brain will turn it right side up. It works!

And it gets even easier. After all, there were times long ago when we did not know anything about the internal structure of sense organs. Our auditory capabilities rest on the action of thousands of tiny receptors lodged in hair cells in the Organ of Corti, part of the cochlea of the inner ear. Prediction: dull the function of these receptors and the subject will experience a loss of hearing. Wow, another phenomenological prediction. I'm sure you could go hog wild with this. Poke your left eye out and you will see in diminished perspective, an amazing prediction in itself. Practice seeing through one eye for a long time and your sense of perspective should increase. Such predictions differ a lot from an example that Churchland presents in another context, that trained musicians "hear" a piece differently than average audiences. That is also a predictable phenomenological fact, but it involves a change in the mental software, through habituation and training, and does not obviously involve any sensual change. To see a new color or to have fewer distinct sounds reach the brain from the cochlea are sensual changes; to hear more deeply those sounds that do reach the ear, to organize them more efficiently and recognize more relationships between them is not a sensual change but an intellectual one that we might metaphorically characterize as "hearing more than others". In fact musicians hear the same thing others hear but understand what they hear in a more lucid way. The sensual phenomena I have mentioned are actual changes in what reaches the brain for processing or in processing at a subliminal level, and do not depend on how we train ourselves to organize the information we receive.

I admit that my predictions are not very interesting; they operate at a more macro level than Churchland's strange color qualia, though not as macro as the following: cut out someone's tongue and they won't taste much. That's about like: cut out someone's brain and they won't think much. That may sound pretty obvious, but it wasn't always. Churchland is playing on the fact that intimate knowledge of how vision works is a relatively recent and still growing science. Thus it sounds like quite an amazing feat that he should be able to "predict" color "qualia".

But actually, although his predictions are more refined than mine, digging deeper into more subtle properties of the visual system, they are no more predictions of "qualia" than the general statement: interfere with some physical property of a sensory apparatus and you will change the sensations experienced by the subject. Refining this down to a specific phenomenological experience does not get closer to predicting "qualia", it merely makes a more specific prediction based on a fairly well fleshed out physical theory. It is roughly at the level of first discovering certain facts about the eye and then discovering that those facts are consistent with seeing a green after-image when exposed to a flashbulb. "I predict a green qual!" Okay, that's a little better than "I predict the stock market will crash - some time..." But it doesn't really do much for materialism. (And I'm not even talking about "eliminative" materialism here, which I said I'd refuse to take seriously, just the more typical materialist identification of experience with physical facts.)

Why? We could gloss Churchland's prediction as follows: "I predict that if you look at this in the right way you will have that experience that is commonly understood to be going on when a person utters the words, 'I see green'". And what is that? Just the very thing that non-materialists bring up as an "explanatory gap". Churchland can't predict we will have particular qualia because he doesn't have even so much as a theory as to what the relationship is between qualia and their scientific background. He seems to think that a correlation which has predictive accuracy is eo ipso an identity relation. But this is just another brain scam. One might say: qualia are a suspect kind of entity anyway, so why should I need a theory to account for them? Fine, but what you can't say is: these qualia you talk about, they just are these coding vectors, and then act like you've explained qualia. For example, suppose you were to say: these UFO's you talk about, they just are marsh gas. Okay, you've explained away UFO's. But you surely haven't explained UFO's. You've submitted the thought: until and unless you give me some specific physical evidence that there are these things, "UFO's", that cannot be explained by any other consistent set of physical facts except that secret aircraft controlled by animate beings are navigating our skies, I deny that UFO's exist as a category of object requiring independent explanation. Similarly, one can say: I can explain everything there is to explain about sensation without reference to "qualia", so why should I be obliged to give you a separate explanation of them? But that is not what is being offered. Rather, we are told, color qualia exist; they are cone cell coding vectors.

"Laughter exists; it is... [insert physical description of lung contractions and facial expressions]"
"Orgasm exists; it is... [insert physical descriptions of male or female anatomical changes during orgasm]"
"Aesthetic appreciation exists; it is... [insert data from brain scans of people listening to Mozart]"
"Religious rapture exists; it is... [insert data from brain scans of people talking in tongues]" (this has actually been studied, by the way)

When is Churchland going to wake up and smell the coffee? I'm not sure, but I don't think we should test it by asking him whether he's awake or not; better check his brain scan and let him know. Then do an EEG and see if he's smelling the coffee. With sufficient training he could be taught to look at the EEG and say, "Why, I was smelling coffee!" (This is the flip side of Churchland's utopia, in which we are all so well-informed about cognitive facts that introspection itself becomes a recognition of coding vectors and the like.) Now for the tricky part: turn off the switch in his brain that produces the coffee-smelling qual, and tell him that every morning, rather than having that phenomenological version of the sensation, he will recognize the coffee smell intellectually and be shown a copy of his EEG. And similarly, one by one, for all his other qualia.

Don't say: well, he doesn't deny these qualia exist, after all; he just thinks they are identical to blah-blah-blah... If he thinks they are identical to blah-blah-blah then he should not object in the least if we can produce blah-blah-blah without those illusory folk-psychological phenomena we think are the essence of the matter. So, on with the experiment. Where do you think he will balk? When we offer to substitute a table of coding vectors for the visual quals of his garden in springtime? An EEG for the taste of grilled tuna? Maybe a CAT scan of soft tissue changes rather than the experience of orgasm? I'd really like to know just how far he is willing to go with this. Would he wear one of those virtual reality visors, having in the program only charts and graphs and other indicators of brain and body function? Maybe Churchland is the only one among us who really understands how to have fun. Personally, I'll keep my red roses, my grilled tuna taste, and... the other stuff, thanks.


Saturday, October 6, 2007

AI, Cog Sci, Pie in the Sky

So I've been working my way through this long article on robotics that appeared in the July 29 edition of the Sunday Times, and I'm thinking the author, Robin Marantz Henig, is being very measured and balanced in dealing with these nasty questions, like "Can robots have feelings?" and "Can they learn?" etc. And yet I can't avoid the nagging suspicion that in spite of her good intentions, she just doesn't get it.

Get what? Get what cognitive scientists really want. Get the idea of what Andy Clark, quoting computer scientist Marvin Minsky, calls a "meat machine". Artificial intelligence/meat machine: two sides of the same coin. Robots think; brains compute. It's a bit confusing, because it sounds like we're talking about two different things, but they are logically identical. Nobody said that wires can't look like neurons, or neurons can't look like wires; if we just used gooey wires inside robots, people who opened them up might say, "Oh, of course they have feelings, Madge, what do you think?" Maybe when we start creating real-life Darth Vaders, with some PVC-coated copper inside the skull (how long can it be until this happens?) people won't jump quite so quickly on the train going the other way: "Oh, of course all we are is elaborate computers, Jim, what do you think?" But the seed will have been planted, at least. With a little help from the connectionist folks we might begin one of those epistemological shifts to a new way of thinking, sort of like when people began to accept evolution as a natural way of looking at species. This is the picture that cognitive scientists really want.

Ms. Henig describes her encounters with a series of robots at the M.I.T. lab of Rodney Brooks: Mertz, whose only performance was to malfunction for the author; Cog, a stationary robot that was "programmed to learn new things based on its sensory and motor inputs" (p.32); Kismet, which was designed to produce emotionally appropriate "facial" expressions; Leo, which was allegedly supposed to understand the beliefs of others, i.e. it had a "theory of mind"; Domo, equipped with a certain amount of "manual" dexterity; Autom, linguistically enabled with 1,000 phrases; and Nico, which could recognize its "self" in a mirror. (You can get more intimately acquainted with some of these critters by going to the Personal Robots Group at the MIT Media Lab web site. Before they try to create consciousness in a can, the roboticists should try fixing their Back button, which always leads back to the MIT site rather than their own page.) Throughout her discussion, Henig expresses both wonder at the tendency of people to interact with some robots as if they were conscious beings (a result of cues that set off our own hard-wired circuitry, it is surmised) and disillusionment with the essentially computational and mechanical processes responsible for their "humanoid" behavior. It is the latter that I am referring to when I say I don't think she's quite clued in to the AI mindset.

The first hint at disillusionment comes when she describes robots as "hunks of metal tethered to computers, which need their human designers to get them going and smooth the hiccups along the way" (p.30). This might be the end product of one of my diatribes, but how does it figure just 5 paragraphs into an article called "The Real Transformers", which carries the blurb: "Researchers are programming robots to learn in humanlike ways and show humanlike traits. Could this be the beginning of robot consciousness - and of a better understanding of ourselves?" Is Henig deconstructing her own article? She certainly seems to be saying: hunks of metal could only look like they're conscious, they can't really be so! Whereas I take it that computationalists suggest a different picture, of a slippery slope from machine to human consciousness, or at least a fairly accurate modeling of consciousness by way of the combined sciences of computer science, mechanics, neuropsychology, and evolutionary biology. (Sounds awfully compelling, I must admit.)

Henig does say that the potential for merging all these individual robot capacities into a super-humanoid robot suggests that "a robot with true intelligence - and with perhaps other human qualities, too, like emotions and autonomy - is at least a theoretical possibility." (p.31) Kant's doctrine of autonomy would have to be updated a bit... And can we add "meaning" to that list of qualities? (I'd like to set up a poll on this, but it seems pointless until I attract a few thousand more readers...) The author seems inclined to wish that there were something to talk about in the area of AC (Artificial Consciousness :-) but then to express disappointment that "today's humanoids are not the sophisticated machines we might have expected by now" (p.30). Should we be disappointed? Did anybody here see AI? (According to the article, Cynthia Breazeal, the inventor of Kismet and Leo, consulted for the effects studio on AI - though not on the boy, who was just a human playing a robot playing a human, but on the Teddy bear.)

Cog, says Henig, "was designed to learn like a child" (p.32). Now here comes a series of statements that deserve our attention. "I am so careful about saying that any of our robots 'can learn'", Brooks is quoted as saying. But check out the qualifiers: "They can only learn certain things..." (that's not too careful already) "...just like a rat can only learn certain things..." (a rat can learn how to survive on its own in the NYC subways; how about Cog?) "...and even [you] can only learn certain things" (like how to build robots, for example). It seems to be inherent in the process of AI looking at itself to imagine a bright future of robotic "intelligence", take stock of the rather dismal present, and then fall back on a variety of analogies to suggest that this is no reason to lose hope. Remember when a Univac that took up an entire room had less capability than the chip in your cell phone? So there you go.

Here we go again: "Robots are not human, but humans aren't the only things that have emotions", Breazeal is quoted as saying. "Dogs don't have human emotions either, but we all agree they have genuine emotions." (Obviously she hasn't read Descartes; which may count in her favor, come to think of it.) "The question is, What are the emotions that are genuine for the robot?" (p.33) Hmmm... er, maybe we should ask the Wizard of Oz? After reading this statement I can't help thinking of Antonio Damasio's highly representational account of emotions. For Damasio, having an emotion involves having a representation of the self and of some external fact that impacts (or potentially impacts) the self; the emotion consists, roughly, in this feedback mechanism, whereas actually feeling the emotion depends on consciousness, i.e., on recognition that the feedback loop is represented. On this model, why not talk about emotions appropriate to a robot? Give it some RAM, give it some CAD software that allows it to model its "self" and environs, and some light and touch sensors that permit it to sense objects and landscapes. Now program a basic set of attraction/avoidance responses. Bingo, you've got robot emotions. Now the feeling of an emotion, as Damasio puts it - that will be a little harder. But is it inconceivable? It depends, because this HOT stuff (Higher-Order Thought, for those socially well-adjusted souls out there who don't spend your lives reading philosophy of mind lit) can get very slippery. Does the feeling require another feeling in order to be felt? And does that require another feeling, etc.? I suppose not, or no one would pause for 2 seconds thinking about this theory. One HOT feeling is enough, then. Great. RAM 2 solves the problem; the robot now has a chip whose function is to recognize what's being represented on the other chip. This is the C-chip (not to be confused with C-fibers) where Consciousness resides, and it produces the real feelings that we (mistakenly, if Damasio is right) call "emotions". So, we're done - consciousness, feelings at least, are represented in the C-chip, and therefore felt. Now we know what it's like to be a robot: it's like having second-order representation of your emotions in a C-chip. And now we can end this blog...
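For the morbidly curious, here is a deliberately crude Python sketch of the two-layer picture caricatured above: a first-order appraisal module plus a higher-order "C-chip" monitor. Everything in it (the class, the thresholds, the responses) is invented for illustration; it is not Damasio's model or anyone's actual robot code.

```python
# A crude sketch of the two-layer picture lampooned above: a first-order
# "emotion" module mapping self/world representations to approach/avoid
# responses, plus a higher-order monitor (the "C-chip") that merely
# re-represents the first-order state. All details are invented.

from dataclasses import dataclass

@dataclass
class WorldState:
    distance_to_obstacle: float   # meters, from the touch/light sensors
    battery_level: float          # 0.0 .. 1.0, standing in for the "self" model

def first_order_emotion(state: WorldState) -> str:
    """First-order appraisal: a bare attraction/avoidance response."""
    if state.distance_to_obstacle < 0.5:
        return "avoid"
    if state.battery_level < 0.2:
        return "seek-charger"
    return "approach"

def c_chip(state: WorldState) -> str:
    """Higher-order monitor: re-represents the first-order state.
    On the HOT picture being parodied, this is where the 'feeling' lives."""
    emotion = first_order_emotion(state)
    return f"I am now in the state: {emotion}"

print(c_chip(WorldState(distance_to_obstacle=0.3, battery_level=0.9)))
# -> "I am now in the state: avoid"
```

That a second function re-describes the first is, of course, the whole trick; whether re-description amounts to feeling is exactly what is in dispute.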

Unless we are concerned, with Henig, that still all we have are hunks of metal tethered to computers. Let's move on to Leo, the "theory of mind" Bot, which M.I.T. calls "the Stradivarius of expressive robots". Leo looks a bit like a Pekingese with Yoda ears. If you look at the demo on the web site you can see why Henig was excited about seeing Leo. A researcher instructs Leo to turn on buttons of different colors, and then to turn them "all" on. Leo appears to learn what "all" means, and responds to the researcher with apparently appropriate nods and facial expressions. Leo also seemed capable of "helping" another robot locate an object by demonstrating that the Bot had a false belief about its location. Thus, Leo appears to have a theory of mind. (This is a silly way of putting it, but it's not Henig's fault; it's our fault, for tolerating this kind of talk for so long. Leo has apparently inferred that another object is not aware of a fact that Leo is aware of; is this a "theory of mind"?) But, says Henig, when she got there it turned out that the researchers would have to bring up the right application before Leo would do a darned thing. Was this some kind of surprise? "This was my first clue that maybe Leo wasn't going to turn out to be quite as clever as I thought." (p.34) If I were an AI person I would wonder what sort of a worry this was supposed to be. I would say something like: "Look, Robin, do you wake up in the morning and solve calculus problems before you get out of bed? Or do you stumble into the kitchen not quite sure what day it is and make some coffee to help boot up your brain, like the rest of us? Why would you expect Leo to do anything before he's had his java?" Well, complains the disappointed Henig, once Leo was started up she could see on computer monitors "what Leo's cameras were actually seeing" and "the architecture of Leo's brain. I could see that this wasn't a literal demonstration of a human 'theory of mind' at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects." (p.34). Leo was not even recognizing objects per se, but magnetic strips - Leo was in part an elaborate RFID reader, like the things Wal-Mart uses to distinguish a skid of candy from a skid of bath towels. Even the notion that Leo "helped" the other Bot turns out to have been highly "metaphoric" - Leo just has a built-in group of instruction sets called "task models" that can be searched, compared to a recognizable configuration of RFID strips, and initiated based on some criteria of comparison.
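Here is a rough Python sketch of that deflationary description: a lookup from a detected configuration of RFID tags to a stored "task model," with a similarity threshold standing in for the "criteria of comparison." The tags, task names, and threshold are all invented for illustration; this is not M.I.T.'s code.

```python
# A toy version of "search the task models, compare to the recognized tag
# configuration, initiate on some criterion of comparison." All names and
# numbers are invented.

TASK_MODELS = {
    frozenset({"tag:button-red", "tag:button-green", "tag:button-blue"}):
        "press-all-buttons",
    frozenset({"tag:cookie-box", "tag:shelf-A"}):
        "point-to-hidden-object",
}

def select_task(detected_tags: set[str], min_overlap: float = 0.6):
    """Return the stored task whose tag configuration best matches what the
    RFID reader currently sees, if the overlap clears the threshold."""
    best_task, best_score = None, 0.0
    for config, task in TASK_MODELS.items():
        score = len(config & detected_tags) / len(config)
        if score > best_score:
            best_task, best_score = task, score
    return best_task if best_score >= min_overlap else None

print(select_task({"tag:button-red", "tag:button-green"}))
# -> "press-all-buttons" (2/3 overlap clears the 0.6 threshold)
```

Put this baldly, the question writes itself: is a thresholded table lookup the sort of thing we had in mind when we started talking about a "theory of mind"?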

And what exactly do humans do that's so different? You know what the AI person, and many a cognitive scientist, is going to say: after tens of millions of years of evolution from the first remotely "conscious" living thing to the brain of Thales and beyond, the adaptive mechanisms in our own wiring have become incredibly sophisticated and complex. (So how do you explain Bush, you ask? Some questions even science can't answer.) But fundamentally what is going on with us is just a highly evolved version of the simple programming (! - I wouldn't want to have to write it!) that runs Leo and Cog and Kismet. What conceivable basis could we have for thinking otherwise?

Henig goes on to talk mainly about human-robot interaction, and why the illusion of interacting with a conscious being is so difficult to overcome. Here, as you might expect, the much-ballyhooed "mirror neurons" are hauled out, along with brain scans and other paraphernalia. I don't have too much to say about this. There are certainly hard-wired reactions in our brains. One could argue that what makes humans different from all possible androids is that we can override those reactions. A computer can be programmed to override a reaction too, but this merely amounts to taking a different path on the decision tree. It overrides what it is programmed to override, and overrides that if it is programmed to do so, etc. But someone will say that that is true of us too; we merely have the illusion of overriding, but it is just another bit of hard-wired circuitry kicking in. Since this spirals directly into a discussion of free will I'm going to circumvent it. I think evolved, genetically transmitted reaction mechanisms may well play a part in our social interactions, and if some key cues are reproduced in robots it may trigger real emotions and other reactions. What happens once that button is clicked is a matter that can be debated.
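A toy example of the point about overriding, with rules I have invented for illustration: the "override" is just one more branch in the tree, not an escape from it.

```python
# A programmed "override" is simply another conditional branch.
# The stimuli, contexts, and reactions below are invented.

def hardwired_reaction(stimulus: str) -> str:
    return {"smile": "smile-back", "loud-noise": "flinch"}.get(stimulus, "ignore")

def with_override(stimulus: str, context: str) -> str:
    """'Override' the hardwired reaction in certain contexts -- which is
    itself just one more programmed condition on the decision tree."""
    reaction = hardwired_reaction(stimulus)
    if context == "poker-game" and reaction == "smile-back":
        return "keep-straight-face"
    return reaction

print(with_override("smile", "poker-game"))   # -> "keep-straight-face"
print(with_override("smile", "party"))        # -> "smile-back"
```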

The article concludes with a variety of surmises on consciousness, citing Dennett, philosophy's own superstar of consciousness studies, and Sidney Perkowitz, an Emory University physicist who has written a book on the human-robot question. Consciousness, says Henig, is related to learning and emotion, both of which may have occurred already at the M.I.T. lab, though only Brooks seems to think the robots actually "experienced" emotions in the sense that Damasio requires. Dennett says that a robot that is conscious in the way we are conscious is "unlikely"; John Haugeland said the same thing in 1979; robots "just don't care", he says (see "Understanding Natural Language"). And these are some of the people who are most inclined to describe the mind as in some sense a computational mechanism located in the structure of the brain. But people who would go much further are not hard to find. "We're all machines", Brooks is quoted as saying. "Robots are made of different sorts of components than we are... but in principle, even human emotions are mechanistic". (p.55) He goes on: "It's all mechanistic. Humans are made up of biomolecules that interact according to the laws of physics and chemistry." (I'm glad he didn't say "the laws of biology".) "We like to think we're in control, but we're not." You see, it's all about free will. These cog sci guys want to drag us into a debate about free will. No, I take that back, they have solved the problem of free will and they want us to see that. Or possibly, they have been reading Hobbes and want to share the good news with us. Whatever.

Henig's elusive, ambivalent position on robotic consciousness is easy to sympathize with, and as anyone who has read this post thoughtfully can tell, the ultimate point of my article is not to take her to task for being naive or ambivalent. It is that perspectives like the one coming from Brooks have insinuated themselves into our culture - into the media, philosophy, and cocktail parties - and legitimized the notion that whatever is left of the mind-body problem will just be taken care of by the accumulated baby steps of Kismets and Leos and Automs. Statements like the ones Brooks makes are tokens of the inability of people to think outside their own intellectual boxes. There is plenty of scientific evidence for the fact that mental processes go on below the level of consciousness (blindsight, etc.); there is not the remotest shred of evidence that these processes are mainly computational, or that computations, however complex, can yield outputs that have more than a superficial similarity to any kind of animal consciousness. There is every reason to believe that every fact and event in the universe has a scientific explanation; there is not the slightest reason to believe that the explanation of consciousness is more like the Cartesian-Newtonian mechanisms behind the motion of mid-sized objects at slow speeds than it is like the probabilistic fields of quantum electrodynamics. We don't have a clue how consciousness works; not at the neural level, and certainly not at the computational level. We are in the same position Mill occupied in the 19th century when he said that whatever progress we might hope for in the area of brain research, we are nowhere near knowing even whether such a research program will produce the results it seeks, much less what those results might be. We very likely do not even have two psychologists, neurologists or philosophers who agree with one another on what an emotion is, much less whether a robot could have one.

What's more, at present we have no philosophical or other justification for the notion that when we are trying to solve the mind-body problem, or talk about the mind or consciousness at all, what we are looking for should be thought of at the level of explanation of basic science or computation rather than traditional philosophy or psychology. People have brought all sorts of tools to the study of literature - lately, even "evolutionary literary studies" have gained a foothold, to say nothing of Freudian, Marxian, linguistic, deconstructionist or anthropological approaches. Does any of this demonstrate that the best understanding of literature we can obtain will be through these approaches, which subvert the level of literary analysis that studies the author's intentions, rather than through traditional literary criticism or philosophical approaches to fictionality? I don't know that philosophers or literary critics are in general ready to concede this point, though obviously various practitioners of postmodernism and other such trends would like to have it that way. Then why would we concede that the best approach to the mind-body problem is through AI, IT, CS, or other two-letter words? We might be better off reading William James (who was hardly averse to scientific study of the mind) than reading Daniel Dennett. Or reading Husserl than reading Damasio. We'd certainly be better off reading Wittgenstein on private language than Stephen Pinker on the evolutionary basis of cursing.

Put all the C-chips you want into Leo or Nico. Putting in a million of them wouldn't be that hard to do these days. Give them each 1,000,000 C-chips, 10 petabytes each; what will that do? Get them closer to consciousness? They're still hunks of metal tethered to computers, and for all we can tell, nothing that any AI lab director says is going to make them anything more.

Sunday, September 16, 2007

What Is It Like to Be a Parrot Named Alex?

(9/27/07: Very minor changes made, and a number of potentially misleading typos corrected.)

What to do for news? The war in Iraq goes numbingly on and on; the Presidential election is already old and still a year off; there are no national scandals, or perhaps so many that none stands out; and most recent natural and manmade disasters pale in comparison with those of yesteryear (at least in their sensational aspect - famine and disease are not big headline-grabbers, though perhaps more devastating in reality). Perhaps someone important has died recently? There you go! By the lights of New York Times editors, the untimely death of Alex the Parrot is definitely breaking news. First, an o-bird-uary, then an editorial, followed by numerous letters, and now - squawk - a "Week-in-Review" article on the implications of Alex's famous efforts at learning, speaking, and conceptualizing. (Not to mention previous articles, like "A Thinking Bird? Or Just Another Bird-Brain?" (10/9/99), reproduced on 123computer.net.) This is more space than they have devoted to most non-human individuals, as well as to about 99.999% of human individuals. Something big must be happening! Could it be - another opportunity to question whether consciousness really amounts to much more than rote responses? If you are willing to see that question as being implied by its reverse - Is rote learning a form of consciousness? - then the answer is Yes.

I was planning to post the last of three introductory pieces for this blog under a title like, "What Is It Like to Be Anton Alterman?" or "What Is It Like to Be Or Not to Be?", and I somewhat regret not doing that (especially since I want to take credit for the latter title, which is rich in possibilities). Of course many such satirical titles have been used in professional journal articles in response to Thomas Nagel's (in)famous "What Is It Like to Be a Bat?", but one more wouldn't hurt. However, in the interest of being relevant and timely I opted to join the Alex debate, without changing very much what I intended to say.

For those of you who have not followed it, Alex was a gray parrot who was being trained by Dr. Irene Pepperberg in using words to indicate concept recognition. Alex was reportedly able to identify (within limits) colors, numbers, shapes and materials, to combine these concepts in interesting ways, to understand simple, stripped-down English language syntax, to respond to some situations in a way that apparently mimicked human emotive responses, and to verbally indicate his expectations of reward for performing correctly ("Want a nut!"). In some cases Alex was able to formulate responses that bordered, or appeared to border, on combining known concept-words to form new concepts. Alex did not have a huge vocabulary - about 100+ words - but in applying these words selectively in response to questions, he gave the impression that he understood the references of not only object names but property words and emotional terms.

There has been much scientific brouhaha about Alex. Typically, people involved in cognitive research have insisted that Alex's reactions are just sophisticated stimulus-response behavior, whereas human language is rule-based and reflects some internal representation - a "compositional" module from which we produce the infinite transformations of basic syntax that we know as human language. Even Alex's trainer, Dr. Pepperberg, denied that Alex was using language, describing it rather as complex communication of some other sort. By implication, Alex was not demonstrating that this important aspect of human consciousness is available to creatures with pea-sized brains. Consciousness of the human type is still safe from animals, just as Descartes wanted it to be; and apparently talking parrots are no exception.

Why the resistance? One thing that occurs to me is that the huge debate over human consciousness would take on a very different shape if it could be identified in a more pure and simplified form in creatures who are perhaps more closely related to dinosaurs than to humans. What is it like to be Alex? Well, ask him! Granted, the answer you get would not be even as sophisticated as that of a three-year-old child. But who wants sophisticated? That's only going to distort the unvarnished report of the nature of being so-and-so. Consider the idea that we train Alex, or his successor, to have just enough grammar and vocabulary to answer the question: "What is it like to be you, Alex?" It is pretty clear what kind of answer we would get: "Want a nut!" Does this mean Alex just can't learn enough to answer the question? Why is that not a satisfactory answer?

Someone (I hope) is reading this blog. Ask yourself: What is it like to be you? What kinds of answers are at your disposal? You can describe experiences and sensations you enjoy, desires and drives that you have, pains, emotions, things you know or believe, worries, creative impulses; in short, the things that go to make up the bulk of your cognitive life. Is that a better answer to the question "What is it like...?" Why so? Being Alex is pretty simple. "Want a nut" is a large part of it. But Alex would also say "I love you" to Dr. Pepperberg at the end of the day, or "I'm going away now" to indicate resistance to a training session. So, with some help, Alex might have been ready to add to the "What is it like...?" response: "Sometimes I don't want to train and I say the thing Irene says when she goes away, and sometimes I realize that I am going to be alone and I don't like that so I say the thing that seems to mean Irene will come back." Pretty basic; Alex has some likes/dislikes beyond cashews, and he knows (and possibly feels) the difference between company and no company.

Okay, that's what it's like to be Alex. Have you figured out what it's like to be you? I mean, what it's really like, not these more sophisticated Alex-type responses. You have, of course, direct access to your own phenomenal consciousness, and this, the story goes, has its intuitive feel, or style, or shape, or... quality, that's it. As Michael Tye tells us ad nauseam in the first chapter of his book Ten Problems of Consciousness, "for feelings and perceptual experiences, there is always something it is like to undergo them" (p.3). Or take Peter Carruthers (our token HOT theorist for this post), who writes: "Phenomenally conscious states are states that are like something to undergo; they are states with a subjective feel, or phenomenology; and they are states that each of us can immediately recognize in ourselves..." ("Why the Question of Animal Consciousness Might Not Matter Very Much", Philosophical Psychology 18 (2005) 83-102, p.84). Well, come on, now, you're an articulate and reflective sort, what the heck is it like after all? Tom, Peter, Michael, you said it, it's like something to be in your (directly accessible) state of consciousness. So, what's it like? How does it feel? Cat got your tongue?

Bats, says Nagel, operate on sonar, and "this appears to create difficulties for the notion of what it is like to be a bat" (Mortal Questions, p.168). Moreover, according to Nagel, we cannot even know what it is like for people who are deaf and blind from birth to be the way they are (p.170). Many of those people, nevertheless, can use language about as well as the rest of us, so clearly Nagel is writing off their ability to communicate linguistically as a conveyor of "what it's like". But no such difficulty attends knowing what your own experience is like; the only difficulty, according to Nagel, is that we can't express it in "objective" language such that the next guy can grok it. (If you haven't read Robert Heinlein's Stranger in a Strange Land please proceed to the nearest bookstore; and pick up a copy of Kurt Vonnegut's Cat's Cradle too, and maybe Thomas Pynchon's The Crying of Lot 49 while you're at it, as I have no compunction about using neologisms of philosophical interest from popular literature. Anyway, grokking is kind of like allowing a meme into one of your mental ports. Capiche?) So there you have it: let Alex talk all he wants; let you talk all you want; let even Tom Nagel talk all he wants, ain't nobody going to express what it's like to be them in such a way that the next animate, linguistically enabled being can read it and know what it is like to be them.

But if that's the case, why deny that Alex was using language? It seems that the phenomenology behind the use of words remains forever hidden and indecipherable; the information is simply lost. Riemann discovered he could reconstruct mathematical landscapes from the zeros of the zeta function (see Marcus du Sautoy, The Music of the Primes); but apparently, no one can reconstruct phenomenological landscapes from their base level expressions. The equations of language simply fail as the conveyors of information about the curves and lumps from which they originate. If our phenomenological reports are the equations of our system, their original coordinates are simply lost. We have a scrambled vista of individual reports, but this gives us no leg up on the original landscape that shows what it's truly like. At best we can build our own landscape by association with the terms of someone else's report. With Alex's report, our ability is that much more limited. A bat's report, forget it: screeeeeeyyyyeeeee builds about as flat a landscape as Riemann's zeros, with no possibility of recovering the geographic information. But the bat idea is, as Nagel himself admits, kind of superfluous; all we ever get through language - indeed through any form of communication - is, you might say, an instruction to translate these words into associations from your own experience. And the less you can associate - very little with Alex, just about zip with a bat - the less you can even do that. The landscape-in-itself, that untouchable "what it's like" of the other, remains all zeros, for all conscious creatures whatsoever.

Now all I want to say is that this whole conception is incoherent (in the vernacular, rubbish - but we don't say that in nice philosophical discussions, even across the social tables at APA meetings; let them try bringing the meeting to Brooklyn for a change...). You cannot describe a problem as a gap in the capabilities of objective language (nor as beyond the limit of our cognitive capabilities, but I'll deal with McGinn and his school (?) some other time). The phrase "what it is like" does not describe generically some objective thing that can't be objectively described in any specific instance. It is not a placeholder for something for which we await better forms of expression. There is no such place. There is no "there" there, or rather no "what" there. The phrase "what it is like", to borrow another of Wittgenstein's analogies, is a linguistic wheel that turns without moving any part of the mechanism.

To go deeper we have no choice but to consider how the expression "what it's like" is being used here. It seems that there is supposed to be some objective quality, likeness, that inheres in conscious states, but which we lack the means to express in words; if only we could, we could say what it's like. But when Tom Nagel or Michael Tye reflect on their own consciousness, what exactly is the likeness they find there, and what is it a likeness to? What is that very thing that they recognize, and that is objectively different from what a bat would find, if bats were self-reflective? "Well, that's the problem, you see; we can't say! Maybe you just don't understand the problem. Everyone else seems to understand it. How can we help you?" This is such a cop-out, it does nothing but extend the attempt to wrap the reader in nonsensical uses of common expressions. The assumption that there just is something that it's like to have this or that form of consciousness should be recognized as one of the most peculiar and frankly nutty ideas in philosophy; but it goes on and on as the eye of the hurricane in the consciousness debate. Examples and counterexamples fly through the air all around it, blowing inverted spectra and swamp men all over the place, and likeness just calmly stands stock still in the middle of it, laughing at the maelstrom. But the phrase does not denote anything; not, of course, that there is no such thing as consciousness (fie on those who hope to turn this into an argument for eliminativism or behaviorism), but there is not some particular way that consciousness feels, seems, is. We think: if only there were some brilliant enough mind to find the way to express it, some Shakespeare/Einstein of consciousness, it could be done, and everyone would say, "Oh yes, of course, that's what it's like" and the problem would be solved. (Maybe we should start a committee?)

The "hard" problem of consciousness is often confuted (on my reading) with the problem of expressing the nature of particular phenomenal states. What's it like to see green? Well, for one thing, it's about the same thing to see green whether you are normally color-sighted and seeing a green light, or you have inverted spectra and are looking at what most people see as a red light. And, I think, this goes right to the bottom: it is the same for a llama to see green as it is for you. And the same for a bat, if they can. (To my knowledge, bats are not actually blind, nor do they use sonar exclusively, but if I'm wrong, NBD.) So the difference in likeness must lie somewhere else. You would not think so from reading Tye and others who have taken to ascribing a likeness to each and every type of perception or experience; which gets us nowhere, and is highly misleading as to the original point of Nagel's article. There is supposed to be something it is like to be in a general state of consciousness. That this leads to some communicative gap is more believable, at least, than that there is some sort of problem with seeing green or feeling pain. If the point of the whole discussion of phenomenal consciousness were that there is not some linguistic expression that just is the glassy essence of an experience, to be passed from mind to mind like genes that can reproduce entire individuals, it would have been obvious very quickly that there is no such form of language, that there never will be, and that this is not a "problem" so much as a misunderstanding of how the language of sensations functions. (I can't resist putting in a little plug here: Wittgenstein dealt with this gap between phenomenon and expression at length in his 1929-30 manuscripts, and it played a crucial role in the transition to his later philosophy and the private language argument; this is the subject of my thesis, Wittgenstein and the Grammar of Physics, CUNY Graduate Center 2000).

"But what about black and white Mary, doesn't she learn something new when she sees green grass, and isn't that "something" an objective bit of knowledge about how the universe is? And if so, doesn't that lead to the same problem? Because Mary can't say what the difference is, but she definitely learned something, not nothing." You are thinking: things seem very different to Mary, her world is suddenly phenomenologically richer in a very obvious way, and we can't deny that that is some real difference in the physical universe. So who wants to deny anything? :-) We did not need Mary to demonstrate this. We had duck-rabbit to demonstrate it a long time ago. There is a real difference in the universe when we saw only duck and now we see rabbit; it is not nothing, is it? We are in a different mental state (I am conceding, here, for the sake of argument, the customary notion of a "mental state", pace Wittgenstein's objections to this use of the term), and on the assumption that the universe contains only natural laws, measurable forces and physical objects, that is either a definite material difference or it's an illusion (which is itself a material difference, etc.)

We also have Alex to demonstrate it. Do you think that when Alex started to discriminate colors verbally his world became phenomenologically richer? I do, and not because words were added to his world. I don't believe that Mary actually "learns" anything when she is presented with an unexpected flood of color sensations. But when she later begins to conceptualize what she perceives, she does. But of course, she can also then say what the difference is; if you can't say a concept then I don't know what you can say. Alex was not very special just for squawking "red circle wool" and "green triangle metal". We have insufficient evidence to say that Alex really had concepts, but he gave enough of an impression that he did that we want to attribute to him a world richer than that of a feathered machine. Like color-concept-Mary, Alex, it appears, became in some sense aware of differences that were stored in the raw data of perception, just as connecting the dots on a line graph can bring out relationships that were not apparent before. Alex's discovery is real; so is Mary's. But neither is some willowy subjective quale that can't be expressed objectively. Both have added to their consciousness an awareness of the color spectrum. That's it. That's what it's like in this case. Nothing like those Tractarian edicts about the logic of language that "can't be said". If "now I see colors" is not a direct expression of what it's like to be in the new perceptual state, then we have the wrong idea of what an "expression" can accomplish.

"What is it like?" is used to ask for something that can be said. Applied to something that by nature can't be said, the question is nonsensical. "Where is Thursday?" Hmmmm... "Okay, what is Thursday like?" Err, I wake up at 7:15, give the kids breakfast, drop them off at school, then I take a shower... Or rather, I have the foggy -head feeling, then I feel the cool sweet tangy taste of orange juice, then the anxiety-to-get-to-work-on-time feeling... Is that what you're looking for? Try asking a construction worker or cab driver, "What's it like to be you?" You'll get an answer of some sort. What's wrong with the answer? Nothing, it's the kind of answer you're supposed to get when you ask a question like that. Even philosophers can answer the question. "Well, we all agree that there is something that attends human conscious experience that is different from what attends avian conscious experience, if avian experience is conscious at all, and that should have some expression, but it doesn't (so far). That's the problem." Well, we all know that that the word "hot" does not feel hot! Is that the problem? And the word "green" does not always look green (in fact it could look red). Is that the problem?? And the words "normal, perceptually enabled, self-aware human waking consciousness" do not feel like conscious experience. Is that the problem??? If not, what is the problem like?

At the level of consciousness as a whole, there just is nothing that it's like. That is not to say consciousness is nothing; there is just nothing that it's like. Not because there is nothing similar enough (that would be a normal use of the expression and would not lead to any philosophical problems). It's because "being like", in the way it is used here, is actually just a meaningless colloquial expression, a familiar linguistic crutch: "You know, it's like, I don't know what he wanted, but he was like, mad at me, so I like sat there and wondered what the hell am I supposed to say?" This is really how the expression is being used here! Not the proper use that we employed in putting the question to the cab driver. Or else we are imagining that we wake up one day as a bat, while somehow retaining our own consciousness as background information, and go "Oh, how weird, I'm not enjoying this at all, I want things to be like my former conscious states!" (To think that philosophy glides along on such B-movie fantasies is sobering.) What role does "like" have here? We actually have two things before us to compare, so there's nothing wrong with it. So we can also say, "Human consciousness is not like bat consciousness"? And now the grand conceit that throws everything off: "So what is it like?" And what could the answer possibly be? Only things of this sort: "It's like Martian consciousness; I know, because I was abducted by aliens and temporarily had a Martian brain implanted, into which was downloaded all the data of my own mind for comparison, and you know what? Martian consciousness was very much like our own." But if you want some different type of answer, where the "what" is a placeholder for some really brilliant analysis that describes for all to see just "what" it's like, you're a naughty person out to set a grammatical trap for unsuspecting philosophers. And almost everyone who has discussed consciousness in the last 25 years has landed in this trap.

What is it like to be Alex? Or a bat? That we can't answer these questions is not an indication of a problem, philosophical or otherwise, and certainly does not point to a gap in materialism. Materialists generally dismiss the Nagel problem without a great deal of fanfare, wondering (if materialists wonder) what exactly the problem is. As well they should. Materialists have missed their chance to hand us one of the supreme ironies of contemporary philosophy: they could have quite profitably adapted the Wittgenstein remark you have all been waiting for me to produce: "The phenomenal quality of consciousness is not a something, and not a nothing either." Of course they would never quote Wittgenstein until they had tenure. But it would be perfect. Wittgenstein was a materialist. (Yes! Like you, and me...) He just was not a naive materialist, the kind that believes we can eventually substitute talk about the brain, and its structures and processes, for talk about the mind. But he would surely have said that talk about "what it's like" is a grammatical error based on the false inference that there is "something" consciousness is like because there is not "nothing" it's like. The materialist should say: sure it's like something: it's like having this set of neurological processes in these kinds of neurological structures. And that's a perfectly good answer. It has virtually no philosophical import whatsoever for any question in philosophy of mind, epistemology, cognitive psychology, aesthetics, ethics or anything else; but it is one of the few answers that provide a sense to the question "What is it like?" But if you reject this, and the cab driver's answer, and all other reasonable answers, then of course you must say: no, there is not something it's like, but it's not like nothing either.

"So I thought Alterman was out to persuade us that cog sci approaches to consciousness are hopeless. But among other things, he offers us a passionate defense of one of the great eliminativist propositions, that qualia don't exist. Then he tells us that the only something that consciousness can be like is gray matter. Boy, is he confused." Well, I knew that was coming. But all I will say right now is this: it is a virtual certainty that if you posit a certain ontology as the very essence of consciousness, and that ontology is vacuous, the next thing that will happen is that someone will come along and say, "Hey, here's a much better ontology, it's called physical objects, which appear here as neurons, and it surely makes your wan and evanescent ontology of qualia otiose, not to mention boring and stupid". Just as the unacceptably spooky Cartesian immaterial substance gave way to phenomenologies of various sorts, inexpressible but somehow objectified qualia are an open door through which cognitive scientists can run at full speed, with wires, test tubes and forceps flashing, declaring that there are no such ghostly objects, and that "neurophilosophy" will save the day for consciousness studies. Nothing has done more damage to the effort to understand consciousness than the notion of a subjectively objective quality of "what it's like". This phrase should be banned from the language, except as a historical reminder of how the discussion of consciousness was distorted for three decades.

Let me close with a few further thoughts about Alex. I think Alex was using language, in at least the sense that the person described by St. Augustine in the passage that opens the Philosophical Investigations was using language. Wittgenstein's point, of course, is that human language cannot be reduced to the simple game of ostensive reference, where the rest of the field "takes care of itself". Alex's accomplishments may have actually been slightly beyond those of Augustine's infant self (not his real infant self, which would have been 100 times more sophisticated than Alex, but the one he describes). But even if they were not, it is clear, as even the NY Times writers brought out, that Alex's behavior prompts us to ask what we ourselves are doing when we use language. Indeed, I don't think I can say it better than Verlyn Klinkenborg does in the Times editorial: "To wonder what Alex recognized when he recognized words is to wonder what we recognize when we recognize words." (George Johnson, BTW, ends his article with a brief musing on the Nagel problem.) "Using language" is not necessarily an on/off situation. Wittgenstein says that the Augustine picture represents "a language simpler than ours". That is not no language, it is a language simpler than ours. So perhaps Alex was using a language much simpler than ours. How much? Whales communicate through sounds, and I don't know that their sounds have no syntax (indeed it would make little sense if they had none); but that is certainly a language much simpler than ours. Whale talk does not involve concepts. What about dog and cat talk? "You're on my turf, pipsqueak, get out of here before I give you a lesson you'll remember!" That's us, awkwardly translating into complex grammar and concepts what canines and felines express with "grrrrr..." and "yeeeoooooow"! The economy of their language shouldn't prevent us from calling it language at all. Alex surely one-upped these language users, being able to use human noises to express simple desires and indicate recognitions.

But to say that Alex was using even a simple language is to say something that somewhat, though not completely, undermines the notion of an innate generative grammar. It is a far more radical idea that parrots have an innate ability to use human language, even to the extremely minimal degree that Alex did, than that humans do. But if Alex could be trained to use even a small subset of one human language, and could moreover demonstrate some of the combinatorial and syntactic capabilities that seem so peculiar to human verbal communication, the innateness hypothesis seems unnecessary. A few hours a day with a parrot doesn't even compare with the constant verbal coaxing we give to a young child, so if the prune-brained parrot can learn that much, surely we can account for human language cognition as rote learning. Now, the Chomskian view of language has a mixed relationship to the cog sci view of the mind. On the one hand, cog sci needs some machinery to explain human linguistic capabilities, and the notion of a highly evolved module that encodes these capabilities like a wet compiler is very appealing. But its appeal is more to computationalists like Jackendoff than to neuroscience types like the Churchlands. For example, Paul Churchland complains (see e.g., The Engine of Reason, the Seat of the Soul) that Chomsky requires the rules to be represented in the mind, and representation, we know, is a dirty word to Churchland. Neural nets are all we need, he says; a pox on your representational rules engine. So it is not clear whether Alex, if he challenges Chomsky, challenges cognitive science in general, though he may challenge some forms of computationalism.
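To make the rules-versus-nets contrast concrete, here is a toy sketch - entirely my own illustration, with a made-up vocabulary and a bare perceptron standing in for "neural nets", so nothing here is Chomsky's grammar or Churchland's actual model. The first function represents a DET-NOUN-VERB rule explicitly; the network that follows ends up making exactly the same discriminations while storing nothing but weights.

```python
import itertools
import random

# Toy vocabulary and the pattern "determiner noun verb" (illustrative only).
DET, NOUN, VERB = {"the", "a"}, {"parrot", "bat"}, {"talks", "flies"}
VOCAB = sorted(DET | NOUN | VERB)

def rule_based(sentence):
    # The "representational rules engine": the grammar is explicitly stated.
    w1, w2, w3 = sentence
    return w1 in DET and w2 in NOUN and w3 in VERB

def encode(sentence):
    # One-hot encode each of the three word positions (3 x 6 = 18 features).
    return [1.0 if word == v else 0.0 for word in sentence for v in VOCAB]

# A bare perceptron trained on every three-word string over the vocabulary.
random.seed(0)
weights, bias = [0.0] * (3 * len(VOCAB)), 0.0
data = [(s, rule_based(s)) for s in itertools.product(VOCAB, repeat=3)]

for _ in range(200):
    random.shuffle(data)
    mistakes = 0
    for sentence, target in data:
        x = encode(sentence)
        predicted = sum(w * xi for w, xi in zip(weights, x)) + bias > 0
        if predicted != target:
            mistakes += 1
            delta = 1.0 if target else -1.0
            weights = [w + delta * xi for w, xi in zip(weights, x)]
            bias += delta
    if mistakes == 0:  # the pattern is linearly separable, so a clean pass is reached
        break

# The net now agrees with the explicit rule on all 216 strings, but no line of
# it "represents" the rule; the discrimination lives entirely in the weights.
assert all((sum(w * xi for w, xi in zip(weights, encode(s))) + bias > 0) == t
           for s, t in data)
print("perceptron matches the explicit DET-NOUN-VERB rule on", len(data), "strings")
```

Nothing in this settles whether either picture scales up to real syntax, of course; the point is only that "the rule is explicitly represented" and "the system merely behaves as if it followed the rule" come apart even in a case this small - which is just the bone Churchland picks with Chomsky.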

I suppose that innatists of any variety could get around this by saying that every creature of higher order than a clam may have evolved some minimal generic linguistic capability, which could be harnessed, through sufficient training, and assuming some innate vocal capabilities, to any human language. They would never get very far, but their lower-order innate generative grammar would account for the possibility of an Alex. At some point, animal grammar would radically drop off, but the location of that point can be debated. But this whole reply seems a bit ad hoc to me. It would be better to stick to your guns and deny that Alex could actually use language at all. That, however, seems to depend on an artificially rigid definition of what using language consists in, and is thus equally ad hoc. One could of course deny that language use has anything fundamental to do with consciousness, and insist it is therefore extraneous to the debate. This is a very dubious hypothesis, which I'm not even going to try to come up with a rationale for. Thus, any effort to erect an intellectual blockade between human and animal consciousness by virtue of a difference in linguistic capabilities is probably doomed to fail. Animals are conscious, or so I hold; and they use language. These two things may scale to one another, or they may not, since it has not been argued (by me, anyway) that the relationship between language and consciousness is necessary, directly proportional, or anything like that. But if we are talking about human consciousness in particular, we would probably do well to focus more on language use, and how it evolved, than on brain scans.

I will leave the Alex discussion with the thought that the nature of a bat's consciousness may be far more accessible than we think. Perhaps Alex chose to leave his body to science. In which case his vocal apparatus would be available to others. There are plenty of people who love bats; and I'm sure they would like nothing better than to have one that talks. I am dreaming of Tom Nagel waking up one day to find, hanging upside down from his bookshelf, a bat, who greets him with the words: "It's like this..." and concludes: "And I want that published!"

Monday, August 27, 2007

Science, Philosophy and the Mind

I left for vacation (in Alaska) shortly after publishing my introductory post, and did not have access to the media I would normally look at to keep this blog current and relevant, nor to my reference materials on consciousness and cognitive science. But we're just getting started, and I have a few more preliminaries to add anyway, so perhaps it is just as well.

It is pleasant to see that I have had a couple of readers already, and certain issues that clearly need to be addressed have already been raised. So the first thing I want to do here is discuss the relationship between philosophy and science in a very general way. This is not the place for an extended theoretical defense of my position; I merely state it so that readers have an idea where I'm coming from. I have referred to Wittgenstein and his position that there is a gap between the conceptual and linguistic tasks of philosophy and the factual and theoretical tasks of science. While my position on cognitive science and consciousness is partly informed by Wittgenstein's view, I do not subscribe to what might be a naive (or perhaps correct) interpretation of it. That is, I do not believe that science and philosophy are absolutely unrelated enterprises. My early college career was spent in scientific study, an interest I actively maintain, and I might note that Wittgenstein too had a lifelong interest in scientific developments (indeed the Tractatus directly reflects some of Hertz's ideas). But perhaps he believed that concepts are more distinct from facts than I do. I think concepts are very fluid, and conceptual truths, though they are not factual truths, are informed by our changing knowledge of the natural world. The way I would put the relationship is this: science can narrow down the range of possible conceptual truths, alter the course of philosophical investigation by closing off some lines of thought, and sometimes suggest new philosophical strategies by analogy with physical strategies (and this is not always a bad thing, though more on this later).

A common example of a scientific truth can be used to show what I am talking about. "Heat is the motion of molecules" is an example of what is usually called a scientific reduction from the macro to the micro level. Heat is a macroscopic physical phenomenon that has scientific application and is subject to measurement and scientific study. It was discovered that heat occurs if and only if, and to the extent that, there is motion at the molecular level, so that one can equate greater molecular motion with a rise in temperature. Thus one physical phenomenon was "reduced" to another. In this manner, (a) certain scientific speculation about the physical concept of heat was cut off; (b) since the concept of physical heat now had a new physical basis, the phenomenological concept of heat could no longer have exactly the same meaning it did before, or play the same role in philosophical speculation, or be confused with the physical concept (and if you don't think of "heat" as a philosophical concept, the same could be said at some point for "energy", though the reasons are more complex than this simple "reduction"); (c) a strategy for the "reduction" of philosophical concepts was suggested. Thus a scientific finding had a direct and permanent impact on philosophical speculation. Similarly, the study of light, color, and the biology of vision could not but have an impact on the way we talk about color, light, vision, or perception in philosophy. It would be madness to speculate about the nature of "colors" and simply ignore the scientific facts. Such discoveries continually alter the scope and direction of philosophical speculation.
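(To put a formula behind the slogan - the standard kinetic-theory statement for an ideal monatomic gas, which I add only for concreteness; nothing in the argument hangs on the details:

\frac{3}{2} k_B T = \left\langle \tfrac{1}{2} m v^2 \right\rangle

that is, temperature is, up to Boltzmann's constant k_B, just the average translational kinetic energy of the molecules.)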

This applies to consciousness too. For example, it is known that certain areas of the brain control certain mental functions, and that consciousness itself is not evenly distributed throughout the brain. It follows ineluctably that consciousness is not equally dependent on every mental function. People can lose significant functionality in the area of memory, recognition, sensory awareness, linguistic capability, and other critical forms of intelligence and still be "conscious" in the sense we normally mean it. On the other hand, people with some forms of epilepsy can apparently have most or all of these functions intact and not be entirely conscious (e.g., not respond to ordinary stimuli) for a period of time. It follows that these functions do not entirely depend on consciousness. These again are scientific results, the ignorance of which would simply lead philosophy down blind alleys.

But in spite of all this, there is no reason to believe that these bits of knowledge we have acquired about the brain suggest that we are on the way - indeed, that there is a way - to "reduce" consciousness to brain function. It is still far from clear that we will at some point be able to speak about physical entities and processes, eliminating, without remainder, all chatter about minds, intelligence, thoughts, ideas, beliefs, desires, motives, imaginings, and the like. It is the fervent hope of materialists of all sorts that this should be the case; that "folk" psychological concepts should be at most a shorthand for talking about what we know to be neural occurrences. The most sophisticated developments in cognitive psychology fall so far short of reducing anything that we don't even know what such a reduction would look like. For the most part, what they amount to is that when certain mental functions are performed, there is increased blood flow or electrical activity in certain parts of the brain. This is good for brain mapping, but not for figuring out what consciousness is. Extensions to these mappings are not much help either. For example, you can tell by mapping that some of the same regions light up when you imagine, remember, or dream of an object as when you encounter it first hand (have "knowledge by acquaintance" of it). We should hope that not too much money was expended on research that proves this, since most thoughtful people would have predicted something like it. But let it be granted that such discoveries are advances of some sort. Are they advances towards reducing the mind to brain functions? I don't see how. What is the path from this to eliminating the necessity of speaking of imagination when we talk of artistic creation or scientific theorizing, or even in theories of knowledge, language, or indeed consciousness? If we are to really believe in the cog sci program, we must think we are on a path which will eventually lead to the consignment of Kant's discussion of imagination, Peirce's discussion of belief, Locke's discussion of the will, or Wittgenstein's discussion of privacy, to the dustbin of quaint but terribly outmoded theories, whose truths (if any) can be better stated in terms of neural activity. As I said, some factual discoveries could sideline some avenues of discourse. But I see no reason to believe that a single important philosophical debate will be solved by cognitive science. The nature of consciousness, as they are looking for it, simply terminates in a physical or physiological description, never hooking up directly to any interesting philosophical theory or program. The scenario in which little by little we stop speaking of beliefs or conscious will, just as we (should have) stopped speaking of an anthropomorphic god, bodily humors, phlogiston, or the "elements" as air, fire and water, is a mere pipe dream of an overzealous scientific research program. There is neither scientific evidence nor philosophical reason to believe it. (I suppose it would be a cheap shot here to call it self-negating, since we would have to believe there are no beliefs to justify the eliminativist program!)

It seems that philosophers who support the cog sci program for consciousness are in the grip of an analogy like the following. Philosophers used to speculate about the physical world; little by little, philosophers themselves, and later on people who we identify as scientists, made discoveries that more or less replaced philosophical speculation with hard science. Similarly, philosophical speculation about consciousness will be replaced by some combination of neuroscience and computational theory, with perhaps some help from linguistics (a more scientifically credentialed enterprise than philosophy) and mathematics. But note that when someone asks, "how do earthquakes occur?" or "what are stars made of?", they are normally looking for one, and only one, kind of answer: a true description of a physical process. But when someone asks: "how can unconscious matter combine to create consciousness?", or "what is it to have the belief that tomorrow is Wednesday?", not to mention "what is artistic creativity?", they can be asking several different kinds of questions. Either they want a description of a chemical or neurological process, or a psychodynamic explanation as provided in contemporary post-Freudian psychology, or a philosophical discussion. Someone who is interested in one kind of explanation is going to feel cheated if they leave with another. Nor is this a sign of a primitive state of any of these disciplines. Any area of inquiry is in its infancy compared with some imagined state of it in the distant future, but it cannot be said that physics, psychology or philosophy are in their infancy in any absolute sense. "Folk" psychology and its philosophical development is not a poor stand-in for the knowledge we wish we had through neuroscience. I don't want to use the obvious phrase and call it a different "level of explanation", because that only sounds like grist for the Quinian mill, in which levels of explanation simply go away, or become "naturalized", as science develops. Think of it this way, instead: we already have, and have had for a long time, the ability to describe human action strictly in terms of mechanics and biochemistry. Instead, we still describe it in terms of motivations, will, desire, belief and the like. Why did the level of "reduction" already available to us not replace the outmoded talk involving mental terms? Hmmmmm.... I'm sure the physicalists have an answer, but prima facie, there's no reason to think the Next Big Step will be any more "eliminative" than the last.

It would be fair to ask at this point: Just what would you require, Mr. Alterman, before you would be ready to say that such a reduction is at hand, or at least conceivable in the ordinary progress of scientific investigation? Fair enough; here is one answer: I would like to see someone describe, in purely mathematical and physical terms, what it means for two people to have the same thought. That is, take Fred and Freida, and say they each have a simple thought, like "I have to take out the trash", or "I believe my cat is bigger than your ocelot" or "Billy just learned how to do long division". These are not such complex thoughts. So what I want to know is what it would mean - or what sort of program could possibly explain how - to provide a physical-mathematical description of these thoughts such that by examining the brains of Fred and Freida we would discover an instantiation of exactly that unique, purely physical, and completely general description. (In the old lingo, I want a physical reduction strong enough to support type-type identity, not merely token-token identity.) In my opinion, we are not just far from having a program of this sort; we cannot even conceive what it would mean to have this kind of reduction. But without it, we do not have an eliminative materialist theory of consciousness; nor, to put it more bluntly, a physicalistic theory of consciousness of any sort. And it is not that we do not have it in the sense that we do not have a molecular transporter; we can at least conceive of what a molecular transporter would be and do, if not how it would accomplish its task. We cannot conceive of what a physical reduction of consciousness would be; what would a general neural correlate of "learned long division" be like? Where would we begin to look? The thought is just spooky, not even on the agenda of science. And my position is that it never will be, and that it involves deep misunderstandings.

This is not an anti-scientific view; nor, as you might guess, do I subscribe to some post-Cartesian form of substance dualism. "Dualism" is a bad word as long as it is associated with substances, or processes, or any form of parallelism whereby the "mental" happenings are conceived as analogous to the "physical" happenings: the brain is doing its work, and the "mind" (mysteriously conceived) is doing its work, and the two are somehow doing it together, but are not one and the same thing. This rationalist program is way too tired, not to mention theistically inspired, for me to take seriously. (There are other forms of rationalism, such as the kind promoted by Llinas, and somewhat supported by research, that locates fixed structures and assumptions in the mind as a result of evolutionary choices. This is a different sort of discussion, which I will not pursue right now.) Playing around with the word "substance" to make it fit something that is not conceived of as being constituted by rocks, water, burning hydrogen, subatomic particles, or other recognized physical substances is just a path to confusion. Substance dualism is a non-issue; yet consciousness is real, and not "reducible" to physical objects and processes. This is the paradox we have to address.

So why am I not a materialist? Is there a third way? Here I must revert to Wittgenstein, who dealt with this sort of confusing antinomy dozens of times, all to little avail, as evidenced by much of the writing on consciousness. Take, for example, his discussion of the "if-feeling", where he accepts the idea that there may be such a feeling, but rejects the notion that it somehow "accompanies" the word or thought. Then is it the word or thought itself? No. Then it merely accompanies it, of course? No. Then it doesn't exist, it is a mere error? No. Well, what then? Well, there is a feeling, but it is not an it! In the same way, Wittgenstein denied that there are mental processes. In the same sense that he said we should reserve the term mental "state" for something like depression or anger, not the belief that today is Monday. In the same sense that he asked if there was a something in the beetle box, and said no, there is not a something there, and not a nothing either! It seems that no matter how many times Wittgenstein discussed these kinds of confusions, no matter how many thousands of philosophers read them, the same inane dichotomy is posed again and again as if you could make some philosophical hay out of it. You're not a dualist? You must be a materialist! There are either two things there, or there are not two things there, and you say there are not two things there, so you must be a materialist, QED!

What seems to be the problem here? I think it is "how high the seas of language run"; it is people trying to piece together a theory of consciousness and finding it is "like trying to repair a spider web with your bare hands". "Heat" is a much simpler concept than "thought" or "awareness" or "sensation". It has one very strong usage, and if there are others, they can be sidelined when we give a very strong reductive explanation of the central usage. When we talk about "heat" what we are normally, literally talking about can be fully described as "the motion of molecules". When we talk about "thought" or "attention" or "imagination", what is it that can be fully described by a very strong theory of the motion of neurons and fluids? Who has an answer as to what the "it" is here that can allegedly be so described? No one. This is why, perhaps, Varela and his followers focused so hard on having a phenomenology to reduce, before actually trying to do a reduction to neurology. The point has almost completely escaped the Churchlands and most other cog sci types. But once we have the phenomenology - and Husserl is not a bad place to start, though not a complete program either - what do we have? An "it" that can be "reduced"? I don't think so. We have a phenomenology, and we have the scientifically motivated assumption that sensory facts have physical explanations, but we are far from having any valid reason for thinking that the "phenomenology" has a directly corresponding physical basis. This is where Varela and his school are wrong. Having a phenomenology (or a "phenomenological language", of the kind Wittgenstein once sought and others have actually developed) will provide interesting connections at the macro level between various neural processes and mental phenomena. They might be much richer than anything we have today. But again, the gap between that and a reduction of the mental to the physical is light years wide.

At most, I think it will eventually be recognized that while the desired reductive theory of consciousness is a worthy goal, it is not a practical program and may never be. I myself am not ready to concede that it is a worthy goal, but even if one does that, it hardly justifies the collapse of philosophy of mind into cog sci programs, as described in my previous post. Nor does it mean that brain research programs should be defunded (except to the extent that they are morally obnoxious, as in their treatment of human or non-human subjects - a matter for a different sort of blog). It means that philosophy should finally put aside the Russellian and logical positivist paradigm of philosophy following "the model of science"; though Russell at least distinguished between scientific method and results, suggesting we follow the former. Today's philosophy programs, ever-conscious of trendy bandwagons that might attract funds and build national reputations, have attempted to follow, and indeed even produce, the results. This is a rejection of philosophy itself, and an embarrassment to the profession. Once again, if this blog has even a small impact in altering this self-abnegation, I will consider it a success.

I expect to have one more preliminary post before I get current and start examining some recent results. This will be on the position that is most identified with the opposition to physicalistic monism, the idea that there is "something it is like" to have a particular form of consciousness, that this is perspectival or subjective, and that it therefore cannot be stated in the objective language of materialism, or at least we have no idea how that would be done. If it were that easy to undermine the materialist line, the battle would have been won long ago. Unfortunately, this response is itself fundamentally flawed, for much the same reason that materialism itself is flawed. But I will get to that soon. Lastly, I will just mention that I expect to be reviewing the philosophical literature on consciousness and commenting on it as appropriate as long as I keep up this blog, so that hopefully, eventually, it will become clear where I stand not only on cog sci but on the philosophical debate as a whole.