Sunday, September 16, 2007

What Is It Like to Be a Parrot Named Alex?

(9/27/07: Very minor changes made, and a number of potentially misleading typos corrected.)

What to do for news? The war in Iraq goes numbingly on and on; the Presidential election is already old and still a year off; there are no national scandals, or perhaps so many that none stands out; and
most recent natural and manmade disasters pale in comparison with those of yesteryear (at least in their sensational aspect - famine and disease are not big headline-grabbers, though perhaps more devastating in reality). Perhaps someone important has died recently? There you go! By the lights of New York Times editors, the untimely death of Alex the Parrot is definite breaking news. First, an o-bird-uary, then an editorial, followed by numerous letters, and now - squawk - a "Week-in-Review" article on the implications of Alex's famous efforts at learning, speaking, and conceptualizing. (Not to mention previous articles, like "A Thinking Bird? Or Just Another Bird-Brain?" (10/9/99), reproduced on 123computer.net.) This is more space than they have devoted to most non-human individuals, as well as to about 99.999% of human individuals. Something big must be happening! Could it be - another opportunity to question whether consciousness really amounts to much more than rote responses? If you are willing to see that question as being implied by its reverse - Is rote learning a form of consciousness? - then the answer is Yes.

I was planning to post the last of three introductory pieces for this blog under a title like, "What Is It Like to Be Anton Alterman?" or "What Is It Like to Be Or Not to Be?", and I somewhat regret not doing that (especially since I want to take credit for the latter title, which is rich in possibilities). Of course many such satirical titles have been used in professional journal articles in response to Thomas Nagel's (in)famous "What Is It Like to Be a Bat?", but one more wouldn't hurt. However, in the interest of being relevant and timely I opted to join the Alex debate, without changing very much what I intended to say.

For those of you who have not followed it, Alex was a gray parrot whom Dr. Irene Pepperberg was training to use words to indicate concept recognition. Alex was reportedly able to identify (within limits) colors, numbers, shapes and materials, to combine these concepts in interesting ways, to understand simple, stripped-down English language syntax, to respond to some situations in a way that apparently mimicked human emotive responses, and to verbally indicate his expectations of reward for performing correctly ("Want a nut!"). In some cases Alex was able to formulate responses that bordered, or appeared to border, on combining known concept-words to form new concepts. Alex did not have a huge vocabulary - about 100 words or so - but in applying these words selectively in response to questions, he gave the impression that he understood the references not only of object names but of property words and emotional terms.

There has been much scientific brouhaha about Alex. Typically, people involved in cognitive research have insisted that Alex's reactions were just sophisticated stimulus-response behavior, whereas human language is rule-based and reflects some internal representation - a "compositional" module from which we produce the infinite transformations of basic syntax that we know as human language. Even Alex's trainer, Dr. Pepperberg, denied that Alex was using language, describing it rather as complex communication of some other sort. By implication, Alex was not demonstrating that this important aspect of human consciousness is available to creatures with pea-sized brains. Consciousness of the human type is still safe from animals, just as Descartes wanted it to be; and apparently talking parrots are no exception.

Why the resistance? One thing that occurs to me is that the huge debate over human consciousness would take on a very different shape if it could be identified in a more pure and simplified form in creatures who are perhaps more closely related to dinosaurs than to humans. What is it like to be Alex? Well, ask him! Granted, the answer you get would not be even as sophisticated as that of a 3-year-old child. But who wants sophisticated? That's only going to distort the unvarnished report of the nature of being so-and-so. Consider the idea that we train Alex, or his successor, to have just exactly enough grammar and vocabulary to answer the question: "What is it like to be you, Alex?" It is pretty clear what kind of answer we would get: "Want a nut!" Does this mean Alex just can't learn enough to answer the question? Why is that not a satisfactory answer?

Someone (I hope) is reading this blog. Ask yourself: What is it like to be you? What kinds of answers are at your disposal? You can describe experiences and sensations you enjoy, desires and drives that you have, pains, emotions, things you know or believe, worries, creative impulses; in short, the things that go to make up the bulk of your cognitive life. Is that a better answer to the question "What is it like...?" Why so? Being Alex is pretty simple. "Want a nut" is a large part of it. But Alex would also say "I love you" to Dr. Pepperberg at the end of the day, or "I'm going away now" to indicate resistance to a training session. So, with some help, Alex might have been ready to add to the "What is it like...?" response: "Sometimes I don't want to train and I say the thing Irene says when she goes away, and sometimes I realize that I am going to be alone and I don't like that so I say the thing that seems to mean Irene will come back." Pretty basic; Alex has some likes/dislikes beyond cashews, and he knows (and possibly feels) the difference between company and no company.

Okay, that's what it's like to be Alex. Have you figured out what it's like to be you? I mean, what it's really like, not these more sophisticated Alex-type responses. You have, of course, direct access to your own phenomenal consciousness, and this, the story goes, has its intuitive feel, or style, or shape, or... quality, that's it. As Michael Tye tells us
ad nauseam in the first chapter of his book Ten Problems of Consciousness, "for feelings and perceptual experiences, there is always something it is like to undergo them" (p.3). Or take Peter Carruthers (our token HOT theorist for this post) who writes: "Phenomenally conscious states are states that are like something to undergo; they are states with a subjective feel, or phenomenology; and they are states that each of us can immediately recognize in ourselves..." ("Why the Question of Animal Consciousness Might Not Matter Very Much", Philosophical Psychology 18 (2005) 83-102, p.84). Well, come on, now, you're an articulate and reflective sort, what the heck is it like after all? Tom, Peter, Michael, you said it, it's like something to be in your (directly accessible) state of consciousness. So, what's it like? How does it feel? Cat got your tongue?

Bats, says Nagel, operate on sonar, and "this appears to create difficulties for the notion of what it is like to be a bat" (Mortal Questions, p.168). Moreover, according to Nagel, we cannot even know what it is like for people who are deaf and blind from birth to be the way they are (p.170). Many of those people, nevertheless, can use language about as well as the rest of us, so clearly Nagel is writing off their ability to communicate linguistically as a conveyor of "what it's like". But no such difficulty attends knowing what your own experience is like; the only difficulty, according to Nagel, is that we can't express it in "objective" language such that the next guy can grok it. (If you haven't read Robert Heinlein's Stranger in a Strange Land please proceed to the nearest bookstore; and pick up a copy of Kurt Vonnegut's Cat's Cradle too, and maybe Thomas Pynchon's The Crying of Lot 49 while you're at it, as I have no compunction about using neologisms of philosophical interest from popular literature. Anyway, grokking is kind of like allowing a meme into one of your mental ports. Capiche?) So there you have it: let Alex talk all he wants; let you talk all you want; let even Tom Nagel talk all he wants, ain't nobody going to express what it's like to be them in such a way that the next animate, linguistically enabled being can read it and know what it is like to be them.

But if that's the case, why deny that Alex was using language? It seems that the phenomenology behind the use of words remains forever hidden and indecipherable; the information is simply lost. Riemann discovered he could reconstruct mathematical landscapes from the zeros of the zeta function (see Marcus du Sautoy, The Music of the Primes); but apparently, no one can reconstruct phenomenological landscapes from their base level expressions. The equations of language simply fail as the conveyors of information about the curves and lumps from which they originate. If our phenomenological reports are the equations of our system, their original coordinates are simply lost. We have a scrambled vista of individual reports, but this gives us no leg up on the original landscape that shows what it's truly like. At best we can build our own landscape by association with the terms of someone else's report. With Alex's report, our ability is that much more limited. A bat's report, forget it: screeeeeeyyyyeeeee builds about as flat a landscape as Riemann's zeros, with no possibility of recovering the geographic information. But the bat idea is, as Nagel himself admits, kind of superfluous; all we ever get through language - indeed through any form of communication - is, you might say, an instruction to translate these words into associations from your own experience. And the less you can associate - very little with Alex, just about zip with a bat - the less you can even do that. The landscape-in-itself, that untouchable "what it's like" of the other, remains all zeros, for all conscious creatures whatsoever.
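(A mathematical aside, and I should hedge that this is my own gloss on du Sautoy's metaphor rather than anything in the original debate: the "reconstruction" Riemann made possible is, roughly, the explicit formula in which the prime-counting landscape ψ(x) is rebuilt, wave by wave, from the nontrivial zeros ρ of the zeta function,

\psi(x) \;=\; x \;-\; \sum_{\rho} \frac{x^{\rho}}{\rho} \;-\; \ln(2\pi) \;-\; \tfrac{1}{2}\ln\!\left(1 - x^{-2}\right),

where each zero contributes an oscillating term and summing enough of them recovers the jagged staircase of the primes. The contrast I am drawing is that no analogous summation over verbal reports recovers a phenomenological landscape.)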

Now all I want to say is that this whole conception is incoherent (in the vernacular, rubbish - but we don't say that in nice philosophical discussions, even across the social tables at APA meetings. But let them try bringing the meeting to Brooklyn for a change...). You cannot describe a problem as a gap in the capabilities of objective language (nor as beyond the limit of our cognitive capabilities, but I'll deal with McGinn and his school (?) some other time). The phrase what it is like does not describe generically some objective thing that can't be objectively described in any specific instance. It is not a placeholder for something for which we await better forms of expression. There is no such place. There is no "there" there, or rather no "what" there. The phrase "what it is like",
to borrow another of Wittgenstein's analogies, is a linguistic wheel that turns without moving any part of the mechanism.

To go deeper we have no choice but to consider how the expression "what it's like" is being used here. It seems that there is supposed to be some objective quality, likeness, that inheres in conscious states, but which we lack the means to express in words; if only we could, we could say what it's like. But when Tom Nagel or Michael Tye reflect on their own consciousness, what exactly is the likeness they find there, and what is it a likeness to; what is that very thing that they recognize and that is objectively different from what a bat would find, if bats were self-reflective? "Well, that's the problem, you see; we can't say! Maybe you just don't understand the problem. Everyone else seems to understand it. How can we help you?" This is such a cop-out; it does nothing but extend the attempt to wrap the reader in nonsensical uses of common expressions. The assumption that there just is something that it's like to have this or that form of consciousness should be recognized as one of the most peculiar and frankly nutty ideas in philosophy; but it goes on and on as the eye of the hurricane in the consciousness debate. Examples and counterexamples fly through the air all around it, blowing inverted spectra and swamp men all over the place, and likeness just calmly stands stock still in the middle of it, laughing at the maelstrom. But the phrase does not denote anything; not, of course, that there is no such thing as consciousness (fie on those who hope to turn this into an argument for eliminativism or behaviorism), but there is not some particular way that consciousness feels, seems, is. We think: if only there were some brilliant enough mind to find the way to express it, some Shakespeare/Einstein of consciousness, it could be done, and everyone would say, "Oh yes, of course, that's what it's like," and the problem would be solved. (Maybe we should start a committee?)

The "hard" problem of consciousness is often confuted (on my reading) with the problem of expressing the nature of particular phenomenal states. What's it like to see green? Well, for one thing, it's about the same thing to see green whether you are normally color-sighted and seeing a green light, or you have inverted spectra and are looking at what most people see as a red light. And, I think, this goes right to the bottom: it is the same for a llama to see green as it is for you. And the same for a bat, if they can. (To my knowledge, bats are not actually blind, nor do they use sonar exclusively, but if I'm wrong, NBD.) So the difference in likeness must lie somewhere else. You would not think so from reading Tye and others who have taken to ascribing a likeness to each and every type of perception or experience; which gets us nowhere, and is highly misleading as to the original point of Nagel's article. There is supposed to be something it is like to be in a general state of consciousness. That this leads to some communicative gap is more believable, at least, than that there is some sort of problem with seeing green or feeling pain. If the point of the whole discussion of phenomenal consciousness were that there is not some linguistic expression that just is the glassy essence of an experience, to be passed from mind to mind like genes that can reproduce entire individuals, it would have been obvious very quickly that there is no such form of language, that there never will be, and that this is not a "problem" so much as a misunderstanding of how the language of sensations functions. (I can't resist putting in a little plug here: Wittgenstein dealt with this gap between phenomenon and expression at length in his 1929-30 manuscripts, and it played a crucial role in the transition to his later philosophy and the private language argument; this is the subject of my thesis, Wittgenstein and the Grammar of Physics, CUNY Graduate Center 2000).

"But what about black and white Mary, doesn't she learn something new when she sees green grass, and isn't that "something" an objective bit of knowledge about how the universe is? And if so, doesn't that lead to the same problem? Because Mary can't say what the difference is, but she definitely learned something, not nothing." You are thinking: things seem very different to Mary, her world is suddenly phenomenologically richer in a very obvious way, and we can't deny that that is some real difference in the physical universe. So who wants to deny anything? :-) We did not need Mary to demonstrate this. We had duck-rabbit to demonstrate it a long time ago. There is a real difference in the universe when we saw only duck and now we see rabbit; it is not nothing, is it? We are in a different mental state (I am conceding, here, for the sake of argument, the customary notion of a "mental state", pace Wittgenstein's objections to this use of the term), and on the assumption that the universe contains only natural laws, measurable forces and physical objects, that is either a definite material difference or it's an illusion (which is itself a material difference, etc.)

We also have Alex to demonstrate it. Do you think that when Alex started to discriminate colors verbally his world became phenomenologically richer? I do, and not because words were added to his world. I don't believe that Mary actually "learns" anything when she is presented with an unexpected flood of color sensations. But when she later begins to conceptualize what she perceives, she does. But of course, she can also then say what the difference is; if you can't say a concept then I don't know what you can say. Alex was not very special just for squawking "red circle wool" and "green triangle metal". We have insufficient evidence to say that Alex really had concepts, but he gave enough of an impression that he did that we want to attribute to him a world richer than that of a feathered machine. Like color-concept-Mary, it appears, at least, that Alex became in some sense aware of differences that were stored in the raw data of perception; just as connecting the dots on a line graph can bring out relationships that were not apparent before. Alex's discovery is real; so is Mary's. But neither is some willowy subjective quale that can't be expressed objectively. Both have added to their consciousness an awareness of the color spectrum. That's it. That's what it's like in this case. Nothing like those Tractarian edicts about the logic of language that "can't be said". If "now I see colors" is not a direct expression of what it's like to be in the new perceptual state, then we have the wrong idea of what an "expression" can accomplish.

"What is it like?" is used to ask for something that can be said. Applied to something that by nature can't be said, the question is nonsensical. "Where is Thursday?" Hmmmm... "Okay, what is Thursday like?" Err, I wake up at 7:15, give the kids breakfast, drop them off at school, then I take a shower... Or rather, I have the foggy -head feeling, then I feel the cool sweet tangy taste of orange juice, then the anxiety-to-get-to-work-on-time feeling... Is that what you're looking for? Try asking a construction worker or cab driver, "What's it like to be you?" You'll get an answer of some sort. What's wrong with the answer? Nothing, it's the kind of answer you're supposed to get when you ask a question like that. Even philosophers can answer the question. "Well, we all agree that there is something that attends human conscious experience that is different from what attends avian conscious experience, if avian experience is conscious at all, and that should have some expression, but it doesn't (so far). That's the problem." Well, we all know that that the word "hot" does not feel hot! Is that the problem? And the word "green" does not always look green (in fact it could look red). Is that the problem?? And the words "normal, perceptually enabled, self-aware human waking consciousness" do not feel like conscious experience. Is that the problem??? If not, what is the problem like?

At the level of consciousness as a whole, there just is nothing that it's like. That is not to say consciousness is nothing; there is just nothing that it's like. Not because there is nothing similar enough (that would be a normal use of the expression and would not lead to any philosophical problems). It's because "being like", in the way it is used here, is actually just a meaningless colloquial expression, a familiar linguistic crutch: "You know, it's like, I don't know what he wanted, but he was like, mad at me, so I like sat there and wondered what the hell am I supposed to say?" This is really how the expression is being used here! Not the proper use that we employed in putting the question to the cab driver. Or else we are imagining that we wake up one day as a bat, while somehow retaining our own consciousness as background information, and go "Oh, how weird, I'm not enjoying this at all, I want things to be like my former conscious states!" (To think that philosophy glides along on such B-movie fantasies is sobering.) What role does "like" have here? We actually have two things before us to compare, so there's nothing wrong with it. So we can also say, "Human consciousness is not like bat consciousness." And now the grand conceit that throws everything off: "So what is it like?" And what could the answer possibly be? Only things of this sort: "It's like Martian consciousness; I know, because I was abducted by aliens and temporarily had a Martian brain implanted, into which was downloaded all the data of my own mind for comparison, and you know what? Martian consciousness was very much like our own." But if you want some different type of answer, where the "what" is a placeholder for some really brilliant analysis that describes for all to see just "what" it's like, you're a naughty person out to set a grammatical trap for unsuspecting philosophers. And almost everyone who has discussed consciousness in the last 25 years has landed in this trap.

What is it like to be Alex? Or a bat? That we can't answer these questions is not an indication of a problem, philosophical or otherwise, and certainly does not point to a gap in materialism. Materialists generally dismiss the Nagel problem without a great deal of fanfare, wondering (if materialists wonder) what exactly the problem is. As well they should. Materialists have missed their chance to deliver one of the supreme ironies of contemporary philosophy: they could have quite profitably adapted the Wittgenstein remark you have all been waiting for me to produce - "It is not a something, but not a nothing either!" - and said it of the phenomenal quality of consciousness. Of course they would never quote Wittgenstein until they had tenure. But it would be perfect. Wittgenstein was a materialist. (Yes! Like you, and me...) He just was not a naive materialist, the kind that believes we can eventually substitute talk about the brain, and its structures and processes, for talk about the mind. But he would surely have said that talk about "what it's like" is a grammatical error based on the false inference that there is "something" consciousness is like because there is not "nothing" it's like. The materialist should say: sure it's like something: it's like having this set of neurological processes in these kinds of neurological structures. And that's a perfectly good answer. It has virtually no philosophical import whatsoever for any question in philosophy of mind, epistemology, cognitive psychology, aesthetics, ethics or anything else; but it is one of the few answers that provide a sense to the question "What is it like?" But if you reject this, and the cab driver's answer, and all other reasonable answers, then of course you must say: no, there is not something it's like, but it's not like nothing either.

"So I thought Alterman was out to persuade us that cog sci approaches to consciousness are hopeless. But among other things, he offers us a passionate defense of one of the great eliminativist propositions, that qualia don't exist. Then he tells us that the only something that consciousness can be like is gray matter. Boy, is he confused." Well, I knew that was coming. But all I will say right now is this: it is a virtual certainty that if you posit a certain ontology as the very essence of consciousness, and that ontology is vacuous, the next thing that will happen is that someone will come along and say, "Hey, here's a much better ontology, it's called physical objects, which appear here as neurons, and it surely makes your wan and evanescent ontology of qualia otiose, not to mention boring and stupid". Just as the unacceptably spooky Cartesian immaterial substance gave way to phenomenologies of various sorts, inexpressible but somehow objectified qualia are an open door through which cognitive scientists can run at full speed, with wires, test tubes and forceps flashing, declaring that there are no such ghostly objects, and that "neurophilosophy" will save the day for consciousness studies. Nothing has done more damage to the effort to understand consciousness than the notion of a subjectively objective quality of "what it's like". This phrase should be banned from the language, except as a historical reminder of how the discussion of consciousness was distorted for three decades.

Let me close with a few further thoughts about Alex. I think Alex was using language, in at least the sense that the person described by St. Augustine in the passage that opens the Philosophical Investigations was using language. Wittgenstein's point, of course, is that human language cannot be reduced to the simple game of ostensive reference, where the rest of the field "takes care of itself". Alex's accomplishments may have actually been slightly beyond those of Augustine's infant self (not his real infant self, which would have been 100 times more sophisticated than Alex, but the one he describes). But even if they were not, it is clear, as even the NY Times writers brought out, that Alex's behavior prompts us to ask what we ourselves are doing when we use language. Indeed, I don't think I can say it better than Verlyn Klinkenborg does in the Times editorial: "To wonder what Alex recognized when he recognized words is to wonder what we recognize when we recognize words." (George Johnson, BTW, ends his article with a brief musing on the Nagel problem.) "Using language" is not necessarily an on/off situation. Wittgenstein says that the Augustine picture represents "a language simpler than ours". That is not no language, it is a language simpler than ours. So perhaps Alex was using a language much simpler than ours. How much? Whales communicate through sounds, and I don't know that their sounds have no syntax (indeed it would make little sense if they had none); but theirs is certainly a language much simpler than ours. Whale talk does not involve concepts. What about dog and cat talk? "You're on my turf, pipsqueak, get out of here before I give you a lesson you'll remember!" That's us, awkwardly translating into complex grammar and concepts what canines and felines express with "grrrrr..." and "yeeeoooooow"! The economy of their language shouldn't prevent us from calling it language at all. Alex surely one-upped these language users, being able to use human noises to express simple desires and indicate recognitions.

But to say that Alex was using even a simple language is to say something that somewhat, though not completely, undermines the notion of an innate generative grammar. It is a far more radical idea that parrots have an innate ability to use human language, even to the extremely minimal degree that Alex did, than that humans do. But if Alex could be trained to use even a small subset of one human language, and could moreover demonstrate some of the combinatorial and syntactic capabilities that seem so peculiar to human verbal communication, the innateness hypothesis seems unnecessary. A few hours a day with a parrot doesn't even compare with the constant verbal coaxing we give to a young child, so if the prune-brained parrot can learn that much, surely we can account for human language cognition as rote learning. Now, the Chomskian view of language has a mixed relationship to the cog sci view of the mind. On the one hand, cog sci needs some machinery to explain human linguistic capabilities, and the notion of a highly evolved module that encodes these capabilities like a wet compiler is very appealing. But its appeal is more to computationalists like Jackendoff than to neuroscience types like the Churchlands. For example, Paul Churchland complains (see e.g., The Engine of Reason, the Seat of the Soul) that Chomsky requires the rules to be represented in the mind, and representation, we know, is a dirty word to Churchland. Neural nets are all we need, he says; a pox on your representational rules engine. So it is not clear whether Alex, if he challenges Chomsky, challenges cognitive science in general, though he may challenge some forms of computationalism.

I suppose that innatists of any variety could get around this by saying that every creature of higher order than a clam may have evolved some minimal generic linguistic capability, which could be harnessed, through sufficient training, and assuming some innate vocal capabilities, to any human language. They would never get very far, but their lower order innate generative grammar would account for the possibility of an Alex. At some point, animal grammar would radically drop off, but the location of that point can be debated. But this whole reply seems a bit ad hoc, to me. It would be better to stick to your guns and deny that Alex could actually use language at all. That, however, seems to depend on an artificially rigid definition of what using language consists in, and is thus equally ad hoc. One could of course deny that language use has anything fundamental to do with consciousness, and insist it is therefore extraneous to the debate. This is a very dubious hypothesis, which I'm not even going to try to come up with a rationale for. Thus, any effort to erect an intellectual blockade between human and animal consciousness by virtue of a difference in linguistic capabilities is probably doomed to fail. Animals are conscious, or so I hold; and they use language. These two things may scale to one another, or they may not, since it has not been argued (by me, anyway) that the relationship between language and consciousness is necessary, directly proportional, or anything like that. But if we are talking about human consciousness in particular, we would probably do well to focus more on language use, and how it evolved, than on brain scans.

I will leave the Alex discussion with the thought that the nature of a bat's consciousness may be far more accessible than we think. Perhaps Alex chose to leave his body to science, in which case his vocal apparatus would be available to others. There are plenty of people who love bats; and I'm sure they would like nothing better than to have one that talks. I am dreaming of Tom Nagel waking up one day to find, hanging upside down from his bookshelf, a bat, who greets him with the words: "It's like this..." and concludes: "And I want that published!"

5 comments:

N. N. said...

Excellent post!

Just in case you haven't read it, I'd like to bring your attention to some relevant remarks of Hacker's (the section titled "qualia" beginning on page 11 of the linked pdf): http://info.sjc.ox.ac.uk/scr/hacker/docs/Reply%20to%20Dennett%20and%20Searle.pdf

Anonymous said...

Thanks, N.N. Not sure the whole URL made it into your comment; I couldn't link to it. But I know Hacker's piece on eliminative materialism, and I think something on Nagel, and we're on the same page. It is unfortunate but predictable that he is not taken very seriously in the consciousness debate. Nobody who is conceived as clinging to W's coattails will get their due from that crowd. (A word to the wise for W scholars...)

Unknown said...

Ah yes, good 'ole Thomas Nagel. His famous piece about bats got me all involved in this 'problem of consciousness'.

Unfortunately, Ludwig Wittgenstein dispelled all sense of intrigue I once had regarding consciousness and its status as a philosophical problem

Nevertheless, very descriptive and interesting post.

Language Games, my blog

I'll add you to my blogroll. Good to see another philosopher-blogger.

Anonymous said...

Thanks, David. Nice blog you have there, it has been added to my link list.

Do you mean what W says in The Blue Book about the mind-body problem? What he says on consciousness in the PI has never seemed to me like anything you could hang your hat on.