Saturday, October 6, 2007

AI, Cog Sci, Pie in the Sky

So I've been working my way through this long article on robotics that appeared in the July 29 edition of the Sunday Times, and I'm thinking the author, Robin Marantz Henig, is being very measured and balanced in dealing with these nasty questions, like "Can robots have feelings?" and "Can they learn?" etc. And yet I can't avoid the nagging suspicion that in spite of her good intentions, she just doesn't get it.

Get what? Get what cognitive scientists really want. Get the idea of what Andy Clark, quoting computer scientist Marvin Minsky, calls a "meat machine". Artificial intelligence/meat machine: two sides of the same coin. Robots think; brains compute. It's a bit confusing, because it sounds like we're talking about two different things, but they are logically identical. Nobody said that wires can't look like neurons, or neurons can't look like wires; if we just used gooey wires inside robots, people who opened them up might say, "Oh, of course they have feelings, Madge, what do you think?" Maybe when we start creating real-life Darth Vaders, with some PVC-coated copper inside the skull (how long can it be until this happens?) people won't jump quite so quickly on the train going the other way: "Oh, of course all we are is elaborate computers, Jim, what do you think?" But the seed will have been planted, at least. With a little help from the connectionist folks we might begin one of those epistemological shifts to a new way of thinking, sort of like when people began to accept evolution as a natural way of looking at species. This is the picture that cognitive scientists really want.

Ms. Henig describes her encounters with a series of robots at the M.I.T. lab of Rodney Brooks: Mertz, whose only performance was to malfunction for the author; Cog, a stationary robot that was "programmed to learn new things based on its sensory and motor inputs" (p.32); Kismet, which was designed to produce emotionally appropriate "facial" expressions; Leo, which was allegedly supposed to understand the beliefs of others, i.e. it had a "theory of mind"; Domo, equipped with a certain amount of "manual" dexterity; Autom, linguistically enabled with 1,000 phrases; and Nico, which could recognize its "self" in a mirror. (You can get more intimately acquainted with some of these critters by going to the Personal Robots Group at the MIT Media Lab web site. Before they try to create consciousness in a can, the roboticists should try fixing their Back button, which always leads back to the MIT site rather than their own page.) Throughout her discussion, Henig expresses both wonder at the tendency of people to interact with some robots as if they were conscious beings (a result of cues that set off our own hard-wired circuitry, it is surmised) and disillusionment with the essentially computational and mechanical processes responsible for their "humanoid" behavior. It is the latter that I am referring to when I say I don't think she's quite clued in to the AI mindset.

The first hint at disillusionment comes when she describes robots as "hunks of metal tethered to computers, which need their human designers to get them going and smooth the hiccups along the way" (p.30). This might be the end product of one of my diatribes, but how does it figure just 5 paragraphs into an article called "The Real Transformers", which carries the blurb: "Researchers are programming robots to learn in humanlike ways and show humanlike traits. Could this be the beginning of robot consciousness - and of a better understanding of ourselves?" Is Henig deconstructing her own article? She certainly seems to be saying: hunks of metal could only look like they're conscious, they can't really be so! Whereas I take it that computationalists suggest a different picture, of a slippery slope from machine to human consciousness, or at least a fairly accurate modeling of consciousness by way of the combined sciences of computer science, mechanics, neuropsychology, and evolutionary biology. (Sounds awfully compelling, I must admit.)

Henig does say that the potential for merging all these individual robot capacities into a super-humanoid robot suggests that "a robot with true intelligence - and with perhaps other human qualities, too, like emotions and autonomy - is at least a theoretical possibility." (p.31) Kant's doctrine of autonomy would have to be updated a bit... And can we add "meaning" to that list of qualities? (I'd like to set up a poll on this, but it seems pointless until I attract a few thousand more readers...) The author seems inclined to wish that there were something to talk about in the area of AC (Artificial Consciousness :-) but then to express disappointment that "today's humanoids are not the sophisticated machines we might have expected by now" (p.30). Should we be disappointed? Did anybody here see AI? (According to the article, Cynthia Breazeal, the inventor of Kismet and Leo, consulted for the effects studio on AI - though not on the boy, who was just a human playing a robot playing a human, but on the Teddy bear.)

Cog, says Henig, "was designed to learn like a child" (p.32). Now here comes a series of statements that deserve our attention. "I am so careful about saying that any of our robots 'can learn'", Brooks is quoted as saying. But check out the qualifiers: "They can only learn certain things..." (that's not too careful already) "...just like a rat can only learn certain things..." (a rat can learn how to survive on its own in the NYC subways; how about Cog?) "...and even [you] can only learn certain things" (like how to build robots, for example). It seems to be inherent in the process of AI looking at itself to imagine a bright future of robotic "intelligence", take stock of the rather dismal present, and then fall back on a variety of analogies to suggest that this is no reason to lose hope. Remember when a Univac that took up an entire room had less computing power than the chip in your cell phone? So there you go.

Here we go again: "Robots are not human, but humans aren't the only things that have emotions", Breazeal is quoted as saying. "Dogs don't have human emotions either, but we all agree they have genuine emotions." (Obviously she hasn't read Descartes; which may count in her favor, come to think of it.) "The question is, What are the emotions that are genuine for the robot?" (p.33) Hmmm... er, maybe we should ask the Wizard of Oz? After reading this statement I can't help thinking of Antonio Damasio's highly representational account of emotions. For Damasio, having an emotion involves having a representation of the self and of some external fact that impacts (or potentially impacts) the self; the emotion consists, roughly, in this feedback mechanism, whereas actually feeling the emotion depends on consciousness, i.e., on recognition that the feedback loop is represented. On this model, why not talk about emotions appropriate to a robot? Give it some RAM, give it some CAD software that allows it to model its "self" and environs, and some light and touch sensors that permit it to sense objects and landscapes. Now program a basic set of attraction/avoidance responses. Bingo, you've got robot emotions. Now the feeling of an emotion, as Damasio puts it - that will be a little harder. But is it inconceivable? It depends, because this HOT stuff (Higher-Order Thought, for those socially well-adjusted souls out there who don't spend their lives reading philosophy of mind lit) can get very slippery. Does the feeling require another feeling in order to be felt? And does that require another feeling, etc.? I suppose not, or no one would pause for 2 seconds thinking about this theory. One HOT feeling is enough, then. Great. RAM 2 solves the problem; the robot now has a chip whose function is to recognize what's being represented on the other chip. This is the C-chip (not to be confused with C-fibers) where Consciousness resides, and it produces the real feelings that we (mistakenly, if Damasio is right) call "emotions". So, we're done - consciousness, feelings at least, are represented in the C-chip, and therefore felt. Now we know what it's like to be a robot: it's like having a second-order representation of your emotions in a C-chip. And now we can end this blog...
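Just to make the toy picture concrete, here is a minimal sketch of the architecture I've been caricaturing - a first-order self-model with attraction/avoidance responses, plus a higher-order "C-chip" monitor. Every class name, sensor value, and threshold here is my own invention for the sake of illustration; it's not anybody's actual robot code, least of all M.I.T.'s.

```python
# Toy sketch of the Damasio-flavored story above. All names and numbers invented.

class SelfModel:
    """First-order representation: the robot's model of its 'self' and surroundings."""
    def __init__(self):
        self.percepts = []                      # what the light/touch sensors report

    def sense(self, percept):
        self.percepts.append(percept)

    def appraise(self):
        """Basic attraction/avoidance responses: crude 'emotions' as tagged feedback signals."""
        emotions = []
        for p in self.percepts:
            if p.get("contact") and p.get("force", 0) > 0.8:
                emotions.append(("avoid", p))
            elif p.get("light", 0) > 0.5:
                emotions.append(("approach", p))
        return emotions


class CChip:
    """Second-order (HOT-style) monitor: a representation of what the first-order model represents."""
    def observe(self, self_model):
        felt = []
        for label, percept in self_model.appraise():
            # The 'feeling' is just a representation of the emotion being represented.
            felt.append({"i_notice_that_i": label, "about": percept})
        return felt


robot = SelfModel()
robot.sense({"light": 0.9})                     # something bright and attractive
robot.sense({"contact": True, "force": 0.95})   # something that bumped it, hard
print(CChip().observe(robot))
```

Whether printing out those second-order dictionaries amounts to feeling anything is, of course, the whole question.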

Unless we are concerned, with Henig, that still all we have are hunks of metal tethered to computers. Let's move on. M.I.T. calls Leo, the "theory of mind" Bot, "the Stradivarius of expressive robots". Leo looks a bit like a Pekingese with Yoda ears. If you look at the demo on the web site you can see why Henig was excited about seeing Leo. A researcher instructs Leo to turn on buttons of different colors, and then to turn them "all" on. Leo appears to learn what "all" means, and responds to the researcher with apparently appropriate nods and facial expressions. Leo also seemed capable of "helping" another robot locate an object by demonstrating that the Bot had a false belief about its location. Thus, Leo appears to have a theory of mind. (This is a silly way of putting it, but it's not Henig's fault; it's our fault, for tolerating this kind of talk for so long. Leo has apparently inferred that another object is not aware of a fact that Leo is aware of; is this a "theory of mind"?) But, says Henig, when she got there it turned out that the researchers would have to bring up the right application before Leo would do a darned thing. Was this some kind of surprise? "This was my first clue that maybe Leo wasn't going to turn out to be quite as clever as I thought." (p.34) If I were an AI person I would wonder what sort of a worry this was supposed to be. I would say something like: "Look, Robin, do you wake up in the morning and solve calculus problems before you get out of bed? Or do you stumble into the kitchen not quite sure what day it is and make some coffee to help boot up your brain, like the rest of us? Why would you expect Leo to do anything before he's had his java?" Well, complains the disappointed Henig, once Leo was started up she could see on computer monitors "what Leo's cameras were actually seeing" and "the architecture of Leo's brain. I could see that this wasn't a literal demonstration of a human 'theory of mind' at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects." (p.34) Leo was not even recognizing objects per se, but magnetic strips - Leo was in part an elaborate RFID reader, like the things Wal-Mart uses to distinguish a skid of candy from a skid of bath towels. Even the notion that Leo "helped" the other Bot turns out to have been highly "metaphoric" - Leo just has a built-in group of instruction sets called "task models" that can be searched, compared to a recognizable configuration of RFID strips, and initiated based on some criteria of comparison.
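For what it's worth, the "task model" business, as Henig describes it, sounds like nothing more exotic than a lookup: a stored library of instruction sets matched against whatever configuration of RFID tags the reader currently picks up. Here is a toy sketch of that shape - my own guess, with invented task names, tags, and matching criterion, not anything drawn from the actual M.I.T. code:

```python
# Toy sketch of "task models" matched against a configuration of RFID tags.
# Task names, tags, and the matching criterion are all invented for illustration.

TASK_MODELS = {
    "press_red_button":  {"red_button"},
    "press_all_buttons": {"red_button", "green_button", "blue_button"},
    "point_to_cookie":   {"cookie_box"},
}

def choose_task(visible_tags):
    """Search the stored task models and pick the one whose required tags
    best match what the RFID reader currently 'sees'."""
    best_name, best_score = None, 0
    for name, required in TASK_MODELS.items():
        if required <= visible_tags:        # every required tag is present
            score = len(required)           # crude criterion: prefer the most specific match
            if score > best_score:
                best_name, best_score = name, score
    return best_name

print(choose_task({"red_button", "green_button", "blue_button"}))  # -> press_all_buttons
print(choose_task({"cookie_box"}))                                 # -> point_to_cookie
```

On this reading, "helping" the other Bot would amount to running the same search over the tags the other robot is presumed to see - which seems to be about all the "theory of mind" comes to.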

And what exactly do humans do that's so different? You know what the AI person, and many a cognitive scientist, is going to say: after tens of millions of years of evolution from the first remotely "conscious" living thing to the brain of Thales and beyond, the adaptive mechanisms in our own wiring have become incredibly sophisticated and complex. (So how do you explain Bush, you ask? Some questions even science can't answer.) But fundamentally what is going on with us is just a highly evolved version of the "simple" programming (not that I would want to have to write it!) that runs Leo and Cog and Kismet. What conceivable basis could we have for thinking otherwise?

Henig goes on to talk mainly about human-robot interaction, and why the illusion of interacting with a conscious being is so difficult to overcome. Here, as you might expect, the much-ballyhooed "mirror neurons" are hauled out, along with brain scans and other paraphernalia. I don't have too much to say about this. There are certainly hard-wired reactions in our brains. One could argue that what makes humans different from all possible androids is that we can override those reactions. A computer can be programmed to override a reaction too, but this merely amounts to taking a different path on the decision tree. It overrides what it is programmed to override, and overrides that if it is programmed to do so, etc. But someone will say that that is true of us too; we merely have the illusion of overriding, but it is just another bit of hard-wired circuitry kicking in. Since this spirals directly into a discussion of free will I'm going to circumvent it. I think evolved, genetically transmitted reaction mechanisms may well play a part in our social interactions, and if some key cues are reproduced in robots they may trigger real emotions and other reactions. What happens once that button is clicked is a matter that can be debated.
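To put the point about overriding in concrete terms, here is a deliberately trivial sketch (invented names throughout, nothing more than an illustration) of why a programmed "override" is just another branch that was specified in advance:

```python
# Toy illustration: a programmed "override" is itself just another pre-specified branch.
def react(stimulus, override_enabled=True):
    if stimulus == "smile":
        if override_enabled:            # the "override" path...
            return "suppress_response"  # ...was written in advance,
        return "smile_back"             # just like the path it overrides
    return "ignore"

print(react("smile"))                          # -> suppress_response
print(react("smile", override_enabled=False))  # -> smile_back
```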

The article concludes with a variety of surmises on consciousness, citing Dennett, philosophy's own superstar of consciousness studies, and Sidney Perkowitz, an Emory University physicist who has written a book on the human-robot question. Consciousness, says Henig, is related to learning and emotion, both of which may have occurred already at the M.I.T. lab, though only Brooks seems to think the robots actually "experienced" emotions in the sense that Damasio requires. Dennett says that a robot that is conscious in the way we are conscious is "unlikely"; John Haugeland said the same thing in 1979; robots "just don't care", he says (see "Understanding Natural Language"). And these are some of the people who are most inclined to describe the mind as in some sense a computational mechanism located in the structure of the brain. But people who would go much further are not hard to find. "We're all machines", Brooks is quoted as saying. "Robots are made of different sorts of components than we are... but in principle, even human emotions are mechanistic". (p.55) He goes on: "It's all mechanistic. Humans are made up of biomolecules that interact according to the laws of physics and chemistry." (I'm glad he didn't say "the laws of biology".) "We like to think we're in control, but we're not." You see, it's all about free will. These cog sci guys want to drag us into a debate about free will. No, I take that back, they have solved the problem of free will and they want us to see that. Or possibly, they have been reading Hobbes and want to share the good news with us. Whatever.

Henig's elusive, ambivalent position on robotic consciousness is easy to sympathize with, and as anyone who has read this post thoughtfully can tell, the ultimate point of my article is not to take her to task for being naive or ambivalent. It is that perspectives like the one coming from Brooks have insinuated themselves into our culture - into the media, philosophy, and cocktail parties - and legitimized the notion that whatever is left of the mind-body problem will just be taken care of by the accumulated baby steps of Kismets and Leos and Automs. Statements like the ones Brooks makes are tokens of the inability of people to think outside their own intellectual boxes. There is plenty of scientific evidence for the fact that mental processes go on below the level of consciousness (blindsight, etc.); there is not the remotest shred of evidence that these processes are mainly computational, or that computations, however complex, can yield outputs that have more than a superficial similarity to any kind of animal consciousness. There is every reason to believe that every fact and event in the universe has a scientific explanation; there is not the slightest reason to believe that the explanation of consciousness is more like the Cartesian-Newtonian mechanisms behind the motion of mid-sized objects at slow speeds than it is like the probabilistic fields of quantum electrodynamics. We don't have a clue how consciousness works; not at the neural level, and certainly not at the computational level. We are in the same position Mill occupied in the 19th century when he said that whatever progress we might hope for in the area of brain research, we are nowhere near knowing even whether such a research program will produce the results it seeks, much less what those results might be. We very likely do not even have two psychologists, neurologists or philosophers who agree with one another on what an emotion is, much less whether a robot could have one.

What's more, at present we have no philosophical or other justification for the notion that when we are trying to solve the mind-body problem, or talk about the mind or consciousness at all, what we are looking for should be thought of at the level of explanation of basic science or computation rather than traditional philosophy or psychology. People have brought all sorts of tools to the study of literature - lately, even "evolutionary literary studies" have gained a foothold, to say nothing of Freudian, Marxian, linguistic, deconstructionist or anthropological approaches. Does any of this demonstrate that the best understanding of literature we can obtain will be through these approaches, which subvert the level of literary analysis that studies the author's intentions, rather than through traditional literary criticism or philosophical approaches to fictionality? I don't know that philosophers or literary critics are in general ready to concede this point, though obviously various practitioners of postmodernism and other such trends would like to have it that way. Then why would we concede that the best approach to the mind-body problem is through AI, IT, CS, or other two-letter words? We might be better off reading William James (who was hardly averse to scientific study of the mind) than reading Daniel Dennett. Or reading Husserl than reading Damasio. We'd certainly be better off reading Wittgenstein on private language than Steven Pinker on the evolutionary basis of cursing.

Put all the C-chips you want into Leo or Nico; putting in a million of them, at 10 petabytes each, wouldn't be that hard to do these days. What will that do? Get them closer to consciousness? They're still hunks of metal tethered to computers, and for all we can tell, nothing that any AI lab director says is going to make them anything more.

4 comments:

Anonymous said...

Prediction: make someone wear inverting glasses, and they will see an upside-down image at first (the brain inverts it out of habit), but eventually the brain will turn it right side up. It works!

Is this at all a reference to T.S. Kuhn's The Structure of Scientific Revolutions - in particular his discussion of the transition from 'crisis' to 'normal science' after paradigm selection as akin to individuals undergoing just this kind of transformation (from wearing inverting glasses as a kind of anomalous sensory experience to - eventually - becoming 'used' to it)?

Language Games

Anonymous said...

I guess it could serve as an interesting metaphor for Kuhn's idea. My point of course was intended literally, as a phenomenological fact that can be predicted from scientific knowledge about the brain. But now that you mention it, it is probably the weakest of the examples I gave, since I am not sure we know the physical mechanism underlying the phenomenon. Perhaps it is only known from phenomenological evidence. I'd have to do a little research to see if that's true.

Anonymous said...

VS BENDANEER SAYS:

Consciousness is such a troublesome word. If consciousness is thought of as separate from the content of consciousness - that is, if consciousness is apart from what appears in it - paradox follows. If I hold that any notion, and indeed any appearance of anything whatsoever, is content rather than consciousness itself, then I can conclude that consciousness can never be known, since any characterization or appearance of it must be content rather than consciousness itself. I can't even know whether there is such a thing as consciousness or not; the whole thing loses any foundation.

Is some separate entity - consciousness - indicated by the arising of notions like "I am conscious", or "there is consciousness", or "I am thinking", or "I am aware that I am aware"? I can deny that these indicate any entity beyond the arising of the notions themselves. And these notions are similar to the following: if "there is a duck in the driveway" arises, and then "this arising is to acknowledge that the notion 'there is a duck in the driveway' has arisen" arises, it is not necessary to posit some entity - consciousness - by which the arisings are seen, or in which the arisings appear, or by which they are produced. It can be said that something shows up - that's all. Indeed I can argue that that is all that ever happens: something shows up. You don't need consciousness for that.

I suspect that consciousness is a notion meant to reinforce the idea of an independent individual entity or ego, in the face of the scientistic reduction of humans to molecules bumping into each other.


Is there such a clear difference between mental and physical as is assumed in the zombie debate? If physicalism admits a mental versus a physical, holding that the mental is emergent from the physical, is that scheme not itself a notion, a mental notion? - unless the physicalist wants to assume that a notion is physical, just another sort of physicality (as apparently some scientists do).
If two mutually exclusive entities are posited, physical and mental, then the issue comes down to how much knowing is conceptual and how much sensual. I can argue that you can't parse them clearly, because mind is involved in sensual perception (there being no physical sans the senses, including touch).
It seems a silly idea to say that one can know something extra-mentally.
Further, I can hold that both physical and mental are simply aspects of a third thing - experience (I think James held something like this).
Mutually and untouchably exclusive mental and physical is, in my view, just another species of the mind-body problem - and intractable.

Anonymous said...

...please where can I buy a unicorn?