Monday, August 27, 2007

Science, Philosophy and the Mind

I left for vacation (in Alaska) shortly after publishing my introductory post, and did not have access to the media I would normally look at to keep this blog current and relevant, nor to my reference materials on consciousness and cognitive science. But we're just getting started, and I have a few more preliminaries to add anyway, so perhaps it is just as well.

It is pleasant to see that I have had a couple of readers already, and certain issues that clearly need to be addressed have already been raised. So the first thing I want to do here is discuss the relationship between philosophy and science in a very general way. This is not the place for an extended theoretical defense of my position; I merely state it so that readers have an idea where I'm coming from. I have referred to Wittgenstein and his position that there is a gap between the conceptual and linguistic tasks of philosophy and the factual and theoretical tasks of science. While my position on cognitive science and consciousness is partly informed by Wittgenstein's view, I do not subscribe to what might be a naive, or perhaps a correct, interpretation of it. That is, I do not believe that science and philosophy are absolutely unrelated enterprises. My early college career was spent in scientific study, an interest I actively maintain, and I might note that Wittgenstein too had a lifelong interest in scientific developments (indeed the Tractatus directly reflects some of Hertz's ideas). But perhaps he believed that concepts are more distinct from facts than I do. I think concepts are very fluid, and conceptual truths, though they are not factual truths, are informed by our changing knowledge of the natural world. The way I would put the relationship is this: science can narrow down the range of possible conceptual truths, alter the course of philosophical investigation by closing off some lines of thought, and sometimes suggest new philosophical strategies by analogy with physical strategies (and this is not always a bad thing, though more on this later).

A common example of a scientific truth can be used to show what I am talking about. "Heat is the motion of molecules" is an example of what is usually called a scientific reduction from the macro to the micro level. Heat is a macroscopic physical phenomenon that has scientific application and is subject to measurement and scientific study. It was discovered that heat occurs if and only if, and to the extent that, there is motion at the molecular level, so that one can equate greater molecular motion with a rise in temperature. Thus one physical phenomenon was "reduced" to another. In this manner, (a) certain scientific speculation about the physical concept of heat was cut off; (b) since the concept of physical heat now had a new physical basis, the phenomenological concept of heat could no longer have exactly the same meaning it did before, or play the same role in philosophical speculation, or be confused with the physical concept (and if you don't think of "heat" as a philosophical concept, the same could be said at some point for "energy", though the reasons are more complex than this simple "reduction"); (c) a strategy for the "reduction" of philosophical concepts was suggested. Thus a scientific finding had a direct and permanent impact on philosophical speculation. Similarly, the study of light, color, and the biology of vision could not but have an impact on the way we talk about color, light, vision, or perception in philosophy. It would be madness to speculate about the nature of "colors" and simply ignore the scientific facts. Such discoveries continually alter the scope and direction of philosophical speculation.

This applies to consciousness too. For example, it is known that certain areas of the brain control certain mental functions, and that consciousness itself is not evenly distributed throughout the brain. It follows ineluctably that consciousness is not equally dependent on every mental function. People can lose significant functionality in the area of memory, recognition, sensory awareness, linguistic capability, and other critical forms of intelligence and still be "conscious" in the sense we normally mean it. On the other hand, people with some forms of epilepsy can apparently have most or all of these functions intact and not be entirely conscious (e.g., not respond to ordinary stimuli) for a period of time. It follows that these functions do not entirely depend on consciousness. These again are scientific results, the ignorance of which would simply lead philosophy down blind alleys.

But in spite of all this, there is no reason to believe that these bits of knowledge we have acquired about the brain suggest that we are on the way - indeed, that there is a way - to "reduce" consciousness to brain function. It is still far from clear that we will at some point be able to speak about physical entities and processes, eliminating, without remainder, all chatter about minds, intelligence, thoughts, ideas, beliefs, desires, motives, imaginings, and the like. It is the fervent hope of materialists of all sorts that this should be the case; that "folk" psychological concepts should be at most a shorthand for talking about what we know to be neural occurrences. The most sophisticated developments in cognitive psychology fall so far short of reducing anything that we don't even know what such a reduction would look like. For the most part, what they amount to is that when certain mental functions are performed, there is increased blood flow or electrical activity in certain parts of the brain. This is good for brain mapping, but not for figuring out what consciousness is. Extensions to these mappings are not much help either. For example, you can tell by mapping that some of the same regions light up when you imagine, remember, or dream of an object as when you encounter it first hand (have "knowledge by acquaintance" of it). We should hope that not too much money was expended on research that proves this, since most thoughtful people would have predicted something like it. But let it be granted that such discoveries are advances of some sort. Are they advances towards reducing the mind to brain functions? I don't see how. What is the path from this to eliminating the necessity of speaking of imagination when we talk of artistic creation or scientific theorizing, or even in theories of knowledge, language, or indeed consciousness?
If we are really to believe in the cog sci program, we must think we are on a path which will eventually lead to the consignment of Kant's discussion of imagination, Peirce's discussion of belief, Locke's discussion of the will, or Wittgenstein's discussion of privacy, to the dustbin of quaint but terribly outmoded theories, whose truths (if any) can be better stated in terms of neural activity. As I said, some factual discoveries could sideline some avenues of discourse. But I see no reason to believe that a single important philosophical debate will be solved by cognitive science. The nature of consciousness, as they pursue it, simply terminates in a physical or physiological description, never hooking up directly to any interesting philosophical theory or program. The scenario in which little by little we stop speaking of beliefs or conscious will, just as we (should have) stopped speaking of an anthropomorphic god, bodily humors, phlogiston, or the "elements" as air, fire and water, is a mere pipe dream of an overzealous scientific research program. There is neither scientific evidence nor philosophical reason to believe it. (I suppose it would be a cheap shot here to call it self-negating, since we would have to believe there are no beliefs to justify the eliminativist program!)

It seems that philosophers who support the cog sci program for consciousness are in the grip of an analogy like the following. Philosophers used to speculate about the physical world; little by little, philosophers themselves, and later on people whom we identify as scientists, made discoveries that more or less replaced philosophical speculation with hard science. Similarly, philosophical speculation about consciousness will be replaced by some combination of neuroscience and computational theory, with perhaps some help from linguistics (a more scientifically credentialed enterprise than philosophy) and mathematics. But note that when someone asks, "how do earthquakes occur?" or "what are stars made of?", they are normally looking for one, and only one, kind of answer: a true description of a physical process. But when someone asks: "how can unconscious matter combine to create consciousness?", or "what is it to have the belief that tomorrow is Wednesday?", not to mention "what is artistic creativity?", they can be asking several different kinds of questions. They may want a description of a chemical or neurological process, or a psychodynamic explanation as provided in contemporary post-Freudian psychology, or a philosophical discussion. Someone who is interested in one kind of explanation is going to feel cheated if they leave with another. Nor is this a sign of a primitive state of any of these disciplines. Any area of inquiry is in its infancy compared with some imagined state of it in the distant future, but it cannot be said that physics, psychology or philosophy are in their infancy in any absolute sense. "Folk" psychology and its philosophical development is not a poor stand-in for the knowledge we wish we had through neuroscience.
I don't want to use the obvious phrase and call it a different "level of explanation", because that only sounds like grist for the Quinian mill, in which levels of explanation simply go away, or become "naturalized", as science develops. Think of it this way, instead: we already have, and have had for a long time, the ability to describe human action strictly in terms of mechanics and biochemistry. Instead, we still describe it in terms of motivations, will, desire, belief and the like. Why did the level of "reduction" already available to us not replace the outmoded talk involving mental terms? Hmmmmm.... I'm sure the physicalists have an answer, but prima facie, there's no reason to think the Next Big Step will be any more "eliminative" than the last.

It would be fair to ask at this point: Just what would you require, Mr. Alterman, before you would be ready to say that such a reduction is at hand, or at least conceivable in the ordinary progress of scientific investigation? Fair enough; here is one answer: I would like to see someone describe, in purely mathematical and physical terms, what it means for two people to have the same thought. That is, take Fred and Freida, and say they each have a simple thought, like "I have to take out the trash", or "I believe my cat is bigger than your ocelot" or "Billy just learned how to do long division". These are not such complex thoughts. So what I want is to know what it would mean, or what sort of program could possibly explain, how to provide a physical-mathematical description of these thoughts such that by examining the brains of Fred and Freida we would discover an instantiation of exactly that unique, purely physical, and completely general description. (In the old lingo, I want a physical reduction of token-token identity.) In my opinion, we are not just far from having a program of this sort; we cannot even conceive what it would mean to have this kind of reduction. But without it, we do not have an eliminative materialist theory of consciousness; nor, to put it more bluntly, a physicalistic theory of consciousness of any sort. And it is not that we do not have it in the sense that we do not have a molecular transporter; we can at least conceive of what a molecular transporter would be and do, if not how it would accomplish its task. We cannot conceive of what a physical reduction of consciousness would be; what would a general neural correlate of "learned long division" be like? Where would we begin to look? The thought is just spooky, not even on the agenda of science. And my position is that it never will be, and that it involves deep misunderstandings.

This is not an anti-scientific view; nor, as you might guess, do I subscribe to some post-Cartesian form of substance dualism. "Dualism" is a bad word as long as it is associated with substances, or processes, or any form of parallelism whereby the "mental" happenings are conceived as analogous to the "physical" happenings: the brain is doing its work, and the "mind" (mysteriously conceived) is doing its work, and the two are somehow doing it together, but are not one and the same thing. This rationalist program is way too tired, not to mention theistically inspired, for me to take seriously. (There are other forms of rationalism, such as the kind promoted by Llinas, and somewhat supported by research, that locates fixed structures and assumptions in the mind as a result of evolutionary choices. This is a different sort of discussion, which I will not pursue right now.) Playing around with the word "substance" to make it fit something that is not conceived of as being constituted by rocks, water, burning hydrogen, subatomic particles, or other recognized physical substances is just a path to confusion. Substance dualism is a non-issue; yet consciousness is real, and yet not "reducible" to physical objects and processes. This is the paradox we have to address.

So why am I not a materialist? Is there a third way? Here I must revert to Wittgenstein, who dealt with this sort of confusing antinomy dozens of times, all to little avail, as evidenced by much of the writing on consciousness. Take, for example, his discussion of the "if-feeling", where he accepts the idea that there may be such a feeling, but rejects the notion that it somehow "accompanies" the word or thought. Then is it the word or thought itself? No. Then it merely accompanies it, of course? No. Then it doesn't exist, it is a mere error? No. Well, what then? Well, there is a feeling, but it is not an it! In the same way, Wittgenstein denied that there are mental processes. In the same sense that he said we should reserve the term mental "state" for something like depression or anger, not the belief that today is Monday. In the same sense that he asked if there was a something in the beetle box, and said no, there is not a something there, and not a nothing either! It seems that no matter how many times Wittgenstein discussed these kinds of confusions, no matter how many thousands of philosophers read them, the same inane dichotomy is posed again and again as if you could make some philosophical hay out of it. You're not a dualist? You must be a materialist! There are either two things there, or there are not two things there, and you say there are not two things there, so you must be a materialist, QED!

What seems to be the problem here? I think it is "how high the seas of language run"; it is people trying to piece together a theory of consciousness and finding it is "like trying to repair a spider web with your bare hands". "Heat" is a much simpler concept than "thought" or "awareness" or "sensation". It has one very strong usage, and if there are others, they can be sidelined when we give a very strong reductive explanation of the central usage. When we talk about "heat" what we are normally, literally talking about can be fully described as "the motion of molecules". When we talk about "thought" or "attention" or "imagination", what is it that can be fully described by a very strong theory of the motion of neurons and fluids? Who has an answer as to what the "it" is here that can allegedly be so described? No one. This is why, perhaps, Varela and his followers focused so hard on having a phenomenology to reduce, before actually trying to do a reduction to neurology. The point has almost completely escaped the Churchlands and most other cog sci types. But once we have the phenomenology - and Husserl is not a bad place to start, though not a complete program either - what do we have? An "it" that can be "reduced"? I don't think so. We have a phenomenology, and we have the scientifically motivated assumption that sensory facts have physical explanations, but we are far from having any valid reason for thinking that the "phenomenology" has a directly corresponding physical basis. This is where Varela and his school are wrong. Having a phenomenology (or a "phenomenological language", of the kind Wittgenstein once sought and others have actually developed) will provide interesting connections at the macro level between various neural processes and mental phenomena. They might be much richer than anything we have today. But again, the gap between that and a reduction of the mental to the physical is light years wide.

At most, I think it will eventually be recognized that while the desired reductive theory of consciousness is a worthy goal, it is not a practical program and may never be. I myself am not ready to concede that it is a worthy goal, but even if one does that, it hardly justifies the collapse of philosophy of mind into cog sci programs, as described in my previous post. Nor does it mean that brain research programs should be defunded (except to the extent that they are morally obnoxious, as in their treatment of human or non-human subjects - a matter for a different sort of blog). It means that philosophy should finally put aside the Russellian and logical positivist paradigm of philosophy following "the model of science"; though Russell at least distinguished between scientific method and results, suggesting we follow the former. Today's philosophy programs, ever-conscious of trendy bandwagons that might attract funds and build national reputations, have attempted to follow, and indeed even produce, the results. This is a rejection of philosophy itself, and an embarrassment to the profession. Once again, if this blog has even a small impact in altering this self-abnegation, I will consider it a success.

I expect to have one more preliminary post before I get current and start examining some recent results. This will be on the position that is most identified with the opposition to physicalistic monism, the idea that there is "something it is like" to have a particular form of consciousness, that this is perspectival or subjective, and that it therefore cannot be stated in the objective language of materialism, or at least we have no idea how that would be done. If it were that easy to undermine the materialist line, the battle would have been won long ago. Unfortunately, this response is itself fundamentally flawed, for much the same reason that materialism itself is flawed. But I will get to that soon. Lastly, I will just mention that I expect to be reviewing the philosophical literature on consciousness and commenting on it as appropriate as long as I keep up this blog, so that hopefully, eventually, it will become clear where I stand not only on cog sci but on the philosophical debate as a whole.


The Tetrast said...

You wrote: "In the same way, Wittgenstein denied that there are mental processes. In the same sense that he said we should reserve the term mental "state" for something like depression or anger, not the belief that today is Monday."

I'd apply "state" to both or to neither. Emotions have subject matters just as cognitions do, though we think of emotions as more self-involved, more error-prone, more like events or states "in themselves" than are the cognitions which strive to be like transparent windows on their subject matters. It's true that one doesn't always know why one feels an emotion toward something, but likewise sometimes one is sure that one knows something yet can't help wondering how one knows it.

In fact cognitions like belief and knowledge are more stable and "state-like" than are emotions like anger. If one's brain can be in a state of anger or depression about one's losses, one's brain can be in a state of belief or knowledge about those losses, too. None of it explains anger or knowledge materialistically. What might need explaining is how those material processes manage to "code up" values and knowledge (establishment, confirmation, disconfirmation, etc.) and, for instance, the decimal expansion of pi to thirty digits, in such a way that our behavior, and therefore ultimately our environment, are really influenced and determined by such "abstract" concerns. (I don't think that coding in an information-theoretic sense is the whole answer, but "code up" is a handy evocative phrase). The mind/brain is/are, so to speak, the portal through which confirmations, laws, values, etc., exert actual causal influence. I mean nothing paranormal; I mean that we go out of our ways to influence or determine situations into letting the truth influence or determine us (and with all that determinative activity on our parts in order for us to get determined by truth, a big challenge is to keep from stacking the deck). I think that most mathematicians will agree, for instance, that mathematical works are determined most of all by mathematics, rather than by social or biological forces. Mathematicians have worked hard to make their work be so determined.

C.S. Peirce took a view which seems in the spirit of dimensional analysis -- like DIDO instead of GIGO -- dimension in, dimension out. I don't know what he would have thought of discussions of "emergent properties." I don't know what I think of them either, I'm not familiar enough with them or with criticisms of them.

Your posing of the token-token identity problem between thoughts in different minds crystallizes the issues rather well. Yet it seems conceivable to me that researchers might identify certain brain states as corresponding to beliefs -- cognitions on which the person is willing to act under circumstances as they seem, even though confirmation of the cognition would not be redundant -- but would remain unable (in general physical and mathematical terms) to identify brain states with cognitions as pertaining to particular objects and situations, such as Billy's learning long division. Belief, yes, yet belief specifically about Billy's long division, no. This is analogous to how one might identify a brain state of anger but not a brain state specifically of anger about the price of eggs in the local shop. The researcher would have to independently know all too much about the price of eggs, the local shop, and the given person's history there. As long as one ignores the correlates of the cognition, the affectivity, the dealing, the willing, for so long does identification of associated general kinds of brain states seem plausible. But those correlates, the specific subject matters of the cognition, etc., and their specifically cognized signs, interpretations, and confirmations, all contribute to the actual consequences of the "state." So the general character of the state at the neural level is light-years from enough to tell you about what will happen with that person and in that person's mind/brain.

(Note -- I don't think that pointing out the vicious circle as you do is a cheap shot as you fear, unless it is used as a barrier to inquiry rather than as a pointer to an important and unavoidable problem. You don't use it as a barrier to inquiry. In another philosophical tradition, the phenomenologist Merleau-Ponty, who also held a chair of child psychology, pursued the psycho-causal vicious-circle issues rather far.)

I guess what I'm wondering is, do you mainly develop accounts of how materialism doesn't work? It goes back to what I said about a "problem-based" defense of philosophy. That may allow compatibility with diverse opponents of materialism. Or do you aim also to develop an account of what does work -- I mean, positive (I don't mean positivist) characterizations of the not-strictly-material dependencies in the intelligent being? Presenting a positive alternative to materialism may put you at odds with more views than you wish but, on the other hand, as a practical matter, people do like to be offered a positive alternative; sometimes they find that more convincing. On the "third" hand, I am not one who should spout off about what will work in convincing people! :-)

N. N. said...

I wonder if it would be better to say that science adds new concepts that can be related (in complex grammatical ways) to our existing concepts rather than to say that science informs our existing concepts. Or perhaps I should put my concern in the form of a question: What does it mean for, e.g., my concept of color to be informed by discoveries that certain other phenomena (frequencies of electro-magnetic waves) are inductively well-correlated with the phenomena of color? Do I then use my concept differently in all of its applications, or only some? Or do I have two concepts that share a name and are related in complex ways?

"...consciousness itself is not evenly distributed throughout the brain."

By Wittgenstein's lights, this has to be metaphorical, right? I'm thinking of Wittgenstein's "crude" thought-experiment in the opening pages of the Blue Book concerning the "location" of the visual field.

Anton Alterman said...

Thanks, Tetrast, thoughtful and provocative comments as usual. My replies will unfortunately not entirely do justice to them, but I wanted to at least respond in some manner.

I believe Wittgenstein's refusal to admit beliefs, etc. to the category of mental "states" goes back to his distinction between the "grammar of physics" and the "grammar of phenomenology" (the subject of my thesis, BTW). "State" is literally applied to things like gases, bodies, galaxies, or atoms, and suggests a definite physical arrangement or property that differs from other "states". It is also temporal, having a definite beginning and end in time. This concept can be transferred, without especially misleading consequences, to a "state of mind" when what we mean by that is really a state of the whole physical system, but one that is best characterized as a state of the mind of that system: depression, anxiety, etc. This also has a fairly specific duration. But a "state" of belief - the belief that birds fly, for example - has no immediate material manifestation. If you think of belief as a disposition to act, you could say that it manifests itself when someone acts in accordance with it. But aside from Quinian reasons to doubt whether behavior is a reliable indicator of a particular belief, a disposition is defeasible and dependent on catalysts and therefore does not necessarily ever result in a physical manifestation at all. So it is indeed misleading to characterize beliefs as obeying "the grammar of physics". The same applies mutatis mutandis for "processes". There are some mental "processes" - falling asleep, for instance - but a train of thought is not a mental process in the same sense that oxidation is a process. In any case, this is how I understand Wittgenstein's point.

To the extent that it is interesting to say "hey, this portion of the brain lights up on the PET scan when the subject says she is thinking about a firmly held belief", we in a sense already have the time and tools to make some sort of identification at the general level. But ask yourself: just how is it that this particular neural cluster can "be" a belief in John's mind? It doesn't make sense; something seems missing, i.e., the connecting argument that tells how gray goo is the ultimate referent of what we discuss when we talk about "beliefs". So I'm not sure what would be achieved by this kind of identification, though I have nothing against pursuing this sort of brain mapping for its own sake.

Your last question is surely the most important: do I merely want to knock out the cog sci program for consciousness (and through fairly close analogy, for other higher-level mental functions)? Or do I plan to say how it works, offer something positive? What I don't want to do right now is get caught up in the logic I alluded to in the post: consciousness is certainly not *nothing*, we all know it is a real feature of the natural world, so will somebody please say *what* it is? It's like, we have this phenomenon that everybody not only knows about but experiences first hand, and some of the greatest minds have been trying to say what it is for a long time (at least since CPR) and nobody can do it. So will Anton Alterman do it? Is it, for example, like the Poincaré conjecture or Fermat's last theorem, which just required a sufficiently talented person to come along? Let's ask that another way: Did the solution of those problems just require a sufficiently talented person to come along? Absolutely not. It was the accumulation of mathematical techniques over centuries, the narrowing down of the problem to a certain extremely specialized point, that required a very capable specialist to complete the solution. So is consciousness like that? I'm not prepared to answer that yet. I think Kant, Freud, Wittgenstein, Dennett and others have contributed to our understanding of the problem; but I can't say for sure that we even understand what we are looking for. To say more I would probably have to start talking about the way consciousness is characterized in mainstream philosophical literature. But since that is the subject of my next post, I think I will stop. My answer to your question, in short, is that I don't know if I will present anything that people will recognize as a solution. But I count dispelling illusions and closing off wrong paths as part of the solution to a problem, in good scientific and academic tradition.
So I would not want to say this is a "merely" negative enterprise. I hope it will be an important part of the discussion.

Anton Alterman said...

Regarding N.N.'s comments, I think it is more helpful to stop thinking of concepts as being fixed things, in either the individual mind or the general consciousness, and then the difficulty of having them "informed" by facts goes away. I think it is one of the more negative legacies of analytic philosophy that we tend to have an ahistorical and rigid notion of concepts. "Concept" is a term of art. There's no such thing, really, but it is a useful term because we do think of things more in one way or another, and people generally agree to a certain extent on how things should be thought of. But I can't see any problem with saying they change, whether through scientific, cultural or other developments.

Yes, metaphorical, that's right - consciousness is not literally spatially "distributed" at all, so it can't exactly be distributed "unevenly"! But you can say: these spatially located anatomical features of the brain have different roles to play in the higher-level functions that we usually characterize as belonging to consciousness. That's more or less what I meant.

The Tetrast said...

I wish I had more time to respond to your response at the moment. I wish that I were still on vacation!

I do think that there is some sort of "special" relation between consciousness and cognition (as opposed to affectivity, will, and dealing), but I wouldn't ask you to provide a positive characterization of consciousness in order to defend your anti-materialist position, though I guess that others may. I have no special expertise, but I do suspect that there is not only unconscious cognition, but unconscious confirmation/disconfirmation, etc., though consciousness's function does seem to have something to do with a need for checking and double-checking. Well, I'm one of those who finds consciousness both (a) real and (b) really mysterious from an intellectual viewpoint. I would not set the bar so high as to ask you to "solve" it.

I would look for nothing more (I guess it's still a lot) than a positive characterization of the kinds of dependencies that lift mind above the "merely material." I think of it as something which, rightly seen, would seem like a truism except when denied (then it would become a rich truth).

So I keep returning to the comparison of emotion regarded apart from its object versus cognition regarded in relation to its object. "Anger" in some absolute sense, versus cognition of Billy's long division. I'm saying that that's not the right comparison. A fairer comparison would be hope or confident feeling about Billy's long division, versus belief in Billy's long division. Or anger about Billy's lack of long division, versus knowing that Billy lacks long division. Both such specific affectivity and such specific cognition would be hard indeed to establish by neurological study alone. It's actually quite uncommon among people generally to regard their own emotions apart from their emotions' objects; it's usually emotion about some specific thing, and this is not due merely to failure to abstract. Like I said, "both or neither." Emotion stands out, so brain researchers detect anger in the brain. Yet why concede that some sort of agitation is anger? I suspect that that will be to concede the whole dispute. A cognition such as a belief will be a quieter state or structure, and, regarded separately from its object, may well prove detectable as a belief about "something," who knows what. But neither the affectivity nor the cognition will be adequately understandable except in logical or inferential relationships to their objects, to the signs for those objects, to the interpretations, and to the (dis-)confirmations of those interpretations and the whole interpretive system. I don't mean the researcher's confirmations, etc., I mean instead those of the mind under examination. Somehow a material system arranges for itself to receive determination or influence from abstract logical considerations among many other things. Well, that's my take anyway. Take it as a rough amateur example of what I mean by positive characterization of dependencies.

Anonymous said...


I read your LampPost blog in which you mention David Bean. I was a student at UCSB and was on the committee that arranged for concert artists to come to UCSB. I invited Mr. Bean and arranged for him to give a solo concert at UCSB--I think it was about 1970. He played all Liszt--a lot of stuff (and a thousand notes!) that I have forgotten, but he was a terrific pyrotechnic. I think he was from Ohio, a professor of music there. I too, have lost touch--and wish I hadn't. The Westminster recording is very much the way he plays--the Ginastera Piano Sonata is incredible.

I wonder what happened to him--I really don't know why he didn't get picked up by a larger label. At the time, he was 45 and very much in his prime. The only negative thing about his performance was that it was perhaps too bravura.

Paul Robinson
Physics Teacher
San Mateo High School

Tony Alterman said...

Thanks, Paul. I'm not sure why you posted this here rather than on The Parrot's Lamppost, but in any case I appreciate the info on David Bean. It certainly is exceptional to discover someone that good who is practically unknown, and it's nice to find that someone else enjoyed it too. I have stumbled on a number of little known steel string guitarists who are up there with the best, and I have some recordings by relatively unknown violinists that I consider first class. The Bean recording stands out among piano recordings of this type.

David said...

Long time no see! Looks like you've had an interesting time over the past 20 years or so. Let me know how it goes.

David Turner -- a fellow escapee of TSDI/CHSM...