From stapp@thsrv.lbl.gov Wed Jan 6 15:22:10 1999
Date: Mon, 4 Jan 1999 22:15:20 -0800 (PST)
From: Henry Stapp
To: Aaron Sloman
Cc: bdj10@cam.ac.uk, brings@rpi.edu, brucero@cats.ucsc.edu,
    chalmers@paradox.ucsc.edu, ghrosenb@ai.uga.edu, hameroff@u.arizona.edu,
    hpstapp@lbl.gov, jmschwar@ucla.edu, keith@imprint.co.uk,
    klein@adage.berkeley.edu, patrickw@cs.monash.edu.au, phayes@coginst.uwf.edu
Subject: Re: Attention, Intention, and Will in Quantum Physics

Dear Aaron,

Many thanks for your comments. The paper was based on a lecture I gave at the invitation of the philosophy department at U. of Hawaii, and they specifically asked for a tie-in to Dennett. That is why I contrasted my views with his, instead of with, say, yours or Pat's.

On Mon, 4 Jan 1999, Aaron Sloman wrote:

> Dear Henry,
>
> Thanks for the copy of your latest
>
> > Attention, Intention, and Will in Quantum Physics
>
> I must say I find it very strange that after all our interactions over
> the last two or three years you can still write such things. E.g. you
> state this as if it were obviously true, despite all the counter
> arguments you have heard:
>
> > The last paradigm shift in this field was Descartes' division of nature
> > into two parts, the realms of mind and of matter.
>

As Brian already remarked, I use "mind" to mean `conscious experience'.

> The last paradigm shift in this field occurred much more recently: i.e.
> in the last half century as a result of many people (computer
> scientists, AI theorists, brain scientists, philosophers, biologists,
> and even many physicists) coming to understand that independent of the
> nature of the underlying physical reality there can be levels of virtual
> machinery supporting different ontologies with their own types of
> causation which may be implemented in but not logically reducible to
> other ontological levels.
>
> (The people who have noticed this don't all use the same language: but
> the same idea is constantly being rediscovered.)
>
> The key idea, which is not yet properly understood, is that
> configurations in one level (what Stan and I called "boundary
> conditions" in an earlier phase of this correspondence) can have
> properties which are not describable at that level.
>
> Consider what happens when a particular chess program CP in a particular
> high level language (e.g. Lisp, or C++) is implemented in many
> different physical configurations, e.g. in various generations of intel
> PCs running various operating systems, e.g. windows 95, or NT, or linux,
> or freeBSD, or Solaris, or in various generations of Sparcs running
> Solaris or linux, or in a DIGITAL (now Compaq) Alpha CPU running NT, or
> Digital Unix, or linux.
>
> All those implementations (and indefinitely many other possible physical
> configurations implementing the same program CP) have (or would have
> if they were built) physical configurations whose common features
> account for their ability to play chess properly (i.e. in accordance
> with the laws of chess).
>
> But there is NO description of those COMMON features at the level of the
> language of physics (classical or non-classical, it makes no difference)
> from which their conformity to the laws of chess can be deduced by
> logic.
>
> Yet they all implement the same virtual machine with the same causal
> powers.
>

The "rules of chess", like "poverty" and all things that we name, are defined in the minds of men.
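To make that concrete, here is a minimal illustrative sketch (in Python; the function and the whole example are invented here, not taken from your message or from my paper) of one "rule of chess" written at the level of ideas about squares and pieces. Any hardware that runs it conforms to the rule, yet nothing in the program text, and nothing in a physical description of any machine running it, mentions an experience of playing chess.

    # Illustrative only: one "rule of chess" stated in terms of ideas
    # (files, ranks, squares), not particles or fields.

    def knight_move_is_legal(src, dst):
        """src and dst are squares written like 'g1' or 'f3'."""
        file_step = abs(ord(src[0]) - ord(dst[0]))   # change in file (a-h)
        rank_step = abs(int(src[1]) - int(dst[1]))   # change in rank (1-8)
        return sorted((file_step, rank_step)) == [1, 2]

    # The same rule, and hence the same "virtual machine", is honoured by
    # every physical implementation that runs this text correctly.
    assert knight_move_is_legal('g1', 'f3')
    assert not knight_move_is_legal('g1', 'g3')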
The theoretical development of the chess-playing machines of which you speak is occurring in the world of ideas of the computer scientists et al. Only when one comes to the implementation of ideas in specific hardware does the physics of the implementing machine come into question. At that point some sort of translation of, or reduction of, the concepts---say of chess---into physical concepts, or concepts linked to physical concepts, is needed.

No one doubts that people can design and build complex computers that can be programmed to behave in ways that cause them to follow courses of action that can be compactly described in "mental" terms. But once one implements one of these "mental" systems in actual hardware there are at least two relevant levels of description: the physical level, which---insofar as the concepts of classical physics are adequate---is basically a description in terms of particles and local fields; and the designer/programmer's description in terms of ideas that are meaningful to him. These may include what HE, the programmer, asserts that the machine "knows" or "believes", and what the machine's "goals" are. But the fact that the designer/programmer asserts that the machine "knows" something, or is acting to further some "goal", does not entail that there is any ontological reality associated with that machine itself other than the physical particles and fields that comprise the machine. There is nothing within the principles of classical physics to suggest or entail that the machine possesses anything that is at all like the `feelings that we feel' when we `know' something, or are moved to act in order to further some felt `goal'. Some extra principle would be needed to postulate the existence of such a feeling.

So there is, then, a third type of description that is neither the classical physical description, nor the designer/programmer's description (perhaps in mentalistic terms) of how HE is thinking about the behaviour of the machine. This third description is about what the machine itself is feeling: what the conscious experiences of the machine itself are. The content of this third level of description is not logically or ontologically or physically entailed by the other two within classical physical theory, but it IS closely linked to the physical description in vN/W quantum theory. The ideas/feelings of the programmer ought not to be confounded with the ideas/feelings of the machine he is programming.

> (In some cases such a machine might include a random element if the
> program sometimes chooses between equally good moves on a random basis
> -- there are many well known ways of implementing such randomness which
> could differ in the different machines).

One purpose of my paper was to emphasize that it was not the random element in QT that is the interesting and important thing.

> In terms of the virtual machine defined by the software engineer's
> specification for CP (usually requiring a higher level language than a
> programming language) we can explain why a particular pawn was taken.
>
> The explanation mentions nothing physical: only knowledge, beliefs,
> goals: i.e. only what the machine knows about chess and the current
> state of the game, what it believes the opponent's goals are (if the
> system is good enough to do plan inference) and what its own current
> goals are (e.g. it might behave differently if its goal is not just to
> win but to help me improve my playing).
>
> Events in this virtual machine (e.g.
> discovering a sequence of moves
> that will fend off a threatened mate) can cause physical events (e.g.
> changes on the screen, and changes in the internal electronic states)
> but without requiring any causal gaps in the physics.
>

Events in the virtual machine cause no physical events unless the virtual machine is physically implemented, in which case the abstract concepts are transcribed into, hence reduced to, physical aspects. This virtual machine analysis buys us nothing, as regards the problem of consciousness, unless one invokes some supra-physical principle that asserts that the physical implementing of mental-type logical structures brings into being the corresponding experiential/ontological reality. You have not mentioned your reliance on any such principle in these discussions.

...

> I've not seen anything in your papers (or for that matter anything else
> I have read) which shows
> EITHER
> (a) That the correctness of the model of interaction between
> ontological levels

"Ontological levels?" According to `A Companion to the Philosophy of Mind', "Ontology is the branch of metaphysics centrally concerned with what there is". What "there is" in a chess-playing machine, according to classical physical theory, is a horde of particles, and some local fields.

One could assert that a dark thundercloud is filled with "anger". A feeling of "anger" is something a naive observer may ascribe to the thundercloud. But, according to the precepts of classical physical theory, no actual feeling of anger is part of the ontological reality that is that thundercloud. Similarly, the "goals" and "beliefs" of the chess-playing machine are, to the extent that the precepts of classical physics are accepted, and to the extent that those words refer to conscious experiences, part of the ontological reality that is the programmer/designer, not part of the ontological reality that is the machine. Of course, it is often useful for US to describe physical aspects of a physical system in picturesque mentalistic terms, but that does not entail that the physical system itself has the corresponding experiences. Or are you invoking some extra principle that you have not set forth?

> DEPENDS on whether physics is quantum based or
> not (e.g. I have seen NO evidence that the kind of virtual machine
> architecture that could account for all known mental phenomena

Virtual machine architecture alone, within classical physics, accounts for NO experiential qualities at all, except for those of the architects.

> requires any non-local interactions at a physical level, or any
> causal gaps in physics, or any Heisenberg indeterminism -- though IF
> such arguments turn up I would happily say we need to use quantum
> implementations: I am not opposed to them. (Of course, at the level
> of chemical information processing and e.g. absorption of photons by
> the retina it seems that quantum phenomena are crucial: but you
> don't seem to be interested in those cases.)

Until critical experiments are performed the issue is a theoretical one: how is it possible for conscious experiences to have any causal efficacy, if one stays within the bounds of classical physical theory, which makes consciousness a mere passive witness to the physical passing parade? Discussions about virtual machines merely divert attention from this problem. Those discussions are probably useful for designing machines that can do things we can do, but not for the problem at hand.
> OR
> (b) That there is anything WRONG with the sort of multi-ontology model
> we've repeatedly drawn to your attention, as an account of the
> relationship between physical and mental (and social and political)
> phenomena

The problem at hand is with the connection between the physical and mental aspects of one system, a human being, not with the connection between the physical aspects of a machine and the ideas of its programmer.

> OR
> (c) that a BETTER explanation of mental phenomena can be produced by
> transporting concepts from the mental ontology into the ontology of
> physics;
>

The "mental ontology" that you talk about is not an ontology. Unimplemented, it is an idea in someone's mind; implemented, it is an alternative true description of a physical system. But the fact that one physical system composed of particles and fields has physical aspects that hang together according to some rules does not mean that some second ontological reality is present: what is physically and ontologically present are the particles and fields and their aspects. New rules do not entail new ontological entities, except in the sense that one can DEFINE complex physical aspects that obey simple rules to be new entities. Such definitions can be very useful. But if one bases one's theory on classical physical principles then the defining of certain physical aspects to BE certain entities is a move in the mind of the theorist, not any ontological change at the level of the physical system he is studying. An extra "functionalist" ontological postulate would be needed to achieve that.

> Why do you keep on flogging dead horses instead of pointing to live
> ones?
>

The problem of how conscious experiences are connected to brains is hardly a dead horse.

> As for me, I am trying to go on developing more detailed specifications
> of the architecture of the type of machine which could account for
> everything we know about minds (of many kinds).
>

That should be useful!

> IF that project hits serious obstacles that could give us reason to
> try something different.
>
> But new directions will not come from shallow *purely philosophical*
> arguments that ignore most of the richness of the phenomena of mind, and
> instead focus on a few sentences characterising the nature of mind in
> language suffused with several hundred years of philosophical vagueness
> and confusion about "consciousness", "knowledge", "experience",
> "observation", "freedom" and "self".
>

My work is based on analyzing the causal structure of physics. Hopefully an appropriate psycho-physical foundation will lead to a better theory of mind/matter.

> Are you really trying to convince people like Pat and me (and Stan),
> or are you merely trying to reassure all the true believers?
>

I hope that you and Pat will see that QT seems to be telling us something quite important about how our physical and mental (conscious) aspects interact: it opens up possibilities of causal relationships not conceivable within classical physical theory.

> How can you face sending us sentences repeating this sort of stuff:
>
> > led to the notion that causal connections in matter alone control
> > all motions in the material universe, and that mind is thus naught but a
> > passive witness to the unfolding of events upon which it has no influence.
>
> There's a very very dead horse. Why waste your time (and ours) on such
> nothing buttery?
>

This horse is alive and kicking.
It cannot be roped in by considerations that fail to distinguish the experiences of the programmer from the experiences of the system being programmed. But within classical physical theory the horse is dead.

> > This exclusion of mind from any determining role in nature has come to be
> > viewed by many as the {\it sine qua non} of science.
>
> Yeah. All those old logical positivists and their surviving friends?
>
> Why bother attacking such very old and sick soldiers?
>

Is your position so different? Complex physical systems can certainly often be usefully described in terms of aspects of those systems that, by virtue of the basic physical laws, conform to some other set of rules. And these other rules might be those that seem to be appropriate for mental properties. If the basic physical laws in question are those of classical physics, then one is entitled, according to the principles of classical physics, to say that this system IS the collection of particles and fields that comprise it, and that the evolution of this system is controlled, via local physical laws, by the motions of these parts alone. You might assert that some physical aspect of this system IMPLEMENTS a "goal" that the system is programmed to "strive for". But once you have achieved your programming goal you apparently will, for any implementation, have reduced the complex behaviour to local mechanical causation.

> > The contemporary explosive proliferation ...
> > ...Daniel Dennett ... eliminative materialists ...
> > ...Identity theorists ... Epiphenomenal dualists
>
> You list only weak theories which are now easily knocked down. Why
> bother?
>

High-profile weak theories, though.

> (Dennett's is perhaps too vague to knock down!)
>
> > Dennett (1994, p.237) described the recurring idea that pushed him to his
> > counter-intuitive conclusion: ``a brain was always going to do what it was
> > caused to do by local mechanical disturbances.'' This passage lays bare
> > the underlying presumption behind his own theorizing, and undoubtedly behind
> > the theorizing of most cognitive scientists and philosophers of mind, namely
> > the presumptive essential correctness of the idea of the physical world foisted
> > on us by the assumptions of classical physical theory.
>
> I don't think Dennett has a good grasp of the correct answers, but I
> wonder whether he is as backward as you suggest: he has known about
> quantum mechanics for many years. Have you asked him whether he REALLY
> intends to commit himself to ignoring quantum mechanical non-local
> effects?
>
> I suspect he was using "local" and "mechanical" in such a broad way as
> to encompass ALL physical phenomena, including QM. But you could ask
> him.
>

I am certain that he is not using QM in the way that I am proposing.

> > According to this
> > now-superseded theory, in its nineteenth century form, we human beings are
> > essentially robots, in the sense that (1) every motion of every person is the
> > consequence of the motions of the billions of tiny particles that make up his
> > body, and that (2) each of these particles follows the rigid dictates of an
> > impersonal myopic law that specifies the behaviour of this particle
> > exclusively in terms of physical properties located in its immediate
> > microscopic neighborhood.
>
> The good old nothing-buttery fallacy again.

The nothing-buttery non-fallacy, I believe.
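To put the point about implemented "goals" in the simplest possible terms, here is a deliberately trivial sketch (in Python, purely illustrative and invented for this exchange): a "goal-seeking" controller whose entire evolution is fixed by a myopic local update rule. One may describe it as "striving" toward its set-point, but attributing an experience of striving to it, or withholding that attribution, changes nothing in what it does.

    # Illustrative only: the "goal" is just a number compared at each step,
    # and the behaviour is fully determined by the local update rule.

    def run_controller(temperature, set_point=20.0, gain=0.5, steps=10):
        history = []
        for _ in range(steps):
            error = set_point - temperature   # the "goal", as a mere difference
            temperature += gain * error       # myopic local update
            history.append(round(temperature, 3))
        return history

    # The printed trajectory is the same whether or not anyone supposes
    # that the controller "feels" an urge to reach 20 degrees.
    print(run_controller(temperature=10.0))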
> Even if all of that were
> true it would not exclude the existence of a supervenient causally
> efficacious ontology in which actions are produced by interactions
> between knowledge, preferences, goals, strategies, etc. (I.e. the now
> familiar causally efficacious virtual machine.)
>
> The only arguments I have ever heard against the view I've just
> summarised make wholly unsupportable assumptions about causation: e.g.
> that causal connections can hold ONLY between physical events, and
> that there cannot be alternative TRUE causal explanations of the same
> happening.
>
> Perhaps you are implicitly presuming such a theory of causation. If so
> perhaps you should expose it and defend it. The world is full of counter
> examples.
>

Certainly, certain aspects of a classical physical system can, by virtue of the underlying classical physical laws, obey certain rules. No argument about that. Hence there certainly can be different true causal explanations of the same sequence of physical events. No problem there. But an extra principle is needed to pass, in classical physics, from the fact that some aspect of a classical physical system behaves like a "goal" or a "belief" to the proposition that an "experience" or "feeling" of that goal or belief is an ontologically real part of that system itself. According to the principles of classical physics the behaviour would be exactly the same whether the system has experiences or not. The fact that the "set of rules" is implemented in the system, with certain identifications of the high-level concepts with their physical counterparts, ensures that the behaviour will follow the rules whether the `conscious experience' is actually present or not. Thus one gets the correct behaviour, even if there is no conscious experience associated with the classical physical system itself. In that sense the conscious experience itself is epiphenomenal: the physical behaviour would be the same even if we do not include the extra assumption that the experience itself is present.

> > It is now widely appreciated that assimilation by the general public
> > of this ``scientific'' view, according to which each human being is basically
> > a mechanical robot,
>
> Is this a pun on "mechanical"? (Like Weizenbaum's complaint (1976) that
> AI implies that we are "nothing but a clockwork".) The kinds of
> information processing virtual machines we have been investigating for
> the last 40 years or so, and which you persistently ignore, are not
> "mechanical" in any normal sense of that word. (They don't involve
> interactions between levers, strings, pulleys, fluids, particles, etc.
> and they don't involve force, energy, momentum, and motion.)
>

I guess there is some unresolved difference here. As long as one is theorizing at the "virtual machine level" there is of course no physics involved. But when one implements this virtual machine there must be a transformation that links the abstractions to the physics in such a way as to get a correct physical implementation of the virtual machine. Then the virtual machine becomes a mechanism in the sense that I use the word: the evolution of the system is controlled by the `myopic' local classical physics equations pertaining exclusively to particles and local fields.

> Compare the pun on "robot" discussed below.
>
> > is likely to have a significant---and largely
> > detrimental---impact on the moral fabric of society.
> > ...Daniel Dennett speaks of
> > the Spectre of Creeping Exculpation: recognition of the growing tendency of
> > people to exonerate themselves by arguing that it is not ``I'' who is at
> > fault, but some mechanical process within: ``my genes made me do it'';
> > or ``my high blood-sugar content made me do it.'' [Recall the infamous
> > ``Twinkie Defense'' that got Dan White off with five years for murdering
> > San Francisco Mayor George Moscone and Supervisor Harvey Milk.]
>
> So what is your point: do you want to bring back flogging so as to
> influence the "will" directly?
>

My point is that quantum theory allows our experiences per se to enter into the causal structure in a way that is not reducible to myopic local laws and random chance: THREE irreducible elements enter into the causal structure.

> Compare:
> http://www.cs.bham.ac.uk/~axs/misc/freewill.disposed.of
>
> > Steven Pinker (1997, p.55) defends the classical/computational conception
> > of the brain, and, like Dennett, recognizes the important need to reconcile
> > the notion of science-based causation with a rational conception of personal
> > responsibility.
>
> My impression, from reading his book and hearing him argue, is that he
> has not fully understood how virtual machines are related to
> physical machines. He has not been trained as a software engineer?
> (Fodor has similar problems.)
>
> > His solution is to regard science and ethics as two
> > self-contained systems:
> > ``Science and morality are separate spheres of reasoning. Only by
> > recognizing them as separate can we have them both.''
>
> There's a LONG and complex history of analysis of this relationship in
> philosophical discussions over hundreds of years. All answers which can
> be expressed in two or three lines are likely to be gross
> oversimplifications, and probably false. This also applies to their
> negations!
>
> > ...And ``The cloistering
> > of scientific and moral reasoning also lies behind my recurring metaphor
> > of the mind as machine, of people as robots.''
>
> The word "robot" in discussions like this is as much liable to
> obfuscating punnery as "mechanical".
>
> We have two very different notions of robot in our culture. One is based
> on mechanical devices like the old toy dolls, clocks, steam-engines,
> typewriters, etc. and implicitly contrasts with "human", "intelligent",
> "creative", "conscious", etc. The other is the non-committal concept of
> some kind of artificially produced machine which has sufficiently
> sophisticated information processing capabilities to meet some
> collection of criteria which make it human-like: e.g. it can see, plan,
> learn, communicate, be puzzled, have goals, etc.
>
> Until it is specified precisely what sort of criteria are to be met it
> remains totally unclear what is involved in comparing or contrasting
> people with robots.
>
> In fact it is possible that people are EXACTLY like some robots except
> for their provenance.
>
> (There are philosophers who think origins make a difference. That's just
> a new type of "class" prejudice!)
>
> > A paradigm shift is needed, but the critical element is missing:
> > a viable alternative!
>
> Actually you consistently ignore the one before your nose.
>
> Perhaps I should be more charitable and assume that you look at it but
> don't see it because it is quite hard to understand, and to distinguish
> from a host of inferior variants!
>
> > Actually, there {\it must} be something wrong with this latter argument.
> > For there is a viable model of the mind/brain system that conforms to all
> > known empirical scientific evidence,

By `conforms to' I meant `is compatible with'.

> > To a mind mired in the classical-physics conception of man and nature this
> > claim seems ludicrous: there is no way within the classical-physics
> > idea of nature that our thoughts, per se, can enter into the causal structure
> > that governs matter except insofar as these thoughts are reducible to,
> > or expressible in terms of, the local physical properties of matter.
>
> Please don't send me any more papers repeating this kind of stuff. I'd
> rather be removed from your list. I had hoped to see more progress after
> all our interactions.
>
> I am not objecting to your claim that classical mechanics is false, or
> that quantum mechanics requires a different sort of ontology. But that's
> orthogonal to the issue whether mental virtual machines could or could not
> be implemented in a classical world.
>

Certainly SOME `mental virtual machines' could be implemented in a classical world. But they could only be those that can be implemented in a way that allows experiences to be replaced by aspects of a physical system of particles and fields.

> It's also orthogonal to another issue, whether the use of mental
> language ("knowing", "observing") in articulating QM is just another
> confusing pun (like talking about "charm" and "spin".)
>
> > In an essentially liquid system such as the brain they produce only
> > a gigantic jumble of partially interfering partial states:
>
> I wonder how many brain scientists who have spent their careers
> investigating the intricate articulations of neural and chemical
> architectures in brains would agree with the characterisation of brains
> as "essentially liquid" systems?

There are membranes, but a lot of ions in solution.

> Would you describe computers the same way?

No!

> > The issue here is not whether cannon balls and tennis balls exist
> > in some real sense. It is whether in the description of the complex inner
> > workings of a thinking human brain it is justifiable to assume, not just for
> > certain simple practical purposes but as a matter of principle, that this
> > brain is made up of tiny particles of the kind assumed to exist in classical
> > physical theory.
>
> You seem to be fixated on particles. I don't think of my computer as
> made up of particles. Its architecture is far more subtle.

But its architecture is based on matter.

> > The theory of Ghirardi, Rimini, and Weber (1986), which tries to modify
> > orthodox theory in a way that tends to create a classical reality
> > at the macroscopic level, has been forced to restrict its ad hoc parameters
> > to a very narrow window ...
>
> Consider submersible robots which are sent to inspect the foundations of
> off-shore oil-wells and report back when they see signs of weakness,
> etc.? Is there any reason to believe that this is any different from a
> human, or trained pigeon, doing the job?

I believe the experiential content would be very different. We are still at a very rudimentary stage in our understanding of such matters, but eventually it will be important to have an adequate model. At this stage we should be using general ideas to evaluate contenders.

> > So the proper question is: how within orthodox vN/W theory can
> > our conscious experiences possibly enter into the
> > dynamical equations of physics?
>
> Why should it?

It is possible that, as Dennett says, consciousness is some sort of illusion.
I am making the assumption that it is roughly what it appears to be, though fallible.

> I suspect you are the victim of a pun on "conscious", "experience", etc.
>
> > Orthodox quantum theory rejects the notion of a ``real classical
> > reality''. What, then, {\it are} its basic realities?
>
> And I don't think bringing words like "real" and "reality" into
> scientific discussions is helpful.
>
> Such words are at home in the murky depths of metaphysical seminars in
> philosophy departments. As far as science is concerned we can propose
> theories, do experiments, compare rival explanations in regard to
> richness of consequences, generality, parsimony, accuracy of
> predictions, etc.
>
> But bringing in questions about reality as well is usually a symptom of
> a commitment to some over-simple metaphysical theory (e.g. a "flat"
> ontology.)
>

How far one should or can go toward an ontological picture of nature is certainly problematic. But if one does not try to push toward some comprehensible understanding of ourselves in nature we will never know how far we can go.

> The problem with phlogiston wasn't that it somehow lacked reality which
> oxygen had, but that the whole phlogiston theory was not nearly as good
> as its rival in dealing with a wide range of phenomena and integrating
> well with a wide range of additional theories.
>

I am keeping one foot in pragmatism.

> > The answer is that all that we can think or know, hence all of science,
> > lies in these fleeting thoughts: whatever is beyond our thoughts is beyond
> > our science.
>
> Just another naive epistemological theory. It was refuted long ago by
> Kant. I have no trouble thinking about things that are totally beyond my
> own thoughts and experiences.

How can one think about something without making it part of one's thoughts?

Henry