Let us conduct a thought-experiment about the modeling of brain activity and subjective experience.
In scientific practice, models are elaborations of hypotheses, used roughly as follows:

1. Collect evidence about the phenomenon.
2. Construct a model that accounts for the evidence.
3. Deduce predictions from the model.
4. Test the predictions by observation or experiment.
So far, so elementary. Suppose now we want to investigate the relationship between brain activities and subjective experience.
We will suppose the last word in neurological testing equipment – we can examine and record the actions of a living brain, to any desired degree of detail. That gives us part of the evidence for Step 1. The other part, subjective experience, we can supply from within ourselves.
Or, to be accurate, each investigator can supply it for themselves.
We build up a "wiring diagram" of the human brain, with suitable allowances for individual variations. We can then specify the wiring diagram of any particular brain. This is the model of Step 2.
In our model, we simulate the stimulation of various sensory nerves and watch the resulting neural activity. In the end, the model sends out signals on various motor nerves, then resumes a more or less quiescent state. We know from other studies that to stimulate the required sensory nerves, we would shine a green light in the subject's eyes. We also know that, given the subject's linguistic training and immediate expectations (represented in the model by boundary conditions within the brain), the motor signals result in the subject pronouncing the words, "I see green." This is a deduction, à la Step 3.
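The pipeline of Steps 2 and 3 can be caricatured in code. The sketch below is a deliberate toy, not a claim about real neural simulation: the names `WIRING_DIAGRAM` and `run_brain_model`, and the reduction of a brain to a lookup table, are all invented for illustration.

```python
# A crude caricature of the thought experiment: the "wiring diagram"
# (Step 2) collapses into a table from sensory stimulation to motor
# output, and the deduction (Step 3) is a single lookup.
# All names and encodings here are invented for illustration.

WIRING_DIAGRAM = {
    # sensory stimulation pattern -> motor signal pattern
    "optic-nerve:green": "speech-motor:'I see green'",
    "optic-nerve:red": "speech-motor:'I see red'",
}

def run_brain_model(stimulus: str) -> str:
    """Simulate stimulating a sensory nerve; report the motor output,
    or a quiescent state if nothing in the diagram responds."""
    return WIRING_DIAGRAM.get(stimulus, "quiescent")

print(run_brain_model("optic-nerve:green"))
```

Note what the sketch contains: stimuli and responses, nothing else. Even in this toy form, there is no slot anywhere for the experience itself, which is the point the following paragraphs develop.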
And, finally, we get a real subject, shine a green light in his eyes, and, we may suppose, get the reply, "I see green." Step 4.
Unfortunately, it is not Step 4 of the experiment we started out to conduct. We don't have a study of the relationship between brain activity and subjective experience; we have a study of the relationship between brain activity and behavior. We don't want to compare brain activity with stimuli and responses; we want to compare it to subjective experience. But the only person who can make this comparison is the test subject.
Usually, in scientific method, experiments can be performed publicly. It's part of "the rules" that a result is at least potentially public. Every astronomer gets to look at the same stars; any chemist can (in theory) examine the same gunk as any other chemist. But only one person can examine any given experience.
There is a structural difference here. Two people can examine the same object, each collect experiences of it, and assemble the experiences into patterns. They can then see if their two patterns are similar by asking each other questions. But now we are not concerned with an object we can both examine, nor are we concerned with patterns of experience. Rather we are concerned with the experiences themselves, and these cannot be compared in the same way (if at all).
So the thought experiment with the green light and the brain scanner does not imply, "All who go through neural event G experience the same thing (i.e. green)." All it implies is, "All who go through neural event G say that they saw green." There is no way of deciding if two different subjects really had the same experience – no way to decide if neural event G means the same thing to both.
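This underdetermination can be put in code: two toy "subjects" below carry different internal states through neural event G, yet emit identical reports. Both classes and their internal encodings are invented for illustration; the only thing the example is meant to show is that identical outward behavior leaves the inner difference unobservable.

```python
# Two toy "subjects" undergoing neural event G. The experimenter can
# observe only the returned report; the internal state is private.
# Both classes and their internal encodings are invented for illustration.

class SubjectA:
    def undergo_event_g(self) -> str:
        self._state = ("spikes", [1, 0, 1, 1])   # one internal encoding
        return "I see green"

class SubjectB:
    def undergo_event_g(self) -> str:
        self._state = {"encoding": "entirely different"}
        return "I see green"

# Identical behavior from distinct internal states:
reports = [SubjectA().undergo_event_g(), SubjectB().undergo_event_g()]
print(reports)  # both subjects say "I see green"
```

Nothing available to the experimenter distinguishes the two subjects, which is the code-level analog of being unable to decide whether neural event G means the same thing to both.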
It gets worse when you back up to Step 3. That was when you examined the modeled neural activity. Unfortunately, that was all you examined. You did not examine the modeled experience, so as to compare it with the real experience later. That's because you can't. Only the model can examine the modeled experience ... if there was a modeled experience, i.e. if the model was sentient. You don't know that it was. Similarly, you don't know that any of the brain scanner subjects were.
There is more in this than the familiar discovery that solipsism cannot be disproved empirically. We have a permanent hole in the model. An adequate model has an analog for every feature of the original. If this model has an analog of experience (as distinct from neural activity), then you can't detect it.
Furthermore, it may be very unlikely indeed that the model is sentient. If it is a full-blown, neuron-at-a-time emulation being run on a big computer, you might very well believe that it had as much claim to sentience as anyone else (except you, whose claim is better, at least to yourself).
But suppose it is a more general model, as the ones in physics often are. In that case, whole chunks of brain might not be involved in the modeling of a particular neural event, and other chunks might be represented in some abstract, general mode.
The equivalent in celestial mechanics is something like this: You are calculating the motions of Neptune's moons. You must take into account many precise details about Neptune and each moon. You sketchily treat the Sun, Uranus, Jupiter, and Saturn as distant point-masses. You ignore all other bodies in the solar system. You are, in effect, modeling a whole spectrum of possible solar systems.
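The point-mass approximation can be shown in a minimal sketch. The figures below are round values for the Sun's mass and Neptune's orbital distance; the function name is invented for illustration. The key feature is that the calculation depends only on mass and distance, so any body matching those two numbers is covered by the same model.

```python
# Newtonian acceleration toward a body treated as a distant point mass.
# G is the gravitational constant; the mass and distance used below are
# round illustrative figures for the Sun as seen from Neptune.

G = 6.674e-11  # m^3 kg^-1 s^-2

def point_mass_acceleration(mass_kg: float, distance_m: float) -> float:
    """Acceleration toward a point mass.

    Any body with the same mass at the same distance yields the same
    value -- the calculation covers a whole family of possible systems.
    """
    return G * mass_kg / distance_m ** 2

# The Sun from Neptune (~4.5e12 m away): the Sun's radius, composition,
# and weather are all ignored by the model.
sun_pull_at_neptune = point_mass_acceleration(1.989e30, 4.5e12)
print(f"{sun_pull_at_neptune:.2e} m/s^2")  # roughly 6.55e-06 m/s^2
```

Replace the Sun with any object of the same mass at the same distance and the model's output is unchanged; in that sense it models a whole spectrum of possible solar systems, not one in particular.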
Likewise, you could find yourself modeling a whole spectrum of possible brains. I have not yet examined the argument closely, but I doubt that a model can be sentient when it isn't anyone in particular. And yet the general model could give you the same predictive power as the particular model – that is, predictive power at the stimulus/neural-event/response level.
The upshot is that you cannot model experience. At best, you get a sentient model of a brain that doesn't have models of experience, but rather real experiences, as permanently private as a human's. More likely, I think, you get a very valuable description of neural activity with nothing to say about experience.
The permanent privacy of experience puts another limit on the model: because you cannot model an experience, you cannot show why one set of neural acts is an experience and another is not; you can only show that one set is an experience, rather than another, by making a purely empirical correlation. "I'm sentient, and I act like X and have neural activity Y. Other folk show X and Y, too, so I suppose they are also sentient, though I can never test it. Finally, the model simulates X and Y, so I suppose it is sentient." But your model doesn't allow you to build up an objectively observable sentience, because there isn't any such thing. It only allows you to build up models of X and Y. If it's got sentience, you can't observe it.
Suppose I now run a new model on our super-computer. It is of some neural network, but it is clearly not human. You have no idea of the body and senses that should go with it. Is the new model sentient? You might be lucky – some of the patterns might clearly resemble human ones. But if they didn't, you would have no clear grounds for rejecting the model as non-sentient.
You wouldn't be able to tell until you had interacted with it, which implies you model a body and senses for it somehow – probably not a perfect match for the original, but never mind. The point is that, just by watching the neurons fizz, you couldn't tell if the fizzing were sentience or not. You would have to meet the creature somehow, see if it acted sentient, then make the same uncheckable induction that you make about people, dogs, chairs, and other familiar objects. Only after you learned to communicate with the creature could you determine which neural events mark awareness and which do not – by asking it.
That is what I mean by a merely empirical correlation between observed (really inferred) sentience and neural activity. This is not what you expect of a successful model. If a chemist develops a model of some material that has never been observed in reality, he can (in theory) figure out its properties just by examining the model. He can say that it will boil at a given temperature, or is acidic or basic, and so forth. But sentience is not a property that can be extracted from a model that way.
Therefore, I reject reductionism, at least in its usual sense. You will never be able to explain sentience in non-sentient terms.
What are the alternatives to reductionism? I can think of three: dualism, holism, and panpsychism.
Dualism posits that the center of awareness (and usually volition) is something immaterial, and that each sentient being has one of these centers, usually called a soul. There is a great deal of mythology about the soul, which makes many moderns uncomfortable and, I believe, moves them to look for alternatives to dualism.
However, there is no logical connection between the soul (considered merely as a center of awareness) and the various properties we usually ascribe to a ghost. It may be that the soul, though immaterial, does not survive death. Or perhaps it does but has nothing to be aware of, once its host brain is destroyed. Or a new soul may be created every morning, when you wake up. Or maybe the ghost stories are right.
Holism posits that collections have properties you could not infer from the properties of the members and their relationships. Modern science is, if anything, more opposed to holism than to dualism. After all, a soul would just be one more kind of object, like a new species of elementary particle or a new force of nature. But holism implies a different method of reasoning, which many feel to be essentially unscientific.
Nevertheless, some philosophers find holism attractive, though they usually use it to describe life, not sentience. Under the holistic hypothesis, it is just a law of the universe, with no further explanation, that certain kinds of structures are sentient.
Panpsychism posits that everything is sentient, that sentience is just a universal property of matter, like mass. I suppose pantheism is a form of panpsychism. It allows a return to reductionism, or something like it, since it pictures a human mind as being built up out of many little cellular sub-minds, just as the brain is built out of cells. Panpsychism therefore attracts borderline reductionists.
Copyright © Earl Wajenberg, 2011