Saturday, January 15, 2005


I ran across this peculiar thought experiment by David Chalmers.

He starts with the concept of the world as a simulation running in an external reality. But he looks not at the usual case where the brains of the agents in the simulation are also in the simulation, but instead at the different case where their brains are running outside the simulation and where the agents have no direct access to those brains. This is the situation, for example, of a typical "artificial life" system, or of a video game where the characters are driven by some AI system.
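This setup can be made concrete with a small sketch. All names here are my own illustration, not anything from Chalmers: a world whose observable state contains agent bodies, while the controllers that drive them live entirely outside that state, the way a game engine's AI routine drives a character.

```python
class Brain:
    """Lives outside the simulated world; agents inside cannot inspect it."""

    def decide(self, percept):
        # Trivial policy: step toward the origin, stop when there.
        if percept > 0:
            return -1
        if percept < 0:
            return 1
        return 0


class World:
    def __init__(self, positions):
        # `state` is everything that exists "inside" the simulation.
        self.state = dict(positions)
        # The brains are held outside the state: an in-world observer
        # sees behavior, but never the mechanism producing it.
        self._brains = {name: Brain() for name in positions}

    def step(self):
        for name, pos in self.state.items():
            action = self._brains[name].decide(pos)  # mind acts on body from outside
            self.state[name] = pos + action


world = World({"alice": 3, "bob": -2})
for _ in range(3):
    world.step()
print(world.state)  # → {'alice': 0, 'bob': 0}
```

From inside `World.state`, the agents' movements look lawful and purposeful, yet nothing in the state explains them; that gap is exactly what the thought experiment trades on.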

Chalmers seems to be claiming that, to the agents living in the simulation, assuming they've gotten interested in the question of what minds are and how they work, the most reasonable explanation is a dualistic one -- and that we ourselves should consider dualistic explanations as not so outlandish.

I see a couple of issues here.

First, it's not clear that the simulated agents need access to their physical brains in order to posit theories of how those brains work. People have been doing exactly this for years through indirect psychological experiments. It may be possible for the agents to do similar experiments, ones that help them choose between competing theories of their cognitive architectures. If they're smart enough and spend enough time on the problem, perhaps they would hit upon the right mechanism.

Second, Chalmers's scenario doesn't make me much more comfortable about dualism. The analogue in the real world would be that the brain we see is only a conduit to another "mental world" where the mind really happens. But why would the brain need hundreds of specialized centers just to be, in effect, a glorified radio transmitter? One could make a case for this, but it strikes me as enormously unlikely.