Tuesday, March 11, 2014

Thoughts on How Consciousness Can Affect the World

Overview

In "The Generalized Anti-Zombie Principle" Eliezer mentioned that one aspect of consciousness is that it can affect the world, e.g. by making us say out loud "I feel conscious", or deciding to read (or write) this article. However my current working hypothesis is that consciousness is a feature of information and computation. So how can pure information affect the world? It is this aspect of consciousness that I intend to explore.

Assumptions

A1: Let's go all in on Reductionism. Nothing else is involved other than the laws of physics (even if we haven't discovered them all yet).

A2: Consciousness exists (or at least some illusion of experiencing it exists).

Tentative Lemmas

TL1: A simulation of the brain (if necessary down to quantum fields) would seem to itself to experience consciousness in whatever way it is that we seem to ourselves to experience consciousness.

TL2: If an electronic (or even pen-and-paper) simulation of a brain is conscious, then consciousness is "substrate independent", i.e. it does not require a squishy brain, at least in principle.

TL3: If consciousness is substrate independent, then the best candidate for the underlying mechanism of consciousness is computation and information.

TL4: "You" and "I" are information and algorithms, implemented by our brain tissue. By this I mean that somehow information and computation can be made to seem to experience consciousness. So somehow the information can be computed into a state where it contains information relating to the experience of consciousness.

Likely Complications

LC1: State transitions may involve some random component.

LC2: It seems likely that a static state is insufficient to experience consciousness, requiring change of state (i.e. processing). Could a single unchanging state experience consciousness forever? Hmmmm....

Main Argument

What does it mean for matter such as neurons to "implement" consciousness? For information to be stored, the neurons need to be in some state that can be discriminated from other states. The act of computation is changing the state of the information to a new state, based on the current state. Let's say we start in state S0 and the neurons then fire in some way that we label state S1, ignoring LC1 for now. We have then "computed" state S1 from the previous state S0, which we can write as:

S0 -> S1
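As a toy illustration of computation as state change (a minimal sketch; the dictionary and state names are invented for this post and stand in for whatever pattern of neuron firings realises them):

    # Toy model: a state is just something discriminable from other
    # states, and a computational step produces the next state purely
    # from the current one.
    TRANSITIONS = {"S0": "S1"}  # the single transition S0 -> S1

    def step(state):
        # Compute the next state from the current state.
        return TRANSITIONS[state]

    assert step("S0") == "S1"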

From our previous set of assumptions we can conclude that S0 -> S1 may involve some sensation of consciousness for the system represented by the neurons, i.e. that the consciousness would be present regardless of whether a physical set of neurons was firing, a computer simulation was running, or we were slowly writing it out on paper. So we can consider the physical state S0 to be "implementing" a conscious sensation that we can label as C0. Let's describe that as:

S0 (C0) -> S1 (C1)

Note that the parens do not indicate function calls, but just describe the correlation between the two state flavours. As for LC1, if we instead get S0 -> S(random other), then it would simply seem that a conscious decision had been made not to take the action. Now let's consider some computation that follows on from S1:

S1 (C1) -> S2* (C2)

I label S2* with a star to indicate that it is slightly special: it happens to fire some motor neurons that are connected to muscles, and we notice some effect on the world. So a conscious being experiences the chain of computation S0 (C0) -> S1 (C1) -> S2* (C2) as feeling like they have made a decision to take some real-world action and then performed that action. From TL2 we see that in a simulation the same feeling of making a decision and acting on it will be experienced.
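To make the chain concrete, here is a minimal sketch (the state names, labels and the print standing in for motor output are all invented; it illustrates the pairing of physical states with conscious labels, not real neurons):

    # Each physical state S_i is paired with a conscious label C_i.
    # Stepping through the physical states steps through the conscious
    # labels at the same time, since the labels are just a relabelling
    # of the same trajectory.
    TRANSITIONS = {"S0": "S1", "S1": "S2*"}            # S0 -> S1 -> S2*
    CONSCIOUS_LABEL = {"S0": "C0", "S1": "C1", "S2*": "C2"}
    MOTOR_STATES = {"S2*"}                             # wired to muscles

    state = "S0"
    while True:
        print(f"{state} ({CONSCIOUS_LABEL[state]})")
        if state in MOTOR_STATES:
            print("motor neurons fire -> some effect on the world")
            break
        state = TRANSITIONS[state]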

And now the key point... each information state change occurs in lockstep with patterns of neurons firing. The real or simulated state transitions S0 -> S1 -> S2* correspond to the perceived C0 -> C1 -> C2. But of course C2 can be considered to be C2*; the conscious being will soon have sensory feedback to indicate that C2 seemed to cause a change in the world.

So S0 (C0) -> S1 (C1) -> S2* (C2) is equivalent to S0 (C0) -> S1 (C1) -> S2* (C2*)

The result is another one of those "dissolve the question" situations. Consciousness affects the real world because "real actions" and information changes occur together in lockstep, such that the conscious feeling of the causality of a decision in C0 -> C1 -> C2* is really a perfect correlation with the neurons firing in the specific sequence S0 -> S1 -> S2*. In fact, it now seems like a purely philosophical point whether there is any difference between causation and correlation in this specific situation.

Can LC2 change the conclusion at all? I suspect not; perhaps S0 may consist of a set of changing sub-states (even if C0 does not change), until a new state occurs that is within the set S1 instead of S0.
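A rough sketch of that LC2 refinement (the micro-states and their grouping into S0 and S1 are invented; the point is only that the conscious label can stay fixed while the physical sub-states keep changing):

    # S0 and S1 are treated as sets of micro-states.  The conscious
    # label remains C0 while the trajectory churns inside the S0 set,
    # and only becomes C1 once a micro-state in the S1 set is reached.
    S0_SET = {"s0a", "s0b", "s0c"}
    MICRO_TRANSITIONS = {"s0a": "s0b", "s0b": "s0c", "s0c": "s1a"}

    def conscious_label(micro_state):
        return "C0" if micro_state in S0_SET else "C1"

    micro = "s0a"
    while conscious_label(micro) == "C0":
        print(micro, conscious_label(micro))   # changing sub-states, all C0
        micro = MICRO_TRANSITIONS[micro]
    print(micro, conscious_label(micro))       # trajectory has entered S1: now C1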

The argument concerning consciousness would also seem to apply to qualia (I suspect many smaller-brained animals experience qualia but not consciousness).
