ARTIFICIAL CONSCIOUSNESS AND JOHN BASL'S ACCOUNT
J. Blackmon

Neuron Replacement

Could a synthetic thing be conscious? A common intuition is: Of course not! But the Neuron Replacement Thought Experiment might convince you otherwise.

The Neuron Replacement Thought Experiment

Suppose bioengineers develop a synthetic neuron, a unit that is functionally identical to our own biological neurons.

If one of your neurons is about to die, they can replace it with a perfect functional (although synthetic) duplicate.

Imagine that this is done for one of your neurons. You now have a synthetic neuron doing exactly what your biological neuron had been doing.

Would there be any difference? No one else would notice a difference in your behavior. After all, your behavior is driven by the activity of your neurons, and because this synthetic replacement is functionally equivalent, your behavior will be exactly the same.

Suppose now that another of your neurons is replaced with a functionally equivalent synthetic neuron. And another, and another, …

What happens? Do you gradually lose consciousness even though you continue to act exactly the same? Do you reach some threshold at which consciousness just vanishes? Or do you remain conscious, even when every single neuron has been replaced and your “brain” is entirely synthetic?

Either you remain conscious throughout, or you lose your conscious experience at some point.

If you remain conscious throughout, then an artificial, synthetic thing can be conscious.

If, on the other hand, you lose your conscious experience at some point, then consciousness is a property that cannot be scientifically studied at the level of behavior. After all, your body’s behavior remains exactly the same. (And “you” continue to insist that “you” feel fine and are having conscious experience.)

Either artificial consciousness is possible, or consciousness cannot be detected or investigated by examining one’s behavior! But if you think consciousness can be detected and investigated this way, then the answer to whether an artificial thing can be conscious is: Of course!

Moral Status: John Basl’s Account

Something has moral status if it is worthy of our consideration in moral deliberation. An artifact might have a certain kind of moral status just because it belongs to someone else, or because it is instrumental to doing something morally required. However, a being might have a different kind of moral status: moral considerability, or inherent worth.

Something is a moral patient if it has a welfare, composed of interests, to be taken into account for the sake of the individual whose welfare it is. Humans are moral patients. Presumably, certain non-human animals are, too. We’re going to investigate whether certain kinds of artifacts can be moral patients.


Interests are those things which contribute to an individual’s welfare.

Psychological Interests: those had in virtue of having certain psychological capacities and states.

Teleo-Interests: those had in virtue of being goal-directed or teleologically organized.

The Easy Case

Basl: If x has certain human capacities, then x is a moral patient.

Basl considers and rejects the following alternatives: the Appeal to Religion and the Appeal to Species.

Appeal to Religion: If x was chosen by God to be a moral patient, then x is a moral patient.

Problems: First, how do we know whether some x was chosen by God? Second, how can one justify this to the governed, given that some of the governed are secular?

Appeal to Species: If x belongs to one of the special species, then x is a moral patient.

Problems: We can imagine aliens that are moral patients in the same way we are. And a subset of humans might evolve into a new species.

So Basl rejects both the Religion and Species alternatives. Again: if x has certain human capacities, then x is a moral patient.

The Harder Cases: Animals and Others

Basl: The capacity for attitudes toward sensory experiences is sufficient for moral patiency.

Note that Basl thinks simply having sensory experience is not enough. Hitting a dog with a hammer is wrong not simply because the dog can experience pain, but because the dog has an aversive attitude to that sensory experience.

Epistemic Challenges

The Problem of Other Minds: How do we truly know that other people have conscious minds like ours?

Our best answer so far is that we know other humans are mentally like us because they are like us in terms of evolution, physiology, and behavior: x shares a common ancestor with us, x is physiologically like us, and x behaves in similar ways.


But with machines, we do not have these obvious connections.

Teleo-Interests

How can non-sentient things have interests? Basl builds from work in environmental ethics. Why is acid bad for the maple tree, while water is good for it? The answer cannot rely on the maple tree’s welfare unless it can be established that the tree has welfare. However, the tree cannot suffer or be happy.

Some people appeal to teleology. A maple tree has the end/goal of survival and reproduction. But if it cannot grow leaves, then it cannot meet its ends. So its interests/ends are frustrated.


Non-sentient beings can have interests; thus they can have welfare, and so be moral patients. If non-sentient beings can have interests, why can’t (non-sentient) machines have them? Basl considers one objection, the Objection from Derivativeness.

The Objection from Derivativeness: The teleological organization of artifacts is derivative of our interests. (Not so with organisms.) Therefore, the only sense in which mere machines can have interests is derivative. But derivative interests are insufficient for welfare and moral considerability/patiency.

Basl’s Reply to the Objection from Derivativeness

There are two kinds of derivativeness.

Use-Derivativeness: Machines which exist only for our use, because of our needs and desires, may have use-derivative interests. Their very existence derives from our intentions and interests.

Explanatory Derivativeness: The ends or teleo-organization of a mere machine can only be explained by reference to the intentions or ends of conscious beings. The explanation derives from our intentions.

Now consider organisms such as crops or pets. Yes, these things exist because of our interests. However, they clearly have interests of their own. So use-derivativeness does not disqualify a thing from having interests of its own.

Similarly, consider the case in which I play a significant role in the life and career choice of a child. That child’s preferences might not be explainable without reference to mine; however, the child still has interests of his or her own. Thus explanatory derivativeness does not disqualify a thing from having interests of its own.

Basl says the Objection from Derivativeness is the best objection, but it fails. His alternative is the Comparable Welfare Thesis.

Comparable Welfare Thesis

Comparable Welfare Thesis (CWT): If non-sentient organisms have teleo-interests, then mere machines have teleo-interests.

Note that the CWT, of course, does not say that mere machines have teleo-interests; it just says that if non-sentient organisms have them, then so do mere machines. The idea is that whatever justifies acknowledging teleo-interests in organisms also justifies acknowledging them in mere machines.

But consider: Basl has argued that derivativeness can’t rob a thing of its teleo-interests, and so he has rejected the Objection from Derivativeness. He has not, however, refuted the position that teleo-interests must be acquired “naturally”.

Thus the environmental ethicist might take the following stance: Agreed, an organism can’t be excluded from teleo-interests simply because someone created and molded it to suit their own ends. (The cases of the crops, pets, and child can all be accepted.) But this doesn’t mean that mere machines have teleo-interests.

The environmental ethicist’s stance, continued: How can crops, pets, and children have teleo-interests while mere machines do not? The environmental ethicist might try to argue that the crop, the pet, and the child are all members of natural kinds (plants, animals), while mere machines are not. But this view faces a line-drawing problem. (Consider genetically modified organisms.)

Basl argues that teleo-interests are nonetheless unimportant: “You may recycle your computers with impunity.”

Conclusion

So long as machines are not conscious, we may do with them largely as we please.

But once we have something approaching artificial consciousness, we must consider whether these things are conscious and, crucially, whether they have attitudes.

We must then be careful with the epistemic uncertainties so as to avoid ignoring a possible person.