Machine consciousness and complexity

Owen Holland

What this talk is about

Machine consciousness

- what do I mean by it?

- what am I happy for others to mean by it?

- how might we engineer it?

- how would we know if we’d succeeded?

Complexity

- must a complex machine be conscious?

- must a conscious machine be complex?

Machine consciousness: what do I mean by it?

Phenomenal consciousness in a machine

- Is it possible in principle? Very probably.

- Is it possible in practice? There’s only one way to find out.

Even the dullest stirring of core consciousness will do. I think we’ve all been there (like Proust) - those few moments when the system is booting up more slowly than usual and there is little or nothing more than an awareness of being.

Obviously, an extended consciousness with all the trimmings would be better, just as the 1995 Boeing 777 is better than the 1903 Wright Flyer. But the difference between the two is far less than the difference between the Wright Flyer and a thrown rock. What I don’t want is something that looks like a 777 but never gets off the ground.

Will phenomenal consciousness have to deliver any functional benefit to the machine?

No. I’d be perfectly happy to produce the machine analogue of locked-in syndrome.

But it’s perfectly possible that the structures and processes underlying and producing phenomenal consciousness may deliver functional benefits at the same time. In an evolved system, this delivery of (net) functional benefits would be a necessity.

Machine consciousness: what Axel thinks of it:

“It seems to me that making headway into specifically addressing the so-called ‘hard problem’ is what any project dedicated to machine consciousness should be about.”

Axel Cleeremans, 2003

There are lots of funded programmes in the USA, UK, and Europe dealing with ‘Cognitive Systems’.

The focus of an MC project should be different.

The hard problem

‘I have assumed that consciousness exists, and that to redefine the problem as that of explaining how certain cognitive or behavioural functions are performed is unacceptable…If you hold that an answer to the ‘easy’ problems explains everything that needs to be explained, then you get one sort of theory; if you hold that there is a further ‘hard’ problem, then you get another.’

David Chalmers, 1996

Machine consciousness - what am I happy for others to mean by it?

Almost anything - as long as they say exactly what they mean by it.

Lots of interest in schemes involving:

- imagination

- building and exploiting models of itself

- I-ness, self processes, self reference

Some interest in schemes involving:

- building and exploiting models of the world

Why am I so tolerant?

- Because it’s likely that almost any good work along any of these lines will shed light on how to build the sort of systems I’m interested in.

- Because I want to be tolerated - no holy wars!

Machine consciousness: how might we engineer it?

After the Birmingham meeting, I can declare that I’m a simulator (along with another 8 or so from Ron’s list) but perhaps not a main sequence simulator.

Here’s a quick sketch of what it means and how I got there.

Consider an autonomous mobile agent that has to achieve some mission in a dynamic, partly unknown, and occasionally novel world.

How could the agent achieve its task (or mission)?

- by being preprogrammed for every possible contingency? No

- by having learned the consequences for the achievement of the mission of every possible action in every contingency? No

- by having learned enough to be able to predict the consequences of tried and untried actions, by being able to evaluate those consequences for their likely contribution to the mission, and by selecting a relatively good course of action? Maybe
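This third option is essentially model-based action selection: predict, evaluate, pick something relatively good. A minimal sketch of that loop, assuming a learned `predict` model and an `evaluate` heuristic (both names are my placeholders, not anything from the talk):

```python
import random

def choose_action(state, actions, predict, evaluate, horizon=3, samples=20):
    """Pick a relatively good action by simulating it with a learned model.

    predict(state, action) -> predicted next state (the learned model)
    evaluate(state)        -> estimated contribution of that state to the mission

    The search is deliberately cheap: a handful of sampled rollouts per
    candidate action, not an exhaustive or optimal search.
    """
    best_action, best_score = None, float("-inf")
    for action in actions:
        total = 0.0
        for _ in range(samples):
            s = predict(state, action)              # consequence of the first step
            for _ in range(horizon - 1):            # a short random continuation
                s = predict(s, random.choice(actions))
            total += evaluate(s)
        score = total / samples
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```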

Here’s how Richard Dawkins puts it:

“Survival machines that can simulate the future are one jump ahead of survival machines who can only learn on the basis of overt trial and error.”

Dawkins, 1976

Is it just Dawkins?

No. The idea that some survival machines (animals) run simulations of actions in the world in order to predict what will happen is quite widespread - e.g. Dennett has written extensively about it.

Some neuroscientists are gathering evidence for it - see for example Rodney Cotterill’s paper in Progress in Neurobiology (2000).

Hesslow (2002) calls it ‘the simulation hypothesis’ and has published a useful and concise summary of it:

Hesslow’s ‘simulation hypothesis’

“1) Simulation of actions. We can activate pre-motor areas in the frontal lobes in a way that resembles activity during a normal action but does not cause any overt movement.

2) Simulation of perception. Imagining that one perceives something is essentially the same as actually perceiving it, but the perceptual activity is generated by the brain itself rather than by external stimuli.

3) Anticipation. There are associative mechanisms that enable both behavioural and perceptual activity to elicit other perceptual activity in the sensory areas of the brain. Most importantly, a simulated action can elicit perceptual activity that resembles the activity that would have occurred if the action had actually been performed.” (Hesslow 2002)
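Read as a control loop, the three components chain naturally: a prepared-but-suppressed command drives an associative forward model, which yields the percept the action would have produced, which can drive the next covert step. A toy rendering under that reading (the function names are mine, not Hesslow's):

```python
def covert_rollout(plan, percept, forward_model, steps=5):
    """One covert simulation run, in Hesslow's three-component sense.

    (1) Simulation of actions: each command is prepared but never sent
        to the motors, so no overt movement occurs.
    (2) Simulation of perception: the next percept is generated by the
        model itself rather than arriving from the senses.
    (3) Anticipation: forward_model(percept, command) returns the
        perceptual activity the command would have produced.
    """
    trajectory = [percept]
    for command in plan[:steps]:
        # command is *not* executed; it only drives the forward model
        percept = forward_model(percept, command)
        trajectory.append(percept)
    return trajectory        # the imagined chain of percepts
```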

Two questions:

What exactly has to be simulated?

What is needed for simulation?

What exactly has to be simulated?

Whatever affects the mission. In an embodied agent, the agent can only affect the world through the actions of its body in and on the world, and the world can only affect the mission by affecting the agent’s body.

So it needs to simulate those aspects of its body that affect the world in ways that affect the mission, along with those aspects of the world that affect the body in ways that affect the mission.

How does the body affect the world? To some extent through its passive properties, but mainly by being moved through and exerting force on the world, with appropriate speed and accuracy.

How does the world affect the body? Through the spatially distributed environment (through which the body must move) and through the properties of the objects in it (cf. food, predators, poisons, prey, competitors, falling coconuts, etc. for animals)

What is needed for simulation?

Some structure or process corresponding to a state of the world that, when operated on by some process or structure corresponding to an action, yields an outcome corresponding to and interpretable as the consequences of that action.
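Stated as an interface, that requirement is tiny; everything hard lives in the implementation. A sketch with hypothetical names:

```python
from typing import Protocol, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

class InternalModel(Protocol[State, Action]):
    """The bare requirement for simulation: something standing for a state
    of the world that, operated on by something standing for an action,
    yields an interpretable predicted outcome."""

    def apply(self, state: State, action: Action) -> State:
        """Return the predicted consequence of taking `action` in `state`."""
        ...
```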

I like to call these structures or processes ‘internal models’, because they are like working models rather than static representations, and because the term was used in this sense by Craik, and later by Johnson-Laird and others.

So we require a model (or linked set of models) that includes the body, and how it is controlled, and the spatial aspects of the world, and the (kinds of) objects in the world, and their spatial arrangement. But consider…

The body is always present and available, and changes slowly, if at all. When it moves, it is usually because it has been commanded to move.

The world is different. It is ‘complex, occasionally novel, dynamic, and hostile’. It’s only locally available, and may contain objects of known and unknown kinds in known and unknown places.

How should all this be modelled? As a single model containing body, environment, and objects? Or as a separate model of the body coupled to and interacting with the other modelled components?

What happens in the human agent?

“...(I)t is always obvious to you that there are some things you can do and others you cannot given the constraints of your body and of the external world. (You know you can’t lift a truck...) Somewhere in your brain there are representations of all these possibilities, and the systems that plan commands...need to be aware of this distinction between things they can and cannot command you to do....To achieve all this, I need to have in my brain not only a representation of the world and various objects in it but also a representation of myself, including my own body within that representation....In addition, the representation of the external object has to interact with my self-representation....” (Ramachandran and Blakeslee 1998).

Does the brain model the body?

Yes, in many ways. Most importantly, it models the muscular control of movement, using forward models and inverse models (Ito, Kawato, Wolpert etc.)

It also predicts the nature and timing of the internal and external sensory inputs that will be produced if the movement is executed correctly (Frith, Blakemore). This is useful because feedback is too slow to guide rapid movements, and such prediction allows early correction.
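A toy version of that predict-then-compare arrangement, with every name illustrative (this is not any particular model from Ito, Kawato, Wolpert, Frith, or Blakemore):

```python
def run_with_prediction(command, forward_model, motor, sensors, correct, tol=0.05):
    """Execute a motor command while checking it against a forward model.

    forward_model(command) yields the sensory readings the movement should
    produce, step by step. Because real feedback is too slow to guide a
    rapid movement on its own, comparing each incoming reading against its
    prediction lets correction begin at the first mismatch, well before the
    movement completes.
    """
    motor(command)                                   # start the movement
    for predicted, actual in zip(forward_model(command), sensors()):
        error = actual - predicted
        if abs(error) > tol:                         # early mismatch detected
            correct(command, error)                  # adjust before completion
```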

Does the brain model the world?

Yes, in many ways. It models space, and it models the nature and behaviour of objects, and much of this modelling is innate.

Useful reading (for me anyway): Wild Minds, by Marc Hauser.

Exactly how can simulation help our agent?

All simulation can tell you is what will probably happen if you do the action Z, or the action sequence XYZ.

(a) If the outcome is a state Z* (the ‘goal’) then the action or action sequence can be triggered in reality. Simulation alone is enough.

(b) If the outcome is a state Z* which can be evaluated for its likely contribution to the mission, the action (sequence) may be selected, or preferred over others once they have been evaluated.

Simulation alone is not enough - you need evaluation, storage, retrieval etc.
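A compact way to contrast (a) and (b) in code (assumed helpers: `predict` for the model, `evaluate` for mission utility):

```python
def plan(state, sequences, predict, goal=None, evaluate=None):
    """The two uses of simulation described above.

    (a) With an explicit goal state: trigger the first sequence whose
        simulated outcome matches it -- simulation alone suffices.
    (b) Otherwise: store each simulated outcome, evaluate it for its
        likely contribution to the mission, and prefer the best -- this
        needs evaluation, storage, and retrieval on top of simulation.
    """
    outcomes = []                        # storage of simulated results
    for seq in sequences:
        s = state
        for action in seq:
            s = predict(s, action)       # roll the model forward
        if goal is not None and s == goal:
            return seq                   # case (a): goal reached, act now
        outcomes.append((seq, s))
    if evaluate is not None:             # case (b): retrieval + evaluation
        return max(outcomes, key=lambda pair: evaluate(pair[1]))[0]
    return None
```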

Is simulation and evaluation worth it?

You only have to do better than you would do if you didn’t simulate and evaluate (and didn’t pay the time, energy, capital, development, and running costs of simulation and evaluation). You don’t have to calculate utility perfectly (you can use a heuristic perhaps corresponding to emotion….) You don’t have to search all possibilities, or search with maximum efficiency. And it has to be quick.
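One rough way to state that break-even condition as an inequality (the symbols are my shorthand, not from the talk):

```latex
% Simulate-and-evaluate pays its way iff the utility gain covers its costs:
%   U_sim      : expected mission utility of the action chosen by simulation
%   C_sim      : time, energy, capital, development, and running costs
%   U_reactive : expected utility of the purely reactive choice
\mathbb{E}[U_{\mathrm{sim}}] - C_{\mathrm{sim}} \;>\; \mathbb{E}[U_{\mathrm{reactive}}]
```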

BUT IF YOU CAN DO THIS YOU WILL BEAT A PURELY REACTIVE SYSTEM

Simulation and consciousness

What Dawkins (1976) said next:

“Survival machines that can simulate the future are one jump ahead of survival machines who can only learn on the basis of overt trial and error… The evolution of the capacity to simulate seems to have culminated in subjective consciousness… Perhaps consciousness arises when the brain’s simulation of the world becomes so complete that it must include a model of itself.”

How about ‘…a model of the machine’?

"...consciousness requires that the brain must represent not just the object, not just a basic self structure, but the interaction of the two….This is still an atypical foundation for a theory of consciousness, given that until recently, it was implicitly assumed that the self could be left out of the equation. There has been a recent sea change on this crucial point..."

Douglas Watt 2000, review of Damasio's "The Feeling of What Happens" (Damasio 1999).

In other words…

Intelligent behaviour in an embodied agent depends on the possession and manipulation of an internal model of the agent (the IAM) interacting with an internal model of the world.

The presence and interaction of these models may also underlie the production of consciousness.

A hypothesis

It is the (human) internal agent model that is conscious, not the agent itself.

In order to produce accurate predictions, the agent model and the world model must be constantly updated with changes in the body and the world that affect the mission, whether planning is currently taking place or not. The ‘contents’ of consciousness are the effects on the internal agent model of direct updates, and also of updates to the world model to which the IAM is coupled. The IAM does not control the body, but attributes the updates of bodily movements to its own agency. The peculiarities of consciousness are simply the natural characteristics of such a system.
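One way to caricature the hypothesised arrangement in code; this is entirely my own rendering of the slide, with a plain dict standing in for the world model, not Holland's implementation:

```python
class InternalAgentModel:
    """The IAM: constantly updated, never in control of the body."""

    def __init__(self):
        self.state = {}
        self.contents = []            # per the hypothesis: the stream of updates

    def update(self, change):
        self.state.update(change)
        self.contents.append(change)  # what 'shows up' as conscious content


def tick(iam, world_model, controller, sensors):
    """One update cycle of the hypothesised arrangement.

    The controller, not the IAM, issues the motor command; the IAM merely
    receives the resulting updates and attributes the movement to itself.
    """
    command = controller(sensors)                    # the IAM plays no part here
    world_model.update({"scene": sensors["world"]})  # keep the world model current
    iam.update({"body": sensors["body"], "acted": command})  # direct update
    iam.update({"seen": dict(world_model)})          # update via the coupled world model
    iam.state["agency"] = "mine"                     # the movement felt as its own doing
```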

A proposal

The way to study these phenomena is to build a suitably complex robot, to embed it in a suitably complex environment and to examine the robot’s internal processes as it learns to cope with its mission.

Machine consciousness: how would we know if we’d succeeded?

Not from behaviour alone, because that could result from ‘mere’ cognitive factors.

It would have to involve the examination of internal structures and processes. And in principle, we could have a complete description of these for the entire lifetime of the machine.

Can we say anything at all about their relationship to phenomenal consciousness?

For instance, if a system has a phenomenally conscious experience of X, can we say anything about whether it must have supported some internal model/representation corresponding to X?

For at least some X, the likely answer is yes.

Is the detection of the presence of an internal model/representation of X any evidence that the system is currently supporting a phenomenally conscious experience of X?

The answer has to be ‘no, but if it is supporting an experience of anything, it might be of X - especially if only the X activity has been detected.’

The Turing test for machine consciousness (2050)

A normal awake human and the machine under test are each equipped with a useful device developed from Andrew Brown’s ‘secret policeman’s brain scanner’. It detects all forms of neural or computational activity (conscious or unconscious) and is able to generate a display of whatever that activity corresponds to. If you think of a blue banana with pink spots, that’s what appears on the screen. (The earliest version of this machine was invented by a certain Igor Aleksander.)

The task of the interrogator is to identify the human from the information displayed by the scanner.

Consciousness: some peculiarities

Common sense tells us it’s obvious…

- that we consciously perceive the world accurately

- that we consciously remember what we perceive

- that we consciously decide on actions, and then consciously initiate and control them

- etc etc

We teach our children to do this

The law assumes we do this

etc etc

…but common sense is wrong

Change and inattentional blindness (Simons, O’Regan, etc.)

- you don’t see what’s there

Misattribution of agency (Daprati, Wegner)

- you don’t know your own actions

Backwards referral of sensation (Libet)

- when sensations become conscious, they are experienced as if they started about half a second previously

Backwards referral of action (Walter, Kornhuber)

- the neural processes of a voluntary action begin about half a second before you are aware of initiating it

‘Consciousness is a peculiar phenomenon. It is riddled with deceit and self-deception; there can be consciousness of something we were sure had been erased by an anaesthetic; the conscious I is happy to lie up hill and down dale to achieve a rational explanation for what the body is up to; sensory perception is the result of a devious relocation of sensory input in time; when the consciousness thinks it determines to act, the brain is already working on it; there appears to be more than one version of consciousness present in the brain; our conscious awareness contains almost no information but is perceived as if it were vastly rich in information. Consciousness is peculiar.’

Tor Nørretranders, ‘The User Illusion’, 1991 (tr. 1998)

This does not look like a straightforward well-designed control system to me!

Is it like that because all advanced control systems have to be like that?

Is it like that because it was evolved by making random changes to the design documentation of previous controllers, or because it was implemented on a meat-based computational substrate?

Or is it like that because all members of a particular class of advanced control systems - the conscious ones - are like that?

If I find that the controller I develop for intelligent action generation/selection has the same peculiarities as human consciousness (perhaps because I have had to put some of those peculiarities in to make the system work properly or quickly or economically)…

…then I might be justified in claiming that some mechanism of this sort underpins human phenomenal consciousness.

And, if not, then I’m no worse off than the rest of you.

Complexity

Must a complex machine be conscious?

I can’t think of any reason why an arbitrary machine (or control system) should have phenomenal consciousness just because it is complex.

But I’d be quite happy for Ricardo to talk about the level of consciousness of a complex control system with multilevel hierarchies and heterarchies involving predictive plant-, self-, and other-modelling, because I would know the sense in which he was using the word (although I might prefer him to use one of the standard euphemisms such as ‘awareness’).

Complexity

Must a conscious machine be complex?

If complexity is to do with size (number of components) then the human brain is certainly relatively complex at 1400 g (10^11 cells).

Smallest primate brain: tarsier at 4 g (10^9 cells).

Smallest mammal brain: hog-nosed shrew at 0.07 g (10^7 cells).

Betty the New Caledonian crow can make tools. Her brain is probably around 6 g (10^9 cells). http://users.ox.ac.uk/~kgroup/tools/tools_main.html

If I’m right, then a conscious machine needs to be able to build internal models of itself and of the world, and to arrange for these models to simulate interactions on demand, and to evaluate the outcomes of the simulations, and to select and implement a good sequence of actions.

How complex does it need to be? Surely that depends on the complexity of the world and itself…perhaps a simple body in a simple world can produce a simple conscious system?
