
A Blueprint for Command and Control - ISIF


A Blueprint for Command and Control: Automation and Interface

Jason Scholz, Dale Lambert, Don Gossink, Glen Smith

C3I Division, Defence Science and Technology Organisation, Edinburgh, Australia

Abstract— Traditionally the domain of humans, Command and Control (C2) increasingly necessitates the use of automated decision aids and automated decision makers to assist in managing the complexity and dynamics of modern military operations. We propose a blueprint based on the US Joint Directors of Laboratories (JDL) model, cognitive psychology and agent research literature. This blueprint provides design guidance for fusion, resource management and automation policy through decomposition into levels and processes with commensurate levels of human-interface, and the development of foundational language.

Keywords: architecture; agents; psychology; interface

I. INTRODUCTION

The development of an information fusion system to solve problems relies on the intent of that system (policy) and the ability to manage resources to achieve that intent. In C2, fusion, resource management and policy may be considered equal peers. This is the third in a three-part series expressing a blueprint for the development of machine-based C2. The first covers resource management, the second policy management and the third a complete architecture, noting that a blueprint for higher-level fusion has already been proposed [1], though not yet integrated into a holistic form as presented here.

A. Motivation

A future vision for “Ubiquitous Command and Control” [2] was offered in 1999, and later [3,4] for “a similar and significant C2 capability on every platform” to provide “extreme robustness.” To realise this vision, automation is required. Automated decision aids and automated decision makers must also integrate with human decision making. In addition to filling the conventional roles of “dull and dangerous” tasks, automation offers further potential:

• Ability to dynamically adapt the C2 system according to required scale, composition and robustness as circumstances change. Software agents are a potential means to this [5,6], applied to date mostly in commercial areas. Agents span a range [7] from simple reactive automata through to programs with intentions, e.g. [8]. The latter adapt plans to meet goals as situations change. More recent research extends this with the ability to adapt goals if no adequate plan is found [9].

• Ability to balance cognitive load. “Network Centric Warfare” [10] raised to military consciousness the power of networks but neglected automation. Later the U.S. [11] recognised the flaw, stating that “an unintended consequence” is that “everybody must think”! The increased cognitive load on individuals introduced by networking may in part be addressed by automation.

• Ability for ethical decision making. A report [12] on mental health issues during Operation Iraqi Freedom provides some stirring facts indicating that human performance in making ethical decisions can be less than ideal. Automation does not suffer the fatigue, low morale, sleep deprivation, etc. that might cloud human judgement. Ethical hazards may apply even to the search for information: being unaware of information available somewhere “in the system” can have strategic implications.

In this paper we specifically address the Automation and Interface aspects of a C2 blueprint. This excludes protocols for communication between people and machines and between machines at this time, noting that [13] proposes a comprehensive legal agreement protocol to address this. Our motivation is to establish a common language and conceptualization for design of fusion and C2 systems.

B. Approach

Command and Control is defined in [3,4]. We assert that command involves the creative expression of intent to another. Complementing this, we assert that control involves the expression of a capability (a plan is an example of a capability) to another, and the monitoring and correction of the execution of that capability. Lambert… in effect suggests that we can understand action as the utilisation of capability to achieve intent, given awareness.

This is broader than the traditional Western military definition based on authority. Further, by “capability” we mean “something that changes one’s awareness of the world (usually) by changing the world” [3].

Figure 1 illustrates the three key components of C2: intent, capability and awareness. Our philosophical basis is to consider that action exists as each of the three, yet is unified as one – a kind of action trinity. Together they form a non-well-founded set – meaning a set that contains itself as a member¹. This gives rise to the recursions observed in C2 and provides a fundamental insight into C2 processes. The trinity may be reformed into duals. For example, collapsing intent into capability (i.e. intent in planning) creates attent in awareness.

¹ Non-well-founded sets negate the foundation axiom of Zermelo-Fraenkel set theory.

Figure 1. Action as a trinity from [3] (left), and Action as a non-well-founded dualism (right).

The data fusion community’s Joint Directors of Laboratories (JDL) model for data fusion outlines the automation of awareness; see for example [14]. The JDL model was extended to include resource management (as a dual to fusion), aligning it to the automation of capability, though constrained to the management of sensors and sensor processing rather than the more general effectors required for C2.

The JDL model has remained silent on the question of centralised versus distributed processing. Multi-agent systems (MAS) may provide a scalable architectural basis for distributed automation in C2 systems. In a general MAS model, cognitive human and machine individuals are members of a society of information processors, each with their unique strengths and weaknesses. By possessing human-understandable cognitive constructs (such as beliefs, roles, desires and intentions) and social protocols, software agents have the potential for more naturalistic interaction, thereby freeing Commanders from the burden (and stigma) of operating a computer in the current sense. This leads to a vision whereby interaction with a cognitive software agent is more like interacting with a human, allowing verbal commands to be expressed and understood as per [1].

In the next sections, we present the C2 Challenge, outline fusion, resource management, policy and human-interaction architectures and summarise a total C2 automation and interaction blueprint.

C. C2 Challenge

We generalise the elucidation in [1] on the requirements for higher-level fusion systems to C2; figure 2 illustrates. C2 may be interpreted firstly in terms of its human component. Often cited as military “art”, this component is concerned with psychology, and in particular those aspects peculiar to Commanders engaged in war; Der Kriegerische Genius (“On Military Genius”) [15] typically illustrates it. A machine interpretation of C2, even in more modern times, is limited mostly to low-level fusion for object assessments, especially detection and tracking (awareness); aids for detailed schedule management, critical path analysis and the routing of assets (capability); and low-level programmed ends (intent) for automated control of self-defence systems (e.g. Phalanx). The interpretation of C2 as integration is more ubiquitous. Almost every modern Western military C2 headquarters has a “dots on maps” display as the “Common Operating Picture” (COP) and uses telephones, email, Microsoft Powerpoint® and so on for what we term “Planning and Operating Controls” (POC), yet the degree to which what is in a person’s head corresponds to what these interfaces provide is questionable.

The C2 Challenge subsumes the higher-level fusion challenge of [1]. The C2 Challenge is illustrated in the progression of figure 2, where all areas of the matrix might be addressed, shown as converting the areas from light to dark.

Figure 2. A progression of C2 development. (The figure shows a matrix of views on action (intent, capability, awareness) against interpretations of C2: the psychology of C2 (human Command Intent, Planning & Execution, Situation Awareness), interface C2 (PI = Policy Interface, POC = Planning & Operating Controls, COP = Common Operating Picture) and machine C2 (Policy, Resource Management, Fusion).)

II. FUSION ARCHITECTURE

Data fusion is defined in [16] as:

The process of utilising one or more data sources over time to assemble a representation of aspects of interest in an environment.

The Joint Directors of Laboratories (JDL) model is a key conceptual framework for Information Fusion. It was initiated in the 1980s, as indicated in the lexicon of [17], and has subsequently been revised over the years [14,18,1]. The latter [1] outlines a blueprint for higher-level fusion systems. Following [18], [1] deconstructs the JDL model levels. The deconstruction celebrates differences in fusion levels that parallel human situation awareness. Most recently, in [19], Lambert adds ‘Sensation’ to Endsley’s account of situation awareness [19 and section V.B],

Situation awareness is the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.

to provide the following parallel for human–machine fusion levels:

• Level 0: Sensation – Observable Assessment;

• Level 1: Perception – Object Assessment;

• Level 2: Comprehension – Situation Assessment; and

• Level 3: Projection – Scenario Assessment.

Lambert’s process philosophy view absorbs JDL Level 4 Process Refinement into each of the aforementioned levels. Machine fusion is situation awareness performed by machines. Human situation awareness is fusion performed by people.


In reconstructing machine fusion in [1] to celebrate uniformities across the fusion levels, Lambert presents the State Transition Data Fusion (STDF) model. In this model, information fusion involves the prediction, observation and explanation of state transitions in the world. A critical architectural aspect of STDF is the conjecture that the same basic process applies at each of the fusion levels. In [19] Lambert discusses this in relation to higher-level fusion processing and lower-level signal, textual and image fusion processing. The general STDF model is shown in figure 3 and quoted from [19].

Figure 3. The generic STDF model from [19]. (The figure depicts an agent whose representations of world states ŝi(k|k) pass through state prediction and observation prediction to predicted observations ôi(k+1|k); sensations ej(k+1) from sensors pass through detection and registration to observations oj(k+1); association then routes matches to update success, unmatched predictions to update failure, and unmatched observations to initiation.)

• At time step k+1 the agent senses a number of new states in the world {si(k+1) | i ∈ N+(p)} through its sensors and transfers the corresponding sensations {ej(k+1) | j ∈ N+(q)} to an observation process.

• The observation process involves a detection process to potentially identify a detection dj(k+1) from each sense datum ej(k+1); a registration process that yields an observation oj(k+1) by normalizing the detection dj(k+1) relative to a frame of reference; and then an association process.

• The association process first draws upon a prediction process, which: accesses previous representations ŝi(k|k) of states si(k) in the world at time k; applies a state prediction process to representation ŝi(k|k) to posit a predicted state representation(s) ŝi(k+1|k) of predicted state si(k+1) in the world at time step k+1; and then applies an observation prediction process to predicted state representation ŝi(k+1|k) to posit predicted observation(s) ôi(k+1|k) at time step k+1. Where multi-hypothesis state and observation predictions occur for state si(k+1) from ŝi(k|k), these can be labeled ŝi,1(k+1|k), …, ŝi,w(k+1|k) and ôi,1(k+1|k), …, ôi,w(k+1|k) respectively, for w ∈ N+.

• The association process then matches the observations {oj(k+1) | k ∈ Time_Step & j ∈ N+(q)} at time step k+1 to (one or more) predicted observations {ôi(k+1|k) | k ∈ Time_Step & i ∈ N+(p)} for time step k+1 and then transfers control to an explanation process.

• The explanation process must contend with three possible outcomes from the comparison of observations {oj(k+1) | k ∈ Time_Step & j ∈ N+(q)} at time step k+1 with the predicted observations {ôi(k+1|k) | k ∈ Time_Step & i ∈ N+(p)} for time step k+1.

o If an observation oj(k+1) successfully matches a predicted observation ôi(k+1|k), then an update success process is invoked to produce the explained representation ŝi(k+1|k+1) of state si(k+1) at time step k+1.

o If a predicted observation ôi(k+1|k) fails to match any observation oj(k+1), then an update failure process is invoked to produce the explained representation ŝi(k+1|k+1) for state si(k+1) at time step k+1.

o If an observation oj(k+1) fails to match any predicted observation ôi(k+1|k), then an initiation process is invoked to produce the explained representation ŝr(k+1|k+1) for new state sr(k+1) at time step k+1.
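For concreteness, the predict-observe-explain cycle above can be sketched in code. This is our illustrative simplification, not taken from [19]: a Level 1 object assessment over scalar 1-D tracks, with a fixed association gate and a crude averaging update standing in for a proper estimator; all names and parameter values are our own assumptions.

```python
GATE = 2.0  # association gate: max |observation - prediction| for a match

def stdf_step(tracks, observations):
    """One STDF cycle: predict each track, associate observations, then
    explain via update success, update failure or initiation."""
    # Prediction: state prediction (constant velocity); in this 1-D sketch
    # the predicted observation is simply the predicted position.
    predictions = {tid: (pos + vel, vel) for tid, (pos, vel) in tracks.items()}
    unmatched_obs = list(observations)
    new_tracks = {}
    for tid, (pred_pos, vel) in predictions.items():
        # Association: nearest observation within the gate, if any.
        candidates = [o for o in unmatched_obs if abs(o - pred_pos) <= GATE]
        if candidates:
            # Update success: blend prediction and matched observation.
            obs = min(candidates, key=lambda o: abs(o - pred_pos))
            unmatched_obs.remove(obs)
            new_tracks[tid] = (0.5 * (pred_pos + obs), vel + 0.5 * (obs - pred_pos))
        else:
            # Update failure: no match, so coast on the prediction alone.
            new_tracks[tid] = (pred_pos, vel)
    # Initiation: each unmatched observation starts a new zero-velocity track.
    next_id = max(tracks, default=0) + 1
    for obs in unmatched_obs:
        new_tracks[next_id] = (obs, 0.0)
        next_id += 1
    return new_tracks

tracks = {1: (0.0, 1.0)}                # track 1: position 0, velocity 1
tracks = stdf_step(tracks, [1.2, 9.0])  # 1.2 matches track 1; 9.0 initiates track 2
```

A full STDF implementation would replace the gate and averaging with statistical association and estimation (e.g. a Kalman filter), but the three explanation outcomes (update success, update failure and initiation) appear exactly as in the model.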

Table I shows the application of these processes at each level in a signal processing context. Again, [19] discusses the details.

TABLE I. SUMMARY OF STDF PROCESSES AT LEVELS 0-3.

Process                 | Level 0:              | Level 1:                | Level 2:                | Level 3:
                        | Observable Assessment | Object Assessment       | Situation Assessment    | Scenario Assessment
Observation
  Detection             | Signal Recognition    | Signal Processing       | Object Assessment       | Object Assessment
  Registration          | Signal Registration   | Coordinate Registration | Semantic Registration   | Situation Assessment
  Association           | Signal Association    | Data Association        | Proposition Association | Situation Association
Prediction
  State Prediction      | Feature Prediction    | State Vector Prediction | Scenario Assessment     | Predictive Assessment
  Observation Prediction| Signal Prediction     | Measurement Prediction  | Expectation Prediction  | COA Assessment
Explanation
  Initiation            | Feature Initiation    | State Vector Initiation | Situation Initiation    | Scenario Initiation
  Update Success        | Feature Update        | State Vector Update     | Situation Update        | Scenario Update
  Update Failure        | Feature Deletion      | State Vector Deletion   | Situation Deletion      | Scenario Termination

III. RESOURCE MANAGEMENT ARCHITECTURE

As outlined in [20], an architecture for resource management (RM) may be formed with levels that are “duals” of the levels in the fusion architecture, yielding level 3: Course of Effect (scenario), level 2: Course of Action (situation), level 1: Effect Objects, and level 0: Effect Controllables, underpinned by Effectors. Note that controllability (of certain variables) is the dual of observability, and is a requisite to effect the world as intended.

Fusion is the sensation of observables in the environment, the perception of objects formed from observables across volumes of time and space, the comprehension of situations involving these objects, and the projection of scenarios into the future.

A resource management (RM) process model dual to the STDF model for fusion is similarly expressible [20]. The process stages of Resource Generation/Identification, Resource Evaluation and Resource Selection are an abstraction applicable to Dynamic Programming² that allows changes in resources (capability) to be managed (chosen), as shown in figure 4. This general process model applies at each level of resource management.

Figure 4. The generic RM model after [3,4]. (The figure shows a tree of capability options cj branching from awareness at over time, where it,k denotes an intended effect at time k; the preferred course is the one minimising Σk (it,k − ak).)

Note that at each level of RM, the corresponding level of awareness (fusion) and intent (policy) applies. For example [20], the level of Course of Effect corresponds to planning sequences (trees in general) of decisive points to effect centre(s) of gravity. We can now define resource management:

Resource Management is the courses of effect to achieve intent in the scenario, the planning of courses of action to satisfice ends in situations, the effect objects in volumes of time and space for pursuit of objective ways, the effect controllables that stabilize means and the effectors that transform the environment.
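As a concrete (and deliberately naive) sketch of the selection stage in figure 4, the fragment below enumerates courses built from per-time-step capability options and picks the one minimising Σk |it,k − ak|. The option labels, effect values and brute-force enumeration (rather than the dynamic programming the model admits) are our illustrative assumptions.

```python
import itertools

def preferred_course(intended, options):
    """intended: intended effects [i(t,k1), i(t,k2), ...]; options: for each
    time step, a list of (capability label, predicted awareness) choices.
    Returns the course minimising sum_k |i(t,k) - a(k)| and its cost."""
    best_cost, best = float("inf"), None
    for course in itertools.product(*options):
        cost = sum(abs(i - a) for i, (_, a) in zip(intended, course))
        if cost < best_cost:
            best_cost, best = cost, [label for label, _ in course]
    return best, best_cost

course, cost = preferred_course(
    intended=[5.0, 8.0],                  # i(t,k1), i(t,k2)
    options=[[("c1", 4.0), ("c2", 6.5)],  # capability options at k1
             [("c3", 8.0), ("c4", 3.0)]]) # capability options at k2
# course == ["c1", "c3"], cost == 1.0
```

Replacing the exhaustive product with Bellman-style dynamic programming over the option tree recovers the footnoted sense of “programming” as planning.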

IV. POLICY ARCHITECTURE

Policy represents the machine aspect of intent. A policy architecture with levels aligned to fusion and RM is presented in [21]. The architecture allows for a broad range of implementations, from trivial human-set switch settings, to simple reactive agents (if situation then do…), through to goal-based and utility-based agents based on [22]. The levels are termed level 3: scenario intent, level 2: situational ends, level 1: objective ways, and level 0: stabilizing means, underpinned by volition. Note that stabilisability (of variables) complements observability and controllability, as there is always a question with an observable and controllable system as to whether it is stabilisable to within required/specified bounds.
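To make this range concrete, here is a minimal sketch (ours, with hypothetical rule and plan names) contrasting two points on the spectrum short of utility-based agents: a reactive policy that maps situations directly to actions, and a goal-based policy that selects a plan for its goal and, in the spirit of [9], adapts the goal when no adequate plan exists.

```python
def reactive_policy(situation):
    """Reactive agent: 'if situation then do'."""
    rules = {"incoming_missile": "engage_ciws", "contact_lost": "search"}
    return rules.get(situation, "monitor")

def goal_based_policy(goal, situation, plans):
    """Goal-based agent: choose any plan believed to achieve the goal in the
    current situation; if none is adequate, adapt the goal itself."""
    for plan, (achieves, applicable_in) in plans.items():
        if achieves == goal and situation in applicable_in:
            return goal, plan
    return "withdraw", "retire_to_port"  # goal adaptation fallback (illustrative)

plans = {"escort_convoy": ("protect_convoy", {"open_water", "littoral"}),
         "air_cover":     ("protect_convoy", {"open_water"})}
```

A utility-based agent would additionally score candidate plans and pick the maximiser, which is the role preference plays in the policy process model.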

A policy process model allows management of change through policy creation, policy coherence ordering and policy commitment functions as illustrated in figure 5 and summarised in the following from [21].

² In the 1950s, Bellman (who coined the term) used the word “programming” to mean what we now term “planning”.

Figure 5. A policy model after [21]. (The figure shows candidate intents arising from changes in awareness or capability (cases a-f), passing through creation (1. assess desire; 2. assess alignment with higher intent for permissibility; 3. assess alignment with capability for realisability), coherence ordering (4. choose the order of intentions that maximises preference while maintaining coherence) and commitment, including offers of intent to and from other agents (command).)

The process model is explained:

1. Either:

a) A change in awareness state indicates that an intent it,k is achieved, at some time t ≤ k (i.e. before or at the anticipated time k), or

b) From a change in awareness state and/or capability it is assessed that achievement of an intent is significantly reduced in likelihood or delayed (impending intent failure), or

c) From a change in awareness state it is believed that an intent can no longer be achieved (intent failure), or

d) The anticipated time to have achieved an intent is exceeded (t > k) and it has not been achieved (intent failure), or

e) From a change in awareness state, a new candidate intent it,k is created through counterfactual reasoning, or

f) The machine receives an offer to attain a candidate intent state it,k of the world (e.g. communicated by another cognitive individual).

2. Then, for case:

a) the desire is assessed as zero, resulting in the intent being removed from the coherence ordering;

b) depending on the severity, the desire of the existing intent is reassessed (e.g. in many cases it may be increased) and steps 3-5 applied;

c) and d), which involve a change in existing intents or internal desire: all existing intents are treated as candidate intents and steps 3-5 applied;

e) and f): the candidate intent goes to steps 3-5.

3. A candidate intent state it,k is assessed for its level of desirability. A candidate is also assessed for permissibility with regard to higher levels of intent; that is, does it align with higher-level intents (i.e. ‘stabilizing means’ require supporting ‘objective ways’, ‘objective ways’ require supporting ‘situational ends’, ‘situational ends’ require supporting ‘scenario intents’; ‘scenario intents’ are primitives in themselves). The agent must also approve of it (it must not violate ethical and other criteria).

4. A candidate intent is assessed for realisability: it is realisable if peer-level capabilities exist which can achieve it (i.e. ‘scenario intents’ are realisable as ‘courses of effect’, ‘situational ends’ are realisable as ‘courses of action’, ‘objective ways’ are realisable as ‘effect objects’, ‘stabilizing means’ are realisable as ‘effect controllables’). Note that the level of planning detail is arbitrary, so a peer-level assessment for realisability may be a minimum.

5. If both permissible and realisable, the candidate is ordered according to preference (a function of desire and urgency/time available) relative to existing intents and checked for coherence with those intents of higher preference. If not coherent it is rejected. The machine seeks to maximise the order of intents (a partial order) according to preference, while maintaining coherence. If coherent it is adopted as a current intent in memory, to persist until such time as the intent is either satisfied or becomes redundant.
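Steps 3-5 can be summarised in a short sketch. This is our illustration only: the intent names are hypothetical, and the three predicates stand in for the agent's own permissibility, realisability and coherence assessments.

```python
def consider(candidate, current, permissible, realisable, coherent):
    """candidate and current intents are (name, preference) pairs; the three
    predicates stand in for the agent's assessments. Returns the updated
    intention list, ordered by decreasing preference."""
    name, pref = candidate
    if not (permissible(name) and realisable(name)):
        return current                      # rejected outright (steps 3-4)
    higher = [c for c in current if c[1] > pref]
    if not all(coherent(name, c[0]) for c in higher):
        return current                      # incoherent with preferred intents
    return sorted(current + [candidate], key=lambda c: -c[1])  # step 5: adopt

intents = [("hold_bridge", 0.9), ("resupply", 0.4)]
intents = consider(("patrol_river", 0.6), intents,
                   permissible=lambda n: True,
                   realisable=lambda n: True,
                   coherent=lambda a, b: (a, b) != ("patrol_river", "hold_bridge"))
# "patrol_river" conflicts with the more-preferred "hold_bridge",
# so it is rejected and the intention list is unchanged.
```

Preference here is a single number for simplicity; in the model it is a function of desire and urgency, and the ordering is partial rather than total.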

Table II provides language elements appropriate to policy from [21].

TABLE II. POLICY LANGUAGE ELEMENTS.

Level        | Semantics
Social       | Deny, Possess, Own (relate to Possession & Ownership); Group (identifies in-group and out-group members); Offers, Agrees, Conflicts (relate to Agreement); Ally, Enemy, Neutral (relate to Alliance); Responsible, Authority, Competency, Commands, Controls (relate to C2).
Cognitive    | Deceive, Exploit, Believes, Expects (relate to Awareness); Performs, Succeeds, Fails, Achieves, Approves (or Permits/Permissible), Prefers (relate to Cognitive Routines); Defeat, Intends, Desires (relate to Volition); Cognitive (individual).
Functional   | Degrade, Destroy, Disrupt, Neutralise, Suppress, Operational, Operating (relate to Operational Status); Sense, Move, Strike, Inform, Attach, Expr (relate to Operation).
Physical     | Divert, Interdict.
Metaphysical | Delay (relates to time).

We can now define policy:

Policy is the scenario intent that provides purpose, for situational ends as selected goals, to guide objective ways that are immediately and directly attained through constant pursuit, to achieve stabilizing means through acts of volition on the environment.

V. PSYCHOLOGY AND INTERFACE

Human psychology and human-machine interface levels need to correspond to machine levels in policy, fusion and resource management, as was shown in figure 2. We define them as follows.

A. Human Intentions

We base a definition of the psychology of intent on the language of US Army doctrine [23]:

Intent is the will that drives action, the tasks to be performed immediately, the missions directly attained through constant pursuit, the end states selected to guide those pursuits and the purposes that give orientation to those selections.

B. Human Awareness

We use Endsley’s [24] definition of situation awareness to refer to the human psychology aspect of awareness (our emphases):

Situation awareness is the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.

C. Human Capabilities

We use military planning language [25] as a basis for a definition:

Capability is the mission analysis; the development, analysis and decision on a course of action, the assignment of objects in volumes of time and space, and the activation of effectors.

D. Interfaces

The Common Operating Picture (COP) focus is on presentation (outputs):

• COP 3 – Show scenario assessments. e.g. Interpretative story about the scenario, or ‘stories about scenarios’.

• COP 2 – Show situation assessments. e.g. Reports about the situation, or ‘stories about situations’.

• COP 1 – Show object assessments. Presents tracks, or ‘lines on maps’.

• COP 0 – Show observables assessments. e.g. ‘dots on maps’ and signal data.

Planning and Operating Controls (POC) focus is on controls (inputs):

• POC 3 – Control Courses of Effect for scenarios. e.g. Select decisive points to achieve effect on centres of gravity.

• POC 2 – Control Courses of Action for situations. e.g. Interactive option graphs.

• POC 1 – Control Effect Objects. Assign objects in volumes of time and space. e.g. interactive scheduling and path routing.

• POC 0 – Manage Effect Controllables. e.g. The activation, deactivation and intensity control settings.

The Policy Interface (PI) focus is on interaction. Note that the intent state of a machine (unlike human intent) is fully accessible for presentation and control by humans.

• PI 3 – Show scenario intent and assign purpose. e.g. Use of recognised semantic concepts via keyboard and voice recognition, registration and checking (feedback) to ensure correct machine interpretation.

• PI 2 – Show situation ways and set end states. e.g. Virtual trees depicting hierarchies of goals.

• PI 1 – Show objective ends and set missions. e.g. hand gesture recognition to communicate directions and paths.

• PI 0 – Show stabilizing means and set tasks. e.g. machine tracking eye gaze to gauge viewer attent (with implication of low level intent) and the interpretation of gestures for pointing at objects. This makes use of intentional cues not under executive control.

VI. OVERALL C2 ARCHITECTURE

Figure 6 summarises the machine levels in the architecture.


Figure 6. Levels of fusion, policy and resource management for machines. (Fusion: S Sensors, F0 Observables Assessment, F1 Object Assessment, F2 Situation Assessment, F3 Scenario Assessment. Policy: V Volitions, P0 Stabilising Means, P1 Objective Ways, P2 Situational Ends, P3 Scenario Intent. Resource Management: E Effectors, R0 Effect Controllables, R1 Effect Objects, R2 Course of Action (situation), R3 Course of Effect (scenario).)

The interdependence between intent, capability and awareness expressed in the introduction is manifest between the fusion, resource management and policy levels, as indicated by the horizontal ‘bus’ lines in figure 6. Further, communication of all levels is necessary to deliver information from a particular form of action; e.g. for a human to attain a complete understanding of the machine’s awareness, all of levels 0, 1, 2 and 3 must be communicated to the COP interface.

Figure 7 illustrates an overall architecture for machine C2 and the human-machine interface. In the centre, the human commander interacts with the machine via a Common Operating Picture (COP) for awareness, Planning and Operating Controls (POC) for capability, and a Policy Interface (PI) for intent. Four-level models of fusion, policy and resource management are shown. Fusion, policy and resource management each have a specific form of process at each level, as outlined. We further advocate a structure with a hub at each level 2, indicating unified comprehension (fusion), unified goals (policy) and a unified course of action (resource management). Although primary information flows are illustrated as trees, links between processing elements and across levels occur as required, depending on the problem (e.g. in the cross-cueing of sensors), so the communication backbone may be generalised as a grid structure.
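The tree-with-cross-links topology can be sketched as an adjacency structure: primary tree links rooted at a level-2 hub, plus cross-links added on demand, generalising toward a grid. The node names (`F2`, `F1a`, etc.) are illustrative placeholders for processing elements.

```python
# Sketch: primary information flows form a tree rooted at the level-2 hub,
# but cross-links between elements are added as required (e.g. cross-cueing
# of sensors), so the backbone generalises to a grid.
from collections import defaultdict

links = defaultdict(set)

def link(a: str, b: str) -> None:
    """Add a bidirectional communication link between two elements."""
    links[a].add(b)
    links[b].add(a)

# Primary tree: the F2 hub (unified comprehension) aggregates two F1
# object-assessment nodes, each fed by an F0 observables node.
for parent, child in [("F2", "F1a"), ("F2", "F1b"),
                      ("F1a", "F0a"), ("F1b", "F0b")]:
    link(parent, child)

# Cross-link added on demand: one sensor-level process cues another
# directly, bypassing the tree.
link("F0a", "F0b")
```

Because links are stored symmetrically, the same structure serves both the default tree routing and any problem-driven shortcuts, which is the sense in which the backbone becomes a grid.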

VII. CONCLUSION

We have offered a blueprint for C2 automation and human integration which builds largely from the JDL architecture of the fusion community, agent research and psychology. The blueprint provides a set of guiding principles, proposes levels which reduce system complexity for design, and offers illustrative processes to manage change in intent, capability and awareness.

REFERENCES

[1] D.A. Lambert, “A blueprint for higher level fusion systems,” Information Fusion, Vol. 10, pp. 6-24, 2009.

[2] D.A. Lambert, “Ubiquitous command and control,” Proceedings IEEE Information, Decision and Control Conference, Adelaide, Australia, pp. 35-40, Feb. 1999.

[3] D.A. Lambert and J.B. Scholz, “A dialectic for network centric warfare,” 10th International Command and Control Research and Technology Symposium (ICCRTS), 2005.

[4] D.A. Lambert and J.B. Scholz, “Ubiquitous command and control,” Journal of Intelligent Decision Technologies, Vol. 1, pp. 157-173, IOS Press, 2007.

[5] D. Weyns and M.P. Georgeff, “Self-adaptation using multiagent systems,” IEEE Software, pp. 86-91, 2010.

[6] F.M.T. Brazier, J.O. Kephart, H. Van Dyke Parunak, and M.N. Huhns, “Agents and service-oriented computing for autonomic computing – a research agenda,” IEEE Internet Computing, Vol. 13, No. 3, May 2009.

[7] S.J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2nd ed., Prentice-Hall, 2003.

[8] M.E. Bratman, Intention, Plans, and Practical Reasoning, Harvard University Press, Cambridge, MA, 1987.

[9] J. Bell and Z. Huang, “Dynamic goal hierarchies”, In Proc. PRICAI’96 Workshop on Intelligent Agent Systems, Springer-Verlag, 1997.

[10] D.S. Alberts, J.J. Garstka, and F.P. Stein, Network Centric Warfare – Developing and Leveraging Information Superiority, CCRP Publication, US DoD, 1999.

[11] Defense Science Board, Chatham, R., Braddock, J., “Defense science board task force on training for future conflicts”, Washington, D.C., June 2003. http://handle.dtic.mil/100.2/ADA429010

[12] Surgeon General (2006), Final Report, Mental Health Advisory Team (MHAT) IV Operation Iraqi Freedom 05-07, 17 Nov 2006, p.42. Office of the Surgeon General Multi-National Force Iraq.

[13] D. A. Lambert and A. G. Lambert, “The legal agreement protocol”, in: E. Blasch, E. Bossé and D. A. Lambert (Eds.), “High Level Information Fusion Management and Systems Design”, Ch. 8, Artech House, Norwood MA, 2012.

[14] A.N. Steinberg, C.L. Bowman, and F.E. White, “Revisions to the JDL data fusion model” in Sensor Fusion: Architectures, Algorithms, and Applications, Proc. of SPIE, Vol. 3719, p.430, 1999.

[15] C. von Clausewitz, On War, 1832, translated by M. Howard and P. Paret, Princeton University Press, 1989.

[16] D.A. Lambert, “Situations for situation awareness”, in Proc. Fourth Int. Conf. on Data Fusion, Montreal, Canada, 2001.

[17] F.E. White, “Data fusion lexicon”, Data Fusion Panel, JDL Technical Panel Report, Naval Ocean Systems Center, San Diego, 1987.

[18] A.N. Steinberg, C.L. Bowman, “Rethinking the JDL data fusion levels” MSS National Symposium on Sensor and Data Fusion, Columbia, SC, USA, 2004.

[19] D. A. Lambert, “The state transition data fusion model”, in: E. Blasch, E. Bossé and D. A. Lambert (Eds.), “High level information fusion management and systems design”, Ch. 3, Artech House, Norwood MA, 2012.

[20] J.B. Scholz, D.E. Gossink, “A resource management blueprint for fusion and command and control,” Fusion 2012 companion paper.

[21] J.B. Scholz, G. Smith, and D.E. Gossink, “An automation policy blueprint for fusion and command and control,” Fusion 2012 companion paper.

[22] Q. Smith, “Four Teleological Orders of Human Action,” Philosophical Topics, Vol. 12, No. 3, pp. 312-335, Winter 1981.

[23] US Army, Field Manual FM-05, p. E-11, 26 March 2010.

[24] M.R. Endsley, “Toward a theory of situation awareness in dynamic systems,” Human Factors, Vol. 37, No. 1, pp. 32-64, 1995.

[25] L. Zhang, L. Falzon, M. Davies, and I. Fuss, “On relationships between key concepts of operational level planning,” 5th ICCRTS, Canberra, ACT, 2000.


Figure 7. An Overall Architecture for C2. (Legend: Fn = fusion level n, i.e. F0-F3, corresponding to the JDL model levels, with the same process structure at all levels but different representation levels; Pn = policy level n, i.e. P0-P3; Rn = resource management level n, i.e. R0-R3, with an R2 example shown in the diagram. The human interacts via Common Operating Picture levels COP 0-3, matching awareness levels A0 Sensation, A1 Perception, A2 Comprehension and A3 Projection; Planning & Operating Controls levels POC 0-3, matching capability levels C0 Activation, C1 Assignment, C2 Course of Action and C3 Mission Analysis; and Policy Interface levels PI 0-3, matching intent levels I0 Tasks, I1 Missions, I2 End States and I3 Purpose. Policy volitions span physical and metaphysical forms (divert, interdict, delay, ...), functional forms (disrupt, degrade, destroy, neutralise, operate, ...), cognitive forms (deceive, exploit, defeat, achieve, prefer, perform, ...) and social forms (deny, command, control, possess, agree/conflict, ...). The policy process assesses desire, alignment with higher intent and alignment with capability, then orders intentions for coherence, choosing the order that maximises preference while maintaining coherence. Resource management selects a preferred course minimising Σk (it,k − ak), where cj is a capability option, at is awareness at time t and it,k is the intended effect at time k, with communication between all levels.)
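Figure 7’s preferred-course criterion, Min Σk (it,k − ak) over intended effects it,k and awareness ak, can be sketched as choosing the capability option whose projected effects deviate least from what is intended. The use of absolute gaps and the numeric encoding of effects here are illustrative assumptions, not the companion paper’s formulation.

```python
# Sketch of preferred-course selection: pick the course whose projected
# awareness a_k deviates least from the intended effects i_{t,k},
# scored here (as an assumption) by sum_k |i_{t,k} - a_k|.
from typing import Dict, List

def course_cost(intended: List[float], projected: List[float]) -> float:
    """Total deviation of projected effects from intended effects."""
    return sum(abs(i - a) for i, a in zip(intended, projected))

def preferred_course(intended: List[float],
                     courses: Dict[str, List[float]]) -> str:
    """Return the course name with minimum deviation from intent."""
    return min(courses, key=lambda c: course_cost(intended, courses[c]))

# Hypothetical effect values over three time steps.
intended = [1.0, 1.0, 0.5]
courses = {"c1": [0.9, 0.8, 0.5], "c2": [0.2, 0.4, 0.1]}
```

With these hypothetical numbers, `preferred_course(intended, courses)` selects the course that tracks the intended effects most closely at every time step.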
