
NORTH ATLANTIC TREATY ORGANIZATION

SCIENCE AND TECHNOLOGY ORGANIZATION

AC/323(SET-263)TP/1090 www.sto.nato.int

STO TECHNICAL REPORT TR-SET-263

Swarm System for Intelligence Surveillance and Reconnaissance

(Système en essaim pour la surveillance et la reconnaissance intelligentes)

Final report.

Published August 2022

Distribution and Availability on Back Cover


ii STO-TR-SET-263

The NATO Science and Technology Organization

Science & Technology (S&T) in the NATO context is defined as the selective and rigorous generation and application of state-of-the-art, validated knowledge for defence and security purposes. S&T activities embrace scientific research, technology development, transition, application and field-testing, experimentation and a range of related scientific activities that include systems engineering, operational research and analysis, synthesis, integration and validation of knowledge derived through the scientific method.

In NATO, S&T is addressed using two different business models: a collaborative business model, in which NATO provides a forum where NATO Nations and partner Nations elect to use their national resources to define, conduct and promote cooperative research and information exchange; and an in-house delivery business model, in which S&T activities are conducted in a dedicated NATO executive body, having its own personnel, capabilities and infrastructure.

The mission of the NATO Science & Technology Organization (STO) is to help position the Nations’ and NATO’s S&T investments as a strategic enabler of the knowledge and technology advantage for the defence and security posture of NATO Nations and partner Nations, by conducting and promoting S&T activities that augment and leverage the capabilities and programmes of the Alliance, of the NATO Nations and the partner Nations, in support of NATO’s objectives, and contributing to NATO’s ability to enable and influence security and defence related capability development and threat mitigation in NATO Nations and partner Nations, in accordance with NATO policies.

The total spectrum of this collaborative effort is addressed by six Technical Panels who manage a wide range of scientific research activities, a Group specialising in modelling and simulation, plus a Committee dedicated to supporting the information management needs of the organization.

• AVT Applied Vehicle Technology Panel

• HFM Human Factors and Medicine Panel

• IST Information Systems Technology Panel

• NMSG NATO Modelling and Simulation Group

• SAS System Analysis and Studies Panel

• SCI Systems Concepts and Integration Panel

• SET Sensors and Electronics Technology Panel

These Panels and Group are the power-house of the collaborative model and are made up of national representatives as well as recognised world-class scientists, engineers and information specialists. In addition to providing critical technical oversight, they also provide a communication link to military users and other NATO bodies.

The scientific and technological work is carried out by Technical Teams, created under one or more of these eight bodies, for specific research activities which have a defined duration. These research activities can take a variety of forms, including Task Groups, Workshops, Symposia, Specialists’ Meetings, Lecture Series and Technical Courses.

The content of this publication has been reproduced directly from material supplied by STO or the authors.

Published August 2022

Copyright © STO/NATO 2022 All Rights Reserved

ISBN 978-92-837-2407-0

Single copies of this publication or of a part of it may be made for individual use only by those organisations or individuals in NATO Nations defined by the limitation notice printed on the front cover. The approval of the STO Information Management Systems Branch is required for more than one copy to be made or an extract included in another publication. Requests to do so should be sent to the address on the back cover.


Table of Contents


List of Figures ix

List of Tables xi

List of Acronyms xii

Glossary xv

SET-263 Membership List xvi

Chapter 1 – Introduction 1-1
1.1 Description 1-1
  1.1.1 Context 1-1
  1.1.2 Goals 1-1
1.2 Scope 1-2
1.3 Identification 1-3
1.4 References 1-4

Chapter 2 – Use Case Vignettes 2-1
2.1 Swarming Operations 2-1
  2.1.1 Definition and Concepts 2-1
  2.1.2 Swarming Operations Tenets 2-1
    2.1.2.1 The Sustainable Pulsing 2-1
    2.1.2.2 Command and Control Delegation 2-2
    2.1.2.3 Stealthy Ubiquity 2-2
    2.1.2.4 Ubiquitous Sensing 2-2
  2.1.3 Swarming Operational Scenario 2-2
2.2 Swarm-Squad Symbiotic Teaming 2-3
2.3 Scenario “Ubiquitous Sensing” 2-4
  2.3.1 Vignette “Build-Up” 2-6
  2.3.2 Vignette “Engagement” 2-7
2.4 References 2-9

Chapter 3 – Capability 3-1
3.1 Capability Vision 3-1
  3.1.1 Vision Statement 3-1
  3.1.2 Vision Capability Goals 3-1
3.2 Forces Capability 3-2
  3.2.1 Capability Group 1: Command, Control, Communications, Computing (C4) 3-2
    3.2.1.1 Multimodal Human RAS Interaction (HRI) 3-2
    3.2.1.2 Shared Awareness 3-2
    3.2.1.3 Information Sharing 3-2
    3.2.1.4 Fratricide Situation Prevention 3-2
    3.2.1.5 Human – Robotic and Autonomous System Teaming 3-2
    3.2.1.6 EW Resilient Robust Navigation Systems 3-2
  3.2.2 Capability Group 2: Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) 3-3
    3.2.2.1 Multimodal Enriched Information 3-3
    3.2.2.2 Pervasive Sensing 3-3
    3.2.2.3 Focused Situational Awareness 3-3
    3.2.2.4 Robust Tactical Network 3-3
  3.2.3 Capability Group 3: Effective Engagement 3-3
    3.2.3.1 Target Hand-Over Support 3-4
    3.2.3.2 UxV Sensor-Shooter Integration 3-4
  3.2.4 Capability Group 5: Protection and Survivability 3-4
    3.2.4.1 Counter-Drone Protection 3-4
    3.2.4.2 Dirty, Dusty, Dangerous Environment Protection 3-4
    3.2.4.3 Protection from Remote Threats 3-4
  3.2.5 Capability Group 6: Mobility 3-4
    3.2.5.1 Forces and Material Mobility 3-5
3.3 References 3-5

Chapter 4 – Operational Activity 4-1
4.1 Operational Needs 4-1
  4.1.1 Situational Awareness 4-1
  4.1.2 Friend or Foe Identification 4-1
  4.1.3 Protection from Threat 4-1
  4.1.4 Self-Synchronized Operations 4-2
    4.1.4.1 Forces Agility 4-3
    4.1.4.2 Multi-National Interoperability 4-4
4.2 Generic Tasks for SS4ISR during a Mission 4-5
  4.2.1 Information Gathering 4-5
  4.2.2 Networking 4-5
  4.2.3 Surveillance 4-6
  4.2.4 Self-Protection Electronic Measures 4-6
  4.2.5 Explosives Ordnance Detection 4-7
  4.2.6 Airspace Control 4-7
4.3 Tasks for SS4ISR during Battle/Engagement 4-7
  4.3.1 Sense and Response 4-7
  4.3.2 Maintain Area Dominance 4-7
  4.3.3 SS4ISR Control Handover 4-7
4.4 References 4-7

Chapter 5 – Service View 5-1
5.1 Service View Description 5-1
5.2 Detection and Tracking 5-2
  5.2.1 Swarm Detection and Tracking as a Search and Task Allocation Problem 5-2
  5.2.2 Swarm Communication 5-3
    5.2.2.1 Full System Communication 5-3
    5.2.2.2 Limited System Communication or Stealth Operation 5-3
  5.2.3 Swarm Detection and Tracking as a Service 5-3
    5.2.3.1 User Specification 5-4
    5.2.3.2 Swarm System Operation Feedback 5-4
    5.2.3.3 User Operation Feedback 5-4
    5.2.3.4 System Simulation / Shared Cognition 5-4
5.3 Human-Swarm Interaction 5-6
  5.3.1 Human-Swarm Interaction Challenges 5-6
  5.3.2 State of Art 5-6
  5.3.3 Symbiotic Human-Swarm Teaming 5-6
    5.3.3.1 Overall 5-6
    5.3.3.2 The Human-Swarm Teaming Vision 5-7
    5.3.3.3 An Innovative Paradigm: Symbiotic Human-Swarm Teaming 5-7
    5.3.3.4 Symbiotic Teaming: A Possible Scenario 5-8
    5.3.3.5 Flexible Autonomy 5-9
    5.3.3.6 Shared Situation Awareness 5-16
  5.3.4 Human-Swarm Interaction Services Description 5-18
    5.3.4.1 Swarm Management Service 5-19
    5.3.4.2 Swarm Mission Control Services 5-20
    5.3.4.3 Swarm Payload Control Service 5-20
5.4 Swarm Navigation and Control as a Service 5-21
  5.4.1 Preamble/Assumptions 5-21
  5.4.2 Task Assignment 5-21
    5.4.2.1 Goal Assignment and Trajectory Planning 5-21
    5.4.2.2 Search and GNC 5-21
    5.4.2.3 Tracking and GNC 5-22
  5.4.3 Swarm Motion Planning Services 5-22
    5.4.3.1 Cooperative Mission Planning 5-22
    5.4.3.2 Path Planning 5-22
    5.4.3.3 Velocity Planning 5-22
    5.4.3.4 Trajectory Planning 5-22
    5.4.3.5 Fast Planning and Replanning 5-22
  5.4.4 Modes of Operation 5-23
    5.4.4.1 Centralized Mode 5-23
    5.4.4.2 Decentralized Mode 5-24
    5.4.4.3 Distributed Mode 5-25
  5.4.5 Collision Avoidance 5-25
  5.4.6 Trajectory Following and Disturbance Rejection 5-25
5.5 Robot-Robot Interaction 5-25
  5.5.1 Swarm-Centric System Organization: Concepts and Architecture 5-25
    5.5.1.1 RAS Interaction 5-25
    5.5.1.2 RAS-RAS Interaction Capability Levels 5-26
    5.5.1.3 RAS-RAS Interaction Parameters 5-27
    5.5.1.4 Swarm Cooperation 5-27
    5.5.1.5 Swarm Perception 5-29
  5.5.2 Localization and Mapping in Swarm Systems 5-30
5.6 Data Exchange Services 5-33
  5.6.1 Data Exchange Services for Swarm-Centric Systems and Operations 5-33
    5.6.1.1 Swarm Communication Needs 5-33
    5.6.1.2 Quality of Networking 5-33
    5.6.1.3 Technological Challenges 5-34
    5.6.1.4 The Information-Centric Networking 5-34
    5.6.1.5 The Loose Coupling: The Publish-Subscribe Protocol 5-35
  5.6.2 Data Exchange Service Description 5-35
    5.6.2.1 Data Delivery 5-35
    5.6.2.2 Streaming 5-36
    5.6.2.3 File Transfer 5-36
5.7 Networking 5-37
  5.7.1 Swarm Networking Considerations 5-37
  5.7.2 Interference and Coexistence Management 5-38
5.8 References 5-38

Chapter 6 – System View 6-1
6.1 System View Description 6-1
6.2 Detection and Tracking 6-1
  6.2.1 Hardware Architecture 6-1
    6.2.1.1 Communication System 6-2
    6.2.1.2 Platform System 6-2
    6.2.1.3 Sensor System 6-3
    6.2.1.4 Companion Computer 6-3
  6.2.2 Software Architecture 6-3
    6.2.2.1 Perception Autonomy 6-3
    6.2.2.2 World Model 6-4
    6.2.2.3 Decision Autonomy 6-4
6.3 Human-Swarm Interaction 6-4
  6.3.1 System Domains 6-4
  6.3.2 Human-Swarm Interaction in the Small Tactical Unit Domain 6-5
  6.3.3 Human-Swarm Interaction in the Inter-Platform Domain 6-6
6.4 Swarm Control and Navigation 6-7
  6.4.1 Adaptive Planning Module (APM) 6-8
    6.4.1.1 Sensor 6-8
    6.4.1.2 World Model 6-8
    6.4.1.3 Graduated Optimization: Global and Local Planners 6-8
    6.4.1.4 Collision Avoidance 6-8
    6.4.1.5 Tracking Error Bounds 6-8
6.5 Robot-Robot Interaction 6-9
  6.5.1 Cooperative Robot Integration Platform 6-9
    6.5.1.1 Scope 6-9
    6.5.1.2 Context 6-9
    6.5.1.3 CRIP Concepts 6-9
    6.5.1.4 CRIP Services 6-10
    6.5.1.5 Deployment Layer 6-11
    6.5.1.6 Data Exchange Layer 6-11
    6.5.1.7 Cooperation Layer 6-12
    6.5.1.8 Cooperation Layer Modules 6-12
  6.5.2 Localization and Mapping in Swarm Systems 6-14
    6.5.2.1 Hardware Architecture 6-14
    6.5.2.2 Software Architecture 6-15
6.6 Data Exchange Services 6-17
  6.6.1 Swarm Data Exchange Services 6-17
  6.6.2 Coalition Domain 6-19
6.7 Networking 6-21
  6.7.1 Swarm Networking 6-21
    6.7.1.1 Centralized Architecture 6-21
    6.7.1.2 Decentralized Architecture 6-21
    6.7.1.3 Single Group Networks 6-22
    6.7.1.4 Multi-Group Networks 6-24
    6.7.1.5 Multi-Layer Network 6-24
6.8 References 6-24

Chapter 7 – Technical View 7-1
7.1 Standards and Technologies 7-1
  7.1.1 Human-Swarm Interaction 7-1
    7.1.1.1 Command and Control 7-1
    7.1.1.2 Media 7-2
  7.1.2 Robotic Platform and Services 7-5
    7.1.2.1 Joint Architecture for Unmanned Systems 7-6
    7.1.2.2 Eurobotics Multi-Annual Roadmap (MAR) 7-8
    7.1.2.3 ROS 7-10
  7.1.3 Data Exchange Services 7-10
    7.1.3.1 Data Delivery Service Protocols 7-10
    7.1.3.2 Streaming Protocols 7-12
7.2 Algorithms 7-15
  7.2.1 Detection and Tracking Algorithms 7-15
  7.2.2 Swarm Control Algorithms 7-16
  7.2.3 Swarm Networking Algorithms 7-17
  7.2.4 Localization and Mapping in Swarm Systems Algorithms 7-18
7.3 References 7-22


Chapter 8 – Interoperability 8-1
8.1 Interoperable Open Architecture 8-1
  8.1.1 Introduction 8-1
  8.1.2 Open Architecture 8-1
  8.1.3 Interoperable Open Architecture Definition 8-1
  8.1.4 System-Level Interoperability 8-1
  8.1.5 An IOA Implementation: The NATO Generic Vehicle Architecture 8-2
  8.1.6 Design Guidelines for Interoperability 8-2
  8.1.7 Expected Benefits 8-3
8.2 SS4ISR Data Modeling Approach 8-4
  8.2.1 SS4ISR Data Model Rationale 8-4
    8.2.1.1 SS4ISR Data Model Concepts 8-4
    8.2.1.2 How Data Model Improves Interoperability 8-5
  8.2.2 Data Categories 8-6
  8.2.3 SS4ISR Data Categories vs NGVA Data Model 8-6
  8.2.4 Data Modeling Process Guidelines 8-8
    8.2.4.1 Platform Independence 8-9
    8.2.4.2 Model Translation 8-12
    8.2.4.3 The System Data Model 8-12
    8.2.4.4 The SS4ISR Data Model Process 8-12
8.3 References 8-14

Chapter 9 – Relationships Matrixes 9-1
9.1 Capability Goals vs Capability Mapping 9-1
9.2 Capability vs Operational Activity Mapping 9-2
9.3 Capability vs Services Mapping 9-3
9.4 Capability vs Operational Scenario Mapping 9-5
9.5 “Swarm System” Node to System Node Relationships 9-10


List of Figures


Figure 2-1 A Swarming Operational Scenario 2-3
Figure 2-2 Symbiotic Teaming: A Possible Scenario 2-4
Figure 2-3 Swarm ISR Scenario 2-5
Figure 2-4 “Build-Up” Vignette 2-6
Figure 2-5 Updated Information Change Position of Allied Force and Sensor Units 2-7
Figure 2-6 “Engagement” Vignette 2-8
Figure 2-7 Compensating and Force Loading Sensors into the Battlefield 2-8

Figure 4-1 The Network-Centric Operations Conceptual Framework 4-2
Figure 4-2 Key Network-Centric Warfare Domains 4-4
Figure 4-3 NCW Maturity Model 4-5

Figure 5-1 SS4ISR Service Overall Dependencies 5-1
Figure 5-2 Swarm Search and Task Allocation 5-3
Figure 5-3 Diagram Illustrating High-Level Swarm Detection and Tracking as a Service 5-5
Figure 5-4 Swarm Detection and Tracking as a Service with an Added Simulation/Shared Cognition Component for Estimation of Consequences of Updating System Parameters and as a Dynamical Database for Current Cognition 5-5
Figure 5-5 The Human-Swarm Teaming Vision 5-8
Figure 5-6 Symbiotic Teaming: A Possible Scenario 5-9
Figure 5-7 A Possible Scenario of Flexible Autonomy 5-10
Figure 5-8 Autonomy Use Shifts Dynamically Based on Situational Factors 5-10
Figure 5-9 Human-Swarm Interaction Services 5-19
Figure 5-10 Modes of Operation 5-23
Figure 5-11 Trajectory Snapshots of 32 Agents Moving in a 10 m x 10 m Space 5-24
Figure 5-12 Trajectory Snapshots of 8 Heterogeneous Agents Navigating a Maze 5-24
Figure 5-13 Swarm Coordination Cycle 5-28
Figure 5-14 Cooperative Mission Task: Threat Control 5-28
Figure 5-15 Task Failure Detection and Recovery 5-29
Figure 5-16 Example Multi-Agent SLAM Scenario 5-31
Figure 5-17 Example Centralized Collaborative SLAM Architecture 5-32
Figure 5-18 Example Distributed SLAM Architecture 5-33

Figure 6-1 SS4ISR Service Design Solution 6-1
Figure 6-2 Logical Architecture for the Hardware on a Swarm-Compliant Robot 6-2
Figure 6-3 Human-Swarm Interaction for STU Domain 6-5
Figure 6-4 Human-Swarm Interaction for Inter-Platform Domain 6-6
Figure 6-5 Adaptive Planning Module 6-7
Figure 6-6 Cooperative RAS Architecture 6-10
Figure 6-7 Hardware Architecture for Collaborative SLAM System 6-15
Figure 6-8 Software Architecture for Collaborative SLAM 6-16
Figure 6-9 Swarm Data Exchange Services Architecture 6-18
Figure 6-10 Data Exchange Services in Coalition Domain 6-20
Figure 6-11 Centralized Architecture 6-21
Figure 6-12 Decentralized Architecture 6-22
Figure 6-13 Ring Architecture 6-22
Figure 6-14 Star Architecture 6-23
Figure 6-15 Mesh Architecture 6-23

Figure 7-1 H.323 Architecture 7-13
Figure 7-2 Multi-Agent Trajectory Planning Architecture 7-16
Figure 7-3 Experimental Results of Ref. [56] with 4 UAVs 7-19
Figure 7-4 Hand-Held Camera System with APRIL Tag Utilized in JORB-SLAM 7-20
Figure 7-5 Experimental Setup and Results of DOOR-SLAM 7-21

Figure 8-1 SS4ISR Application Layered Architecture 8-5
Figure 8-2 Data Modelling Steps and Processors 8-10
Figure 8-3 Example Build Set Components: Domains 8-13
Figure 8-4 A Possible Data Modelling Process for the SS4ISR Data Model 8-14


List of Tables


Table 7-1 JAUS Specification Documents 7-6
Table 7-2 JAUS Service Sets 7-7
Table 7-3 JAUS Relevant Reports 7-7
Table 7-4 JAUS Future Service Sets 7-8

Table 9-1 Capability Goals to Capability 9-1
Table 9-2 Capability to Operational Activity Mapping 9-2
Table 9-3 Capability to Services Mapping 9-3
Table 9-4 Capability to Operational Scenarios Mapping 9-6
Table 9-5 “Swarm System” Node to System Node Mapping 9-10


List of Acronyms

A2AD Anti-Access/Area Denial
ACP Autonomous Cooperating Platform
AI Artificial Intelligence
APCs Armoured Personnel Carriers
API Application Programming Interface
APM Adaptive Planning Module
AV Air Vehicle
BMS Battlefield Management System
C2 Command and Control
C4 Command, Control, Communications, Computers
C4ISR Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance
CBRNE Chemical, Biological, Radiological, Nuclear, high-yield Explosive
CCI Command and Control Interface
CCISM Command and Control Interface Specific Module
CONOPS Concept of Operations
COP Common Operational Picture
C-RAS Cooperative RAS
CRIP Cooperative Robotic Autonomous System Integration Platform
CRIP-CL Cooperative Robotic Autonomous System Integration Platform-Cooperation Layer
CROP Common Relevant Operational Picture
CUCS Core UCS
DDS Data Distribution Service for Real-Time Systems
DLI Data Link Interface
DoS Denial of Service
EW Electronic Warfare
FCPS Fault, Configuration, Performance, Security
FFI Friend or Foe Identification
FTP File Transfer Protocol
GMTIF Ground Moving Target Indicator Format
GPA Government Procurement Agencies
HCI Human Computer Interaction
HF Human Factors
HRR High Range Resolution
HSI Human-Swarm Interaction
HTTP HyperText Transfer Protocol
HTTPS HTTP over TLS
ICN Information Centric Networking
IE Information Element
IFVs Infantry Fighting Vehicles
ILS Integrated Logistics Support
IOA Interoperable Open Architecture
IoT Internet of Things
IP Internet Protocol
IRTF Internet Research Task Force
ISR Intelligence, Surveillance and Reconnaissance
ISTAR Intelligence, Surveillance, Target Acquisition and Reconnaissance
LDM Land Data Model
LOI Level of Interoperability
LS Level-Set
MAR Multi-Annual Roadmap
MAV Micro Air Vehicle
MI Motion Imagery
MQTT Message Queue Telemetry Transport
MTP Multi-agent Trajectory Planner
NCO Network Centric Operations
NCW Network Centric Warfare
NDN Named Data Networking
NGVA NATO Generic Vehicle Architecture
NIIA NATO ISR Interoperability Architecture
NSIF NATO Secondary Imagery Format
OA Open Architecture
OMG Object Management Group
PIM Platform Independent Model
PS Pseudospectral
PSI Platform Specific Implementation
PSM Platform Specific Model
PTZ Pan, Tilt, Zoom
QoS Quality of Service
RAS Robotic and Autonomous System
RSTA Reconnaissance, Surveillance, and Target Acquisition
RTP Real-Time Transport Protocol
RTSP Real-Time Streaming Protocol
SA Situation Awareness
SDD System Data Dictionary
SDM System Data Model
SDP Session Description Protocol
SEU Swarm Elementary Unit
SFTP Secure File Transfer Protocol
SIP Session Initiation Protocol
SIs System Integrators
SoC System on Chip
SoS System of Systems
SoSA System of Systems Architecture
SS4ISR Swarm-centric Systems for ISR
SSB System Software Bus
SSDM Swarm-centric Systems for ISR System Data Model
STU Small Tactical Unit
SW Software
SWaP Space, Weight, and Power
TEB Tracking Error Bound
UAS Unmanned Aerial System
UAV Unmanned Aerial Vehicle
UGV Unmanned Ground Vehicle
UML Unified Modeling Language
UxV Generic Unmanned Vehicle
VSM Vehicle Specific Module


Glossary

Cooperative RAS deployment context

With the term deployment context, we refer to the given hardware, software, and external resources with which a Cooperative RAS interacts (sensors and actuators, a network, a database, etc.).

Mechanized Infantry Infantry equipped with Armoured Personnel Carriers (APCs) or Infantry Fighting Vehicles (IFVs) for transport and combat.

Small Tactical Unit An organized set of dismounted soldiers, which may include soldiers with different roles/specialist equipment and also shared equipment. Each STU has a Leader Role, who acts as front-end with other operational nodes such as vehicles, or other STUs.

Surveillance Surveillance is one of the key aspects in a mission. It is the continuous systematic observation of the surrounding environment with the purpose of detecting possible threats.

Surveillance is also a key part of safeguarding. The company safeguards itself during a mission at every location, at all times and in every situation. This must be performed without a special order within its allocated area.

World-of-Interest A world-of-interest is application-specific and refers to the part of the world (in cyber space, the physical world) that is relevant to the mission application and its domain.


SET-263 Membership List

CHAIR

Dr. Francesco FEDI* Leonardo SpA ITALY Email: [email protected]

MEMBERS

Mr. Guven CETINKAYA*1 ASELSAN A.Ş TURKEY Email: [email protected]

Dr. Eugene CHABOT* Naval Undersea Warfare Center UNITED STATES Email: [email protected]

Dr. Ayodeji (Deji) COKER US Office of Naval Research Global UNITED STATES Email: [email protected]

Dr. Katia ESTABRIDIS NAWCWD UNITED STATES Email: [email protected]

Ms. Mekisha MARSHALL* Office of the Director of National Intelligence (ODNI) UNITED STATES Email: [email protected]

* Contributing Author or Editor

Dr. Hans Jonas MOEN* Norwegian Defence Research Establishment (FFI) NORWAY Email: [email protected]

Mr. Ferdinand PETERS* Defence Materiel Organisation NETHERLANDS Email: [email protected]

Dr. Aleksander SIMONSEN* Norwegian Defence Research Establishment (FFI) NORWAY Email: [email protected]

Lt. Bugra TURAN Turkish Naval Forces Command TURKEY Email: [email protected]

Mr. Burak YENIGUN* ASELSAN A.Ş TURKEY Email: [email protected]


PANEL/GROUP MENTOR

Dr. Massimiliano DISPENZA Leonardo SpA ITALY Email: [email protected]



Swarm System for Intelligence Surveillance and Reconnaissance

(STO-TR-SET-263)

Executive Summary

Future NATO Joint Forces will incorporate autonomous and semi-autonomous ground, aerial and sea platforms to improve the effectiveness and agility of Forces. These autonomy-enabled systems will deploy as force multipliers at all echelons from the squad to the brigade combat teams. They will help commanders develop and maintain situational understanding by providing persistent surveillance and reconnaissance across a wider area and for extended durations in areas inaccessible by human operators. Swarming robots/sensors can provide a collaborative, multi-robot/sensor system that will provide desired collective behaviors to realize systems that can cover these larger areas, share information, and provide advanced behaviors not realizable by individual systems.

The RTG SET-263 “Swarm System for Intelligence Surveillance and Reconnaissance” analyzed operational and system issues of swarm systems which could facilitate their integration into current battlefield tactical systems from operational, system, and technological points of view. This final report provides a High Level Reference Architecture for Swarm-centric Systems for ISR (SS4ISR) which integrates and extends the outcomes of the previous two years of the SET-263 Research Study. The reference architecture addresses:

1) Operational issues, in terms of relevant operational scenarios described via vignettes, key capability goals and the set of capabilities which support each of them, and relevant SS4ISR operational activities which relate to each capability;

2) System issues, in terms of key system services provided by SS4ISR and the set of system nodes which support the system services;

3) Technologies, in terms of current and foreseen standards and algorithms to achieve the expected system capabilities; and

4) The system-level interoperability design guidelines for the adoption of swarm systems in joint/multinational coalitions and their integration with legacy systems.

The document also provides the main relationships between operational and system issues via a set of relationship matrixes, which provide the following mappings:

1) Capability Goals vs Capability Mapping;

2) Capability vs Operational Activity Mapping;

3) Capability vs Services Mapping; and

4) “Swarm System” Nodes to System Nodes Relationships.

The SET-263 Research Study addressed the following research topics: Detection and Tracking, which analyses the adoption of swarm systems for the detection and tracking of areas of interest; Human-Swarm Interaction, which identifies capabilities and services for symbiotic teaming between the swarm and the human operator(s); Swarm Control and Navigation, which analyses configurations and modes of operation with the end goal of a solution for dynamic and uncertain environments, where swarms must overcome many challenges, including fast planning/re-planning and resilience to pop-up threats, which are fundamental requirements for mission success; Robot-Robot Interaction, which provides a multi-agent system design based on network-centric, autonomous decision-making paradigms as an emerging design approach to Robotic and Autonomous Systems (RAS); Localization and Mapping in Swarm Systems, which addresses the adoption of Simultaneous Localization and Mapping capabilities for swarm systems; Data Exchange Services, which analyses the adoption of an information-centric architecture to support data exchange in swarm systems; and Networking, which addresses network architectures and protocols for swarm systems.


Système en essaim pour la surveillance et la reconnaissance intelligentes

(STO-TR-SET-263)

Synthèse

Les futures forces conjointes de l’OTAN intégreront des plateformes terrestres, aériennes et maritimes autonomes et semi-autonomes pour améliorer l’efficacité ainsi que la souplesse des forces. Ces systèmes autonomes se déploieront en tant que multiplicateurs de force à tous les échelons, de l'escadron aux équipes de combat des brigades. Ils aideront les commandants à développer et à maintenir une compréhension de la situation tout en assurant une surveillance ainsi qu'une reconnaissance constantes dans une zone plus large et pour des durées prolongées dans des zones inaccessibles aux opérateurs humains. Les robots/capteurs en essaim peuvent fournir un système collaboratif, un multirobot/capteur qui fournira les comportements collectifs souhaités pour réaliser des systèmes qui peuvent couvrir ces zones plus vastes, partager des informations et fournir des comportements évolués non réalisables par des systèmes individuels.

Le RTG SET-263 « Système en essaim pour la surveillance et la reconnaissance » a analysé les problèmes opérationnels et de systèmes en essaim afin de faciliter leur intégration dans les systèmes tactiques actuels du champ de bataille, tant du point de vue opérationnel que du point de vue du système et de la technologie. Ce rapport final fournit une architecture de référence de haut niveau pour les systèmes centrés en essaim pour ISR (SS4ISR) qui intègre et prolonge les résultats des deux années précédentes de l’étude de recherche SET-263. L’architecture de référence traite aussi bien :

1) les Problèmes opérationnels, en termes de scénarios opérationnels pertinents décrits via des vignettes, les principaux objectifs de capacité et l’ensemble des capacités qui soutiennent chacun d’entre eux et les activités opérationnelles SS4ISR pertinentes qui se rapportent à chaque capacité ;

2) les Questions relatives au système, en termes de services système clés fournis par SS4ISR, l’ensemble de nœuds système qui supportent les services système ;

3) les Technologies, en termes de normes et d’algorithmes actuels et prévus pour atteindre les capacités attendues du système ; et

4) les lignes directrices de conception de l’Interopérabilité au niveau du système pour l’adoption du système en essaim dans la coalition conjointe / multinationale et leur intégration aux systèmes existants.

Le document fournit également les principales relations entre les questions opérationnelles et celles liées au système via un ensemble de matrices relationnelles, qui prévoient la cartographie suivante :

1) Objectifs de capacité vs Cartographie de capacité ;

2) Cartographie des capacités par rapport aux activités opérationnelles ;

3) Cartographie des capacités par rapport aux services ; et

4) Relations entre les nœuds du système et les nœuds du système en essaim.

L’étude de recherche SET-263 a abordé les sujets de recherche suivants : Détection et suivi, qui analyse l’adoption d’un système en essaim pour la détection et le suivi du domaine d’intérêt ; Interaction homme-essaim, qui a identifié les capacités et les services pour une association symbiotique entre l’essaim et les opérateurs humains ; Contrôle en essaim et navigation, qui analyse les configurations et les modes d’exploitation avec l’objectif final d’une solution pour traiter les environnements dynamiques et incertains où les essaims doivent surmonter de nombreux défis, y compris la planification / replanification rapide et la résilience aux menaces éphémères, qui sont des exigences fondamentales pour la réussite de la mission ; Interaction robot-robot, qui fournit une conception de système à agents multiples basée sur des paradigmes de prise de décision autonomes et centrée sur le réseau comme approche de conception émergente des Systèmes robotiques et autonomes (RAS) ; Localisation et cartographie dans les systèmes en essaim, qui traite l’adoption de la localisation simultanée et de la capacité de cartographie pour le système en essaim ; Services d’échange de données, qui analyse l’adoption d’une architecture centrée sur l’information pour soutenir l’échange de données dans les systèmes en essaim ; et Réseaux, qui traite les architectures réseau et les protocoles pour les systèmes en essaim.


Chapter 1 – INTRODUCTION

1.1 DESCRIPTION

1.1.1 Context

Future NATO Joint Forces will incorporate autonomous and semi-autonomous ground, aerial and sea platforms to improve the effectiveness and agility of Forces. These autonomy-enabled systems will deploy as force multipliers at all echelons from the squad to the brigade combat teams [1]. They will help commanders develop and maintain situational understanding by providing persistent surveillance and reconnaissance across a wider area and for extended durations in areas inaccessible by human operators [1]. Swarming robots/sensors can provide a collaborative, multi-robot/sensor system that will provide desired collective behaviors to realize systems that can cover these larger areas, share information, and provide advanced behaviors not realizable by individual systems [2]. The ability to scale the number of platforms from a few, to tens, to hundreds and to adopt swarm-centric behaviors will improve NATO Forces’ ability to: 1) Establish and maintain superiority in the battlefield; and 2) Prevent an enemy from responding effectively. The integration of the Forces with these swarm-centric systems will be a key requirement to realize and maintain tactical superiority and operational effectiveness. Symbiotic human-swarm teams [3] will allow forces to effectively understand, adapt, fight and win in uncertain scenarios and conditions.

Distributed collaborative autonomous systems, teamed with Soldiers, offer a tactical offset strategy: a means to operate in complex urban and other domains at high tempo, with significantly reduced risk and fewer Soldiers [4]. Integration of intelligent systems into the future force will enable key capabilities such as: increased situational awareness in complex terrain; resilient operations in the face of adversary contested environments; increased standoff distances and reach into areas where manned systems cannot go; increased operational safety; and improved reaction time for commanders in contested urban environments, forward operating bases, and convoy operations, by improving soldiers’ and commanders’ understanding of enemy formations and allowing for an early reaction via either conventional long-distance weapons or specific armed swarm elements. The adoption of swarm-centric behaviors will further improve the effectiveness of intelligent systems by allowing large numbers of systems to work and move in coordinated ways with reduced communications and control requirements. It will also enable large numbers of systems to operate in dispersed fashion and then concentrate at specific areas to overwhelm potential threats. The integration of intelligent systems and swarming capabilities will expand the time and space at which NATO Forces can operate, and improve maneuverability and the ability to overcome obstacles in Anti-Access/Area Denial (A2AD) environments, by providing commanders with the ability to take operational risks previously unimaginable with solely manned formations [2]. With less human exposure to hazards, the risks inherent in deception operations, penetrations behind enemy defences, and exploitation and pursuit operations become less costly, giving commanders greater options and more reliable freedom of maneuver [2].
In addition to these Army-based applications, scenarios are also seen in harbor protection, maritime surveillance, emitter localization, and Anti-Submarine Warfare (ASW) surveillance, where the adoption of swarms of Underwater or Surface Unmanned Systems at the NATO level could: 1) Enable, with a certain degree of permanence, the detection of the transit or presence of submarines; and 2) Allow for shared tracking among NATO nations, reducing uncertainty and the loss of contacts. This could both improve the safety of the NATO space and act as a national dissuasion force.

1.1.2 Goals

Robotic and Autonomous Systems (RAS) are increasingly important to ensuring freedom of maneuver and mission accomplishment with the least possible risk to Soldiers. Incorporating swarms of autonomous and semi-autonomous ground, aerial and sea platforms into future NATO Joint Forces will improve the safety, effectiveness and agility of Forces.


Swarm Systems for ISR affect the following operational capabilities:

• To gather persistent ISR Data about adversary actions;

• Force Protection and Interdiction; and

• Anti-Access/Area Denial (A2AD) Operations.

The adoption of Swarm Systems results in added value for ISR operations, as reported below:

• ISR Operations Current State:

• Deployment of entities with limited observation capabilities which provide quasi-static data; and

• Inefficient operator-to-robot control ratio.

• Swarm System Added Value for ISR Operations:

• Deployment of artificial intelligence driven swarm systems capable of:

i) Ad hoc, autonomous observation;

ii) Optimized wide-area coverage; and

iii) Dynamic situational awareness.

• Dynamic determination of relevant objects of interest in order to provide timely engagement information, with high precision and fidelity.

• Optimized Human-machine interaction to reduce operator workload and to enhance efficiency.

• Reduced logistic footprint, using future low-SWaP (Size, Weight, and Power) UASs with relevant onboard processing and a performant multi-sensor suite.

1.2 SCOPE

This document describes the high-level reference architecture of a Swarm-centric System for ISR (SS4ISR). The reference architecture addresses:

• Operational issues, in terms of:

• Relevant operational scenarios described via vignettes, see Chapter 2.

• Key capability goals and the set of capabilities which support each of them, see Chapter 3.

• Relevant SS4ISR operational activities which relate to each capability, see Chapter 4.

• System issues, in terms of:

• Key system services provided by the SS4ISR, see Chapter 5.

• The set of system nodes, and related components, which support the system services, see Chapter 6.

• Key technologies and algorithms to achieve the expected system capabilities, see Chapter 7.

• System-level interoperability, see Chapter 8.

INTRODUCTION

STO-TR-SET-263 1 - 3

The document also provides a set of relationship matrixes, which provide the following mappings among key architecture elements:

• Capability Goals vs Capability Mapping.

• Capability vs Operational Activity Mapping.

• Capability vs Services Mapping.

• “Swarm System” Nodes to System Nodes Relationships.
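For illustration only, mappings like these are naturally represented in software as adjacency structures. The sketch below uses hypothetical element names (not taken from this report) to show how a goal-to-capability-to-service traversal could be derived from two such matrixes:

```python
# Sketch: relationship matrixes as adjacency maps.
# All element names are hypothetical placeholders, not from the report.

goal_to_capability = {
    "Persistent ISR": {"Pervasive Sensing", "Information Sharing"},
    "Force Protection": {"Counter-Drone Protection"},
}

capability_to_service = {
    "Pervasive Sensing": {"Data Exchange", "Localization"},
    "Information Sharing": {"Data Exchange", "Networking"},
    "Counter-Drone Protection": {"Swarm Control"},
}

def services_for_goal(goal: str) -> set[str]:
    """Traverse goal -> capability -> service to find supporting services."""
    services: set[str] = set()
    for capability in goal_to_capability.get(goal, set()):
        services |= capability_to_service.get(capability, set())
    return services

print(sorted(services_for_goal("Persistent ISR")))
# ['Data Exchange', 'Localization', 'Networking']
```

Composing the two maps this way keeps each matrix independently editable while derived mappings (e.g., goal-to-service) stay consistent.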

The document is organized as follows:

• Chapter 1 – Introduction, the basic information about the document to improve its readability.

• Chapter 2 – Use Case Vignettes, which describes a set of relevant operational scenarios via vignettes.

• Chapter 3 – Capability, which describes both the Capability Goals and the set of Forces Capabilities which support them. A matrix which defines the relationships between Capability Goals and Forces Capabilities is also provided.

• Chapter 4 – Operational Activity, which describes the key SS4ISR operational activities for the set of Capabilities identified in Section 3.2.

• Chapter 5 – Service View, which describes a set of relevant services the system provides.

• Chapter 6 – System View, which describes possible design solutions to implement each service.

• Chapter 7 – Technical View, which identifies a set of Technologies and/or Algorithms considered critical for a given service.

• Chapter 8 – Interoperability, which describes a possible approach to achieve system-level interoperability as the basis for swarm system adaptability and evolutionary development.

• Chapter 9 – Relationships Matrixes, which provides the mapping among the key elements of this architecture, namely Capability Goals, Capabilities, Operational Scenarios, Services, and System Nodes.

1.3 IDENTIFICATION

This document represents the deliverable D3: SET-263: Final Report of the RTG SET-263.

SET-263 team members from the following organizations edited the document:

• Leonardo SpA, ITA.

• ASELSAN, TUR.

• Norwegian Defence Research Establishment (FFI), NOR.

• Naval Air Warfare Center, Weapons Division (NAWCWD), USA.

• US National Maritime Intelligence Integration Office (NMIO), USA.

SET-263 team members from the following organizations revised the document:

• Naval Undersea Warfare Center (NUWC) Division, USA.

• Defence Materiel Organisation (DMO), NLD.


1.4 REFERENCES

[1] The U.S. Army Operating Concept, “Win in a Complex World,” 31 October 2014.

[2] The U.S. Army Robotic and Autonomous Systems Strategy, March 2017.

[3] Zhou, X., Wang, W., Wang, T., Li, X. and Li, Z., “A Research Framework on Mission Planning of the UAV Swarm,” 2017 12th System of Systems Engineering Conference (SoSE), Waikoloa, HI, 2017, pp. 1-6.

[4] Sadler, B.M., “Collaborative Autonomy – A Tactical Offset Strategy,” Army AL&T, April – June 2017.


Chapter 2 – USE CASE VIGNETTES

This chapter identifies a set of representative operational scenarios and describes each of them via a vignette.

2.1 SWARMING OPERATIONS

2.1.1 Definition and Concepts

The rise of advanced information operations will bring swarming to the fore, establishing a new pattern in conflict. This concept derives insights from examples of swarming in nature [1] and in history. Both areas are replete with examples of omnidirectional yet well-timed assaults. From ants, bees, and wolf packs to ancient Parthians and medieval Mongols, swarming in force, or of fire, has often proven a very effective way of fighting [2].

From a military conflict viewpoint, swarming can be defined as a seemingly amorphous, but deliberately structured, coordinated, and strategic way to strike from all directions simultaneously, by means of a sustainable pulsing of force and/or fire, close-in as well as from stand-off positions [2].

Swarming will work best, and perhaps will only work, if it is designed mainly around the deployment of myriad, small, dispersed, networked maneuver units which also act as a sensory organization in the battlespace and provide stealthy ubiquity.

It depends completely on nimble information operations that enable swarm force communication and coordination.

This puts a premium on robust, adaptive communications that help with both the structuring and distribution of information, which enable the swarm force to engage the enemy most of the time – a key aspect of swarming.

2.1.2 Swarming Operations Tenets

Swarming has two fundamental operational needs:

• Sustainable pulsing, which is the capability to strike at an adversary from multiple directions, via a large number of small units of maneuver that are tightly internetted – i.e., that can communicate and coordinate with each other at will, and are expected to do so.

• Ubiquitous sensing, which is the additional capability for a swarm force to provide the surveillance and synoptic-level observations necessary to the creation and maintenance of “top-sight.”

These two fundamental requirements may necessitate new approaches to: 1) Information Operations; and 2) Systems for Command, Control, Communications, Computers, and Intelligence (C4I), as described below.

2.1.2.1 Sustainable Pulsing

If the informational needs of a swarming military force can be fulfilled, then it will be possible to undertake the “signature” act of a swarm: the “sustainable pulsing” of forces and/or their fire. This essential notion consists of the ability of swarmers, which take their positions in a dispersed fashion, to repeatedly strike the adversary – with fire or force – from all directions simultaneously, then to separate from the attack, re-disperse to blanket the battlespace, and repeat the cycle as battle conditions require.


2.1.2.2 Command and Control Delegation

Sustainable pulsing calls for the devolution of a great deal of Command and Control (C2) authority to a large number of small maneuver units. These units will be widely dispersed throughout the battlespace and will likely represent all the various sea, air, and ground services – putting a premium on inter-service coordination for purposes of both sharing information and combining in joint “task groups”.

C2 delegation requires a communication system which is adaptive, decentralized, and interoperable.

2.1.2.3 Stealthy Ubiquity

The swarm force will be far stealthier, since its order of battle will be characterized by amorphousness, at least to the eyes of the enemy. The small size and dispersed deployment of its units of maneuver will help to convey an image simultaneously stealthy and ubiquitous – a kind of “stealthy ubiquity.” Thus, the force will be largely unseen and undetectable, but it will be able to congeal and strike decisively anywhere in the battlespace – with no limitation imposed by lines or fronts. Indeed, there may be no “front” per se. This is the potential of a swarming force, whose basic tenets must be to pursue centralized strategic control while at the same time decontrolling tactical command, dispersing units, and redesigning logistics.

2.1.2.4 Ubiquitous Sensing

To perform a sustainable pulsing and obtain stealthy ubiquity, the swarm units must not only be internetted with each other, but also must coordinate and call upon other assets in the area. To achieve this, swarming depends upon the operation of a vast, integrated sensory system that can selectively distribute both specific targeting information and overall top-sight about conditions in and around the battlespace. Swarming will turn the military into a “sensory organization” composed of internetted operational units.

As a sensory organization, the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) system may generate so much information that it will be necessary to find new ways to segregate the often time-urgent needs of the operational unit from the higher command’s need to retain clear “top-sight” – a “big picture” view of what is going on.
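As a purely illustrative sketch of this segregation (the message fields, urgency scale, and threshold are assumptions, not from the report), time-urgent reports could be routed directly to operational units while every report also feeds the aggregated top-sight picture:

```python
# Sketch: segregating time-urgent unit data from aggregated "top-sight".
# Field names and the urgency threshold are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Report:
    source: str
    urgency: float   # 0.0 (routine) .. 1.0 (immediate threat)
    summary: str

@dataclass
class C4ISRRouter:
    unit_feed: list = field(default_factory=list)      # time-urgent, to local units
    topsight_feed: list = field(default_factory=list)  # aggregated, to command

    def route(self, report: Report, urgent_threshold: float = 0.7) -> None:
        if report.urgency >= urgent_threshold:
            self.unit_feed.append(report)   # push straight to operational units
        self.topsight_feed.append(report)   # everything feeds the big picture

router = C4ISRRouter()
router.route(Report("sensor-12", 0.9, "vehicle column approaching"))
router.route(Report("sensor-07", 0.2, "no activity"))
assert len(router.unit_feed) == 1 and len(router.topsight_feed) == 2
```

The key design point is that the two feeds are not exclusive: urgent data bypasses aggregation without being lost to the command-level picture.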

2.1.3 Swarming Operational Scenario

Swarming is already emerging as an appropriate doctrine for networked forces to wage information-age conflict. This nascent doctrine derives from the fact that robust connectivity allows for the creation of a multitude of small units of maneuver, networked in such a fashion that, although they might be widely distributed, they can still come together at will and repeatedly, to deal resounding blows to their adversaries. Against an elusive opponent trying to fight in an irregular fashion, the coordinated swarming of networked forces should enable them to defeat the enemy in detail. As depicted in Figure 2-1, a typical swarming operation scenario envisions the deployment of elementary military units, the Swarm Elementary Units (SEUs), that can operate in “clusters”. The SEUs should be dispersed to mitigate the risk posed by hostile fire. Yet, they would feature great mobility, modest logistical requirements, and “top-sight” (i.e., they will know much of what’s going on in the overall campaign – as will their top commander). Possessing both mobility and situational knowledge, they will be able to strike, swarming from all directions, either with fire or in force.


(a) (Stealth) Ubiquitous Sensing. (b) Threat Discovery.

(c) Raise Alarm. (d) Sustainable Pulsing.

(e) Threat Neutralization. (f) Dispersion to Stealthy Ubiquity.

Figure 2-1: A Swarming Operational Scenario.
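The cycle shown in Figure 2-1 can be read as a cyclic state machine. The sketch below takes the phase names from the figure panels; the strictly sequential transition logic is an assumption for illustration:

```python
# Sketch: the swarming cycle of Figure 2-1 as a cyclic state machine.
# Phase names follow the figure panels; the transition order is assumed.

PHASES = [
    "ubiquitous_sensing",     # (a) dispersed, stealthy surveillance
    "threat_discovery",       # (b) a unit detects a threat
    "raise_alarm",            # (c) alarm propagates through the swarm network
    "sustainable_pulsing",    # (d) units converge and strike from all directions
    "threat_neutralization",  # (e) effect achieved
    "dispersion",             # (f) units re-disperse to stealthy ubiquity
]

def next_phase(phase: str) -> str:
    """Advance one step; after dispersion the cycle restarts at sensing."""
    i = PHASES.index(phase)
    return PHASES[(i + 1) % len(PHASES)]

assert next_phase("dispersion") == "ubiquitous_sensing"
```

The modular wrap-around captures the report's point that the pulse is sustainable: the swarm returns to stealthy ubiquity and can repeat the cycle as battle conditions require.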

2.2 SWARM-SQUAD SYMBIOTIC TEAMING

Figure 2-2 depicts a possible scenario where robotics could make the difference.

Let’s consider an urban operation where the Squad is composed of a couple of vehicles, each hosting a team of soldiers and equipped with a RAS composed of both UGV and UAV units, each acting as a sensory resource to discover the presence of threats, both outdoors and indoors. The operators of each vehicle can share the RAS, and the actual control of a given swarm of UxVs is coordinated by a site control center acting as arbiter; this is very similar to what is envisaged by the NATO Generic Vehicle Architecture (NGVA) [3] for the coordinated sharing of vehicle resources among operators.
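The arbiter role described here could be sketched as a simple exclusive-grant protocol. The grant and hand-over semantics below are illustrative assumptions, not the NGVA mechanism itself:

```python
# Sketch: a site control center arbitrating exclusive control of swarms.
# The grant/release semantics are assumptions for illustration, not NGVA.

class SwarmArbiter:
    def __init__(self):
        self._owner: dict[str, str] = {}  # swarm id -> current controller

    def request_control(self, swarm: str, requestor: str) -> bool:
        """Grant control if the swarm is free or already held by the requestor."""
        holder = self._owner.get(swarm)
        if holder in (None, requestor):
            self._owner[swarm] = requestor
            return True
        return False  # denied: another operator holds control

    def hand_over(self, swarm: str, current: str, new: str) -> bool:
        """Transfer control only if 'current' actually holds it."""
        if self._owner.get(swarm) != current:
            return False
        self._owner[swarm] = new
        return True

arbiter = SwarmArbiter()
assert arbiter.request_control("uav-swarm-1", "vehicle-A")
assert not arbiter.request_control("uav-swarm-1", "squad-1")   # busy
assert arbiter.hand_over("uav-swarm-1", "vehicle-A", "squad-1")
```

Centralizing the grant decision in one arbiter avoids two operators commanding the same swarm simultaneously, while still allowing the explicit control hand-over the scenario requires.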


Figure 2-2: Symbiotic Teaming: A Possible Scenario.

Should the operational scenario require a squad of dismounted soldiers to be activated, e.g., to neutralize an enemy, then this squad could request control of a swarm of UxVs, which would provide information, e.g., (thermal) images and sounds, from the interior of the building where the enemy could be hidden.

Each operator will require a specific set of sensing capabilities and level of autonomy, typically related to the kind and criticality of the task to be performed. Typically, the more critical the task, the less autonomy will be delegated to the swarm. For example, patrolling the road around a building will be characterized by a higher level of autonomy than the indoor inspection of a building where an armed enemy could be hidden.

In both scenarios the human operator’s situational awareness will be improved and typically shared with the other operational nodes involved, e.g., soldier squads, vehicles, the site control center, to achieve a Common Operational Picture (COP).

From an operational viewpoint, the solution decreases the repetitive activities of the unmanned vehicle operator by allowing him/her to concentrate on specific mission goals, and improves Human-Machine interaction effectiveness in terms of the adaptive behavior the operator needs in each operational scenario.

2.3 SCENARIO “UBIQUITOUS SENSING”

A land area is to be defended and controlled by allied forces. The enemy is suspected to attack but when, where and how the attack will be conducted is not known. The allied forces are not able to monitor the entire conflict domain in time, space, and frequency due to limited resources available. Nevertheless, dynamic circumstances, like availability of intelligence on enemy assets, weather, level of conflict, terrain, etc., will indicate likely location of attack, but this could change rapidly.


This scenario is depicted in Figure 2-3, where green color indicates allied territory, blue color is used for allied units and red color describes enemy territory and units. Units could be both force and sensor units as illustrated by the symbols employed. Shades of grey are reserved for describing environment and terrain elements. The total ISR area of interest is the combined enemy and allied territory.

Figure 2-3: Swarm ISR Scenario.

A dynamic swarm ISR system could be deployed to help gather intelligence on the enemy build-up, detect enemy movements and obtain real-time target information in dense battlefield conditions.

This scenario is consistent with two distinct modes of ISR operation that could be expressed by two vignettes:

• The “build-up” vignette is characterized by no contact between opposing forces, and stand-off surveillance of opponent territory using high value sensors in combination with disposable stealth sensors in enemy territory. Global navigation and full communication between units in own territory are assumed in this pre-war vignette.

• The “engagement” vignette is characterized by battle contact between opposing force units; communication and navigation are challenged by jamming activity and, typically, close proximity sensors are needed in order to produce positive target identification in this full-blown war vignette.

In the following, these two vignettes are discussed in more detail. The focus is on how a swarm ISR system of high value and disposable sensors could be used to guide the deployment of high value effectors for optimal efficiency. In this context the effectors could be manned, unmanned or autonomous systems. Another important focus is how the choice of swarm tactics depends on the navigational precision, communication bandwidth and computational resources of the swarm units. The vignettes are framed especially to express the requirements of a swarm reference architecture, both in terms of software and hardware specifications.


2.3.1 Vignette “Build-Up”

In the “build-up” vignette the purpose of the deployed swarm ISR sensor system is to obtain situational awareness about the opposing forces’ build-up to an attack, and to use this information to optimize the allocation of own units in order to challenge the opposing forces at expected Contact Points (CPs), typically road crossings, narrow valleys, etc. Each new piece of information obtained could change the current deployment of own forces and the sensor network. In Figure 2-4, a combination of stand-off high value swarm ISR sensors and a “tripwire” network of disposable swarm sensors, deployed at each possible contact point (e.g., CP1, CP2 and CP3) and covertly in enemy territory, is used to gather information on enemy troops. The disposable sensors are assumed to be “stealth ubiquitous”, meaning that the platform with its sensor is hard for the enemy to detect and track. All swarm ISR sensors are assumed to be mobile by some means of transportation.

Figure 2-4: “Build-Up” Vignette. Where will the enemy attack?

The stand-off sensors are typically used for grasping the “big picture” and would give limited resolution on objects of interest due to the long observation distance corresponding to accepted risk of operation. On the other hand, the disposable sensors are used for obtaining high resolution “local snap-shots” of enemy activity close by. A denser “tripwire” sensor network is better in terms of enemy target detectability and identification. The optimal movement of allied force units is dependent on the total information provided by the complete swarm ISR system of stand-in and stand-off sensors.

In Figure 2-5 the process of updating information on enemy movement is depicted. The stand-off sensors have indicated that enemy force units are moving toward CP2. This is confirmed by stand-in sensors forward of CP2. This could trigger a redistribution of the “tripwire” sensor network to concentrate sensors at CP2 and provide some additional coverage at CP3, though at the expense of diluting the network at CP1. Also, the stand-off sensors would dynamically reconfigure so as to maximize information retrieval. The dynamic repositioning of all the sensors, both the high value and the disposable sensors, is done autonomously by the swarm system itself, with user interaction given as high level assignments and guidelines. All vital ISR information is fed into the allied Battle Management System (BMS).
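One way to sketch this redistribution is as proportional reallocation of a fixed pool of disposable sensors across contact points, weighted by the estimated likelihood of enemy movement. The weights, sensor count, and largest-remainder scheme below are illustrative assumptions, not from the report:

```python
# Sketch: reallocating a fixed pool of disposable sensors across contact
# points in proportion to the estimated likelihood of enemy movement.
# The likelihood values and sensor count are illustrative assumptions.

def reallocate(total_sensors: int, likelihood: dict[str, float]) -> dict[str, int]:
    """Largest-remainder apportionment of sensors to contact points."""
    weight_sum = sum(likelihood.values())
    shares = {cp: total_sensors * w / weight_sum for cp, w in likelihood.items()}
    counts = {cp: int(s) for cp, s in shares.items()}
    # hand out leftover sensors to the largest fractional remainders
    leftover = total_sensors - sum(counts.values())
    for cp in sorted(shares, key=lambda c: shares[c] - counts[c], reverse=True)[:leftover]:
        counts[cp] += 1
    return counts

# Updated intelligence: enemy moving toward CP2, some residual risk at CP3.
print(reallocate(12, {"CP1": 0.1, "CP2": 0.7, "CP3": 0.2}))
```

The apportionment always hands out the full pool, so concentrating at CP2 automatically dilutes CP1, mirroring the trade-off described in the vignette.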


Figure 2-5: Updated Information Change Position of Allied Force and Sensor Units.

2.3.2 Vignette “Engagement”

In this complex environment the purpose of the swarm ISR sensor system is to generate target detections and identifications sufficient for engagement. This is depicted in Figure 2-6, where enemy units crossing the border are detected and identified by the “tripwire” network and confront allied force units. In this dense battle environment close proximity sensors are typically needed for the required precision in geolocation and in object identification discriminating friend from foe. A swarm ISR system consisting of many small disposable sensors could offer the timeliness, precision, coherence and coverage of the battlefield sufficient for engagement.

The performance of the swarm ISR system is dependent on the sensor density throughout the battlefield. The target identification list provided by the swarm ISR system is fed into the allied BMS for further battle directions. It is assumed that the battlefield is an electromagnetically congested environment. As the battle progresses, swarm ISR sensors could move within the battlefield. This is depicted in Figure 2-7, where sensors from CP1 and CP3 are moved into CP2.

This is done either to force-load sensor density for increased performance or to maintain optimal density, compensating for the loss of existing sensors. The dynamic movement of sensors is done autonomously by the swarm system itself, with user interaction given as high level assignments and guidelines based on the current BMS information available. Communication and navigation are expected to be challenged in the battlefield; thus a robust swarm ISR system is required in order to maintain full operational capacity.


Figure 2-6: “Engagement” Vignette. Which unit to attack?

Figure 2-7: Compensating and Force Loading Sensors into the Battlefield.


2.4 REFERENCES

[1] Bonabeau, E., Dorigo, M. and Theraulaz, G., “Swarm Intelligence, from Natural to Artificial Systems,” Oxford University Press, New York, 1999.

[2] Alberts, D.S., and Hayes, R.E., “Power to the Edge: Command and Control in the Information Age,” CCRP, DoD, Washington, DC, USA, 2005.

[3] STANAG 4754, “NATO Generic Systems Architecture (NGVA) for Land Systems,” Edition 1, January 2018.


Chapter 3 – CAPABILITY

This chapter identifies the set of Capabilities that relate to the adoption of swarms for ISR missions/tasks.

3.1 CAPABILITY VISION

The capability vision provides a strategic context for the capabilities described in this chapter. It also provides a high-level scope for the Swarm-centric system architecture. Capability in the context of a NATO Multi-National Force, or a National Force within or affiliated with NATO, would entail having the ability to meet most, if not all, of the envisaged operational scenarios and challenges optimally. This would imply that the Swarm-centric System for ISR (SS4ISR) should provide NATO forces a competitive edge over their adversaries in terms of every desired combat and combat support capability.

Capability development of a force is a complex activity and the SS4ISR, therefore, needs to look at all aspects of combat capability from observation, detection, target acquisition, target dissemination, decision support, effective engagement, reporting mission status, and logistics.

It is therefore important to spell out what such a vision encapsulates in terms of a vision statement and the vision goals which will help realize such a capability vision.

3.1.1 Vision Statement

The Capability Vision for state-of-the-art Swarm Systems for ISR Operations is to act as a force multiplier with enhanced scalability, flexibility, faster responsiveness, accuracy, and reduced operational cost.

Specifically, SS4ISR shall enhance multi-national forces’ fighting capabilities by empowering them with information superiority, enhanced mobility, effective engagement, interoperability and resilience to operate in diverse, complex and contested current and future environments.

3.1.2 Vision Capability Goals

The SS4ISR shall support the following forces’ vision goals:

• Capability Goal 1: Continuous Provision of ISR Data about Adversary Actions, which improves:

• Information Superiority through efficient information sharing with joint multi-national forces;

• Effective and rapid decision making at all levels, supported by enhanced situational awareness in complex, congested battlefields.

• Capability Goal 2: Force Protection and Interdiction, which improves:

• Effective engagement to include countering remotely piloted and autonomous unmanned aerial systems including UAV swarms to allow forces to minimize collateral damage while disrupting the adversary’s capabilities.

• Capability Goal 3: Improvement of Anti-Access/Area Denial (A2AD) Operations, which improves:

• Mobility to allow forces to engage in joint maneuver, more flexible, agile deployments and operations in complex, contested and hazardous environments.

• Independent and self-sustainable deployments through the provision of integral electronic manned/unmanned surveillance and reconnaissance means at the Squad level.


3.2 FORCES CAPABILITY

3.2.1 Capability Group 1: Command, Control, Communications, Computing (C4)

The SS4ISR should be capable of self-coordination and self-monitoring inside the Squad, allowing only very small delays for information processing or data transmission. Orders, reports and tactical information should be provided, as appropriate to the situation. Through the use of information processing devices and technical information exchange capabilities, the SS4ISR should relieve the forces of ISR tasks which can be semi-automated or automated.

According to ACT, Command and Control is the overarching concept and set of functions that connect and direct the three other components: Collecting, Decision Making and Effecting. The main outcome of Collecting is to transform raw data into recognized information, which is then input to Decision Making. In the case of the SS4ISR, this will consist of: 1) Situation awareness information for decision support; and 2) Effective engagement to achieve the desired result.

The forces capabilities which mainly relate with SS4ISR are briefly described in the following paragraphs.

3.2.1.1 Multimodal Human RAS Interaction (HRI)

The SS4ISR should be able to interact via the full spectrum of user interfaces for all types of human senses (visual, audio, haptic). Increasingly, tactile user interfaces utilizing augmented reality, tactical vests and (3D) audio are being used today instead of the more traditional voice and computer screen interfaces.

3.2.1.2 Shared Awareness

The SS4ISR should not hinder the soldier in his actions; it should provide automation of all possible operational functions/interactions, including predictive activity sequences, e.g., using AI-based decision support.

3.2.1.3 Information Sharing

SS4ISR should distribute information in the best format/type for an operation. Information shall range from simple text to video streaming depending on the goal and recipient of the information.

3.2.1.4 Fratricide Situation Prevention

The SS4ISR shall support the prevention of fratricide situations through the continuous provision of data which support a complete situational awareness picture, in which all friendly forces and all neighboring units are depicted.

3.2.1.5 Human – Robotic and Autonomous System Teaming

Improve mission planning through non-human intelligence cooperation (using IT/AI tools) to assess mission parameters (configuration, duration, decision support, etc.).

3.2.1.6 EW Resilient Robust Navigation Systems

Improve mission planning by using alternative, redundant navigation sensor technologies allowing navigation with limited or no GNSS availability. Also, by applying adequate systems safety engineering processes, the availability of SS4ISR systems during operation should be increased under abnormal operational conditions.


3.2.2 Capability Group 2: Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR)

In general, the SS4ISR should help to acquire, identify and engage targets, even in adverse visibility and weather conditions, in a fast and reliable manner. The combined importance of the C4ISTAR Capability is to enable the conduct of Network Centric Operations (NCO), resulting in better synchronized effects in the battle space; greater speed of command and control; increased lethality, survivability and responsiveness; and diminished adversaries’ courses of action. The C4ISTAR Capability enables high quality shared situational awareness and develops a shared understanding, including of the commander’s intent, to promote self-synchronized operations at the Squad or Team level.

A force unit, supported by a SS4ISR which includes suitable sensor and computing devices, is required to carry out surveillance, target acquisition and reconnaissance, which is processed in near real time to obtain operational or actionable intelligence.

The forces capabilities that mainly relate to SS4ISR are briefly described in the following paragraphs.

3.2.2.1 Multimodal Enriched Information

Provision of integrated data from diversified sources modes, e.g., visual, audio, haptic, to provide enriched/holistic perception of the relevant operational picture.

3.2.2.2 Pervasive Sensing

Enables single units to act as a synoptic sensor which continuously provides data to higher echelons.

3.2.2.3 Focused Situational Awareness

Ability to select a Common Relevant Operational Picture (CROP) to focus the situational awareness on specific areas of interest with varying ranges and details.

3.2.2.4 Robust Tactical Network

The SS4ISR should improve the robustness of ISTAR tasks in the battlefield by increasing the redundancy of the tactical network against EW threats.

3.2.3 Capability Group 3: Effective Engagement

The SS4ISR should support the forces’ coordination with respect to the engagement of an effector suitable for the specific task:

• Target Acquisition: The detection, identification, and location of a target in sufficient detail to permit the effective employment of weapons [1].

• Target Prioritization and Designation: The act of prioritizing the order of engagement of targets and the assignment of weapons to targets.

• Engagement Assessment: The determination of the effect of attacks on targets [1].

The force capabilities that mainly relate to the SS4ISR are briefly described in the following paragraphs.

CAPABILITY

3 - 4 STO-TR-SET-263

3.2.3.1 Target Hand-Over Support

Forces should be able to hand over targets from one unit to another unambiguously and with high precision; e.g., enabling units with better shooting positions to improve the initial hit probability may be required. The SS4ISR should appropriately feed the fire control algorithms, which enable the Squad Commander to choose and automatically designate the target to the soldier who is in the best position to engage the enemy target.

3.2.3.2 UxV Sensor-Shooter Integration

Another key feature impacting effective engagement is the integration of the UxV sensor and the shooter unit in terms of target data designation and engagement speed. Automating, in the fire control system, the factors that impact effector engagement and munition ballistics, such as target location, direction and speed, wind speed, humidity, temperature, and known protection level, greatly enhances effective engagement.

3.2.4 Capability Group 5: Protection and Survivability

Survivability is the ability to remain mission capable during and after an operational engagement. It requires the provision of protection against all possible means that can inflict damage on the soldiers or their soldier systems.

The force capabilities that mainly relate to the SS4ISR are briefly described in the following paragraphs.

3.2.4.1 Counter-Drone Protection

The SS4ISR should enhance protection against drone attacks by increasing near real time situational awareness and enabling faster decision making, which increases the chances of survivability.

3.2.4.2 Dirty, Dusty, Dangerous Environment Protection

The SS4ISR should act as an avatar of human units, performing tasks in dirty, dusty, and dangerous environments while maintaining the needed effectiveness of the action and preserving the human operator’s situational awareness.

3.2.4.3 Protection from Remote Threats

The SS4ISR shall provide early warning of CBRNE threats and Improvised Explosive Devices (IEDs).

It shall also alert forces to the presence of snipers with long-range weapons, and shall support the detection and identification of such remote threats.

3.2.5 Capability Group 6: Mobility

The SS4ISR shall improve the forces’ capability to move and navigate in diversified environments.

This will enable the scouts or lead elements to use only the extra-lightweight basic configuration of the SS4ISR in difficult terrain to establish a fix or foothold, allowing faster follow-up by the rest of the tactical unit.

The force capabilities that mainly relate to the SS4ISR are briefly described in the following paragraphs.

3.2.5.1 Forces and Material Mobility

Tactical mobility depends on the tactical movement of complete tactical units, not just the individual mobility of a soldier. In order to correctly coordinate its actions, each unit needs to know in real time where neighboring friendly forces are placed, where they are moving, and how their positions relate to the enemy’s actions.

Similarly, Operational Mobility is dependent on the ability to move men (operational plan) and material (operational logistics) at the point of decision.

3.3 REFERENCES

[1] AAP-6 (version 2015), NATO Glossary of Terms and Definitions, 2015.

Chapter 4 – OPERATIONAL ACTIVITY

This chapter identifies the set of ISR Operational Needs and related Activities that can be performed by or with the support of SS4ISR.

4.1 OPERATIONAL NEEDS

This section briefly describes the most relevant operational needs related to the adoption of the SS4ISR to accomplish most mission types.

4.1.1 Situational Awareness

For all missions, units usually operate together with forces of other branches or in coalition with other nations, which needs to be carefully planned and coordinated. Achieving high situational awareness, and keeping it current during the operation, is important for decision making, effective engagement, mobility, protection and survivability, and sustainability and logistics.

The units, e.g., (squads of) soldiers, of the future Information Age forces will act as synoptic information nodes for C2-related capabilities. As depicted in Figure 4-1, it is important to note that the SS4ISR will act as a source of high quality organic information, providing services not only for the individual unit’s sense making and decision making, but also a set of services that pertain to the unit as a synergic element of the sense-making and decision-making capabilities at different levels of cooperation and coordination, e.g., cluster, platoon. The SS4ISR will contribute to team, group, and organizational services, such as: 1) Information sharing; 2) Shared situational awareness;1 and 3) Timely intelligence.2

It is worth noting that an SS4ISR unit which serves a force unit can take advantage of the outcomes of peer SS4ISR units serving other force units. Organizational services are at the heart of the collaborative processes and self-synchronizing behaviors of a network-centric force and the related SS4ISR units.

4.1.2 Friend or Foe Identification

Friend or Foe Identification (FFI) is a feature required to distinguish between friendly/neutral and enemy units. This capability is crucial for preventing fratricide, especially in Multi-National Forces (MNF) operations. The SS4ISR can improve the FFI capability via diversified technologies, ranging from “classic” baseband and Radio Frequency tracking to more innovative ones such as biometric recognition, where the degree of recognition could be significantly increased through data correlation from multiple means.

4.1.3 Protection from Threat

The fast pace of technological progress and the ingenuity of the adversary force require agile, evolvable architectures, so that systems can be adapted to improve existing protection capabilities and to set up completely new ones in the short term. The design of the SS4ISR should therefore allow capabilities for the early detection and countering of such threats to be added, in its current or upgraded versions.

1 When the term situation awareness is used, it describes the awareness of a situation that exists in part or all of the battlespace at a particular point in time. Awareness occurs in the cognitive domain, in people’s heads, not within the information systems that support people [1].

2 Timely intelligence is the key to all operational actions and involves the timely assessment and dissemination of focused intelligence.

Denying the adversary actionable intelligence, especially radio/electronic intelligence, would be aided by configurable means in the SS4ISR communications architecture design, such as reduced power, brevity of transmissions, and directional antennas.

Figure 4-1: The Network-Centric Operations Conceptual Framework [2].

4.1.4 Self-Synchronized Operations

Self-organization in this context is taken to mean the coming together of a group of individuals to perform a particular task. On receipt of a specific mission goal, the group members themselves: 1) Choose to come together; and 2) Decide what they will do and how it will be done. A feature of these groups is that they are informal and often temporary. Self-organizing systems can, as their name implies, develop local organization within the system in order to evolve towards an attractor.

The future Information Age forces will act as nodes of a network-centric force, whose information sharing is based on a robust networked environment. The SS4ISR will be able to act both collectively and quickly, so as to increase the likelihood of acting at the appropriate time. Real-time or near real-time intelligence enables precision strike with minimal collateral damage and enhances the pace and momentum of operations. Intelligence may take several forms, such as analysis of the enemy Order of Battle (ORBAT), location and activity data, and change detection through electronic support measures to gain intelligence from enemy communication networks.

In order for a force to possess the capabilities described above, the SS4ISR needs, in addition to specific mission and task-related capabilities, two key attributes: interoperability and agility. Such an SS4ISR can support forces in carrying out traditional mission tasks more effectively (faster, better, more efficiently) and, importantly, can be involved in missions that differ in both kind and intensity.

4.1.4.1 Forces Agility

The fast-paced and fluid battlespace demands frequent alterations in plans during execution, as well as contingency plans for less likely enemy courses of action, to deny the enemy the benefit of tactical surprise. This necessitates that the forces be able to receive instant updates to the CROP as well as concurrent decision support information.

Forces effectiveness is greatly enhanced by agility: the ability to be quick and nimble; the ability to be effective in changing, nonlinear, uncertain, and unpredictable environments. The more uncertain and dynamic an adversary and/or the environment, the more valuable agility becomes.

Agility is a property that is manifested over a space (a range of values, a family of scenarios, a spectrum of missions) and over time, rather than being associated with a point in a space (e.g., a specific circumstance, a particular scenario, a given mission) or time. Agility therefore requires SS4ISR services that remain operationally effective across a range of scenarios and environments; the services and system resources shall be designed to be adaptive and responsive to changing circumstances.

The essential capabilities for an agile SS4ISR are:

• The ability to adaptively select the set of appropriate means to optimize effectiveness for each specific mission.

• The ability to orchestrate the means to respond in a timely manner.

• The ability to work in a coalition environment including non-military partners (interagency, international organizations and private industry, as well as contractor personnel).

• The ability to make sense of the situation, as detailed below.

Making sense3 of a situation begins with putting the available information about the situation into context and identifying the relevant patterns that exist.

This implies that SS4ISR needs to be robustly networked with information management capabilities that enable widespread information sharing and support simultaneous collaborations.

To make sense of the situation requires that the SS4ISR acts as a node of an information network that can feed multiple Network-Centric Warfare (NCW) domains, as described in Figure 4-2, each NCW domain being characterized by a specific set of services. A given domain service may be either individual or distributed, i.e., it may include components allocated to a set of (heterogeneous) system nodes, e.g., soldier, vehicle, base.

3 Sensemaking encompasses the range of cognitive activities undertaken by individuals, teams, organizations, to develop awareness and understanding and to relate this understanding to a feasible action space [1].

Figure 4-2: Key Network-Centric Warfare Domains [2].

4.1.4.2 Multi-National Interoperability

To have interoperable forces, it is imperative that an SS4ISR have the inherent ability to achieve the minimum level of compatibility, while striving for the ultimate level of ambition: complete interoperability in a multi-national environment.

NATO defines Interoperability as “the ability to act together coherently, effectively and efficiently to achieve Allied tactical, operational and strategic objectives.” It also differentiates the terms Standardization, Compatibility, Interchangeability, and Commonality, as follows.

• Standardization: The development and implementation of concepts, doctrines, procedures, and designs to achieve and maintain the compatibility, interchangeability or commonality which are necessary to attain the required level of interoperability.

• Compatibility: The suitability of products, processes, or services for use together under specific conditions to fulfil relevant requirements without causing unacceptable interactions.

• Interchangeability: The ability of one product, process, or service to be used in place of another to fulfil the same requirements.

• Commonality: The state achieved when the same doctrine, procedures or equipment are used.

The degree to which forces are interoperable directly affects their ability to conduct network-centric operations. The level of interoperability achieved, and the characteristics of the command and control processes, will determine the extent to which information is shared, as well as the nature and quality of the interactions that occur between and among network-centric system nodes. Interoperability can be understood as a spectrum of connectedness that ranges from unconnected, isolated entities to fully interactive, sharing enterprises. The levels of network-centric capability, as defined in the Network-Centric Warfare (NCW) maturity model depicted in Figure 4-3, directly correspond to the degree to which interoperability has been achieved.

Figure 4-3: NCW Maturity Model [3].

4.2 GENERIC TASKS FOR SS4ISR DURING A MISSION

4.2.1 Information Gathering

Combat intelligence and reconnaissance need timely, reliable and complete information to guard own forces from surprise, and they provide the foundation for a targeted approach.

An SS4ISR squad may consist of a variable number and variety of UxV types, depending on the mission environment and goals.

Such a squad operates for:

• Making exploratory contacts with the enemy;

• Reporting on type, strength, expansion, status, as well as behavior of the enemy;

• Detecting barriers and assessing the accessibility of the terrain for own forces;

• Discovering the enemy’s flanks as well as terrain unoccupied by the enemy;

• Watching unoccupied areas and gaps between own military elements; and

• Establishing and maintaining contact with military elements at the front as well as with neighbors.

4.2.2 Networking

Network connections, and the related communications, shall be established and maintained:

• From UxV Squad to related C2 node;

• Between UxV Squad elements; and

• Between neighbor Squads of the same SS4ISR node.

4.2.3 Surveillance

Surveillance is one of the key aspects of a mission. It is the continuous, systematic observation of the surrounding environment to detect possible threats. An SS4ISR node should extend the soldier’s senses, which are his/her first surveillance tool, via the adoption of specific sensors, e.g., thermal cameras, sound sensors.

The primary goal of the surveillance task is to safeguard the forces during the mission at every location, at every time and in every situation. It must be performed without a special order within the allocated area.

The purpose is:

• To safeguard own forces against enemy attacks on the ground and via air;

• To achieve time and space for countermeasures in case of enemy attacks;

• To protect objects and areas from the enemy; and

• To deny or hinder reconnaissance by the enemy.

Continuous measures are:

• Avoiding detectable lights and noise;

• Scouting and preparation of alarm positions;

• Intruder detection; and

• All-arms air defence and NBC protective measures.

4.2.4 Self-Protection Electronic Measures

The SS4ISR must always assume it can be detected because of its own electro-magnetic emissions. Electronic protective countermeasures are intended to hinder the enemy in doing so.

Tactical countermeasures are:

• Suitable battle position;

• Use of alternative command and control means;

• Change of position; and

• Silence for radio, RADAR and LASER.

Technical countermeasures are:

• Data encryption and disguise;

• Frequency hopping;

• Store and forward protocols;

• Name-centric addressing;

• Mobile ad hoc networks;

• Observation with laser-protected optics;

• Low power transmission;

• Short antennas; and

• Dynamic antenna beams.

4.2.5 Explosive Ordnance Detection

The company is threatened by systematically or erratically laid mines as well as by the risk of unexploded ordnance. To counter this threat, the SS4ISR should provide reconnaissance information both for overcoming minefields and for clearing the explosive ordnance.

4.2.6 Airspace Control

The forces are constantly threatened by the enemy’s aerial reconnaissance and air-to-ground attacks. The SS4ISR should support company protection by providing air defence measures such as:

• Observation of the airspace; and

• Engagement of enemy low-flying aircraft and other airborne vehicles.

4.3 TASKS FOR SS4ISR DURING BATTLE/ENGAGEMENT

Forces are engaged in missions and operations across an ever broader spectrum of intensities and complexities. Combat operations shall face, and manage, fast and frequent changes in the rate of conflict, e.g., fighting, fire and movement, as well as the de-concentration and concentration of forces.

4.3.1 Sense and Response

From the NATO perspective, engagement is, “In the context of rules of engagement, action taken against a hostile force with intent to deter, damage or neutralize it.” In the context of an SS4ISR, this means supporting forces in detecting and reacting to hostile actions by civil or non-conventional combatants.

4.3.2 Maintain Area Dominance

The SS4ISR can support the deterrence of enemy forces by improving the set-up and maintenance of dominance over an area of interest. This can be achieved by smart coordination and deployment of unit forces in the battlefield, each equipped with an appropriate SS4ISR squad.

4.3.3 SS4ISR Control Handover

Relief in place is “An operation in which, by direction of higher authority, all or part of a unit is replaced in an area by the incoming unit.” The responsibilities of the replaced elements for the mission and the assigned zone of operations are transferred to the incoming unit, which continues the operation as ordered. This means the SS4ISR squad shall be able to hand over its control from the current controlling team to the one replacing it.

4.4 REFERENCES

[1] Alberts, D.S., Garstka, J.J., Hayes, R.E., and Signori, D.A., “Understanding Information Age Warfare,” CCRP Publication Series, August 2001.

[2] Gashler, M., Ventura, D., and Martinez, T., “Manifold Learning by Graduated Optimization,” IEEE Trans. on Syst., Man, and Cybern., Part B: Cybern., Vol. 41, pp. 1458-1470, December 2011.

[3] AAP-6 (version 2015), NATO Glossary of Terms and Definitions, 2015.

Chapter 5 – SERVICE VIEW

5.1 SERVICE VIEW DESCRIPTION

This chapter defines a key set of services the SS4ISR should provide to support the set of capabilities defined in Section 3.2.

Each service provides a set of system-wide functions, which shall be allocated to the different system nodes and components defined by an appropriate system architecture, as described in Chapter 6 – System View.

Where applicable, the service taxonomy and the dependencies on other services are also defined.

Figure 5-1 depicts the dependencies among the key SS4ISR services; each arrow is directed from a service to the ones it uses.

Figure 5-1: SS4ISR Service Overall Dependencies.

(Figure content: the SS4ISR services are grouped into the Swarm Mission Service Layer, comprising Detection & Tracking and Command & Control; the Swarm Social Service Layer, comprising Human-Swarm Interaction, Robot-Robot Interaction, and Swarm Control & Navigation; and the Swarm Infrastructure Service Layer, comprising Networking and Data Exchange.)
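The layered structure of Figure 5-1 can be captured as a simple dependency map. The specific arrows below are illustrative assumptions (the authoritative dependencies are those drawn in the figure); the sketch includes a check that every dependency points to a service in the same or a lower layer:

```python
# Illustrative dependency map: each service maps to the services it uses.
# The arrows here are assumptions for the sketch, not taken from Figure 5-1.
USES = {
    "Detection & Tracking": ["Swarm Control & Navigation", "Data Exchange"],
    "Command & Control": ["Human-Swarm Interaction", "Data Exchange"],
    "Human-Swarm Interaction": ["Networking"],
    "Robot-Robot Interaction": ["Networking"],
    "Swarm Control & Navigation": ["Robot-Robot Interaction"],
    "Data Exchange": ["Networking"],
    "Networking": [],
}

# Layer index per service: 0 = infrastructure, 1 = social, 2 = mission.
LAYER = {
    "Networking": 0, "Data Exchange": 0,
    "Human-Swarm Interaction": 1, "Robot-Robot Interaction": 1,
    "Swarm Control & Navigation": 1,
    "Detection & Tracking": 2, "Command & Control": 2,
}

def layering_ok(uses: dict, layer: dict) -> bool:
    """Check that every arrow points to a service in the same or a lower layer."""
    return all(layer[dep] <= layer[svc] for svc, deps in uses.items() for dep in deps)
```

Such a check makes the layering discipline explicit: mission services build on social services, which build on infrastructure services, and no service reaches upward.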

The services have been grouped into the following logical layers:

• Swarm Mission Service Layer, which provides system functions that directly support operational activities, e.g., C4ISTAR operations. This layer is described in Section 5.2, which provides an example of an ISR mission.

• Swarm Social Service Layer, which offers support to:

1) Regulate the Swarm organization; and

2) Mediate interaction between swarm RASs.

This layer is described in Section 5.5 for the RAS-RAS Interaction, in Section 5.3 for the Human-Swarm Interaction, and in Section 5.4 for the coordinated motion of a swarm.

• Swarm Infrastructure Service Layer, which provides services that abstract the swarm services from the underlying physical platform. This layer is built upon the Networking Services, described in Section 5.7, and the Data Exchange Services, described in Section 5.6, which provide a platform-independent interface and focus on the management of system interoperability, described in Chapter 8.

5.2 DETECTION AND TRACKING

Swarm systems have the potential for continuous detection and tracking with greater efficiency and lower system cost compared to traditional surveillance technology. This is mainly due to an increase in the availability of cost-efficient surveillance nodes, resulting from the ongoing trend in the miniaturization of electronics and advances in autonomous systems research. By the sheer increase in the number of surveillance nodes, swarms can improve detection and tracking performance in the following ways: greater confidence, extended completeness, better precision, enhanced timeliness and increased robustness [1]. A dynamic swarm ISR system could be deployed in a user-defined area in order to gather intelligence on objects of interest such as an enemy force build-up, detect movements, and obtain real-time target information in dense battlefield conditions. (This is described in the Section 2.3 scenario “Ubiquitous Sensing.”)

5.2.1 Swarm Detection and Tracking as a Search and Task Allocation Problem

Detection and tracking can be considered a classical swarm Search and Task Allocation (STA) problem [2], [3]. The balance in utilizing resources between search (in our case, the system detection performance) and task allocation (the tracking performance for each detected object) is the main challenge in every STA problem. The detection efficiency is directly related to the node density over the specified search area. Typically, each node would have a sensor detection system of some given confidence, and the detection capacity over the total area would be the union of the local detection performance of all nodes combined. Detections flagged by single nodes would be subject to tracking based on object importance. An increase in tracking capability would be expected if multiple nodes were assigned to a specified object-tracking task. However, this would decrease the number of nodes available to search for new detections, lowering the total detection capacity in the specified area. Consequently, the balance between keeping the search pressure up and, at the same time, assigning enough resources to keep tracking capabilities at an acceptable level is the main problem in swarm detection and tracking as a service. This is illustrated in Figure 5-2, where black marks are sensor nodes (typically UAVs) with their sensor detection areas depicted in yellow; the red marker is a detection to be tracked.

The STA problem is “saturated” in terms of the number of agents if both detection and tracking efficiencies are sufficiently addressed, as illustrated in picture (a). If either the detection or the tracking performance is unsatisfactory, the STA problem is “undersaturated,” as depicted in picture (b), where precise tracking requires a high number of nodes, diluting the search pressure in the rest of the surveillance area.
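The search/track trade-off described above can be sketched numerically. This is a minimal illustration under assumed models (independent per-node detections, a saturating tracking-quality curve, and a simple weighted objective), not a method prescribed by the report:

```python
def detection_prob(n_search: int, p_node: float) -> float:
    """P(at least one searching node detects) = 1 - (1 - p) ** n,
    assuming independent per-node detections of probability p."""
    return 1.0 - (1.0 - p_node) ** n_search

def tracking_quality(n_track: int, n_required: int) -> float:
    """Saturating model: quality reaches 1.0 once n_required nodes hold the track."""
    return min(1.0, n_track / n_required)

def best_split(n_nodes: int, p_node: float, n_required: int, w_track: float = 0.5) -> int:
    """Number of tracking nodes k that maximizes the weighted detection/track score.
    Assigning k nodes to tracking leaves n_nodes - k searching."""
    def score(k: int) -> float:
        return ((1.0 - w_track) * detection_prob(n_nodes - k, p_node)
                + w_track * tracking_quality(k, n_required))
    return max(range(n_nodes + 1), key=score)
```

With, say, 10 nodes of per-node detection probability 0.3 and a track that needs 3 nodes, the weighted optimum assigns just enough nodes to saturate the track while keeping the rest searching, which is the “saturated” regime of picture (a); demanding many more tracking nodes than that pushes the system into the “undersaturated” regime of picture (b).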

Figure 5-2: Swarm Search and Task Allocation.

5.2.2 Swarm Communication

The lack of an ability to share information between nodes could severely limit system responsiveness and restrict the user’s real-time situational awareness and control.

5.2.2.1 Full System Communication

In this case, the user would be able to stay fully informed of all detections and object tracks and could give real-time feedback on the load balancing between the allocated detection and tracking resources. The system could either be controlled by a central computer, or control could be distributed between nodes in order to handle the system complexity of optimal node placement.

5.2.2.2 Limited System Communication or Stealth Operation

In this case, the user could not expect to be fully informed in real time, but would rather have to piece together a delayed picture of detections and object tracks as nodes go in and out of communication range throughout the surveillance area. The user would have to rely on pre-programmed behaviors for dynamically balancing the load between detection and tracking, and would have to accept some latency in user feedback when adjusting and updating system operation parameters. In the case of limited communication, the system would have to rely on distributed autonomy for the dynamic placement of nodes. Furthermore, if stealth operation is required, the user would enforce limited communication.

A more detailed analysis of human-swarm interaction is described in Section 5.3.4.

5.2.3 Swarm Detection and Tracking as a Service

In the following, we assume a given set of swarm nodes with heterogeneous information sharing and processing capabilities, and focus on the high-level user interaction.

5.2.3.1 User Specification

The main parameters for the user to specify in swarm detection and tracking as a service are:

• A specification of available resources;

• A specification of area of interest; and

• A specification on objects of interest and conditional behaviors.

These key parameters are discussed in more detail below.

Available Resources

The specification of available resources would mainly be summed up by the number of accessible nodes in the total swarm system. Each node would have to be specified in terms of hardware and software performance, including computational and communication abilities, in order to determine the level of system node heterogeneity.

Area of Interest

The specification of area of interest would typically be a bounded area on a map of the operational environment. The search area could either be an enclosed area with strict demarcation lines or more graded as in a heat map indicating level of interest.

Objects of Interest and Conditional Behavior

The specification of the objects of interest would typically be given by parameters related to the employed detection algorithm; these would usually be trained neural nets, thresholds in pattern recognition, etc. Furthermore, a specification of the importance of the various objects would also be required. This weighted object list, along with a specification of the related conditional behaviors, would provide the basis for the expected collective behavior of the swarm system and set the dynamic balance between search and tracking performance. One could also envision a more advanced system that allows for real-time (un-)supervised learning when updating the object list.
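The three specification blocks above, available resources, area of interest, and objects of interest with conditional behaviors, might be captured as plain data structures. The field and type names in this sketch are illustrative assumptions, not an interface defined by the report:

```python
from dataclasses import dataclass, field

@dataclass
class NodeSpec:
    node_id: str
    has_camera: bool = True   # sensor fit (illustrative)
    can_relay: bool = True    # communication ability (illustrative)

@dataclass
class ObjectOfInterest:
    label: str                # class the detector is trained to recognize
    importance: float         # weight in the search/track balance
    on_detect: str = "track"  # conditional behavior, e.g., "track" or "report"

@dataclass
class SwarmTaskSpec:
    resources: list[NodeSpec]                     # available resources
    area: list[tuple[float, float]]               # polygon vertices of the area of interest
    objects: list[ObjectOfInterest] = field(default_factory=list)

    def priority(self, label: str) -> float:
        """Importance weight for a detected object class (0.0 if unlisted)."""
        for obj in self.objects:
            if obj.label == label:
                return obj.importance
        return 0.0
```

The weighted object list would then drive the dynamic search/track balance, and the spec itself is what the user revises in the feedback loop described next.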

5.2.3.2 Swarm System Operation Feedback

The swarm system feeds back to the user an updated object detection and track list, together with quality assessments of key operational system parameters such as node status (position, energy, sensor/effector, etc.), network connectivity, and current behaviors.

5.2.3.3 User Operation Feedback

The user, in turn, feeds back updates of the user specification: revisions of the committed resources, the operation area, and the object/behavior list.

The complete swarm system for detection and tracking as a service is depicted in Figure 5-3.

5.2.3.4 System Simulation / Shared Cognition

The expected collective performance based on the user specification could be estimated using a system simulator. This simulator would mainly be used to highlight two important aspects of system performance: firstly, as a model of current cognition and situational awareness and, secondly, as a tool for predicting the consequences of user specification parameters. The user would typically experiment with user input (resources, area, objects/behaviors) until some performance criterion is satisfied. The simulator would also function as the shared cognition between user and swarm system, acting as a distributed “database” for the swarm system when operating in autonomous mode in communication-limited environments. This added simulator/shared cognition component is depicted in Figure 5-4.

Figure 5-3: Diagram Illustrating High-Level Swarm Detection and Tracking as a Service.

Figure 5-4: Swarm Detection and Tracking as a Service with an Added Simulation/Shared Cognition Component for Estimating the Consequences of Updating System Parameters and as a Dynamical Database for Current Cognition.

5.3 HUMAN-SWARM INTERACTION

5.3.1 Human-Swarm Interaction Challenges

Human-swarm interaction raises non-trivial challenges concerning:

• Situation Awareness and Out-of-the-Loop Performance Problems: situation awareness when working with autonomous systems is critical for ensuring that they are operating in ways that are consistent with operational goals.

• Optimal Workload Levels (irony of automation): autonomy often increases workload during high-workload phases of the mission and decreases it during low-workload phases.

• Integrating Human and Autonomous Decision Making: evidence shows that people actually take in system assessments and recommendations, which they combine with their own knowledge and understanding of the situation.

• Informed, Situational Trust in Autonomy: trust is a function of not just the overall reliability of the system, but also a situationally determined assessment of how well it performs particular tasks in particular situations. Appropriate calibration of trust in autonomy is critical.

Moreover, operators working with autonomous systems have to answer a number of questions to properly oversee the system and to determine when interventions or shifts in level of autonomy are needed:

• How much confidence to place in the autonomous system?

• Is the autonomous system working properly?

• Is it getting good data?

• Is it operating within the envelope of situations it is programmed to handle?

• Will the system’s actions meet the operational goals?

5.3.2 State of the Art

Current command and control systems for UxVs are dedicated to a given kind of UxV and are typically poorly integrated into the overall C2 system.

In the platform-centric architecture currently adopted, each single unmanned vehicle is typically (tele-)operated by a pair of operators: 1) One operator controls the motion of the UxV; and 2) The other controls the mission payload(s). Typically, only automatic devices are supported, meaning that the operator must decide on almost all the tasks an unmanned vehicle shall carry out. This results in a very heavy workload on operators, even with very few, typically only one, unmanned vehicles to control, and for a very specific task scope.

Obviously, such a solution is only viable for well-determined and well-scoped tasks assigned to unmanned vehicle nodes.

It is worth noting that the operator’s trust in the vehicle to be controlled is currently one of the key barriers to adopting vehicles/systems with a high degree of autonomous decision making.

5.3.3 Symbiotic Human-Swarm Teaming

5.3.3.1 Overall

Taking into account the questions listed above, this section focuses on two key features to be addressed when designing a swarm system that should provide effective interaction with human operators:

• Flexible Autonomy, which will allow the control of tasks, functions, sub-systems, and even the entire swarm, to pass back and forth over time between the human and the autonomous system, as needed to succeed under changing circumstances. Many UxV functions will be supported at varying levels of autonomy: from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.

• Shared Situation Awareness, which is needed to:

• Ensure that the swarm and the human are able to align their goals;

• Track function allocation and re-allocation over time;

• Communicate decisions and courses of action; and

• Align their respective tasks to achieve coordinated actions.

The following paragraphs describe Symbiotic Human-Autonomy Teaming [4], an innovative paradigm that will be evaluated for its ability to directly support high levels of shared situation awareness between the operator and the swarm, creating the situationally relevant informed trust, ease of interaction and control, and manageable workload levels needed for future swarm mission success.

5.3.3.2 The Human-Swarm Teaming Vision

Effective teaming between operator(s) and autonomy will need to be designed into future swarm systems. This is for two reasons:

• First, because it is unlikely that swarm systems will have the capability to act in a fully autonomous manner and deal with the full range of mission, environmental and adversarial situations facing them.

• Second, because command and control (C2) is essential for any effective military operation, and there will always be a need for controlling swarm systems (if only at a task/mission level), assessing their task/mission success, and coordinating with other forces in the mission space.

Following this vision, swarms will be designed to serve as part of a collaborative team (see Figure 5-5) with human operator(s), both at the C2 station and “in the field”. It is worth noting that a C2 station can be either fixed or mobile, e.g., on board a ground vehicle. In the following, the terms “Swarm,” “Autonomy,” and “Autonomous System” will refer to the system hosting the Autonomous Cooperating Platform (ACP), i.e., the ACP enhanced with the set of functions described in this document for the provision of the Human-Swarm Teaming capability.

5.3.3.3 An Innovative Paradigm: Symbiotic Human-Swarm Teaming

Instead of following paradigms that created brittle automation, with limited capabilities and limited consideration of human operators, this section will explicitly focus on synergistic operator(s)-swarm teams.

This new paradigm directly supports high levels of shared situation awareness between the operator(s) and the swarm and aims to create:

1) Situationally relevant informed trust;

2) Ease of interaction and control; and

3) The manageable workload levels needed for mission success.


Figure 5-5: The Human-Swarm Teaming Vision.

5.3.3.4 Symbiotic Teaming: A Possible Scenario

Figure 5-6 depicts a possible scenario where Symbiotic Human-Swarm Teaming could make the difference. Let’s consider a convoy protection scenario where the convoy is also equipped with a swarm composed of both UGV and UAV squads, each acting as a sensorial resource to discover the presence of threats in advance. The operators of each vehicle share the swarm, and the effective control of a given squad is coordinated by a site control center acting as arbiter; this is very similar to what is envisaged by the NATO GVA for the sharing of a vehicle’s resources among the hosted operators. If at a given time the operational scenario requires a squad of dismounted soldiers to be activated, e.g., to neutralize an enemy, then this squad could request control of a squad composed of several different UxVs acting as a single autonomous system, which supports the soldier squad, e.g., by providing images of the interior of the building where the enemy could be hidden.
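The arbitration pattern in this scenario can be sketched as a small bookkeeping class. The following is a purely illustrative sketch (class, method and node names are hypothetical, not part of the NATO GVA or of this architecture), in which the arbiter grants exclusive control of a UxV squad to one requesting node at a time:

```python
class SquadControlArbiter:
    """Illustrative arbiter role: grants exclusive control of a UxV
    squad to one requesting node (e.g., a vehicle operator or a
    dismounted soldier squad) at a time."""

    def __init__(self):
        self._controller = {}  # squad id -> currently controlling node

    def request_control(self, squad, requestor):
        """Grant control if the squad is free (or already held by requestor)."""
        holder = self._controller.get(squad)
        if holder is None or holder == requestor:
            self._controller[squad] = requestor
            return True
        return False  # squad already controlled: hand-over must be coordinated

    def release_control(self, squad, requestor):
        """Hand the squad back so another node may request it."""
        if self._controller.get(squad) == requestor:
            del self._controller[squad]
```

Under this sketch, a dismounted soldier squad obtains control of a supporting UxV squad only after the current holder releases it, mirroring the coordinated hand-over depicted in the figure.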

Each operator will require a different set of swarm capabilities and level of autonomy, typically related to the criticality of the task to be performed: the more critical the task, the less autonomy will be delegated to the swarm. For example, patrolling the convoy path will be characterized by a higher level of autonomy than inspecting a building where an armed enemy could be hidden.

In both cases the human operator’s situational awareness will be improved and typically shared with the other operational nodes involved, e.g., soldier squads, vehicles, and the site control center.


Figure 5-6: Symbiotic Teaming: A Possible Scenario.

The following paragraphs briefly describe the key system capabilities the Symbiotic Human-Swarm Teaming is based upon.

5.3.3.5 Flexible Autonomy

5.3.3.5.1 Description

Flexible autonomy allows the control of tasks, functions, sub-systems, and even the entire swarm to pass back and forth over time between the human and the autonomous system, as needed to succeed under changing circumstances.

Depending on the current operational scenario, on the one hand the human operator can select both:

1) Different levels of control, as defined in Section 5.3.3.5.2.2; and

2) The safety level at which the swarm shall operate, as defined in Section 5.3.3.5.2.4.

On the other hand, swarm functions may be supported by varying levels of both:

1) Decisional autonomy, as defined in Section 5.3.3.5.3; and

2) Human behavior interpretation, as defined in Section 5.3.3.5.2.3.

Figure 5-7 depicts a possible scenario where each autonomous system function may assume a different level of autonomy depending on the specific operational scenario, here identified by the time axis.
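Using the five autonomy values from the Figure 5-7 legend, a per-function autonomy profile of this kind can be sketched as follows. The sketch is illustrative only: the function names are taken from the figure, but the profile values and the helper function are assumptions, not part of the architecture.

```python
from enum import IntEnum

class LevelOfAutonomy(IntEnum):
    """The five levels from the Figure 5-7 legend."""
    FULLY_MANUAL = 0
    AUTOMATIC = 1
    SITUATIONAL_AWARENESS_SUPPORT = 2
    DECISION_AIDING = 3
    FULLY_AUTONOMOUS = 4

# Illustrative snapshot: each function may sit at a different level
# at the same point on the mission timeline.
profile = {
    "data_fusion": LevelOfAutonomy.FULLY_AUTONOMOUS,
    "object_recognition": LevelOfAutonomy.DECISION_AIDING,
    "guidance": LevelOfAutonomy.SITUATIONAL_AWARENESS_SUPPORT,
    "obstacle_avoidance": LevelOfAutonomy.AUTOMATIC,
    "strike": LevelOfAutonomy.FULLY_MANUAL,  # human decision for use of force
}

def set_level(profile, function, level):
    """Shift control of a single function between human and autonomy."""
    updated = dict(profile)
    updated[function] = LevelOfAutonomy(level)
    return updated
```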

[Figure 5-6 shows the Site Control Center acting as Arbiter, with the Mobile Control Centers and the Dismounted Soldier Squad acting as Requestors, linked through Coordinated Common Control, Flexible Autonomy, Shared Situation Awareness, and Control Hand-over.]


Figure 5-7: A Possible Scenario of Flexible Autonomy.

The operator will be able to make informed choices about where and when to invoke autonomy based on:

1) Considerations of trust;

2) The ability to verify its operations;

3) The level of risk and risk mitigation available for a particular operation;

4) The operational need for the autonomy; and

5) The degree to which the system supports the needed partnership with the human.

This shifting of control can depend on a number of factors, as shown in Figure 5-8.

Figure 5-8: Autonomy Use Shifts Dynamically Based on Situational Factors [4].

[Figure 5-7 plots the Level of Autonomy over time for the Data Fusion, Object Recognition, Guidance, Obstacle Avoidance, and Strike functions. Level of Autonomy values: Fully Manual = 0; Automatic = 1; Situational Awareness Support = 2; Decision Aiding = 3; Fully Autonomous = 4.]


In certain limited cases the system may allow the autonomy to take over automatically from the operator, for example: 1) When timelines are very short; or 2) When loss of life is imminent. However, note that human decision making for the exercise of force with weapon systems remains a fundamental requirement.
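The take-over conditions above can be expressed as a simple guard. The sketch below is illustrative; in particular, the reaction-time threshold is an assumed placeholder value, not a figure from this report.

```python
def autonomy_may_take_over(time_to_react_s: float,
                           loss_of_life_imminent: bool,
                           involves_weapon_release: bool,
                           reaction_threshold_s: float = 2.0) -> bool:
    """Automatic take-over only on very short timelines or imminent
    loss of life, and never for the exercise of force with weapons."""
    if involves_weapon_release:
        # Human decision making remains a fundamental requirement.
        return False
    return loss_of_life_imminent or time_to_react_s < reaction_threshold_s
```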

It is worth noting that in the following we adopt the term Robotic and Autonomous System (RAS), or simply “the system,” to refer interchangeably to a (portion of a) swarm or to a single robot, i.e., a single element of a swarm. This is due to the reflective nature of the swarm capabilities defined in the following.

5.3.3.5.2 Flexible Human-Swarm Interaction Capabilities

5.3.3.5.2.1 Capability Description

The ability of the Robotic and Autonomous System (RAS) to adaptively interact both cognitively and physically with users. The interaction may be as simple as the use of a communication protocol, or as advanced as the ability: 1) To work interactively with people as if it were a person; or 2) To interpret human commands delivered in natural language or gestures. The adopted interaction level shall adapt to the specific operative conditions, i.e., the RAS shall guarantee safe interaction with humans: it is the system operation as a whole that expresses the level of safety for the assigned task. It is worth noting that this study does not specifically address Human-Swarm Interaction Safety, but for the sake of completeness it provides a brief description of the key definitions of the different levels of safety that characterize swarm operation when interacting with a human operator.

5.3.3.5.2.2 Swarm Control Capability Levels

The swarm as a whole, or any portion of it down to a single element, shall be able to move among a range of Swarm Control levels. A possible set of Swarm Control capability levels is listed below [5]. The wider the supported range, the more adaptive the swarm system.

Level 1 – Direct Control

The user provides control of the RAS moment to moment. The RAS can translate, alter, or block these controls within parameters set by the user or system. The user controls are in the form of parameters that alter the control of the robot. These parameters may be continuous quantities, for example a steering direction, or binary controls.

Level 2 – Direct Physical Interaction

The user controls the RAS by physically interacting with it. The RAS reacts to the user interaction by feeding back physical information to the user via the contact point. For example, the user teaches a motion sequence to the RAS, or feels the surface of an object the RAS is in contact with.

Level 3 – Position Selection

The RAS is able to execute pre-defined actions autonomously. The user selects the subsequent action at the completion of each action. For example, a RAS is able to move between defined waypoints in its environment or carry out a fixed action such as releasing an object, as commanded by the user.

Level 4 – Traded Autonomy

The RAS can operate autonomously during some parts of a task or in some tasks. Once this task or sub-task is complete the user will either select the subsequent task or intervene to control the system by direct interaction to carry out a task. This results in alternating sequences of autonomous and direct control of the RAS by the user.


Level 5 – Task Sequence Control

The RAS can execute sub-tasks autonomously; these sub-tasks involve a higher level of decisional autonomy than the predefined actions in Level 3. On completion of a sub-task, user interaction is required to select the next sub-task, resulting in a sequence of actions that make up a completed task.

Level 6 – Supervised Autonomy

The RAS can execute a task autonomously in most operating conditions. The system can recognize when it is unable to proceed or when it requires user input to select alternative strategies or courses of action. These alternatives may involve periods of direct control.

Level 7 – Task Selection

The RAS can autonomously execute tasks but requires the user to select between strategic task alternatives to execute a mission.

Level 8 – Mission Goal Setting

The RAS is able to execute tasks to achieve a mission. The user can interact with the system to direct the overall objectives of the mission.
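The eight levels above form an ordered scale, and a swarm (or any portion of it) advertises the range of levels it supports. The following sketch is illustrative (the class name and range-checking logic are assumptions, not part of [5]):

```python
from enum import IntEnum

class SwarmControlLevel(IntEnum):
    """The eight Swarm Control capability levels listed above."""
    DIRECT_CONTROL = 1
    DIRECT_PHYSICAL_INTERACTION = 2
    POSITION_SELECTION = 3
    TRADED_AUTONOMY = 4
    TASK_SEQUENCE_CONTROL = 5
    SUPERVISED_AUTONOMY = 6
    TASK_SELECTION = 7
    MISSION_GOAL_SETTING = 8

class ControllableUnit:
    """A swarm, squad, or single element with a supported range of
    control levels; the wider the range, the more adaptive the unit."""

    def __init__(self, supported):
        self.supported = frozenset(SwarmControlLevel(l) for l in supported)
        self.level = min(self.supported)  # start at the lowest supported level

    def set_level(self, level):
        level = SwarmControlLevel(level)
        if level not in self.supported:
            raise ValueError(f"{level.name} is outside the supported range")
        self.level = level
```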

5.3.3.5.2.3 Human Behavior Interpretation Capability Levels

The swarm as a whole, or any portion of it down to a single element, shall be able to move among a range of Human Command Interpretation levels. A possible set of Human Command Interpretation capability levels is listed below [5]. The wider the supported range, the more adaptive the swarm system.

Level 1 – Fixed Interaction

Interaction between the user and the RAS follows a fixed pattern. Typically, this takes place via a user interface with well-defined inputs and outputs. Fixed interaction also includes interaction via a computer-based user interface where interactions directly control the RAS according to predefined sets of commands with specific meaning. The connection between the user and the RAS may involve a wireless link. Any interpretation of commands is fixed and embedded.

Level 2 – Task Context Interaction

The RAS is able to interpret commands from the user that utilize task context semantics within a domain-specific communication framework appropriate to the range of the task. The system is able to relay task status to the user using task context semantics suitable for the task.

Level 3 – Object and Location Interaction

The RAS is able to interpret user interactions that refer to objects, locations or actions, as appropriate to the task. This includes the ability to interpret user interactions that identify objects, locations and actions, as well as processing commands that reference locations, objects, and actions relevant to the task. Dialogues are initiated by the user.

Level 4 – RAS Triggered Interaction

The RAS is able to start a dialogue with the user in a socially appropriate manner relevant to its task or mission. The RAS has a basic understanding of the social interaction appropriate to the task/mission domain. Interaction may continue throughout the operating cycle for each task as is appropriate to the task/mission.


Level 5 – Social Interaction

The RAS is able to maintain dialogues that cover more than one type of social interaction, or domain task. The RAS is able to manage the interaction provided it remains within the defined context of the task or mission.

Level 6 – Complex Social Interaction

Dialogues cover multiple social interactions and tasks, where the RAS is able to instruct the user to carry out tasks or enter into a negotiation about how a task is specified. The interaction is typified by a bi-directional exchange of commands.

Level 7 – Intuitive Interaction

The RAS is able to intuit the needs of a user with or without explicit command or dialogue. The user may communicate with the RAS without issuing explicit commands. The RAS will intuit the implied command from the current context and historical information.

5.3.3.5.2.4 Human-Swarm Interaction Safety Capability Levels

The swarm as a whole, or any portion of it down to a single element, shall be able to move among a range of Human-Swarm Interaction Safety levels. A possible set of Human-Swarm Interaction Safety capability levels is listed below [5]. The wider the supported range, the more adaptive the swarm system.

It is assumed that all of the swarm elements meet safety criteria appropriate to their operating environment with respect to electrical and battery safety requirements, typically specified by European CE marking criteria. It is also expected that appropriate safety criteria have been applied with respect to consumables used by each swarm element. For example, heated liquids, liquids under pressure, or chemical agents.

Level 1 – Basic Safety

The RAS operates with a basic level of safety appropriate to the task. Maintaining safe operation may depend on the operator being able to stop operation or continuously enable the operating cycle. The maintenance of this level of safety does not depend on software.

Level 2 – Basic Operator Safety

The RAS is made safe for the operator by physically bounding the operating space of the RAS. Access gates trigger stop commands to the RAS. The RAS will not operate unless the bounding space is closed.

Level 3 – User Detection

The RAS is informed when a user enters the work zone. The RAS operates in a safe way while the user is present in the operating zone.

Level 4 – Work Space Detection

The RAS operates within a well-defined space where a zone of safe operation is identified to the operator and programmed into the robot. While the RAS is occupying the safe zone it will control its motion such that it is safe. The system may also use sensing to detect that the user does not enter the unsafe zone.

Level 5 – Dynamic User Detection

The RAS or its support systems detect users within its operating zone and dynamically define a safe zone that envelopes the user, where the RAS controls its motion to be safe.


Level 6 – Reactive Safety

The RAS is designed to be safe under all reasonable circumstances such that if it impacts a person the impact forces are minimized below the level that may cause injury during the impact.

Level 7 – Dynamic Safety

The RAS is able to exert strong forces as a part of an interaction task with a user but recognizes when the use of these forces may endanger the user. In this case the RAS alters its motion to ensure safe operation.

Level 8 – Context Dependent Safety

The RAS is able to recognize circumstances where it needs to behave in a safe way because it is uncertain about the nature of the environment.
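Since the operator can select the safety level at which the swarm shall operate (Section 5.3.3.5.1), a minimal compliance check can be sketched as below. The enumeration names follow the levels listed above; the checking function itself is an illustrative assumption.

```python
from enum import IntEnum

class InteractionSafetyLevel(IntEnum):
    """The eight Human-Swarm Interaction Safety levels listed above."""
    BASIC_SAFETY = 1
    BASIC_OPERATOR_SAFETY = 2
    USER_DETECTION = 3
    WORK_SPACE_DETECTION = 4
    DYNAMIC_USER_DETECTION = 5
    REACTIVE_SAFETY = 6
    DYNAMIC_SAFETY = 7
    CONTEXT_DEPENDENT_SAFETY = 8

def meets_required_safety(achievable, required):
    """True if the level the RAS can achieve covers the level the
    operator selected for the current task."""
    return InteractionSafetyLevel(achievable) >= InteractionSafetyLevel(required)
```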

5.3.3.5.2.5 Capability Parameters

The selection of a given level of interaction is also modulated by parameters of the interaction. These factors can increase or decrease the difficulty of achieving levels of interaction ability:

• Interaction time: The length of time over which the interaction takes place. Longer sequences of interaction will in general be harder to achieve than shorter interaction times.

• Interaction Environment: The environment where the interaction occurs will also affect the difficulty. Interactions in controlled environments will be easier than interactions taking place in work or everyday environments where the RAS needs to focus attention on the user. Highly dynamic or hazardous environments will also significantly affect the interaction.

• User expectation: The level of expectation of the user, the level of user experience and training will impact on difficulty. Trained users able to understand how to command the RAS and users that have realistic bounded expectation, or experience, will reduce the difficulty in achieving a particular level of ability.

5.3.3.5.3 Flexible Decisional Autonomy Capability

5.3.3.5.3.1 Capability Description

The ability of the RAS to act autonomously. Nearly all systems have a degree of autonomy. It ranges from the simple motion of an assembly stopped by a sensor reading, to the ability to be self-sufficient in a complex environment.

5.3.3.5.3.2 Capability Levels

The swarm as a whole, or any portion of it down to a single element, shall be able to move among a range of Decisional Autonomy levels. A possible set of Decisional Autonomy capability levels is listed below [5]. The wider the supported range, the more adaptive the swarm system.

Level 1 – Basic Action

The RAS executes a sequence of actions that are unaffected by the environment and decides to proceed to the next action step based on the positions of its actuators.


Level 2 – Basic Decisional Autonomy

The RAS makes decisions based on basic perceptions and user input and chooses its behavior from predefined alternatives.

Level 3 – Continuous Basic Decisional Autonomy

The system alters the parameters of a behavior in response to continuous input from perceptions or based on input control from a user interacting continuously with the system. The system may be able to override or ignore user input when certain criteria are encountered.

Level 4 – Simple Autonomy

The system uses perception to make moment to moment decisions about the environment and so controls interaction with the environment to achieve a predefined task.

Level 5 – Task Autonomy

The system utilizes its perception of the environment to sequence different sub-tasks to achieve a higher level task. For example, cleaning a room based on a self-constructed room map where it returns to areas that have been missed and to a recharging station when the battery runs low. The events that cause behavioral changes are external and often unpredictable.

Level 6 – Constrained Task Autonomy

The system adapts its behavior to accommodate task constraints. These might be negative impacts such as failed sensors, or the need to optimize the use of power or other physical resources the process depends on (water, chemical agents, etc.). Alternatively, these might be constraints imposed by sensing ability, the environment or the user.

Level 7 – Multiple Task Autonomy

The system chooses between multiple high-level tasks and can alter its strategy as it gathers new knowledge about the environment. It will also consider resource limitations and attempt to overcome them.

Level 8 – Dynamic Autonomy

The system is able to alter its decisions about actions within the time frame of dynamic events that occur in the environment so that the execution of the task remains optimal to some degree.

Level 9 – Mission Oriented Autonomy

The system is able to dynamically alter its tasking both within and between several high-level tasks in response to dynamic real-time events in the environment.

Level 10 – Distributed Autonomy

The source for task and mission decisions can originate from outside of the system. The system can balance requests for action with its own tasking and mission priorities and can similarly communicate requests for action.


5.3.3.5.3.3 Capability Parameters

The selection of a given level of autonomy may depend on the following:

• Environmental factors: The operating environment will significantly affect the ability to achieve any particular level of decisional autonomy. In particular, cluttered, dynamic environments are more likely to affect perception and thus decision making. Extreme environments will similarly cause a reduction in the ability to make decisions.

• Decision cost: Higher levels of decisional risk and reduced recovery options will increase the confidence required to raise autonomy levels. In tactical scenarios where decisions have high cost implications, the confidence levels required in the interpretation of sense data are significantly higher.

• Time scale: The longer a system must maintain autonomous decision making the harder it will become to rise through the ability levels.

• Decision range: A system that is only required to make a small range of decisions will be more likely to have a high level of decisional autonomy.
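One way to combine these four parameters is a heuristic cap on the Decisional Autonomy level. The sketch below is purely illustrative: the equal weighting and the normalization of each parameter to [0, 1] are assumptions, not rules from this report.

```python
def max_autonomy_level(env_complexity, decision_cost, time_scale, decision_range,
                       top_level=10):
    """Illustrative heuristic: each parameter is normalized to [0, 1],
    where higher values make high autonomy harder to justify; the
    result caps the Decisional Autonomy level between 1 and top_level."""
    difficulty = (env_complexity + decision_cost + time_scale + decision_range) / 4.0
    return max(1, round(top_level * (1.0 - difficulty)))
```

For instance, a benign, low-risk, short-duration, narrow-decision task leaves the full scale available, while a hostile, high-cost, long-duration, wide-decision task collapses the cap toward Level 1.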

5.3.3.6 Shared Situation Awareness

5.3.3.6.1 Description

A high level of shared situation awareness between the operator(s) and the RAS will be critical. Shared situation awareness is needed to:

1) Ensure that the RAS and the human(s) can align their goals,

2) Track function allocation and re-allocation over time,

3) Communicate decisions and courses of action, and

4) Align their respective tasks to achieve coordinated actions.

The RAS Cognitive capability, briefly described in Section 5.3.3.6.2, is key to achieving a human-swarm common understanding of the current operational scenario and the relevant operational picture.

Communications that convey not just status information, but also the comprehension and projections associated with the situation (the higher levels of situation awareness), are critical and shall be taken into account for two-way communication between the operator(s) and the swarm.

Shared situation awareness also results in a Common Operational Picture among the operator(s) eligible for control of the RASs a swarm is composed of; e.g., C2 repositories are synchronized in real time to enable sound coordination in the case of hand-over of swarm control, as described in Section 5.3.3.4.
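A shared situation awareness message of the kind described above would carry all three situation awareness levels, not just raw status. The sketch below is illustrative (class and field names are hypothetical); it also mirrors, in miniature, the real-time synchronization of C2 repositories among eligible operator nodes.

```python
from dataclasses import dataclass

@dataclass
class SituationReport:
    """Carries all three levels of situation awareness."""
    source: str
    status: dict        # level 1: perceived state (positions, tracks, health)
    comprehension: str  # level 2: what the situation means
    projection: str     # level 3: what is expected to happen next

class C2Repository:
    """Toy common-picture store, replicated to all subscribed operator nodes."""

    def __init__(self):
        self.picture = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, report: SituationReport):
        self.picture[report.source] = report
        for notify in self._subscribers:  # push the update to other nodes
            notify(report)
```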

5.3.3.6.2 Cognitive Capability

5.3.3.6.2.1 Capability Overall Description

The ability to interpret the task and environment such that tasks can be effectively and efficiently executed even where there exists environmental and/or task uncertainty. The ability to interpret the function and interrelationships between different objects in the environment and understand how to use or manipulate them. The ability to plan and execute tasks in response to high-level commands.

Different aspects and faculties of the Cognitive Capability as a whole may have different degrees of maturity and pose different challenges. The cognitive ability of a system can be assembled and described more accurately by referring to a mixture of component abilities. Cognitive ability grows out of the framework built by the other abilities, particularly perception (see Section 5.5.1.5) and swarm-human interaction and flexible decisional autonomy (see Sections 5.3.3.5.2 and 5.3.3.5.3, respectively), and is composed of a number of underlying components:

• Interpretive capability;

• Envisioning capability;

• Learning capability; and

• Reasoning capability.

5.3.3.6.2.2 Interpretive Capability

The interpretation of sense data is key to the ability to identify, recognize, classify and parameterize objects in the environment. It particularly refers to the ability to amalgamate multi-modal data into unified high-level object descriptions that create knowledge for tasks to draw on. The ability to interpret also engages knowledge sources to build increasingly complex interpretations of the environment and human interaction, in particular building frameworks of relationships between the environment and objects and between objects.

This ability can range from Fixed Sensory Interpretation, where the RAS has a fixed interpretation of the perceptions that occur because they are pre-categorized, e.g., all sensed objects are applied to an occupancy grid and assumed to represent actual objects in the environment, up to Environmental Affordance, where the RAS is able to interpret the environment in terms of what it affords, e.g., it is able to interpret the ground conditions in a muddy field as being too unstable for the load it is carrying.

5.3.3.6.2.3 Envisioning Capability

Envisioning refers to the ability of the RAS system to assess the impact of actions in the future. This may reduce to prediction but in the higher levels involves an assessment of the impact of observed external events.

This ability can range from Motion Prediction, where the RAS is able to project the effect of its motion to predict short-term local interactions with detected objects in the environment (i.e., the RAS only has the ability to predict its motion with respect to static objects), up to Envisioning User Responses, where the system is able to envision the actions of a user responding to events in the environment.

5.3.3.6.2.4 Acquired Knowledge Capability

Operating environments will always contain several unknowns. In many proposed application areas RASs will encounter unknown objects and environments as a normal part of task execution. The acquisition of knowledge about both environments and objects is fundamental to the success of these new application areas.

This ability can range from Sense Data Knowledge, where the system is able to acquire knowledge about its environment based on sense data gathered moment to moment, up to Observation Learning, where the system is able to acquire knowledge indirectly from observing other RASs or people carrying out tasks.

5.3.3.6.2.5 Reasoning Capability

Reasoning ability is the glue that holds the cognitive structures together. Perception, knowledge acquisition, interpretation and envisioning all rely to a certain extent on the ability to reason from uncertain data. As application tasks become more complex the need to provide task and mission level reasoning increases.


This ability can range from Reasoning from sense data, where the RAS is able to make basic judgments of sense data sufficient to allow actions to be controlled, up to Task hypothesis, where the system is able to reason about the priorities of different tasks within a mission and propose priorities based on its knowledge of the mission and the tasks, i.e., the system will be able to fix on a task that must be achieved but make decisions about how tasks will sequence to achieve mission objectives.

5.3.3.6.2.6 Cognitive Parameters

The achievement of a given level of a cognitive component depends on a number of characteristics of the task and environment:

• Environment: If the environment is unstructured and contains a wide variety of objects, this will increase the difficulty in achieving cognitive ability levels. If the environment contains dynamic elements or complex relationships between objects, then this will also increase the difficulty in achieving higher levels of cognitive ability.

• Object Density: The object density of an environment refers to the number of different objects that a system will encounter simultaneously. Where there are many objects within the perception range of the system, their number will make cognition harder: the more objects there are, the harder it will be to envision, learn, interact and interpret. This parameter is orthogonal to the complexity and variety of each object.

• Prior Knowledge: The ability to achieve cognitive abilities with respect to environments, objects and interactions is strongly influenced by the level of prior knowledge about each element. Prior knowledge may range from knowledge about specific instances of an object or room, to no prior knowledge. It will always be harder to achieve a cognitive ability level where there is no prior knowledge of the elements that will be encountered.

• User Expectation: The level of user expectation and experience will impact on the perceived attainment of cognitive ability levels. Users with realistic bounded expectation, or experience, will reduce the difficulty of achieving a particular level of ability.

• Time Scale: If the time span of operation is longer then the difficulty of achieving higher levels of cognitive ability increases. Similarly, if the time scales for observation and knowledge acquisition are longer there will be an increase in difficulty levels.

• Task Risk: The difficulty in guaranteeing outcomes and the potential need for the certification of decision-making mechanisms in tasks with high levels of risk will make the attainment of high levels of ability more difficult.

5.3.4 Human-Swarm Interaction Services Description

Human-Swarm Interaction (HSI) Services provide an operator with the set of capabilities to manage and control a squad of autonomous UxVs (Figure 5-9).

Typically, these services are available at DSS Squad level, i.e., a Swarm is controlled by an assigned operator role, e.g., the Reconnaissance, Surveillance, and Target Acquisition (RSTA) Operator.

It shall also be possible to control a given Swarm from a Vehicle, via the Arbitration Services.

It includes:

• Swarm Management Services, which manage the Swarm as a System.

• Swarm Mission Control, which supports the operator(s) during mission tasks. It provides functions and tools for Human-Swarm Interaction.


The HSI Services may exchange data in both Inter-Platform and Squad Data Exchange Contexts, as described in Section 5.6.

Figure 5-9: Human-Swarm Interaction Services.

5.3.4.1 Swarm Management Service

Swarm Management Service provides functions and tools to manage the Swarm as a system.

Typical system management functions are:

• Swarm Configuration Management.

• Swarm Fault Management.

• Swarm Security Management.

• Swarm Performance Management.

The Swarm Management Services request:

• Resource Registration Services, for the registration of the Swarm to the controlling operator.

• Arbitration services to support the Swarm sharing among different operational nodes e.g., STU, Vehicle Operator; and

• NGVA Registration Services, to register to an NGVA Vehicle.

[Figure 5-9 depicts the Human Swarm Interaction Service interfaces (Swarm Mission Control, Swarm System Management, Swarm Payload Control), which use the Situational Awareness Services (Geolocation, Video Services, Tactical Sensor Control), the Data Exchange Services (STU Data Exchange Services, InterPlatform Data Exchange Services), the System Management Services (Resource Registration Services, NGVA Registration Services, Arbitration Services), and the Battlefield Management Services (Land Combat Operations, Combat Support, General Information Exchange, Mission Preparation).]


5.3.4.2 Swarm Mission Control Services

Swarm Mission Control Service provides functions and tools to support the Swarm Commander in managing the mission tasks performed via a RAS.

The key functions are related to Human-Autonomy Teaming, as described in Section 5.3.3.

Specifically, these services support:

• Flexible Autonomy, which allows control of tasks, functions, sub-systems, and even the entire swarm to pass back and forth over time between the human and the autonomous system, as needed to succeed under changing circumstances. Many UxV functions will be supported at varying levels of autonomy:

• From fully manual;

• To recommendations for decision aiding;

• To human-on-the-loop supervisory control of an autonomous system;

• To one that operates fully autonomously with no human intervention at all.

• Shared Situation Awareness, which is needed to:

• Ensure that the swarm and the human are able to align their goals;

• Track function allocation and re-allocation over time;

• Communicate decisions and courses of action; and

• Align their respective tasks to achieve coordinated actions.
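
As an illustrative sketch only (the level names and arbitration rule below are assumptions, not part of the reference architecture), flexible autonomy can be modelled as a per-function level that determines whose command is executed:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Hypothetical levels for Flexible Autonomy (names are assumptions)."""
    MANUAL = 0          # fully manual control
    DECISION_AID = 1    # system only recommends, human decides
    SUPERVISORY = 2     # human-on-the-loop, may veto
    FULL_AUTONOMY = 3   # no human intervention

def resolve_command(level, human_cmd, auto_cmd, veto=False):
    """Arbitrate between human and autonomous commands for one function."""
    if level == AutonomyLevel.MANUAL:
        return human_cmd
    if level == AutonomyLevel.DECISION_AID:
        # the autonomous command is only a recommendation
        return human_cmd if human_cmd is not None else auto_cmd
    if level == AutonomyLevel.SUPERVISORY:
        # the system acts unless the human vetoes
        return human_cmd if veto else auto_cmd
    return auto_cmd  # FULL_AUTONOMY
```

Because the level is data rather than architecture, control of a task, function, or sub-system can pass back and forth at runtime simply by changing the level.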

Swarm Mission Control Service requires the following services:

• Geolocation service, to define the correct location of the Swarm Payload Control Station(s).

• Land Combat Operation, to control the Swarm Mission in the Battlefield and coordinate with the STU Commander.

• Mission Preparation, to plan the Swarm Mission.

• General Information Exchange, to support the coordination between the Swarm Commander and the STU / Vehicle Commander.

5.3.4.3 Swarm Payload Control Service

Swarm Payload Control Service provides functions and tools to support the RSTA Operator in controlling the set of payloads that equip the Swarm.

Swarm Payload Control Service requires the following services:

• Video Service to control one or more PTZ (Pan, Tilt, Zoom) Camera(s), which equip one or more Swarm elements;

• Tactical Sensor Control services to control one or more tactical sensor(s), e.g., Laser Range Finder, which equip one or more Swarm elements; and

• Geolocation service, to define the correct location of the Swarm Payload Control Station(s).


5.4 SWARM NAVIGATION AND CONTROL AS A SERVICE

5.4.1 Preamble/Assumptions

Presenting a survey of swarm control methodology is a challenging task. To permit a meaningful discussion, we restrict the scope to multi-vehicle swarm trajectory planning and limit it further by assuming that the on-board vehicle sensors, computing capability, and the swarm communication architecture are adequate on each vehicle to accomplish the mission task. Autonomous operation of teams of cooperative vehicles often requires a supporting local or global communication network – the ability of the fleet to exchange information in a timely and reliable manner. The communication architecture defines how information is exchanged between Unmanned Vehicles (UVs) or between UVs and the central control center, as well as who is coupled to whom. Data Exchange services are described in Section 5.6. Furthermore, the vehicles are equipped with sensors that provide knowledge of the vehicles' states as well as means to perceive external objects in the environment within a given detection range, including localization services as described in Section 5.5.2. Swarm Perception services are described in Section 5.5.1.5.

5.4.2 Task Assignment

Assignments of swarm agents to destination goals must be provided by an algorithm or specified by the user. Agents, or bots, must be given areas to either search or navigate to, as described in Section 5.2.3.1. Even if an area is assigned to an agent, there are at least two questions that must be addressed: how each agent navigates to the area of interest and how each agent searches the area of interest.

5.4.2.1 Goal Assignment and Trajectory Planning

Simultaneously finding optimal trajectories and assignments of agents to target areas can be used to minimize a cost function that reflects energy consumption, time to complete a task, or other variables relevant to the mission. The more traditional method is to decouple the task assignment algorithm from trajectory planning, but it has been shown in Refs. [6] and [7] that coupling goal assignment with trajectory planning paradoxically reduces complexity. Ideally, both task assignment approaches should be services available to the user, but the decoupled approach is easier to implement.
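
A minimal sketch of the decoupled variant, assuming squared straight-line distance as the cost and a team small enough for exhaustive search (the coupled methods of Refs. [6] and [7] are considerably more involved):

```python
from itertools import permutations

def assign_goals(agents, goals):
    """Exhaustively find the agent-to-goal assignment minimizing total
    squared straight-line distance. Tractable for small teams only; in a
    decoupled architecture, the chosen goals would then be handed to a
    separate trajectory planning stage."""
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(goals))):
        cost = sum((agents[i][0] - goals[j][0]) ** 2 +
                   (agents[i][1] - goals[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm), best_cost
```

For larger teams, polynomial-time assignment methods (e.g., the Hungarian algorithm) would replace the exhaustive search.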

5.4.2.2 Search and GNC

A random search approach requires a different set of capabilities than a lawnmower-type search and elicits different modes of operation. The random search approach requires each agent to have its own sensors in order to navigate the environment, perform collision avoidance, and plan its own trajectory, which basically describes a distributed and asynchronous approach. The agent may be collaborating with other agents in the swarm, but it acts independently when generating its own random trajectories. Bio-inspired algorithms like Levy-search [8] have been demonstrated for searching moving and static obstacles, where each agent in the swarm acts independently while still communicating and collaborating with other agents in the swarm.

A lawnmower/sweeping-type search can be executed with a centralized approach, given that a communication system is in place. If each agent in the swarm is assigned to an area, then the sweeping pattern can be executed in a coordinated fashion. There are variations to the modes of operation that could be applied to different types of search and track algorithms, but we merely want to illustrate the different services, from a GNC perspective, that may be needed depending on mission parameters and modes of operation.
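
A lawnmower sweep over one agent's assigned rectangular area can be sketched as a simple waypoint generator (parameter names are illustrative):

```python
def lawnmower_waypoints(x0, y0, width, height, spacing):
    """Generate a boustrophedon (lawnmower) sweep over the rectangle
    assigned to one agent: parallel tracks `spacing` apart, traversed in
    alternating directions."""
    waypoints, y, left_to_right = [], y0, True
    while y <= y0 + height:
        if left_to_right:
            waypoints += [(x0, y), (x0 + width, y)]
        else:
            waypoints += [(x0 + width, y), (x0, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints
```

The track spacing would typically be chosen from the sensor footprint so that successive tracks overlap slightly.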

Either search method requires a means to generate trajectories and execute them.


5.4.2.3 Tracking and GNC

Navigation and control while tracking a target must include sensor and gimbal parameters in order to avoid losing track of a target. When dealing with aerial vehicles, parameters such as banking angles must be included when calculating the next control input to the platform.

5.4.3 Swarm Motion Planning Services

Motion planning, replanning, collision avoidance, and trajectory following are key components of any swarm navigation and control architecture in support of the operational scenarios described in Chapter 4.

5.4.3.1 Cooperative Mission Planning

In order to carry out single agent or cooperative tasks it is important to plan motions that are both dynamically feasible and collision-free in cluttered environments for teams of autonomous vehicles executing cooperative missions with common objectives. Task specifications must be converted into trajectory plans on how to coordinate and move within an environment. Time and path dependent trajectory instructions explain how to move. These instructions can be categorized in a hierarchical fashion, namely, path planning, velocity planning, and trajectory planning. Because the environment can be complex, cooperative missions need to generate collision-free paths. Planning strategies can be further divided into centralized and decentralized cooperative strategies.

5.4.3.2 Path Planning

Path planning involves the computation of a collision-free path from start to goal without considering the vehicle's dynamics. It is restricted to the geometric aspects of motion planning and is only possible when a map of the environment is available. If the map is decomposed into a grid, then algorithms can find paths between nodes. Dijkstra's algorithm is a shortest-path algorithm between feasible points on the grid (e.g., a road network). Modifications and variants of this algorithm underpin online driving directions. Such methods can generate cooperative path following for swarms; however, the temporal and spatial assignments are separated. These methods are computationally efficient and may be adequate for platforms traversing a path at constant speeds and/or when computational resources are limited.
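
As a sketch of the grid-based case, Dijkstra's algorithm over a 4-connected grid of traversable cells might look as follows (a minimal illustration with unit edge costs, not an operational planner):

```python
import heapq

def dijkstra_grid(free, start, goal):
    """Shortest collision-free path on a 4-connected grid.
    `free` is the set of traversable (x, y) cells; every move costs 1."""
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue  # stale queue entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if v in free and d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                prev[v] = u
                heapq.heappush(pq, (d + 1, v))
    if goal not in dist:
        return None  # goal unreachable
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With non-uniform edge weights the same structure handles terrain costs; adding a heuristic to the queue key turns it into A*.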

5.4.3.3 Velocity Planning

Velocity planning involves the computation of the velocity profile along a given path, satisfying system kinematics. Here the temporal and spatial assignments are not separated. It is clearly necessary if other objects in the environment are moving or if the speed of the vehicle is variable. Velocity planning is also computationally efficient and may provide adequate services when optimality is not a factor.
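
A common velocity-planning primitive is the trapezoidal profile: accelerate at the maximum rate, cruise at the maximum speed, then decelerate to rest. A minimal sketch, assuming a path of known length and constant acceleration limits:

```python
def trapezoidal_profile(distance, v_max, a_max, dt=0.01):
    """Sampled velocity profile along a path of length `distance`:
    accelerate at a_max, cruise at v_max, decelerate to rest. Falls back
    to a triangular profile when v_max cannot be reached."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > distance:               # triangular: never reaches v_max
        t_acc = (distance / a_max) ** 0.5
        v_peak, t_cruise = a_max * t_acc, 0.0
    else:
        v_peak = v_max
        t_cruise = (distance - 2 * d_acc) / v_max
    total = 2 * t_acc + t_cruise
    samples, t = [], 0.0
    while t <= total:
        if t < t_acc:                      # acceleration phase
            samples.append(a_max * t)
        elif t < t_acc + t_cruise:         # cruise phase
            samples.append(v_peak)
        else:                              # deceleration phase
            samples.append(max(0.0, a_max * (total - t)))
        t += dt
    return samples
```

The sampled profile integrates back to the path length, which is the consistency property a velocity planner must preserve.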

5.4.3.4 Trajectory Planning

Trajectory planning requires simultaneous movements in time and space. It encompasses path planning and is parametrized by time. It goes beyond kinematics by imposing the physical dynamics constraints and by considering the limited control inputs of the vehicles. Trajectory optimization is a powerful tool for motion planning, enabling the synthesis of a dynamic motion plan for complex swarms, although planning is more computationally expensive. Trajectory planning for time-critical applications requires simultaneous optimization in time and space, resulting in a challenging solution space.

5.4.3.5 Fast Planning and Replanning

The battlefield is a dynamic environment that requires fast response to pop-up threats, previously unknown keep-out zones, and other uncertainties that call for replanning services in real or near-real time.


Fast planning and replanning call for short-horizon planning methods which, coupled with collision avoidance algorithms, provide the resilience needed to operate in dynamic and uncertain environments.

5.4.4 Modes of Operation

A key enabling element for the realization of cooperative missions is the availability of efficient cooperative planning strategies. A planning algorithm has to deal with a complex set of constraints which influence swarm behaviors and operational modes. Decoupling methods are often used when dealing with large teams to divide the dimensionality into sub-problems, trading optimal performance for feasible solutions. Teams composed of heterogeneous agents impose additional constraints unique to each platform, thus requiring algorithms capable of incorporating a flexible framework. Centralized, decentralized, and distributed modes of operation present different challenges while offering different degrees of optimality, flexibility, scalability, resilience, and safety guarantees. Centralized planning and control with the aid of an external positioning infrastructure has been demonstrated with significantly larger swarms (up to 2066 MAVs). The numbers are lower for decentralized control with external positioning, or centralized control with local sensing. The numbers are even lower for decentralized control that does not rely on external positioning systems (~10 MAVs) [9] (Figure 5-10).

The Kilobots, a swarm of one thousand simple but collaborative ground robots, were demonstrated in [10]. The simple ground bots received an initial set of instructions from a central controller and afterwards were capable of creating their own coordinate systems and even correcting their own mistakes in a seemingly distributed manner.

Figure 5-10: Modes of Operation.

5.4.4.1 Centralized Mode

Centralized execution of these trajectories assumes an “omniscient” central computer that can execute trajectory planning online, needs to have access to the information of all vehicles in the network, and an external positioning sensing capability. This centralized framework may be desirable for some portions of the mission, most likely at the initial planning phase when there is a need to mobilize the swarm from point ‘A’ to point ‘B’ in an optimal way. The centralized framework service represents a single point of failure, and it should only be utilized when the needed supporting infrastructure is guaranteed to be present. Additionally, as the size of the swarm grows so does its complexity, thus rendering the approach potentially unscalable. A centralized-sequential mode was explored in Ref. [11] in order to reduce problem complexity while allowing planning in larger numbers in non-convex environments with heterogeneous systems. Figure 5-11 and Figure 5-12 illustrate some of the results of the approach for coordinated time of arrival.

Figure 5-11: Trajectory Snapshots of 32 Agents Moving in a 10 m x 10 m Space.

Figure 5-12: Trajectory Snapshots of 8 Heterogeneous Agents Navigating a Maze.

5.4.4.2 Decentralized Mode

Decentralized control assumes that the individual vehicles are capable of on-board planning, able to sense their environment and their neighbors locally, and to react to other vehicles' relative behavior. It can still rely on external positioning systems when available. Decentralized control methods are appealing for the coordination of multiple vehicles due to their low demand for long-range communication and their robustness to single-point failures. Moreover, more fully autonomous behavior requires higher-level coordination so that the swarm can achieve a common goal. While varying the degree of decentralized control can create different autonomous behavior, putting such visions into practice can be very challenging and is the subject of current research.

A realistic compromise is a hybrid approach which blends elements of centralized and decentralized control. One hybrid setup assigns the central unit responsibility for the mission planning and communication with the vehicles before the beginning of the mission. Subsequently, decentralized controllers embedded on board the vehicles ensure that the mission is accomplished in a safe manner by exchanging information with each other. Each vehicle must be able to react in a timely fashion to other vehicles’ failures and potentially hazardous maneuvers, without having to communicate with a central node. Thus, the centralized single point of failure mode is minimized. The hybrid theme has many possible variations that can be found in the swarming literature and has been successfully implemented in flight tests.


5.4.4.3 Distributed Mode

Distributed navigation and control are the hardest services to implement but the most scalable and robust to single failures. A distributed system must be able to perform all of its planning functions on board, based on local information from its nearest neighbors and its environment. The agent must have collision avoidance capabilities and enough computing power to generate new trajectories in response to changes in the environment while respecting the platform's dynamical capabilities. Swarm behaviors will emerge in response to environmental and platform dynamic constraints, which must be accounted for in order to achieve mission goals.

5.4.5 Collision Avoidance

Collision avoidance is a necessary capability for a single agent or multiple agents. It must be designed to deal with complex, cluttered scenarios to ensure safe operations. Agents must avoid collisions with static or moving objects in the environment, with other agents, and with sudden changes due to the dynamic nature of the environment, such as unexpected obstacles or vehicles. Spatial separation through altitude variations may be sufficient, but it is unsuitable in restricted airspace near no-fly zones or when large numbers of agents operate in confined scenarios. Reactive methods are suitable for dynamic environments because they only use the local position and velocity data of the neighboring agents and obstacles (i.e., state information). Algorithmic techniques that enable fast reaction times to guarantee safety as more information about the environment becomes available remain a challenge. Moreover, reactive methods using only local information cannot provide any global guarantees. Complete applications require algorithms that can rapidly adapt to changes in the environment, execute replanning if necessary, and scale to a large number of swarm agents.
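
A minimal sketch of such a reactive method, using only the local positions of neighboring agents to add a repulsive component to the commanded velocity (the gain and safety radius are illustrative assumptions):

```python
def repulsive_velocity(own_pos, own_vel, neighbors, r_safe, gain=1.0):
    """Reactive avoidance sketch: add a repulsive velocity component away
    from every neighbor closer than r_safe, using only locally sensed
    state information. Neighbors beyond r_safe have no effect."""
    vx, vy = own_vel
    for nx, ny in neighbors:
        dx, dy = own_pos[0] - nx, own_pos[1] - ny
        d = (dx * dx + dy * dy) ** 0.5
        if 1e-9 < d < r_safe:
            push = gain * (r_safe - d) / d   # grows as separation shrinks
            vx += push * dx
            vy += push * dy
    return (vx, vy)
```

Consistent with the text, a purely local rule like this offers no global guarantees; it would be layered under a planner that handles replanning.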

5.4.6 Trajectory Following and Disturbance Rejection

A trajectory follower is needed to track the reference open-loop trajectory generated during mission planning. The trajectory follower must be robust to uncertainties and disturbances that occur during mission execution. Each vehicle in the swarm must have at least a feedback controller to compensate for errors and to ensure safety. It must revise the plan according to internal and external state information and produce autopilot command updates to keep the vehicle on schedule relative to the updated plan while avoiding collisions within its local swarm environment. The need stems from the fact that there are uncertainties, including inaccuracies in the dynamic model used by the planner, inter-agent interactions (i.e., downwash from neighboring agents), and environmental disturbances, that cannot be known at planning time and must therefore be corrected during execution.
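
A one-dimensional sketch of such a follower, assuming a simple PD feedback law on position and velocity error (the gains are illustrative); the constant disturbance term stands in for unmodelled effects such as downwash:

```python
def follow_step(pos, vel, ref_pos, ref_vel, kp=2.0, kd=1.0):
    """PD feedback: commanded acceleration from position and velocity
    error relative to the reference plan."""
    return kp * (ref_pos - pos) + kd * (ref_vel - vel)

def simulate(ref, dt=0.05, disturbance=-0.3):
    """Track a reference trajectory (list of (pos, vel) samples) while a
    constant unmodelled acceleration disturbance acts on the vehicle."""
    pos, vel = 0.0, 0.0
    for ref_pos, ref_vel in ref:
        acc = follow_step(pos, vel, ref_pos, ref_vel) + disturbance
        vel += acc * dt   # semi-implicit Euler integration
        pos += vel * dt
    return pos
```

Despite the disturbance, the closed loop keeps the vehicle within a bounded error of the plan; an integral term or disturbance observer would remove the residual offset.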

5.5 ROBOT-ROBOT INTERACTION

5.5.1 Swarm-Centric System Organization: Concepts and Architecture

It is worth noting that in this section we adopt the term Robotic and Autonomous System (RAS), or simply the system, to refer indifferently to a (portion of a) swarm or to a single robot, i.e., the single element of a swarm. This is due to the reflective nature of the swarm capabilities, which are defined in the following.

5.5.1.1 RAS Interaction

A Swarm can interact both cognitively and physically with users or with other RASs around it. The ability to interact may be as simple as the use of a communication protocol, or as advanced as holding an interactive conversation.


Interaction depends on both the medium of interaction and on the context and flow of the interaction. The ability to interact covers three specific areas of interaction:

• Human-Swarm Interaction, which is described in Section 5.3.

• RAS-RAS Interaction, which is described in this section.

• Human-Swarm Interaction safety, which is not specifically addressed by this study, but it is briefly described in Section 5.3.3.5.2.4 for completeness.

This section addresses the RAS-RAS Interaction, which foresees different levels of complexity [5].

5.5.1.2 RAS-RAS Interaction Capability Levels

The following set of levels relates to the interaction between RASs in carrying out a task or mission. No distinction needs to be made between separate RASs that communicate and systems of dependent RASs that carry out a task. However, there is a distinction between systems that rely on a central controller and those that use distributed decision making.

A swarm can be composed of elements with different levels of RAS-RAS Interaction. The higher the interaction ability level of the set of RASs composing a swarm, the more effective and adaptive the resulting system.

Level 1 – Communication of Own Status

Two or more RASs communicate basic status information and task specific status. Status information is predefined for the task. The information communicated only relates to the state of the RAS within the task.

Level 2 – Communication of Task Status

Two or more RASs are able to communicate information about the task they are performing in terms of task completion, time to completion, and information about task barriers, resources, etc. This information is at a high level and will impact on the planning of a common task, or tasks in a common space.

Level 3 – Communication of Environment Information

Two or more RASs share information about their local environments or share wider scale information that they have acquired or been given. The RASs are able to assimilate the information and extract task relevant knowledge from it.

Level 4 – Team Communication

Two or more RASs are able to communicate task level information during execution of the task such that it is possible to implement dynamic planning between the RASs in the team. Each RAS carries out its own tasks with awareness of the other RASs in the team.

Level 5 – Team Coordination

Two or more RASs are able to collaborate to achieve a task outcome that could not be achieved by either RAS alone, or by each RAS operating independently.

Level 6 – Capability Communication

RASs are able to communicate their own task capabilities and utilize cooperative working between teams of heterogeneous RASs where there is no prior knowledge of the composition of the team.


5.5.1.3 RAS-RAS Interaction Parameters

RAS-to-RAS interaction is governed by the parameters of the interaction channel. At a basic level this is governed by the standard communication channel parameters of:

• Communication bandwidth.

• Communication latency.

• Noise levels.

The values of these parameters are fundamentally governed by the data exchange services, as described in Section 5.6, and networking services as described in Section 5.7, which in turn will be determined by the environment of operation for each task.

The level of achievement in RAS-to-RAS interaction is also modulated by the level of generalization in the task being undertaken. For tasks that are specific and well-defined, it is easier to achieve the higher levels of ability. Similarly, in systems with a central control node, task-specific communication mechanisms are likely to have been designed in.

5.5.1.4 Swarm Cooperation

Swarm Cooperation is a basic capability for collectively carrying out a task or mission. Typically, a command issued by the C2 system results in a set of tasks to be performed by either a human, a set of intelligent nodes (a swarm), or both.

In a swarm system, Cooperation is based on self-organization and clustering, both implemented via feedback between each swarm element and the environment. It is worth noting that the “environment” of a given swarm node also includes the other nodes it is clustered with. As depicted in Figure 5-13, feedback task management is composed of the following key steps:

• Distribution, where each swarm node updates its own public state, e.g., sensor data and status, in a global data space (see Section 5.6);

• Correlation, where the elementary data gathered by subscription to the global data space are correlated, as described in Section 5.5.1.5, to achieve higher-level information which feeds the Decision step; and

• Decision, which coordinates with other swarm nodes via the Dynamic Task Assignment protocols to activate appropriate node behaviors to carry out a (cooperative) mission task. It is worth noting that the execution of a given task typically changes the public state of each involved node, i.e., task execution outcomes act as feedback on the other elements of the swarm.

Thus, the swarm system acts as a network of autonomous entities, each capable of flexible decision making, as described in Section 5.3.3.5.3, that coordinate to perform (cooperative) tasks as assigned by the high-level commands of the C2 system.
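
The Distribution/Correlation/Decision cycle can be sketched as follows; the node fields, the global data space as a plain dictionary, and the "highest battery" fitness criterion are all illustrative assumptions:

```python
class SwarmNode:
    """Minimal stand-in for a swarm node (all fields are assumptions)."""
    def __init__(self, node_id, battery):
        self.node_id, self.battery, self.task = node_id, battery, None

    def public_state(self):
        return {"id": self.node_id, "battery": self.battery, "task": self.task}

def coordination_cycle(nodes, data_space, pending_task):
    """One pass of feedback task management: Distribution -> Correlation
    -> Decision, with execution outcomes fed back into the data space."""
    # Distribution: every node publishes its public state
    for n in nodes:
        data_space[n.node_id] = n.public_state()
    # Correlation: build a swarm-level picture from the shared states
    idle = [s for s in data_space.values() if s["task"] is None]
    # Decision: assign the pending task to the fittest idle node
    best = max(idle, key=lambda s: s["battery"])
    for n in nodes:
        if n.node_id == best["id"]:
            n.task = pending_task
    # Feedback: the changed state is visible to the swarm next cycle
    for n in nodes:
        data_space[n.node_id] = n.public_state()
```

In a real system the data space would be a publish/subscribe service (Section 5.6) rather than a shared dictionary.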

The Dynamic Task Assignment

The Dynamic Task Assignment (DTA) is a distributed C2 function, which enables swarm nodes to coordinate to carry out (cooperative) mission tasks (Figure 5-14). The DTA is suitable for any multi-agent system that needs to operate in a coordinated manner, i.e., it can be hosted by:

• Intelligent sensors;

• SW agents at a C2 node; and

• Heterogeneous unmanned vehicles.


Figure 5-13: Swarm Coordination Cycle.

Figure 5-14: Cooperative Mission Task: Threat Control.

C2 Swarm nodes dynamically set up a heterogeneous cluster, which carries out a task contributing to the execution of a command of the C2 Cognitive segment.

The DTA provides the following functionalities:

• Task Coordination, which enables the swarm to choose the best executor of a task when a new task is generated. According to a specific fitness function (based on the agents' capabilities), each agent autonomously evaluates the best executor for the specific task, and the swarm unanimously chooses which agent or agents will execute it according to a specific leader election algorithm.

• Task Execution, which enables the swarm to be aware of the task status and the agent status at every moment of the mission, so that tasks can be managed even when external events cause task pre-emption or an agent failure.


• Task Prioritization, which enables the swarm to reassign a task when a new, higher priority task is created. In these situations, a lower priority task could be reassigned to another agent (or paused and executed later) so that higher priority tasks can be performed.

• Task Failure Detection and Recovery, which enables the swarm to manage agent failures by reassigning the failing agent's task to another agent without losing the mission's goal (Figure 5-15).

(a) The Red Vehicle Has a Failure During the Threat Management Task Execution.

(b) The Black Vehicle Changes its Task Due to the Higher Priority of the Threat Management Task.

(c) The Threat Management Task Is Performed.

Figure 5-15: Task Failure Detection and Recovery.
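
The fitness-based election behind Task Coordination and failure recovery can be sketched as below. Because every agent runs the identical computation on the shared picture, all agents reach the same (unanimous) choice without a central arbiter; the fitness function and data layout are illustrative assumptions:

```python
def fitness(agent, task):
    """Hypothetical fitness: an agent with the required capability scores
    higher the closer it is to the task location."""
    if task["needs"] not in agent["capabilities"]:
        return float("-inf")
    d = ((agent["x"] - task["x"]) ** 2 + (agent["y"] - task["y"]) ** 2) ** 0.5
    return -d

def elect_executor(agents, task, failed=()):
    """Deterministic leader election: highest fitness wins, ties break on
    the lowest agent id. Excluding `failed` ids models Task Failure
    Detection and Recovery, which re-runs the election without the
    failing agent."""
    alive = [a for a in agents if a["id"] not in failed]
    return max(alive, key=lambda a: (fitness(a, task), -a["id"]))["id"]
```

Reassignment after a failure (Figure 5-15) is then simply a re-election with the failed agent excluded.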

5.5.1.5 Swarm Perception

The processing of information from sensing systems for the situational awareness of an operator is a key issue for the success of a military mission. For cooperative unmanned swarms, lightweight, fast, reliable, and distributed algorithms are required, since it must be assumed that the processing power of a mobile node is limited to some extent. Information fusion approaches for such scenarios must also be robust against typical challenges of swarm applications, such as failure of individual nodes, transmission delays, and data losses, and must be well suited for a heterogeneous set of sensor information.

Perception of an unmanned system refers to the process of sensing, signal processing, data fusion, and cognition for situational awareness based on sensors mounted on an autonomous node. This addresses the following steps:

• Sensing and signal processing: A signal refers to a physical manifestation, which includes for instance sonar, radar, lidar, and video streams. Signal processing is the task of inferring information from a signal, for instance to detect, to localize, or to estimate other state parameters of an object. The result is typically a measurement of sensor-specific parameters, which refers to a given instant of time.

• Data Fusion: Data fusion is the process of integrating information over time or of merging provided data from multiple sensors. To this end, given sensor data is evaluated by means of a statistical model to incorporate previous knowledge in terms of probability statistics for the computation of a posterior estimate. The result usually is one or multiple ‘tracks,’ each of which refers to a distinct object of interest.

• Cognition: in this step, the current scene is to be interpreted with respect to some parameters of interest. This can include object recognition, object arrangement detection, and threat detection. This is often achieved by correlating the estimated tracks and map information with given background information. Also, classifiers or rule-based detectors can be applied. The result is some higher-level understanding of the environment including a basis for autonomous decisions on further actions.


The corresponding counterpart of unmanned perception in a swarm system comes with additional challenges since the information of multiple, spatially distributed, and possibly heterogeneous sensors must be integrated in a consistent manner:

• Data representation: the information on an object state or on the environment is often computed in terms of probability density functions and maps, respectively. For a swarm system, the exchange of data is a key challenge in profiting from the spatial distribution of sensing nodes for improved situational awareness. To this end, a common framework of data exchange services, such as the one described in Section 5.6, must be used to enable seamless multi-node data fusion.

• Fusion algorithms: multi-sensor data fusion algorithms must cope with communication delays, synchronization, sensor registration, cross-covariances of estimation errors, and data incest [12]. The optimal choice of a fusion algorithm depends both on scenario constraints, such as bandwidth, computational power of the nodes, hierarchy structure, and agility of the nodes, and on mission constraints, such as the mission task, the type of reconnaissance, and background knowledge of the scenario.

As a consequence, the concept of perception for swarm systems can be divided into:

• Mapping: If the mission goal is primarily related to an exploration and mapping task, swarm nodes are used to increase the spatial coverage of investigation. The information stored and exchanged refers to possibly overlapping map parts and the fusion goal is to estimate a consistent map of the environment.

• Localization and Tracking: This refers to scenarios where one or multiple objects of interest are to be detected, localized, and tracked. In this case, multiple swarm nodes are often employed to increase the observability of the target state. This is achieved by an exchange of data with respect to common objects in the field of view.

Both cases (Figure 5-17, Figure 5-18) have in common that the perception system and algorithms have to cope with erroneous measurements, ambiguous data, and low probabilities of detection [13]. Thus, the data representation as well as the fusion algorithms have to take multi-modal acquisition into account.

The distributed information fusion scheme for such a scenario must be highly adaptive, improving the estimation accuracy by means of transmitted and received data. To this end, Kalman filter-based approaches are often used to provide a sophisticated treatment of sensor models and object behavior in a probabilistic manner. Due to the non-linearity of the measured parameters with respect to a Cartesian coordinate system, the information can be modelled as Gaussian mixtures.
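
As a scalar illustration of the Kalman measurement-update step used in such approaches (a full distributed tracker must additionally handle cross-covariances and data incest, as noted above):

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse a prior estimate with mean x
    and variance P with a measurement z of variance R. Repeating this per
    received measurement refines one 'track'."""
    K = P / (P + R)             # Kalman gain: weight of the measurement
    x_new = x + K * (z - x)     # posterior mean
    P_new = (1 - K) * P         # posterior variance (always shrinks)
    return x_new, P_new
```

Applying the update with measurements from several spatially distributed nodes is what lets the swarm improve observability of a common target state.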

5.5.2 Localization and Mapping in Swarm Systems

Localization is determining the position of a RAS system in the environment. Localization is the very basic step that must be performed by an autonomous unmanned vehicle to be able to navigate freely in the surrounding environment and make autonomous decisions. If the RAS system is equipped with a GNSS (Global Navigation Satellite System), much of the localization problem is solved and the absolute position of the vehicle in the Earth's reference frame can be obtained [14]. However, GNSS technologies have some weaknesses, such as not functioning indoors or in obstructed areas and providing less accurate results (within several meters). Besides these disadvantages, knowing the absolute position of the vehicle is not always adequate, since most of the time localization also implies calculating the relative position of the robot with respect to the objects in the environment. Usually, the RAS system must be able to perform localization using on-board sensor data from cameras, LIDAR, etc. There are two possible scenarios for a RAS system performing localization: 1) The map of the environment is already known; and 2) There is no information about the environment. In scenario 2 (the usual case for an autonomous RAS system), in order to perform localization, the map of the environment must also be generated. The map generation process is called mapping. The process of performing both mapping and localization tasks at the same time is called Simultaneous Localization and Mapping (SLAM).

In swarm systems, each robot must perform localization to determine its own position in the environment. The relative positions between the swarm members must be calculated as well. Each RAS system can perform SLAM itself and share the position and map results with the other members of the swarm, or the RAS systems can perform SLAM together. The process of performing localization and mapping by several RAS systems collectively is called Collaborative SLAM. The presence of multiple RAS systems can increase the robustness of the SLAM estimation process, since, via sharing of information across the agents, every agent can profit from the measurements taken by the others [15].

When a RAS system knows its own position relative to the other members of the swarm and to the obstacles in the environment, it can navigate the environment without collisions. An example of a Collaborative SLAM system running on a multiple-UAV system is given in Figure 5-16.

This section mostly discusses visual-based SLAM techniques performed with mono or stereo cameras, since they are feasible for RAS systems of all kinds and sizes and provide cheaper solutions. LIDAR-based SLAM systems, for example, are unlikely to be applied on small UAVs because of size, weight, and power limitations, although they are quite suitable for UGVs.

Figure 5-16: Example Multi-Agent SLAM Scenario [16].

The main components of a single-agent SLAM system are localization (visual odometry), local mapping, and loop closure. Visual odometry is the process of determining the relative motion of the vehicle by inspecting the motion of feature points (or pixels) across consecutive image frames. Since this is a dead-reckoning approach, position errors grow as the vehicle moves. When the vehicle passes the same location and observes the same scene twice, this situation is detected and the errors in position and map are corrected. This process is called loop closure. New components such as map merging and inter-agent loop closure arise in Collaborative SLAM methods. In inter-agent loop closure, scene points observed by more than one agent at the same time can be utilized to determine the relative positions of the agents with respect to each other.
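The drift-and-correction behavior described above can be illustrated with a minimal one-dimensional sketch in Python (the functions and numbers are illustrative only; a real SLAM back-end performs pose-graph optimization rather than this even redistribution of error):

```python
# Minimal illustration of dead reckoning and loop closure.
# The vehicle integrates noisy relative motion estimates (visual odometry),
# so position error grows with distance travelled. Revisiting the start
# (a loop closure) exposes the accumulated drift, which is then
# distributed back over the trajectory.

def integrate_odometry(relative_motions):
    """Dead reckoning: accumulate relative motion estimates into poses."""
    poses = [0.0]
    for dx in relative_motions:
        poses.append(poses[-1] + dx)
    return poses

def close_loop(poses, true_loop_position=0.0):
    """On revisiting the start, spread the accumulated drift evenly
    over the trajectory (a crude stand-in for pose-graph optimization)."""
    drift = poses[-1] - true_loop_position
    n = len(poses) - 1
    return [p - drift * i / n for i, p in enumerate(poses)]

# Ground truth: 0.5 m out four times, 0.5 m back four times (a closed loop).
# Each measurement carries a +0.01 m bias, so drift accumulates.
measured = [0.51, 0.51, 0.51, 0.51, -0.49, -0.49, -0.49, -0.49]
poses = integrate_odometry(measured)       # ends 0.08 m from the true start
corrected = close_loop(poses)              # loop closure removes the drift
```

The same principle, applied to scene points observed by two different agents, is what allows inter-agent loop closures to correct the relative poses of swarm members.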


In other approaches, agents can detect and localize other agents when they observe them in image frames. Once the relative pose (rotation and translation) between two agents is known, the maps produced by these agents can be merged by transforming them using the pose information.
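The map-merging step amounts to a standard rigid-body transformation. The following Python fragment (the function names are our own illustrative choices) transforms a second agent's 2-D map points into the first agent's frame, given the relative pose between them:

```python
import math

def transform_map(points, theta, tx, ty):
    """Express map points from agent B's frame in agent A's frame,
    given the relative pose of B w.r.t. A (rotation theta, translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def merge_maps(map_a, map_b, theta, tx, ty):
    """Merged map = A's points plus B's points transformed into A's frame."""
    return map_a + transform_map(map_b, theta, tx, ty)

# Agent B is 10 m east of A and rotated 90 degrees counter-clockwise.
map_a = [(1.0, 0.0)]
map_b = [(2.0, 0.0)]                       # in B's frame
merged = merge_maps(map_a, map_b, math.pi / 2, 10.0, 0.0)
# B's point (2, 0) lands at (10, 2) in A's frame.
```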

Collaborative Visual SLAM systems can be implemented in both centralized and decentralized architectures (Figure 5-17, Figure 5-18). Since the SLAM process is computationally expensive, the earliest SLAM methods for multiple RAS systems were performed centrally on a server (mostly a Ground Control Station – GCS). In this approach, the agents send raw sensor data (here, images) to the server and all computation is performed on the server. The position and map results are sent back to the agents. In more recent centralized systems, each agent performs the real-time part of the computation (the front-end) on board and sends processed data to the server, which performs the computation-heavy back-end tasks such as optimization and fusion of all agents' data (map and position) [17]. In a decentralized architecture, all computations are performed on the agents. Both architectures rely heavily on the communication channel between the agents and the GCS. Recent attempts focus on developing distributed Collaborative SLAM systems that require less bandwidth by decreasing the amount of data shared between agents. Making the system more robust to communication failures [18] and removing the requirement for a fully connected communication network are other challenges in this topic.
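The bandwidth motivation for the front-end/back-end split can be made concrete with a toy calculation (the message layout and feature sizes below are illustrative assumptions, not values from the cited systems):

```python
# Illustrative sketch of why a front-end/back-end split saves bandwidth in
# centralized collaborative SLAM: the agent sends a compact keyframe summary
# instead of the raw image. All sizes below are toy assumptions.

RAW_IMAGE_BYTES = 640 * 480 * 3          # uncompressed VGA RGB frame

def front_end(num_features=500):
    """On-board front-end: reduce a raw frame to a keyframe message of
    feature positions (2 floats) and descriptors (32 bytes, ORB-sized)."""
    bytes_per_feature = 2 * 4 + 32
    return {"type": "keyframe", "payload_bytes": num_features * bytes_per_feature}

def back_end(keyframes):
    """Server back-end placeholder: total data to fuse from all agents
    (the optimization itself is out of scope for this sketch)."""
    return sum(k["payload_bytes"] for k in keyframes)

msg = front_end()
savings = RAW_IMAGE_BYTES / msg["payload_bytes"]   # roughly 46x less data sent
```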

Figure 5-17: Example Centralized Collaborative SLAM Architecture [19].


Figure 5-18: Example Distributed SLAM Architecture [20].

5.6 DATA EXCHANGE SERVICES

5.6.1 Data Exchange Services for Swarm-Centric Systems and Operations

5.6.1.1 Swarm Communication Needs

Swarming will work best, and perhaps will only work, if it is designed mainly around the deployment of myriad, small, dispersed, networked maneuver units which also act as a sensory organization in the battlespace. This provides stealthy ubiquity and requires information “share-ability.”

It depends completely on nimble information operations enabling swarm forces to communicate and coordinate via an all-channel network architecture which, based on innovative communication technologies and protocols, renders the ability to connect and coordinate the actions of widely distributed “nodes” in almost unprecedented ways.

This puts a premium on robust, adaptive communications that help with both the structuring and the distribution of information, enabling the swarm force to engage the enemy most of the time ‒ a key aspect of swarming.

5.6.1.2 Quality of Networking

The information “share-ability” will directly stem from “Quality of Networking”, i.e., the availability of a fully networked collaborative environment. This environment, or suite of technologies, significantly increases the utility of the information exchange, helping to avoid information overload, improve timeliness, facilitate collaboration, and create the conditions for self-synchronization. Data distribution services are all enabled by the post and smart pull approach inherent to a robustly networked environment, where the owner of information “posts” data in a “virtual place” from which any node that needs the information can retrieve it, thus decoupling the information owner from the information consumer. This approach shifts the problem from:


1) The owner of information having to identify many potentially interested parties; to

2) The individual who needs information having to identify potential sources of that information.

The second problem is a far more tractable one. This is because it is much easier for the individual who has a need for information to determine its utility than for the producer to make this judgment.

5.6.1.3 Technological Challenges

A Networked Environment which can provide the “Quality of Networking” requested by Information Age forces raises a set of information-driven challenges to be surmounted:

• Information structuring: which implies serious advances in the management of information. This refers both to improving the speed of processing and to learning how to structure flows and stocks of information more usefully. This is the more difficult challenge – it goes to the heart of the matter of how to differentiate important information from non-essential information, and to the issue of who will know what during the mission.

• Information distribution: which implies selectively distributing the information generated by each node to any other node that needs it, with guaranteed Quality of Service (QoS) even in highly dynamic scenarios subject to determined disruptive actions. These services must work seamlessly in scenarios where a large number of both fixed and mobile nodes communicate with each other, relying on a communication infrastructure composed of a heterogeneous set of networks with very different capabilities, such as VHF, Wi-Fi, Bluetooth, satellite, cable, and data bus.

• Information protection: which implies protecting one’s information against disruption by the enemy. The robustness of the communications networks that enable the operations of Information Age forces in the field must be ensured against all manner of disruption of confidentiality, integrity, and availability. This problem is crucially important because the orchestration of means strongly depends upon uninterrupted flows of information to actualize its potential. Disruption of these flows will not only render the units less effective but can also make them vulnerable to being “picked off” in detail ‒ one by one.

5.6.1.4 The Information-Centric Networking

Today’s Internet’s hourglass architecture centers on a universal network layer (i.e., IP) which implements the minimal functionality necessary for global interconnectivity. This thin waist enabled the Internet’s explosive growth by allowing both lower and upper layer technologies to innovate independently. However, IP was designed to create a communication network, where packets could name only communication endpoints. Sustained growth in e-commerce, digital media, social networking, and smartphone applications has led to the dominant use of the Internet as a (data) distribution network. Distribution networks are more general than communication networks, and solving distribution problems via a point-to-point communication protocol is complex and error-prone.

Information-Centric Networking (ICN) [21] is an alternative paradigm which emerged to overcome some intrinsic limitations of the current Internet in supporting emerging communication needs such as machine-to-machine communication and the IoT. In ICN, every piece of information (e.g., a sensor reading) is given a name which does not include references to its location. A node’s request for a specific information content is then routed toward the “closest” copy of that information, which could be stored on a server, in a cache contained in a (mobile) router, or even in another (mobile) system device, e.g., a sensor; the content is then delivered to the requesting node. With ICN, the communication network becomes aware of the name of the information that it provides, and routing decisions are made on the basis of the information name and content.


More specifically, ICN changes the semantics of network service from delivering the packet to a given destination address to fetching data identified by a given name. The name in an ICN packet can name anything – an endpoint, a data chunk in a movie or a book, a command to turn on some sensors, etc. In addition, ICN secures each single information instance itself, instead of securing the communication channels.
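The name-based retrieval idea can be sketched as follows (the network table and function are a toy illustration of the principle, not an ICN implementation):

```python
# Toy sketch of the ICN idea: a request names *what* is wanted, not *where*
# it lives, and is satisfied by the closest copy of that content, wherever
# it happens to be cached. Node names and hop counts are illustrative.

network = [
    {"node": "server",        "hops": 5, "store": {"/sensor/7/temp": "21.4"}},
    {"node": "mobile-router", "hops": 2, "store": {"/sensor/7/temp": "21.4"}},
    {"node": "peer-uav",      "hops": 1, "store": {"/sensor/3/imu": "..."}},
]

def fetch(name):
    """Route an interest by content name to the closest node holding a copy."""
    holders = [n for n in network if name in n["store"]]
    if not holders:
        return None, None
    best = min(holders, key=lambda n: n["hops"])
    return best["node"], best["store"][name]

source, value = fetch("/sensor/7/temp")
# Satisfied from the mobile router's cache (2 hops), not the origin server.
```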

Named Data Networking, which is described in Section 7.1.3.1.1.1, can be considered a representative example of ICN. The Internet Research Task Force (IRTF) established an ICN research working group in 2012.1 The following paragraphs describe key ICN-based data exchange services, adopting the already available Named Data Networking (NDN) solutions [22].

5.6.1.5 The Loose Coupling: The Publish-Subscribe Protocol

The publish-subscribe interaction scheme is receiving increasing attention and is claimed to provide the loosely coupled form of interaction required in the command and control of Information Age warfare [23]. Subscribers can express their interest in an event, or a pattern of events, and are subsequently notified of any event, generated by a publisher, which matches their registered interest. An event is asynchronously propagated to all subscribers that registered interest in that given event. The strength of this event-based interaction style lies in the full decoupling in time, space, and synchronization between publishers and subscribers:

• Space decoupling: The interacting parties do not need to know each other. The publishers publish events through an event service and the subscribers get these events indirectly through the event service.

• Time decoupling: The interacting parties do not need to be actively participating in the interaction at the same time. In particular, the publisher might publish some events while the subscriber is disconnected, and conversely, the subscriber might get notified about the occurrence of some event while the original publisher of the event is disconnected.

• Synchronization decoupling: Publishers are not blocked while producing events, and subscribers can get asynchronously notified (through a callback) of the occurrence of an event while performing some concurrent activity.
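The three decoupling properties can be demonstrated with a minimal in-process event service (an illustrative sketch only; production publish-subscribe systems such as DDS add QoS policies, discovery, and a wire protocol):

```python
# Minimal publish-subscribe event service illustrating the decoupling above:
# publishers and subscribers never reference each other (space decoupling),
# and a subscriber that registers late still receives the retained event
# (time decoupling). Class and method names are our own illustrative choices.

from collections import defaultdict

class EventService:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> callbacks
        self.retained = {}                     # topic -> last published event

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        if topic in self.retained:             # time decoupling: deliver
            callback(self.retained[topic])     # the last retained event

    def publish(self, topic, event):
        self.retained[topic] = event
        for callback in self.subscribers[topic]:
            callback(event)                    # asynchronous in a real system

svc = EventService()
received = []
svc.publish("tracks/uav-3", {"pos": (41.0, 29.0)})   # publisher acts first
svc.subscribe("tracks/uav-3", received.append)       # late subscriber
svc.publish("tracks/uav-3", {"pos": (41.1, 29.1)})
```

The late subscriber receives both the retained event and the subsequent one, without either party ever knowing the other's identity.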

The Data Distribution Service (DDS) standard, which is described in Section 7.1.3.1, can be considered a representative example of the Publish/Subscribe paradigm for real-time systems. DDS is specified by the Object Management Group (OMG), which has edited a set of standards, the most relevant being:

• Data Distribution Services [24].

• The Real-time Publish-Subscribe Wire Protocol DDS Interoperability Wire Protocol Specification [25].

• Extensible and Dynamic Topic Types for DDS [26].

• Interface Definition Language [27].

5.6.2 Data Exchange Service Description

5.6.2.1 Data Delivery

Data Delivery service provides for data transfer among two or more nodes, i.e., it supports unicast, multicast, and optionally broadcast data transfer.

1 http://trac.tools.ietf.org/group/irtf/trac/wiki/icnrg


This service is general purpose and typically supports applications such as sensor control, messaging, and system management. It does not need to be optimized to also work under constrained environmental conditions, e.g., scarcity of computing/bandwidth resources, or threats.

Data Delivery shall be able to serve: 1) Many nodes; 2) Different typologies of applications; and 3) Information flows with different levels and kinds of criticality. Sound support for Quality of Service is therefore a key feature of the protocol stacks which implement such a service.

Possible protocols for the provision of Data Delivery Services are:

• Data Distribution Services [24], [25].

• Message Queue Telemetry Transport (MQTT) [28].

• Lean Services Architecture (LSA) [29].

• HyperText Transfer Protocol (HTTP) [30].

Relevant Data Delivery Services protocols are briefly described in Section 7.1.3.1.

5.6.2.2 Streaming

The Streaming service supports the delivery of continuous flows of data.

This service supports applications such as video and audio.

Protocol stacks supporting this service shall: 1) Be time-sensitive, to maintain the time relationships between consecutive pieces of information; and 2) Support the session concept.
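The first requirement can be illustrated with the media-clock timestamps used by RTP, whose video clock runs at 90 kHz; the helper functions below are our own sketch of the principle:

```python
# Sketch of the time-sensitivity requirement: streaming protocols such as
# RTP stamp each packet with a media clock so the receiver can reconstruct
# the timing between consecutive frames. RTP's video clock rate is 90 kHz.

RTP_VIDEO_CLOCK_HZ = 90_000

def frame_timestamps(num_frames, fps, start=0):
    """Media-clock timestamps for consecutive frames of an fps stream."""
    step = RTP_VIDEO_CLOCK_HZ // fps
    return [start + i * step for i in range(num_frames)]

def inter_frame_seconds(timestamps):
    """Receiver side: recover the time gap between consecutive frames."""
    return [(b - a) / RTP_VIDEO_CLOCK_HZ
            for a, b in zip(timestamps, timestamps[1:])]

ts = frame_timestamps(4, fps=30)     # -> [0, 3000, 6000, 9000]
gaps = inter_frame_seconds(ts)       # each gap is 1/30 s
```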

Possible protocols for the provision of Streaming Services are:

• Real-Time Transport Protocol (RTP) [31].

• H.323 [32].

• Real-Time Streaming Protocol (RTSP) [33].

• Session Initiation Protocol (SIP) [34].

• Session Description Protocol (SDP) [35].

Relevant Streaming Services protocols are briefly described in Section 7.1.3.1.4.

5.6.2.3 File Transfer

The File Transfer service provides for the delivery of files, e.g., images and documents.

This service supports an ample set of applications, whenever a file must be moved between two different nodes.

File Transfer shall be reliable, in the sense that all chunks of data that a file is composed of shall be transferred and recomposed in the same order as in the original copy.
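This reliability requirement can be sketched as follows (illustrative Python; real protocols add acknowledgements and retransmission of lost chunks):

```python
# Sketch of the reliability requirement: split a file into numbered chunks,
# deliver them out of order, then reassemble by sequence number and verify
# the result against the original. Helper names are our own.

import hashlib

def split_file(data, chunk_size):
    """Number each chunk so the receiver can restore the original order."""
    return [(i, data[i:i + chunk_size]) for i in range(0, len(data), chunk_size)]

def reassemble(chunks):
    """Sort by sequence number and concatenate."""
    return b"".join(payload for _, payload in sorted(chunks))

original = b"swarm ISR imagery payload" * 100
chunks = split_file(original, chunk_size=64)
chunks.reverse()                       # simulate out-of-order delivery
restored = reassemble(chunks)
intact = hashlib.sha256(restored).digest() == hashlib.sha256(original).digest()
```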


Possible protocols for the provision of File Transfer Service are:

• File Transfer Protocol (FTP) [36].

• Secure File Transfer Protocol (SFTP) [37].

• HTTPS [38].

Relevant File Transfer Services protocols are briefly described in Section 7.1.3.2.1.

5.7 NETWORKING

5.7.1 Swarm Networking Considerations

The performance of the swarm network infrastructure directly affects the success of SS4ISR operations. The surveillance data collected by each UAV node need to be sent to the Command and Control Station with very low latency, and the availability of the swarm network system should be as high as possible.

The operating environment of swarm networks is subject to environmental conditions which may cause interruptions in the communication infrastructure. Most RF network transmissions depend on line-of-sight communication; however, line-of-sight communication cannot always be established among all nodes in operational areas. Moreover, there are numerous broadcasting systems in urban areas which may cause an unintentional jamming effect on swarm networks, leading to communication interruptions. In addition, the receivers should eliminate multipath effects of RF transmission as much as possible.

A robust and reliable Swarming Network should have the following key service features in order to increase the success and efficiency of the SS4ISR operations.

• Scalability: The network should allow the number of communication nodes to increase and decrease dynamically without the need for manual operator intervention. This brings the ability and flexibility to plan SS4ISR operations with very little network planning effort. Scalability also brings the advantage of operating over a large coverage area. The UAV nodes can calculate the maximum coverage area automatically and direct the users to spread the swarm network to its maximum area when needed. Automatic planning also enables the UAV nodes to send critical data through alternative nodes/routes in the network, resulting in very little packet/data loss.

• Security: The network transmission is broadcast over the air, and any antenna tuned to the correct receiver frequency could listen to and intercept the transmission. If an adversary has the information necessary to decode the RF waveforms and the digital data transmitted throughout the network, it would have access to the data shared in the network. Therefore, there have to be prevention mechanisms to block unwanted intrusion into the network. Using non-standard transmission data and messaging structures, together with indigenous encryption methods, should increase the security level of the swarm network.

• Electronic Warfare Resiliency: The service availability of a network is also related to its performance under electronic warfare conditions. Resiliency to jamming as well as spoofing is another key feature that affects the service availability of the network. Jamming causes the network to stop transmitting data among the network nodes and to the SS4ISR Command and Control Station. Resiliency to spoofing means that the system is difficult to hack and that it is difficult to take control of the network by outside means. From this perspective, it is also related to the Security feature described above.


• Timing and Low Latency: The delay in the network should not jeopardize the timely delivery of critical data to the SS4ISR Command and Control Station. In some scenarios, the location information of a target and the related image or video information may be time critical. The delay and timing of the data should remain within the limits of the systems. The necessary timing information and time stamps should be incorporated in the messages in order to evaluate the ISR data correctly with respect to time. This approach is known as Delay-Tolerant Networking (DTN).

• Redundancy: There have to be various redundancy mechanisms in the network in order to handle transmission interruptions, data/packet loss, etc. This can be ensured either by packet retransmission or by sending the same messages through different nodes and performing a sanity check at the receiving nodes.
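The second mechanism, duplicate transmission over different relay nodes with a receiver-side sanity check, can be sketched as follows (the message IDs, node names, and payloads are illustrative):

```python
# Sketch of redundancy via duplicate routes: the same message is sent
# through different relay nodes; the receiver deduplicates by message ID
# and sanity-checks that the copies agree.

def receive(copies):
    """copies: list of (msg_id, relay, payload), possibly with duplicates.
    Returns accepted payloads by ID, plus IDs whose copies disagree."""
    accepted, conflicts = {}, set()
    for msg_id, _relay, payload in copies:
        if msg_id not in accepted:
            accepted[msg_id] = payload
        elif accepted[msg_id] != payload:
            conflicts.add(msg_id)      # copies disagree: flag for retransmit
    return accepted, conflicts

copies = [
    (1, "uav-2", b"target at 41.01N"),
    (1, "uav-5", b"target at 41.01N"),     # duplicate via another route
    (2, "uav-2", b"battery 62%"),
    (2, "uav-7", b"battery 26%"),          # corrupted copy
]
accepted, conflicts = receive(copies)      # message 2 fails the sanity check
```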

5.7.2 Interference and Coexistence Management

A reliable and collision-free communication scheme is crucial to provide the required quality of service for data exchange in an ad hoc swarm network with multiple agents. Interference and coexistence issues are among the major challenges in both the device-to-device network within the swarm (intra-swarm) and the swarm-to-ground networks.

Interference and Coexistence Management service should be able to provide reliable and collision-free communication among multiple swarm agents while maintaining the flexible and decentralized feature of the network. This service focuses on the mitigation of the coexistence problem by utilizing a cross-layer approach that considers dynamic spectrum sharing techniques.

A spectrum efficient Interference and Coexistence Management service should be able to:

• Perform channel and medium sensing with minimum transmission of extra power.

• Perform the fusion of information for cooperative sensing of the channel.

• Work in the opportunistic nature of swarm ad hoc networks.
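The cooperative sensing fusion named above can be sketched with a simple k-out-of-n voting rule (the rule and the threshold are illustrative choices, not prescribed by this report):

```python
# Sketch of cooperative channel sensing: each agent reports a binary
# busy/free decision per channel, and a k-out-of-n fusion rule declares
# a channel busy when at least k agents agree. Agent and channel names
# are illustrative.

def fuse_sensing(reports, k):
    """reports: {agent: {channel: True if sensed busy}}.
    A channel is declared busy if at least k agents sensed it busy."""
    channels = {ch for per_agent in reports.values() for ch in per_agent}
    votes = {ch: sum(per_agent.get(ch, False)
                     for per_agent in reports.values())
             for ch in channels}
    return {ch: count >= k for ch, count in votes.items()}

reports = {
    "uav-1": {"ch36": True,  "ch40": False},
    "uav-2": {"ch36": True,  "ch40": False},
    "uav-3": {"ch36": False, "ch40": True},   # e.g., a local false alarm
}
busy = fuse_sensing(reports, k=2)
free_channels = sorted(ch for ch, b in busy.items() if not b)
```

Fusing the agents' individual decisions suppresses local false alarms while keeping per-agent sensing (and hence extra transmitted power) to a minimum.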

5.8 REFERENCES

[1] Wooldridge, M.J., “An Introduction to Multiagent Systems,” Chichester, Wiley, 2009.

[2] Hamann, H., “Swarm Robotics: A Formal Approach,” Springer International Publishing, 2018.

[3] Ijspeert, A.J., Martinoli, A., Billard, A. and Gambardella, L.M., “Collaboration Through the Exploitation of Local Interactions in Autonomous Collective Robotics: The Stick Pulling Experiment,” in Proceedings of Fifth European Conference on Artificial Life, ECAL99, Lecture Notes in Computer Science, Springer Verlag, Berlin, p. 575, 1999.

[4] United States Air Force, Autonomous Horizons, Vol. I, AF/ST TR15-01, June 2015.

[5] Eurobotics, “Robotics 2020 Multi-Annual Roadmap,” December 2016.

[6] Turpin, M., Michael, N. and Kumar, V., “Computationally Efficient Trajectory Planning and Task Assignment for Large Teams of Unlabeled Robots,” In Proc. of the IEEE Int. Conf. on Robotics and Automation, May 2013.

[7] Yu, J. and LaValle, S.M., “Distance Optimal Formation Control on Graphs with a Tight Convergence Time Guarantee,” In 2012 IEEE 51st Annual Conference on Decision and Control (CDC), pp. 4023-4028, IEEE, 2012.


[8] Flenner, A., Flenner, J., Bobinchak, J., Mercier, D., Le, A., Estabridis, K. and Hewer, G., “Lévy Walks for Autonomous Search,” Proc. SPIE 8389, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR III, 83890Z, 24 May 2012. doi: 10.1117/12.918719.

[9] Coppola, M., McGuire, K.N., De Wagter, C., and de Croon, G.C.H.E., “A Survey on Swarming with Micro Air Vehicles: Fundamental Challenges and Constraints,” Frontiers in Robotics and AI, 7(18), 2020.

[10] Harvard Wyss Institute, “A Self-Organizing Thousand-Robot Swarm,” 14 August 2014. https://wyss.harvard.edu/news/a-self-organizing-thousand-robot-swarm/

[11] Robinson, D.R., Mar, R.T., Estabridis, K. and Hewer, G., “An Efficient Algorithm for Optimal Trajectory Generation for Heterogeneous Multi-Agent Systems in Non-Convex Environments,” in IEEE Robotics and Automation Letters 3(2), pp. 1215-1222, April 2018. doi: 10.1109/LRA.2018.2794582.

[12] Govaers, F., “Enhanced Data Fusion in Communication Constrained Multi Sensor Applications,” Dissertation, Bonn, 2012.

[13] Koch, W., “Tracking and Sensor Data Fusion – Methodological Framework and Selected Applications,” Springer, 2013.

[14] Siegwart, R., Nourbakhsh, I., and Scaramuzza, D., “Introduction to Autonomous Mobile Robots,” 2nd edition, Intelligent Robotics and Autonomous Agents, 2011.

[15] Schmuck, P., and Chli, M., “Multi-UAV Collaborative Monocular SLAM,” IEEE International Conference on Robotics and Automation (ICRA), 2017.

[16] Cunningham, A., Indelman, V., and Dellaert, F., “DDF-SAM 2.0: Consistent Distributed Smoothing and Mapping,” IEEE International Conference on Robotics and Automation (ICRA), 2013.

[17] Zou, D., Tan, P. and Yu, W., “Collaborative Visual SLAM for Multiple Agents: A Brief Survey,” Virtual Reality & Intelligent Hardware, 2019.

[18] Lajoie, P., Ramtoula, B., Chang, Y., Carlone, L. and Beltrame, G., “DOOR-SLAM: Distributed, Online, and Outlier Resilient SLAM for Robotic Teams,” IEEE Robotics and Automation Letters, 2019.

[19] Karrer, M., Schmuck, P. and Chli, M. “CVI-SLAM: Collaborative Visual-Inertial SLAM,” IEEE Robotics and Automation Letters, 2018.

[20] Zhang, H., Chen, X., Lu, H. and Xiao, J., “Distributed and Collaborative Monocular SLAM for Multi-Robot System in Large-Scale Environments,” International Journal of Advanced Robotic Systems, 2018.

[21] Xylomenos, G., Ververidis, C., Siris, V., Fotiou, N., Tsilopoulos, C., Vasilakos, X., Katsaros, K. and Polyzos, G., A Survey of Information-Centric Networking Research. IEEE Communications Surveys Tutorials, 2013.

[22] Jacobson, V., Smetters, D.K., Thornton, J.D., et al., “Networking Named Content,” ACM CoNEXT 2009.


[23] Eugster, P.T., Felber, P.A., Guerraoui, R., and Kermarrec, A.M., “The Many Faces of Publish/Subscribe,” ACM Computing Surveys, 35(2), pp. 114-131, June 2003.

[24] Object Management Group, “Data Distribution Services,” 1.4, April 2015.

[25] Object Management Group, “The Real-Time Publish-Subscribe Wire Protocol DDS Interoperability Wire Protocol Specification – Version 2.2,” November 2014.

[26] Object Management Group, “Extensible and Dynamic Topic Types for DDS,” Issue 1.2, August 2017.

[27] Object Management Group, “Interface Definition Language,” Issue 4.2, March 2018.

[28] ISO/IEC 20922-2016 Information Technology – Message Queuing Telemetry Transport (MQTT) v3.1.1, June 2016.

[29] MoD UK, “Lean Services Architecture Specification,” V2, June 2015.

[30] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., and Berners-Lee, T., RFC 2616, “Hypertext Transfer Protocol – HTTP/1.1,” June 1999.

[31] Schulzrinne, H., Casner, S., Frederick, R. and Jacobson, V., “RTP: A Transport Protocol for Real-Time Applications,” RFC 1889, January 1996.

[32] ITU, H.323, “Packet Based Multimedia Communications Systems,” February 1998.

[33] Schulzrinne, H., Rao, A., and Lanphier, R., “Real Time Streaming Protocol (RTSP),” RFC 2326, April 1998.

[34] Handley, M., Schulzrinne, H., Schooler, E. and Rosenberg, J., “SIP – Session Initiation Protocol,” RFC 2543, March 1999.

[35] Handley, M., and Jacobson, V., “SDP – Session Description Protocol,” RFC 2327, April 1998.

[36] Postel, J., and Reynolds, J., “File Transfer Protocol,” RFC 959, October 1985.

[37] Ford-Hutchinson, P., “Securing FTP with TLS,” RFC 4217, October 2005.

[38] Rescorla, E. “HTTP Over TLS,” RFC 2818, May 2000.

STO-TR-SET-263 6 - 1

Chapter 6 – SYSTEM VIEW

6.1 SYSTEM VIEW DESCRIPTION

This chapter describes a possible design solution which maps each key service onto a set of system nodes and components. It is worth noting that this version of the document provides design solutions which are specific and focused on each single service.

Figure 6-1 depicts a typical design solution for a generic SS4ISR Service, which is implemented by one or more system nodes, each including components which host the service functions. One or more operator roles identify the service users.

Figure 6-1: SS4ISR Service Design Solution.

In order to keep each service solution simple and readable, it is not required to also include the set of nodes which support a service design solution but do not belong to it, e.g., infrastructure nodes such as routers, middleware, and so on.

The design solution for the infrastructure services will provide the needed descriptions.

6.2 DETECTION AND TRACKING

This section describes the system overview for swarm-compliant detection and tracking robots. We suggest a hardware architecture which enables flexible implementation of the detection and tracking software and ensures access to all the required underlying hardware. In addition, we suggest a possible architecture for the software components in a detection and tracking application.

6.2.1 Hardware Architecture

For each swarm asset or system node, a system architecture with a central companion computer is proposed. The companion computer is integrated with the platform hardware, the sensors, and the radio(s) and communication hardware. This is a prerequisite for detection and tracking in swarm systems, since the tracking software must access sensor data, communicate with other swarm assets, and read and manipulate platform-specific details, like streaming navigation estimates and providing control outputs. If all the hardware is built around a central companion computer with a complete operating system, a much broader range of sensors and communication hardware can be considered, as most vendors supply some form of a driver or API for integration that requires a standard operating system, like Linux or Windows.


SYSTEM VIEW

6 - 2 STO-TR-SET-263

In the case of commercially available UAV systems, most are tightly integrated with their sensor and their radio and do not provide a programmable companion computer. For example, most UAVs must have a stable connection to the hand-held controller in order to be able to fly, and the camera mounted on the drone is streamed via the platform down to the same controller. It is often impossible or inconvenient to mount a different sensor (other than one provided by the same supplier) because the sensor is directly integrated with the platform, the controller, and the communication link between them. It is therefore not possible to modify the sensor package, the platform, or the communication link independently. Most UAV systems also communicate one-to-one with the radio controller or an operator station and do not have any means to communicate with other UAVs. This makes most commercially available UAV systems incompatible with swarm systems.

As depicted in Figure 6-2, by separating the communication, sensor, and companion computer from the platform, the modularization of the assets guarantees the feasibility of detection and tracking in swarm systems. Specifically, the radio and sensor modules represent all communication and sensor hardware, respectively. The platform module represents the hardware of the physical robot, the autopilot, the control systems, and other software necessary for handling all of the robotic platform-specific capabilities. These hardware modules are all integrated with an on-board companion computer.

Figure 6-2: Logical Architecture for the Hardware on a Swarm-Compliant Robot.

6.2.1.1 Communication System

The communication system is responsible for implementing an interface to the underlying communication hardware. In multi-agent swarm systems, the system nodes are assumed to have the capability to communicate with other RAS, as well as with Swarm Mission Control systems, etc. From a detection and tracking perspective, the communication system should implement the ability to share detection and tracking information, both in local sensor frames and in georeferenced global frames, as well as the ability to update search plans and coverage maps.

6.2.1.2 Platform System

The platform system implements all aspects necessary to operate the robotic system. The platform is responsible for handling the on-board navigation, low-level control systems, actuators, and manipulators. In addition, the platform system implements all the interfaces to the on-board hardware which are necessary or relevant for the other system modules. For example, the remaining battery or fuel level, which is essential for higher-level task scheduling, access to the platform navigation estimates, and precise time synchronization with these low-level systems are necessary for accurate detection and tracking. An autonomous mobile asset also needs direct access to the platform's control system. In order to ensure robustness against electronic warfare, the low-level control system must implement support for external navigation. For instance, GNSS-free navigation, which can be achieved by means of visual sensors, e.g., SLAM, requires access to the sensor data and is then naturally implemented on the companion computer as an external navigation system.

6.2.1.3 Sensor System

The on-board sensor system implements the interface to all the on-board sensor payloads. This system must implement control over the sensor, e.g., orientation and/or zoom in the case of a maneuverable camera. In addition, the system must provide an interface to the sensor itself, both for reading sensor data streams and for manipulating the sensor operation mode, like camera exposure time, framerate, time synchronization triggers, etc. A generic sensor interface is difficult to specify; the sensor vendor must supply a sensor driver with a documented API that can run on the companion computer. The interoperability discussion in Section 7.2.4 describes a possible approach to cope with proprietary equipment and evolve toward an agile robotic platform as the basic element of agile swarm systems.
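One possible shape for such a driver contract on the companion computer is an abstract interface that each vendor driver implements (an illustrative sketch; the class and method names are our own, not a standard API):

```python
# Sketch of a generic sensor driver contract covering the two needs named
# above: reading data streams and manipulating the operation mode. Vendor
# drivers would subclass SensorDriver; StubCamera is a trivial stand-in.

from abc import ABC, abstractmethod

class SensorDriver(ABC):
    @abstractmethod
    def read_frame(self):
        """Return the next sensor data sample."""

    @abstractmethod
    def set_mode(self, **params):
        """Manipulate operation mode, e.g., exposure, framerate, triggers."""

class StubCamera(SensorDriver):
    """Trivial stand-in showing how a vendor driver would plug in."""
    def __init__(self):
        self.params = {"framerate": 30}
        self._counter = 0

    def read_frame(self):
        self._counter += 1
        return {"seq": self._counter, "pixels": b"\x00" * 16}

    def set_mode(self, **params):
        self.params.update(params)

cam = StubCamera()
cam.set_mode(framerate=60, exposure_ms=4)
frame = cam.read_frame()
```

Code on the companion computer then depends only on the `SensorDriver` contract, so the sensor package can be swapped without touching the platform or communication modules.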

6.2.1.4 Companion Computer

The companion computer is the central control unit inside individual system nodes. It should run a standard operating system for the broadest possible support for external radio and sensor APIs. In addition, edge processing on the nodes requires significant data processing capabilities for robust detection and tracking. As such, a companion computer would typically benefit from high floating-point performance.

6.2.2 Software Architecture

Here we describe a generic software architecture necessary to implement the proposed hardware layout, as well as some general detection and tracking modules for a swarm system. For ISR and detection and tracking applications, the software stack must implement some form of perception autonomy and some form of decision autonomy. The goal of the perception autonomy is to process raw sensor data and produce high-level information like detected objects and tracks. It can also use any other available information, like navigation estimates from the platform, information relayed by other nodes, and mission parameters specified by an operator node, to maximize its accuracy. The purpose of the decision autonomy is to translate the operator's mission goal or intention into tasks to execute, and then into a sequence of actions/decisions to perform, given the current state of the system and taking the environment, e.g., other assets, into account. The output from the decision autonomy may be communication, sensor, or control related, for example entering radio silence, activating tracking in the perception autonomy, moving the platform to a specific location, pointing the sensor in a given direction, etc. As such, the autonomy modules need a rich level of control over the sensors, radios, and autopilot systems.

6.2.2.1 Perception Autonomy

The perception autonomy module is a generic processing module which streams data from the sensors, navigation systems and other swarm assets and produces detections and tracking information.

A detection is a localized phenomenon associated with some geometric relationship with the detector like distance, direction or relative position. The detection is also associated with a detection time and optionally with other features and metadata like object class, identification or confidence.
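The detection description above maps naturally onto a small record type. The following is a minimal sketch; the field names and units are illustrative assumptions, not an interface from this report:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Detection:
    """A localized phenomenon with a geometric relationship to the detector."""
    time: float                           # detection time [s]
    distance: float                       # range to the object [m]
    bearing: float                        # direction relative to the detector [rad]
    object_class: Optional[str] = None    # optional classification label
    confidence: Optional[float] = None    # optional confidence in [0, 1]
    metadata: dict = field(default_factory=dict)  # e.g., sensor id, track id

d = Detection(time=1.0, distance=120.0, bearing=0.3,
              object_class="vehicle", confidence=0.85)
```

Keeping the optional attributes separate from the mandatory geometric ones lets heterogeneous sensors report through one common record.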

SYSTEM VIEW

6 - 4 STO-TR-SET-263

Detection modules are software components which can extract detections from raw sensor data. Each detection module is associated with an individual set of detection constraints, e.g., the expected probability of detection for a certain sensor in a given configuration. Constraints are related to the velocity of both the detected object and the detector, the distance between them, as well as the aspect angle. Each detection module is also associated with a set of environmental constraints, e.g., the background temperature and humidity can be expected to impact the performance of detecting with a thermal camera. Also, local terrain features like forests, open plains or urban environments could pose different constraints with regard to sensor and detector efficiency.

6.2.2.2 World Model

Based on the detection constraints, an autonomous asset can implement a search model in the designated search area (described in Section 2.3). The world model contains information about the current coverage and which parts are still unexplored. The system nodes use the world model as a primary input in the decision-making process. The world model contains information about all the detected objects, current tracks and environmental information. Each system node maintains its own local world model which is shared with other assets when possible. This is crucial for system robustness as individual system nodes with the capability to operate independently have a broader range of applications, e.g., operating under radio silence and under electronic warfare attacks.
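A coverage-oriented world model of the kind described above can be sketched as a small gridded structure that each node maintains locally and fuses with models shared by peers. The class, the cell-based representation and the fusion rule below are illustrative assumptions, not the report's design:

```python
class WorldModel:
    """Minimal local world model: gridded coverage of the search area plus
    last known track states. All names here are illustrative."""
    def __init__(self, width_m, height_m, cell_m):
        self.cell = cell_m
        self.cols = int(width_m // cell_m)
        self.rows = int(height_m // cell_m)
        self.observed = set()   # grid cells already covered by some sensor
        self.tracks = {}        # track id -> last known state

    def mark_observed(self, x, y):
        r, c = int(y // self.cell), int(x // self.cell)
        if 0 <= r < self.rows and 0 <= c < self.cols:
            self.observed.add((r, c))

    def coverage_ratio(self):
        return len(self.observed) / (self.rows * self.cols)

    def merge(self, other):
        # Fuse a world model shared by another node: union of coverage;
        # for tracks, the incoming state simply wins (a simplistic rule).
        self.observed |= other.observed
        self.tracks.update(other.tracks)

wm = WorldModel(100, 100, 10)
wm.mark_observed(5, 5)
```

Because each node owns its copy and `merge` is commutative on coverage, nodes can keep operating through communication outages and reconcile when links return.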

6.2.2.3 Decision Autonomy

The decision autonomy uses the world model to obtain information about other swarm assets, currently tracked objects, lost tracks and their last known status. The world model provides information about the current exploration status of the input area and the feasibility of exploring local areas in the vicinity of the system node. The decision autonomy is then responsible for balancing the detection (search) and tracking objectives, based on both the user priorities and the current feasibility of possible future actions. The output from the decision autonomy can include control commands to the low-level autopilot system, pointing the sensor payload in a different direction, changing communication modes, or reconfiguring the detection and tracking pipeline based on environmental changes.
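The balancing of search and tracking objectives can be sketched as a simple utility comparison. The weights stand in for operator priorities; the function name, weights and scoring rule are illustrative assumptions, not the report's algorithm:

```python
def choose_objective(coverage_ratio, active_tracks, w_search=1.0, w_track=2.0):
    """Utility-based arbitration between the search and tracking objectives.
    coverage_ratio is the fraction of the search area already explored;
    active_tracks is the list of currently live tracks."""
    utilities = {
        "search": w_search * (1.0 - coverage_ratio),         # unexplored area left
        "track": w_track * (1.0 if active_tracks else 0.0),  # live tracks to follow
    }
    return max(utilities, key=utilities.get)

objective = choose_objective(coverage_ratio=0.2, active_tracks=[])
```

In a real system each candidate action would also be scored against feasibility, e.g., reachability of the unexplored cells from the node's current position.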

The level of decision autonomy plays a key role in the Human-Swarm Interaction, as described in Section 5.3.3.5.

6.3 HUMAN-SWARM INTERACTION

6.3.1 System Domains

A set of domains has been identified, each addressing a given context where a swarm can be operated either by a standalone unit, e.g., an STU, or by a set of coordinating units, e.g., an STU which coordinates with a (commanding) vehicle.

For each domain this section specifies the set of system components and interactions needed to achieve a Human-Swarm Interaction service as described in Section 5.3.

The following domains have been identified:

• Small Tactical Unit Domain, which addresses the needs of a Dismounted Soldier as a node of a Squad or Team.

• Inter-platform Domain, which addresses the needs of a Soldier as a node which interacts with another platform, e.g., a Vehicle.


6.3.2 Human-Swarm Interaction in the Small Tactical Unit Domain

Figure 6-3 depicts the main components and related interconnections of the Human-Swarm Interaction (HSI) which equips a Swarm Control Team.

Figure 6-3: Human-Swarm Interaction for STU Domain.

The Swarm Control Team is typically composed of:

• Swarm Commander, who acts as the Team Commander and is equipped with a Swarm Command and Control (Swarm C2) Station.

• RSTA Operator, who operates the Swarm Payload set and is equipped with a Swarm RSTA Station.

The Swarm C2 Station hosts the following SW Components:

• Swarm Mission Control, which implements the Swarm Mission Control Services as described in Section 5.3.4.2.

• Swarm Management, which implements the Swarm Management Services as described in Section 5.3.4.1.

The Swarm RSTA Station hosts the following SW Components:

• Swarm Payload Control, which provides specific controls to operate the (set of) payloads the Swarm is equipped with, as described in Section 5.3.4.3.

The Swarm Mission Control interacts with:

• The following SW components of the STU Commander Station:

• Battlefield Management System (BMS), which commands the Swarm Mission Control on mission goals and related operational tasks to perform.

• Situation Awareness (SA), which commands the Swarm Mission Control about expected Payload usage.

• The following SW components of the Swarm RSTA Station:

• Swarm Payload Control, to command this module about the requested Payload Services, e.g., Video Services.

[Figure 6-3 diagram: the Swarm Command & Control Station (Swarm Mission Control, Swarm Management), the Swarm RSTA Station (Swarm Payload Control) and the STU Commander Station (BMS, SA, FCPS Management) exchange Command, Data and Streaming flows, under the control of the Swarm Commander, the RSTA Operator and the STU Commander. Legend: STU: Small Tactical Unit; BMS: Battlefield Management System; SA: Situational Awareness; RSTA: Reconnaissance, Surveillance, and Target Acquisition; FCPS: Fault, Configuration, Performance, Security.]


The Swarm Management interacts with:

• The following SW components of the STU Commander Station:

• Fault, Configuration, Performance, Security (FCPS) Management, which commands the Swarm Management about the system management functions, e.g., Configuration, Health Monitoring.

The Swarm Payload Control interacts with:

• The following SW components of the STU Commander Station:

• SA, to which it provides the Payload outcome in accordance with the SA requests to Swarm Mission Control.

• The following SW components of the Swarm C2 Station:

• Swarm Mission Control, which commands it on the tasks assigned to the Swarm Payload(s).

6.3.3 Human-Swarm Interaction in the Inter-Platform Domain

Figure 6-4 depicts the main components and related interconnections of the Human-Swarm Interaction which equips a Swarm Control Team operating in the Inter-Platform Domain, e.g., an STU that coordinates with an NGVA Vehicle for the control of the Swarm.

Figure 6-4: Human-Swarm Interaction for Inter-Platform Domain.

The NGVA Vehicle is equipped with the necessary components to operate the Swarm, e.g., Swarm Mission Control and Swarm Payload Control; the Vehicle Commander then coordinates with the STU Commander to acquire the control of the Swarm. This coordination procedure is supported by the following components hosted at the STU Commander Station:

• Resource Registration, which provides for Swarm registration as an NGVA Resource to the Vehicle.

• Arbitration, which provides the coordination protocol for sharing the Swarm as an NGVA Resource.

[Figure 6-4 diagram: on the STU side, the RAS Command & Control Station (Swarm Mission Control, Swarm Management), the Swarm RSTA Station (Swarm Payload Control) and the STU Commander Station (SA, Arbitration, Resource Registration) interact with the NGVA Vehicle C2 Station (SA, Arbitration, Resource Registration Protocol Control, Swarm Mission Control, Swarm Payload Control) and the NGVA Vehicle RSTA Station through Arbitration Request/Reply and Registration Request/Reply exchanges, with Command, Data and Streaming flows controlled by the STU Swarm Commander, the STU RSTA Operator, the STU Commander, the Vehicle Commander and the Vehicle RSTA Operator. Legend: STU: Small Tactical Unit; SA: Situational Awareness; RSTA: Reconnaissance, Surveillance, and Target Acquisition.]


Both listed components interact with their peer components hosted at the NGVA Vehicle, typically at the NGVA Vehicle C2 Station.

The Resource Registration component hosted at the STU Commander Station provides registration services to the Swarm Management component.

The Arbitration component hosted at the STU Commander Station provides the coordinated sharing of the Swarm to the Vehicle SA function, which in turn commands the Vehicle Swarm Mission Control about the ownership of the Swarm Control.

6.4 SWARM CONTROL AND NAVIGATION

The same assumptions described in Section 5.4.1 are still applicable in terms of communications, sensory and vehicle state information, and adequate computing capabilities for each vehicle to accomplish mission goals.

There are three different modes of swarm operation, centralized, decentralized and distributed, depicted in Figure 5-10, that call for a flexible architecture. The centralized mode requires a central station, potentially enabled by the system depicted in Figure 6-3. The STU can initially receive trajectories and mission goals from the central computer at the start of a mission. If the swarm is operating in a static environment, then those initial trajectories would be sufficient for mission success and can be produced with a Multi-agent Trajectory Planner (MTP) approach similar to the one described in Ref. [1], where the MTP is housed at the central computer. The MTP algorithm efficiently produces time-optimal, collision-free trajectories in complex non-convex maze-like environments while enforcing a variety of nonlinear constraints. The approach combines two complementary numerical techniques for optimal control: Level-Set (LS) reachability analysis and Pseudospectral (PS) orthogonal collocation. Applied in a centralized multi-agent prioritized planning framework, the methodology allows for heterogeneous sizes, dynamics, endpoint conditions, and individualized optimization objectives, while achieving linear scaling of computation time with the number of vehicles.
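The prioritized planning framework can be sketched as a loop that plans one agent at a time, with already-committed trajectories acting as moving obstacles for later agents; this sequential structure is what yields linear scaling in the number of vehicles. The callable interface and toy solver below are assumptions of this sketch, not the LS+PS solver of Ref. [1]:

```python
def prioritized_plan(agents, plan_single):
    """Centralized prioritized planning: plan agents in priority order,
    treating already-committed trajectories as moving obstacles.
    plan_single(agent, committed) stands in for the real trajectory solver."""
    committed = []                                   # trajectories planned so far
    for agent in sorted(agents, key=lambda a: a["priority"]):
        traj = plan_single(agent, committed)         # earlier agents constrain later ones
        committed.append(traj)
    return committed

# Toy stand-in solver: a straight segment from start to goal (ignores obstacles).
def toy_solver(agent, obstacles):
    return [agent["start"], agent["goal"]]

plans = prioritized_plan(
    [{"priority": 2, "start": (0, 0), "goal": (5, 5)},
     {"priority": 1, "start": (1, 0), "goal": (6, 5)}],
    toy_solver)
```

Each call to `plan_single` solves a single-vehicle problem of fixed size, so total work grows linearly with the number of vehicles, at the cost of priority-order suboptimality.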

Operating in uncertain and dynamic environments calls for adaptive methods that can react to a changing environment. Figure 6-5 depicts an adaptive planning module that could provide the necessary framework and flexibility to operate in dynamic environments while supporting decentralized or distributed modes.

Figure 6-5: Adaptive Planning Module.


6.4.1 Adaptive Planning Module (APM)

It is envisioned that each agent within the swarm will be equipped with an APM to allow the different capabilities and modes of operation outlined in Sections 5.4.3 and 5.4.4. The APM depicted in Figure 6-5 is composed of five submodules that together provide the core elements for navigation and control of each agent within the swarm while operating in dynamic environments, in support of fast planning and replanning.

6.4.1.1 Sensor

The sensor feed must provide the information needed not only to detect and track targets of interest, but also to describe the world, in order to allow for safe and effective navigation of agents in the operational environment. These capabilities include target detection, self-localization, and the detection and tracking of other agents within the swarm when operating in a comms-denied environment.

6.4.1.2 World Model

The world model, described in Section 6.2.2.2, must provide a real-time feed to the Global and Local Planners (depicted in Figure 6-5) that describes the environment and the objects within it.

6.4.1.3 Graduated Optimization: Global and Local Planners

Sensitivity to initial conditions is a major weakness of iterative trajectory optimization methods and of non-convex optimization techniques in general. Graduated optimization is a heuristic global optimization technique that can greatly increase the success rate for difficult problems by first solving a simplified problem whose solution lies in the locally convex region around the global optimum of a more difficult problem. The solution to the simple problem is used to initialize the search for the solution to the difficult problem. The process can be repeated with increasing difficulty until the problem of interest is solved, as demonstrated by Gashler et al. for manifold learning [2]. Although this may lead to globally optimal solutions, a more important result is that this technique can reliably generate locally optimal solutions for previously unsolved problems. However, problems with dynamic obstacles are sensitive to the timing of the initial trajectory, especially in highly congested scenarios. Therefore, an initialization method is required that handles differential constraints and moving obstacles and is robust to complex non-convex environments. Global methods such as the LS method in Ref. [1] are well matched to this task and help to overcome the basic weakness of local optimization methods, such as the PS method, whose success and convergence rates are highly dependent on the quality of the initialization. The APM described in Figure 6-5 outlines Global and Local Planners that, combined, provide a form of graduated optimization.
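The easy-to-hard warm-starting idea can be shown on a one-dimensional toy problem. The objective family, the step sizes and the crude finite-difference descent below are all illustrative assumptions; the point is only that solving the convex surrogate first steers the harder problem into the right basin:

```python
def graduated_minimize(f_family, sigmas, x0, step=0.01, iters=300):
    """Graduated optimization sketch: solve a heavily smoothed (easier)
    problem first, then warm-start progressively harder versions.
    f_family(sigma) returns the objective at difficulty level sigma
    (sigma=0.0 is the original, hardest problem)."""
    x = x0
    for sigma in sigmas:                      # schedule of decreasing smoothing
        f = f_family(sigma)
        for _ in range(iters):                # crude finite-difference descent
            g = (f(x + 1e-5) - f(x - 1e-5)) / 2e-5
            x -= step * g
    return x

# Hypothetical problem family: a non-convex objective blended with a convex
# surrogate; at sigma=1 the problem is convex, at sigma=0 it is the original.
def family(sigma):
    return lambda x: (1 - sigma) * (x**2 - 1) ** 2 + sigma * (x - 1) ** 2

# The warm-started search reaches the basin at x = +1 even from a poor start;
# solving the hard problem directly from the same start ends at x = -1.
x_star = graduated_minimize(family, sigmas=[1.0, 0.5, 0.0], x0=-3.0)
```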

6.4.1.4 Collision Avoidance

The collision avoidance capability is a form of local reactive planner. Provided that the map is updated fast enough, CA algorithms can take the STU out of harm's way; this would automatically trigger a replan at the global planner level. The Global Planner (GP) and Local Planner (LP) must continuously update their output in response to the changing environment. The planning horizon length is a key parameter for both planners, which has to be adjusted according to mission goals, the operational environment and on-board computing capabilities. Longer horizons take longer to calculate and consume more computing resources, while shorter horizons give a faster but sub-optimal response.
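The reactive layer and its replan trigger can be sketched as a single control tick. The function name, return convention and safety distance below are illustrative assumptions, not an interface from this report:

```python
import math

def control_step(pos, obstacles, safe_dist=2.0):
    """One tick of a reactive collision avoidance layer (sketch).
    If any obstacle is inside the safety distance, return an evasive command
    together with a flag asking the global planner to replan; otherwise keep
    following the current local plan."""
    nearest = min((math.hypot(pos[0] - ox, pos[1] - oy) for ox, oy in obstacles),
                  default=float("inf"))
    if nearest < safe_dist:
        return "avoid", True    # evasive action; trigger global replan
    return "follow", False      # environment clear within the safety bubble

cmd, replan = control_step((0.0, 0.0), [(1.0, 1.0)])
```

In a deployed system `safe_dist` would be derived from the vehicle's tracking error bound and the map update rate rather than fixed.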

6.4.1.5 Tracking Error Bounds

The Tracking Error Bound (TEB) algorithm can determine the worst-case tracking error of an inner-loop controller by applying differential game theory between the inner and outer control loops [3]. TEBs can provide a systematic way to compute a safety bubble around obstacles that accounts for system inaccuracies and even communication delays. FaSTrack [4] calculates TEBs for worst-case scenarios and can result in overly conservative bounds. MetaFaSTrack [5] overcomes the FaSTrack conservatism by combining multiple planning models with different maximum speeds, and hence different TEBs, to provide 'faster' and 'slower' planning models. Faster planning models are used to navigate through the environment quickly, but their larger TEBs prevent them from going through narrow passages. Slower planning models, with smaller TEBs, take more time to traverse the environment but are able to maneuver more precisely near obstacles.
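The fast/slow model trade-off can be sketched as a selection rule: among planning models with precomputed TEBs, pick the fastest one whose inflated footprint still fits the narrowest upcoming passage. The model list and selection criterion below are illustrative assumptions, not the MetaFaSTrack algorithm itself:

```python
def select_planning_model(passage_width, models):
    """Pick the fastest planning model whose safety bubble (2 x TEB radius)
    fits through the given passage. models is a list of
    (max_speed_mps, teb_radius_m) pairs, assumed precomputed offline."""
    feasible = [m for m in models if 2 * m[1] < passage_width]
    if not feasible:
        raise ValueError("no planning model fits the passage")
    return max(feasible, key=lambda m: m[0])   # fastest feasible model

# Hypothetical model set: faster models carry larger tracking error bounds.
models = [(10.0, 1.5), (5.0, 0.8), (2.0, 0.3)]

wide = select_planning_model(4.0, models)     # open terrain: fast model fits
narrow = select_planning_model(1.0, models)   # narrow passage: slow, precise model
```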

6.5 ROBOT-ROBOT INTERACTION

6.5.1 Cooperative Robot Integration Platform

6.5.1.1 Scope

This section describes a high-level architecture of a Cooperative Robotic Autonomous System Integration Platform (CRIP). The CRIP is designed to be the SW infrastructure for supporting the implementation of squads of RAS based on Cooperative Agents Technology.

We adopt the term “RAS” to address a variety of implementations such as:

1) Robots, e.g., UAV, UAS, and UGV;

2) Intelligent sensors/actuators;

3) Swarms; and

4) Virtual agents, which may simulate physical entities, e.g., a UxV, or support the human decision process, e.g., in Intelligence Surveillance and Reconnaissance missions.

6.5.1.2 Context

This section targets applications rooted in the physical world. These applications interact with entities in the real world through some intermediary (sensors and actuators).

This world-of-interest is part of the real world. Typically, the real-world entities evolve slowly from the perspective of computer processes.

Environments for this kind of application can be:

1) Simple reflections of the current entities in the physical world; or

2) A combination of real-world and simulated world emulation.

In the first case the CRIP offers an abstraction of the current real-world entities and their state to the application mission. In the second, the CRIP includes both abstractions of real-world entities and fully simulated elements. The CRIP can also capture events in the past or provide projections of future states of the real-world entities. The CRIP mediates all interaction between the agent system and the real world, enforcing, where requested, specific usage policies.

Each RAS only interacts indirectly with the world-of-interest, using the CRIP as an intermediary. Except for the simplest applications, there are compelling motivations to interact with the real world indirectly, through CRIP services reflecting their real-world counterparts. The main incentive is the augmentation of the real world that can only be achieved when the CRIP is involved in, and to some extent in control of, all interactions between agents and the real world.

6.5.1.3 CRIP Concepts

We consider the environment as a first-class abstraction in Cooperative RAS (C-RAS) with a dual role that provides:


• The surrounding conditions for RAS to exist (which implies that the environment is an essential part of every C-RAS), and

• An exploitable design abstraction to build C-RAS applications.

This section describes the CRIP layered architecture, where each layer provides a set of services to separate the Mission Business Logic, i.e., the mission application, from the underlying services, resources and constraints it relies upon, i.e., the robotic platform and the physical environment.

This enables the engineers to creatively focus on the design of a C-RAS for a specific mission/application, e.g., Intelligence Surveillance and Reconnaissance missions.

Distinguishing clearly between the responsibilities of RAS and environment both supports separation of concerns in C-RAS and helps to manage the huge complexity of engineering real-world applications.

In the proposed approach:

1) The RASs are the domain-specific entities that autonomously make decisions and act in the environment; and

2) The CRIP provides the surrounding conditions for RAS to exist and mediates both the interaction among agents and the access to resources.

The CRIP thus provides the glue that connects RASs into a working system; on their own, RASs are the individual loci of control.

6.5.1.4 CRIP Services

The high-level CRIP architecture is based on a hierarchy of CRIP service families, see Figure 6-6. Each level provides services whose abstraction level is higher than, and typically builds upon, the underlying level(s).

Figure 6-6: Cooperative RAS Architecture.

As depicted in Figure 6-6:

• At the top, the CRIP is accessed by the Mission Application, which directly establishes and maintains the interaction with the Swarm Control Team via the Swarm C2 System. A description of the Swarm C2 System is provided in Section 6.3.

[Figure 6-6 diagram: the Swarm Control Team controls, via the Swarm C2 System (Swarm C2 Station, Swarm RSTA Station), a set of RAS nodes with Data and Streaming flows. Each RAS stacks a Mission Application on top of the CRIP layers (Cooperation Layer, Data Exchange Layer, Deployment Layer), which in turn control the underlying Robotic Platform (via a Robotic Operation System) or a Smart Sensor (via a Sensor Fabric); RAS nodes cooperate with one another.]


• At the bottom, the CRIP requests services from the robotic platform, typically by invoking the services of a Robotic Operation System.

The CRIP architecture is composed of a set of service layers; from the bottom up, they are:

• Deployment Layer. At the most basic level, the CRIP decouples RAS access to the relevant deployment context; this decoupling is an essential functionality of the CRIP. This layer provides a platform-independent interface and focuses on the management of key system performance parameters such as scalability, dependability and security.

• Data Exchange Layer. The Data Exchange Layer provides for the Data Exchange Services as described in Section 5.6.

• Cooperation Layer. The Cooperation Layer offers support to:

• Regulate the C-RAS organization; and

• Mediate interaction between RASs.

With a Social level, the environment evolves:

• From a passive role, e.g., resources and services to be used;

• To an active role, as ruler of the C-RAS organization.

Support for this level enables RASs to exploit the environment to coordinate, and regulate, their (mutual) behavior.

The following paragraphs briefly detail the features of each layer.

6.5.1.5 Deployment Layer

The Deployment Layer represents the most elementary perspective on the environment in a RAS system. RASs invoke this service level if they need to access the deployment context resources and services. This level manages all of the environment resources, e.g., communication, processing, data acquisition. Examples of services provided at this level are networking services (described in Section 5.7), sensor/actuator handling, and (robotic) operating system services.

6.5.1.6 Data Exchange Layer

The Data Exchange Layer provides each RAS with access to the Data Exchange Services (DES), as described in Sections 5.6 and 6.5.2. This level shields the underlying deployment context details by providing specific services to access its resources.

As described in the related sections, the Data Exchange Layer also provides for Quality of Service (QoS), which allows the expected performance associated with the provisioning of a specific DES to be requested.

The DES QoS directly relates to C-RAS system performance features, such as:

• Predictability, the DES will guarantee timely accomplishment of a data delivery.

• Reliability, the DES will guarantee the correct delivery of data and manage the resources redundancy.

• Persistence, the DES will guarantee the availability of (a set of) data within the environment data space.

• Availability, the DES will guarantee a critical service is available despite failures occurring in the underlying deployment context.


• Robustness, the DES will cope with instability or dynamicity of the underlying networking services, e.g., an intermittent communication link.

• Integrity, the DES will guarantee the data are not intentionally corrupted by entities during processing, storage, and transmission.

• Confidentiality, the DES will guarantee the information is not accessible to entities which are not granted its use.

The Data Exchange Layer allows for a partitioning of the deployment context resources among different nodes and different classes of service.

In order to cope with (highly) dynamic environments the Data Exchange Layer will:

• Decouple the data provider from the data consumer, meaning that the RAS/entity which requests data will only specify the needed information, e.g., the DES supports the Publish-Subscribe protocol.

• Only rely on distributed algorithms and services, i.e., Data Exchange Layer adopts a serverless architecture.

• Allow for a variable number of agents entering and exiting the system at any moment, i.e., the Data Exchange Layer provides dynamic lookup and discovery services.
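The provider/consumer decoupling can be illustrated with a minimal in-process publish-subscribe bus. This is a sketch only: a real DES (e.g., a DDS implementation) would realize the same pattern over a distributed, serverless transport with the QoS guarantees listed above:

```python
from collections import defaultdict

class DataExchange:
    """In-process publish-subscribe sketch. Consumers subscribe to the
    information they need (a topic), never to a specific producer; producers
    publish without knowing who, if anyone, is listening."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        for callback in self._subs[topic]:
            callback(sample)

bus = DataExchange()
received = []
bus.subscribe("tracks", received.append)          # consumer names only the topic
bus.publish("tracks", {"id": 7, "pos": (3, 4)})   # producer stays anonymous to it
```

Because neither side holds a reference to the other, agents can join or leave the exchange at any moment, matching the dynamic-membership requirement above.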

6.5.1.7 Cooperation Layer

The Cooperation Layer (CRIP-CL):

1) Supports the mediated interaction in the environment; and

2) Regulates C-RAS behavior for the specific application domain.

The support for RAS mediated interaction provides each RAS with the ability to interact with social units, such as organizations, group memberships, and other normative social structures.

The C-RAS behavior regulation services define different types of rules on all entities in the C-RAS. Rules typically refer to laws imposed by the designer on a RAS's activity, allowing the CRIP-CL to act as an arbitrator that attempts to preserve the C-RAS in a consistent state according to the properties and requirements of the application domain. Laws thus represent application-specific constraints on the interactions of RASs in the environment, e.g., restrictions on RAS perception, interaction, and communication.

This level also provides the environment observability services via a semantic description of the domain, which can be defined by an environment ontology covering the different structures of the environment, the observable characteristics of environment resources, and the regulating laws.

By the synthesis of:

1) The behavior regulation services; and

2) The environment observability services, a sublevel, the Reflective Layer, can be built up, which makes it possible to modify the behavior of the environment and the set of laws which regulate the C-RAS organization. It is a means for building adaptive, self-organizing C-RAS.

6.5.1.8 Cooperation Layer Modules

This section further details the Cooperation Layer by describing its key components for the support of the RAS-RAS Cooperation.


6.5.1.8.1 State Module

The State module manages the actual state of the application environment. The environment's state typically includes an abstraction of the deployment context, possibly extended with other state related to the C-RAS operational scenario.

The State module acts as a repository:

1) Which is distributed among the RAS nodes of the system; and

2) Where the RAS node modules can read all of the environment state and modify the (portion of the) environment state they are operating within.

6.5.1.8.2 Action Management Module

The Action Management module deals with RAS node actions in the environment.

RAS node actions can be divided into two classes: actions that attempt to modify the state of the operational scenario, and actions that attempt to modify elements of the deployment context.

Action Management services include:

• Action sequencing to accomplish the assigned task.

• Action coordination with other nodes to accomplish a concurrent task.

• Task assignment and distribution.

• Action execution by invoking the appropriate lower-level services.

This module encapsulates an action model that describes how the various, concurrently executed, action commands are carried out.

The RAS node actions are subject to interaction laws, see Section 6.5.1.8.4.

6.5.1.8.3 The Dynamics Module

The Dynamics module manages the environmental dynamics that occur independently of the RASs or the deployment context.

• Virtual stigmergic worlds. Stigmergic systems rely on interactions through signs in the environment. An environment can support a virtual-world counterpart to the real world where the RASs can deposit and sense simulated pheromones. Since they are simulated, these virtual pheromones can be given capabilities beyond their real-world counterparts. The CRIP may process deposited information in ways analogous to, and beyond, physical/chemical interactions in the real world.

• Human interfacing. The CRIP-CL will provide a rich infrastructure to build user interfaces, user interaction models, diagnostic, and performance analysis systems. It will provide a family of applications with a consistent, omnipresent Human Computer Interface (HCI) library to design and develop application-specific HCI with negligible recurring implementation effort.

Note: The inclusion of the Human interfacing services in the CRIP-CL is justified by the consideration that although the Human, whichever role he/she covers, belongs to the real world, he/she acts as an (external) "agent" the C-RAS interacts with. Refer to Section 5.3 for details about Human-Swarm Interaction.


6.5.1.8.4 Laws

Laws represent application-specific constraints on the interactions or capabilities of RAS nodes in the operational scenario. Laws regulate the RAS node capabilities taking into account the node's current role, e.g., assigned rights, and operational context, e.g., resource availability. The following set of laws is foreseen:

• Perception laws, which regulate the RAS perception.

• Interaction laws, which regulate the RAS actions. The CRIP will enforce real-world constraints or application constraints (domain policies) on RAS actions. This makes the system more autonomic and provides high-level functionality, e.g., the CRIP will ensure that only safe actions are performed on the entities it manages.

• Communication laws, which regulate the exchange of information elements among RASs.
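The arbitration role described above amounts to checking every candidate action against the applicable laws before executing it. Representing a law as a predicate over an action and a node, as below, is an assumption of this sketch, not a CRIP interface:

```python
def admitted(action, node, laws):
    """Law-enforcement sketch: an action is executed only if every applicable
    law admits it; otherwise the arbitrator rejects it."""
    return all(law(action, node) for law in laws)

# Hypothetical interaction law: only nodes granted 'engage_right' may engage.
laws = [lambda action, node: action != "engage" or "engage_right" in node["rights"]]
```

Perception and communication laws would follow the same pattern, filtering what a node may sense or transmit rather than what it may do.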

6.5.1.8.5 Perception Module

The Perception module provides the functionality for RAS to perceive the environment. When a RAS senses the environment, the Perception module generates percepts according to the current state of the application environment and possibly data observed from the deployment context.

The underlying services provide means to build perception abstractions upon the environment data sources, and thereby improve the effectiveness of the RAS reasoning.

6.5.2 Localization and Mapping in Swarm Systems

In this section, a system design for a localization and mapping system for swarm systems, including hardware and software components, will be given. The system design will be kept as generic as possible in order to be applicable to many different RAS systems.

6.5.2.1 Hardware Architecture

RAS systems usually include a mission computer that performs low-level controller functions controlling the movement of the robot. The mission computer is usually responsible for enabling the robot to execute the commands given from a Remote Control or Control Station. It can also perform some autonomous missions, such as following a given set of target points or returning to the home point, and performs some fail-safe actions as well. However, the level of autonomy implemented in the mission computer is usually limited to safely controlling the movements of the RAS system, and additional requirements are usually handled by a different component, usually named the companion computer. Through the connection between the companion computer and the mission computer, the companion computer can give commands to the mission computer and can obtain the status of the platform from it. In the proposed design, one companion computer will be present on each agent and there will be communication between different agents' companion computers. Localization and mapping tasks will be performed on these companion computers.

Simultaneous Localization and Mapping (SLAM) algorithms can be implemented using different sensors such as monocular/stereo cameras, LIDAR, RGB-D sensors and IMUs. According to the platform type, any of these sensors can be used. Each of these sensors has some advantages and disadvantages compared to the others. For example, a 3D LIDAR has a 360° field of view, but the quality of the map it generates is not satisfactory due to the sparseness of its measurements. In contrast, an RGB-D camera generates a denser and more detailed 3D map, but it has a limited field of view. A monocular/stereo camera is light and cheap compared to the other sensors, and it can provide rich visual information which can be used in other applications as well. In order to propose a generic system design, we will keep all of these sensors in our design and advise different configurations for different platform types. For example, a UGV can use LIDAR, an RGB-D camera and an IMU and fuse these sensors' data in order to take advantage of all of them. But for a mini UAV system, the use of a monocular camera together with an IMU will be more appropriate, since weight and power consumption issues are more important on these types of platforms.

The application of SLAM on a swarm system has advantages over a single-agent SLAM system. In order to benefit from multiple agents, the agents must share their data over a communication network. Depending on the networking system, there can be different approaches: all agents may communicate all the time, or agents may communicate only with nearby agents. As stated in Section 5.5.2, the collaborative SLAM system can be designed as centralized or distributed. In a centralized design, since the computations are performed on a central server, communication between each agent and this server must be established. In the proposed system design, the communication will be established over the modems placed on each platform. The general structure of the hardware design is given in Figure 6-7.

Figure 6-7: Hardware Architecture for Collaborative SLAM System.

6.5.2.2 Software Architecture

In this section, the software architecture design running on top of the proposed hardware architecture will be given. Since SLAM functionality will be performed on companion computer, the proposed software system will execute on the companion computer (Figure 6-8). According to the choice of centralized or distributed system design, some software components may execute on the central server.

A typical SLAM software system has the following submodules:

• Pose Estimation (Odometry): The estimation of the relative position of the robot with respect to its starting point. Many methods can be applied for this purpose, such as wheel odometry for ground robots or visual-inertial odometry for UAVs. As new sensor data (image, IMU reading, etc.) is acquired, the relative position change since the last reading is calculated, and the changes are accumulated to obtain the current relative pose. Since all odometry methods are based on the integration of data, errors accumulate as the robot moves, and there will be a drift in the estimated pose of the robot. By fusing more than one sensor's data (e.g., camera + IMU), this drift may be reduced but cannot be totally eliminated. Depending on the platform type and choice of sensors, different odometry methods, such as visual-inertial odometry, LIDAR odometry, stereo odometry, etc., can be implemented in this submodule. In the proposed system design, every agent runs its own odometry submodule.

Figure 6-8: Software Architecture for Collaborative SLAM.

• Local Mapping and Intra-Agent Loop Closure: Mapping is the process of obtaining the 3D positions of observed landmarks or feature points. If the robot's current pose (rotation and translation) is known, then the 3D positions of the observed landmarks can easily be calculated; but as the uncertainty in the pose of the robot increases, so does the uncertainty in the map data. The map is initialized by triangulating the feature points observed in a few initial frames, and then, as the robot moves, the 3D positions of newly observed feature points are added to the map. When the vehicle passes the same location and observes the same scene twice, this situation is detected (intra-agent loop detection) and the errors in the position and map estimates are corrected; this procedure is called loop closure. In the proposed system design, every agent executes its own local mapping and loop closure submodules.

• Map Merging and Inter-Agent Loop Closure: This submodule is not present in single-agent SLAM systems and comes with the swarm (or collaborative) localization and mapping concept. The maps generated by the individual agents must be merged so that a single map is produced. The relative pose between agents must be determined in order to perform this task: when the same scene is observed by two or more agents and this situation is detected (inter-agent loop detection), the pose transformation matrices between these agents can be calculated and the map merging can be performed. This loop detection mechanism can also be used to correct the errors in the position and map estimates of the agents. This submodule can be implemented and executed on each agent in a distributed system design, or on a central server in a centralized system design.

• Global Optimization: The most computationally expensive part of a SLAM system is the global optimization module, since it optimizes the map points and pose data over a partial or full trajectory. Depending on the processing capabilities of the companion computers used, this part can be executed onboard the agents or on a central server.

• Communication: Collaborative SLAM systems need communication between agents. The map and pose data produced on each agent are transferred to the other agents via a communication channel. The content of the transferred data changes according to the selected method: some methods require full communication between all agents, while for others communication between nearby agents is enough. In any case, there must be communication software executing on each agent which manages the transfer of data to and from the other agents. The synchronization of data and the tagging of data with time stamps are critical tasks of this software component.
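
The drift behaviour described under Pose Estimation can be illustrated with a minimal dead-reckoning sketch (plain Python, not tied to any particular odometry method): small zero-mean errors on each body-frame increment accumulate into a position error that grows with the length of the trajectory.

```python
import math
import random

def integrate_odometry(increments):
    """Accumulate relative (dx, dy, dtheta) increments into a global pose.

    Each increment is expressed in the robot's current body frame, so it
    must be rotated into the world frame before being added.
    """
    x, y, theta = 0.0, 0.0, 0.0
    for dx, dy, dtheta in increments:
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta += dtheta
    return x, y, theta

random.seed(0)
# Ground truth: drive straight along x in 100 steps of 0.1 m.
truth = [(0.1, 0.0, 0.0)] * 100
# Noisy odometry: small zero-mean errors on every reading.
noisy = [(dx + random.gauss(0, 0.005),
          dy + random.gauss(0, 0.005),
          dth + random.gauss(0, 0.002)) for dx, dy, dth in truth]

gt_pose = integrate_odometry(truth)
est_pose = integrate_odometry(noisy)
drift = math.hypot(est_pose[0] - gt_pose[0], est_pose[1] - gt_pose[1])
print(f"drift after 10 m: {drift:.3f} m")  # grows with trajectory length
```

Fusing a second sensor reduces the per-step error but, as the text notes, cannot remove the accumulation itself; only loop closure can correct it.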
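
For the Map Merging submodule, the relative pose between two agents can be recovered in closed form once a handful of common landmarks has been matched. The sketch below solves the 2D case with a least-squares rotation fit; a real system would work in 3D (e.g., a Horn/Umeyama-style solver) and reject outlier matches.

```python
import math

def estimate_rigid_transform(src, dst):
    """Estimate the 2D rotation + translation mapping agent A's landmark
    coordinates (src) onto agent B's coordinates (dst) of the same
    landmarks, via a closed-form least-squares fit."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Optimal rotation from cross/dot products of the centred points.
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

src = [(0, 0), (1, 0), (0, 2)]
# src rotated 90 deg ((x, y) -> (-y, x)) and then shifted by (2, 1).
dst = [(2, 1), (2, 2), (0, 1)]
theta, tx, ty = estimate_rigid_transform(src, dst)
print(round(math.degrees(theta)), round(tx, 6), round(ty, 6))
```

With the recovered transform, agent A's map points can be re-expressed in agent B's frame and the two maps fused into one.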
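
A minimal sketch of the timestamped envelope the Communication submodule might exchange; the field names and JSON encoding are illustrative assumptions, not part of the proposed design.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class SwarmMessage:
    """Envelope for pose/map data exchanged between agents.
    Field names are illustrative, not taken from any standard."""
    agent_id: str
    msg_type: str          # e.g., "pose" or "map_chunk"
    payload: dict
    timestamp: float = field(default_factory=time.time)

def serialize(msg):
    return json.dumps(asdict(msg)).encode("utf-8")

def deserialize(raw):
    return SwarmMessage(**json.loads(raw.decode("utf-8")))

# Messages may arrive out of order; sort by timestamp before fusing.
inbox = [
    SwarmMessage("uav2", "pose", {"x": 1.0, "y": 2.0}, timestamp=12.5),
    SwarmMessage("uav1", "pose", {"x": 0.5, "y": 0.1}, timestamp=11.0),
]
inbox.sort(key=lambda m: m.timestamp)
roundtrip = deserialize(serialize(inbox[0]))
print(roundtrip.agent_id, roundtrip.timestamp)
```

The explicit timestamp field is what allows the receiver to re-order and synchronize data from different agents, the critical task highlighted above.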

6.6 DATA EXCHANGE SERVICES

6.6.1 Swarm Data Exchange Services

The Data Exchange Protocol component serving a Swarm enables each SS4ISR node to exchange data with both peer Swarm nodes and Swarm Operator(s), as specified below. It is worth noting that a Swarm Operator can be either a Dismounted Soldier, when the Swarm is serving a STU, or a vehicle operator, when the Swarm is serving a vehicle.

The Data Exchange Protocol component provides services via the following logical ports:

• DEP.11, which interconnects the UxV Service Logic, e.g., ISR Application or Formation Control, with:
  • The UxV Service Logic of peer Swarm nodes, to: Send/Receive User Data and UxV Control.
  • The Operator(s) Swarm C2 Service Logic, to: Send/Receive User Data and UxV Control.

• DEP.02, which interconnects the UxV Service Logic, e.g., ISR Application, with:
  • The Operator(s) Swarm C2 Service Logic, to: 1) Send Streaming Data; and 2) Send/Receive Service Control Data.

• DEP.15, which interconnects the UxV Service Logic, e.g., ISR Application or Node Configuration, with:
  • The Operator(s) Swarm C2 Service Logic, to: Send/Receive Files.

The Data Exchange Protocol component requires services from the underlying layers, namely Execution Environment Services and Transport Services, via the following logical ports:

• DEP.13, which requests communication services to:
  • Send/Receive: User Data, Streaming Data, Service Control Data, and Files.

• DEP.14, which requests tactical communication services to:
  • Send/Receive: User Data, Service Control Data, and Files.

• DEP.05, which requests Execution Environment services to:
  • Send/Receive: Service Control Data.

Figure 6-9 depicts the System-to-System Port Connectivity Diagram for Data Exchange Services for a Swarm, where the Data Exchange Protocol components of the Swarm elements, i.e., UxVs, interact both with each other and with the Operator(s).

Figure 6-9: Swarm Data Exchange Services Architecture.

6.6.2 Coalition Domain

The Data Exchange Protocol component serving a Swarm in the Coalition Domain enables each SS4ISR node to exchange data with both SS4ISR nodes operated by allied nations and Swarm Operator(s) belonging to allied nations, as specified below. It is worth noting that a Swarm Operator can be either a Dismounted Soldier, when the Swarm is serving a Coalition STU, or a vehicle operator, when the Swarm is serving an allied vehicle.

In the Coalition Domain the Data Exchange is controlled via Coalition Gateways, which provide data management services such as:

• Information Filtering, which selects the set of information allied nations can exchange with each other.
• Information Mapping, which translates the IEs of a given nation into the equivalent IEs of an allied nation.
• Information Assurance, which provides a set of mechanisms to guarantee key information quality requirements, such as integrity, confidentiality, and accountability, in the coalition.
• Information Forwarding, which routes each data element to the correct set of recipients.

It is worth noting that the adoption of the information-centric paradigm allows the above-listed services to act directly at the semantic level, making it easier to integrate each of them with the Service Logic hosted by each Swarm node.
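
The four gateway services can be pictured as a processing chain; the sketch below is purely illustrative (the message structure, field names, and the omission of the Information Assurance stage are assumptions, not part of the specification).

```python
def information_filtering(msg, releasable):
    """Drop information elements the owning nation has not released."""
    return {k: v for k, v in msg.items() if k in releasable}

def information_mapping(msg, mapping):
    """Translate national information-element names to the allied ones."""
    return {mapping.get(k, k): v for k, v in msg.items()}

def coalition_gateway(msg, releasable, mapping, recipients):
    """Chain the gateway's data-management stages. Information Assurance
    (integrity/confidentiality mechanisms) is elided in this sketch."""
    msg = information_filtering(msg, releasable)
    msg = information_mapping(msg, mapping)
    # Information Forwarding: deliver to the correct set of recipients.
    return {r: msg for r in recipients}

# Hypothetical national message: position is releasable, raw data is not.
national_msg = {"pos_wgs84": (41.9, 12.5), "sensor_raw": b"..."}
out = coalition_gateway(national_msg,
                        releasable={"pos_wgs84"},
                        mapping={"pos_wgs84": "position"},
                        recipients=["allied_uav_1", "allied_op"])
print(out)
```

Because each stage operates on named information elements rather than opaque packets, this is the "semantic level" integration the text refers to.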

The Data Exchange Protocol component provides services via the following logical ports:

• DEP.02, which interconnects via the Coalition Gateway the UxV Service Logic, e.g., ISR Application, with:
  • The Soldier Application Service Logic, to: 1) Send Streaming Data; and 2) Send/Receive Service Control Data.

• DEP.11, which interconnects via the Coalition Gateway the UxV Service Logic, e.g., ISR Application, with:
  • The Allied Peer UxV Service Logic, e.g., ISR Application or Formation Control, to: Send/Receive User Data and UxV Control.
  • The Allied Operator(s) Swarm C2 Service Logic, to: Send/Receive User Data and UxV Control.

• DEP.15, which interconnects via the Coalition Gateway the UxV Service Logic, e.g., ISR Application or Node Configuration, with:
  • The Allied Operator(s) Swarm C2 Service Logic, to: Send/Receive Files.

The Data Exchange Protocol component requires services from the underlying layers, namely Execution Environment Services and Transport Services, via the following logical ports:

• DEP.13, which requests communication services to:
  • Send/Receive: User Data, Streaming Data, Service Control Data, and Files.

• DEP.14, which requests tactical communication services to:
  • Send/Receive: User Data, Service Control Data, and Files.

• DEP.05, which requests Execution Environment services to:
  • Send/Receive: Service Control Data.

Figure 6-10 depicts the System-to-System Port Connectivity Diagram for Data Exchange Services in the Coalition Forces Domain, where the Data Exchange Protocol component of a generic Swarm node (UxV) interacts via the Coalition Gateway with both Allied Swarm Operator(s) and a peer Allied UxV.

Figure 6-10: Data Exchange Services in Coalition Domain.

6.7 NETWORKING

6.7.1 Swarm Networking

Wireless communication architecture and network security play an important role in the robust operation of swarm systems in a combat area. There are two main approaches to the swarm networking architecture.

6.7.1.1 Centralized Architecture

In this approach, all UAVs communicate through a master node (the Ground Control Station) located on the ground. This architecture is similar to a traditional point-to-point communication infrastructure. However, data latency is an important issue, since all data is transmitted and distributed through the Ground Control Station. Another disadvantage of this architecture is the coverage area: every UAV must maintain Line of Sight (LoS) communication with the Ground Control Station, so the operational area is limited to a circle whose radius is the maximum line-of-sight range of the modems (Figure 6-11).

Figure 6-11: Centralized Architecture.

6.7.1.2 Decentralized Architecture

In practice, UAVs in a swarm network may drop from and join the network depending on the environmental conditions. This situation must be handled in an ad hoc manner by the network infrastructure. A decentralized architecture is suitable for these situations since there is no master node: it reduces the delay overhead caused by the master node and eliminates the dependence on infrastructure and the associated communication range restrictions. Implementing a decentralized network is complex, since many issues must be solved by the modem units, such as relaying data from each node to the other nodes, synchronizing the messages distributed in the network, and handling nodes that drop from or join the network at any moment. On the other hand, once such a network is established correctly, it is a flexible and scalable network that increases the autonomy level of the swarm (Figure 6-12).
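
A minimal sketch of how a message can propagate in such a masterless network by controlled flooding, with each node re-broadcasting messages it has not seen before; the adjacency-dict topology is an illustrative assumption.

```python
def flood(topology, source):
    """Flood a message through an ad hoc network described as an
    adjacency dict. Each node re-broadcasts a message it has not seen
    before, so delivery adapts automatically when nodes join or drop
    (i.e., when the adjacency dict changes between messages).
    Returns the set of reached nodes with their relay hop counts."""
    seen = {source}
    frontier = [source]
    hops = {source: 0}
    while frontier:
        nxt = []
        for node in frontier:
            for neigh in topology.get(node, ()):
                if neigh not in seen:
                    seen.add(neigh)
                    hops[neigh] = hops[node] + 1
                    nxt.append(neigh)
        frontier = nxt
    return hops

# Five UAVs in a chain: no node reaches all others directly,
# and no master node is required for full delivery.
topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
            "D": ["C", "E"], "E": ["D"]}
print(flood(topology, "A"))
```

The duplicate-suppression set (`seen`) corresponds to the message-synchronization problem the modem units must solve in a real deployment.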

Figure 6-12: Decentralized Architecture.

6.7.1.3 Single Group Networks

Ring

The ring architecture (Figure 6-13) forms a closed loop through bidirectional connections. Any node can act as a gateway to the network. When a direct link between two adjacent nodes fails, the connection can be re-established through the rest of the communication loop. However, this approach is not scalable.

Figure 6-13: Ring Architecture.

Star

The star architecture (Figure 6-14) has one gateway node in the middle, which is able to communicate with all nodes. However, this gateway node is a single point of failure if it malfunctions.

Figure 6-14: Star Architecture.

Mesh

The mesh architecture (Figure 6-15) is a combination of the star and ring architectures. All nodes are gateways, and data can be relayed between the nodes in any combination. The mesh structure therefore brings the advantages of scalability and flexibility. On the other hand, such a complex network structure is very difficult to implement.
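
The failover behaviour that mesh relaying enables can be sketched with a breadth-first route search over an adjacency dict (illustrative only; real mesh routing protocols such as OLSR or B.A.T.M.A.N. are far more involved).

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over a mesh adjacency dict;
    returns the shortest relay path or None if dst is unreachable."""
    prev = {src: None}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for n in links[node]:
            if n not in prev:
                prev[n] = node
                q.append(n)
    return None

# Full mesh of four nodes: every node can relay for every other.
mesh = {"A": {"B", "C", "D"}, "B": {"A", "C", "D"},
        "C": {"A", "B", "D"}, "D": {"A", "B", "C"}}
path_before = shortest_path(mesh, "A", "D")
print(path_before)                              # direct link
mesh["A"].discard("D"); mesh["D"].discard("A")  # direct link fails
path_after = shortest_path(mesh, "A", "D")
print(path_after)                               # rerouted via one relay
```

When the direct A-D link fails, the same search transparently finds a one-hop relay route, which is exactly the flexibility attributed to the mesh structure above.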

Figure 6-15: Mesh Architecture.

6.7.1.4 Multi-Group Networks

This network structure combines a centralized architecture with single group networks. While intra-group communication is established in an ad hoc manner, inter-group communication is established through the Ground Control Station. However, this causes high latency in the messages sent between groups.

6.7.1.5 Multi-Layer Network

This structure is suitable for heterogeneous swarms in which each group of a given type establishes its own network. Intra-group communication is established with one of the methods described in "Single Group Networks"; this is the first layer of the network. Each group has a gateway node which enables communication between the different networks; these inter-group links form the second layer. This type of network is suitable when there are heterogeneous swarm groups, a large number of UAVs, and a large amount of data distributed among the swarm networks.

6.8 REFERENCES

[1] Robinson, D.R., Mar, R.T., Estabridis, K. and Hewer, G., "An Efficient Algorithm for Optimal Trajectory Generation for Heterogeneous Multi-Agent Systems in Non-Convex Environments," in IEEE Robotics and Automation Letters, 3(2), pp. 1215-1222, April 2018. doi: 10.1109/LRA.2018.2794582.

[2] Gashler, M., Venture, D., and Martinez, T., “Manifold Learning by Graduated Optimization,” IEEE Trans. on Syst., Man, and Cybern., Part B: Cybern., vol. 41, pp. 1458-1470, December 2011.

[3] JAUS Service Interface Definition Language AS5684, ver. B, SAE, 15 May, 2020.

[4] Herbert, S.L., Chen, M., Han, S., Bansal, S., Fisac, J.F. and Tomlin, C.J., “Fastrack: A Modular Framework for Fast and Guaranteed Safe Motion Planning,” 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 1517-1522, 2017.

[5] Fridovich-Keil, D., Herbert, S.L., Fisac, J.F., Deglurkar, S. and Tomlin, C.J., “Planning, Fast and Slow: A Framework for Adaptive Real-Time Safe Trajectory Planning,” 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 387-394, 2018.

Chapter 7 – TECHNICAL VIEW

This chapter identifies, for each service, a set of standards, technologies, and algorithms which are critical for an effective adoption and evolution of a Swarm System for ISR.

7.1 STANDARDS AND TECHNOLOGIES

7.1.1 Human-Swarm Interaction

7.1.1.1 Command and Control

7.1.1.1.1 STANAG 4586 – Standard Interfaces of UAV Control System (UCS)

7.1.1.1.1.1 Overview

STANAG 4586 [1] specifies the interfaces that shall be implemented in order to achieve the required Level of Interoperability (LOI) between different UAV systems, so as to meet the requirements of the CONcept of OPerationS (CONOPS) defined by NATO countries. STANAG 4586 establishes a functional architecture for Unmanned Aerial Vehicle Control Systems (UCS) with the following elements and interfaces: Air Vehicle (AV), Vehicle Specific Module (VSM), Data Link Interface (DLI), Core UCS (CUCS), Command and Control Interface (CCI), Human Computer Interface (HCI), and Command and Control Interface Specific Module (CCISM).

STANAG 4586 is divided into two annexes: the first provides a glossary in support of the second, which gives an overview of the communication architecture and is supported by three appendices:

• Appendix B1 discusses the data link interface,

• Appendix B2 discusses the command and control interface, and

• Appendix B3 discusses the Human and Computer Interfaces (HCI).

The following paragraphs briefly describe the more relevant STANAG 4586 modules for SS4ISR.

7.1.1.1.1.2 Data Link Interface (DLI)

The DLI enables the CUCS to generate and understand specific messages for control and status of air vehicles and payload [1]. DLI specifies the mechanism to process and display specific messages, which are air vehicle and payload independent.

7.1.1.1.1.3 Core UCS (CUCS)

The CUCS should provide a user interface that enables the operator to conduct all phases of a UAV mission, and support all requirements from the DLI, CCI and HCI. The computer-generated graphical user interface should also enable the operator to control different types of UAVs and payloads.

7.1.1.1.1.4 Command and Control Interface (CCI)

CCI defines the standard message set and accompanying protocols that have been selected to be C4I System/node independent, avoiding placing additional requirements on the C4I System.

TECHNICAL VIEW

The CCI is intended to cover all types of messages and data that need to be exchanged in both directions between the CUCS and the C4I systems during all the phases of a UAV mission.

7.1.1.1.1.5 Human Computer Interface (HCI)

The STANAG specifies the requirements levied upon the CUCS and does not impose any design requirements on Human Factors (HF) and Ergonomics (e.g., number of displays, manual controls, switches, etc.).

The HCI establishes the operator display and input requirements that the CUCS shall support. Although not specifically defining the format of the data to be displayed, there are some identified requirements that the CUCS shall provide in order to ensure an effective operation of the UAV system, such as display and operator interactions imposed on the CUCS by the CCI and DLI.

7.1.1.1.1.6 Level of Interoperability

This standard also identifies five Levels of Interoperability (LOI) to accommodate operational requirements. The respective operational requirements and CONOPS will determine or drive the required LOI that the specific UAV System will achieve.

Level 1: Indirect receipt and/or transmission of sensor product and associated metadata, for example Key Length Value Metadata Elements from the UAV.

Level 2: Direct receipt of sensor product data and associated metadata from the UAV.

Level 3: Control and monitoring of the UAV payload unless specified as monitor only.

Level 4: Control and monitoring of the UAV, unless specified as monitor only, less launch and recovery.

Level 5: Control and monitoring of UAV launch and recovery unless specified as monitor only.

7.1.1.1.2 MAVLink

Micro Air Vehicle Link (MAVLink) [2] is a communication protocol for communicating with small unmanned vehicles. It is used both for communication with a Ground Control Station (GCS) and for communication between the Flight Control Unit (FCU) and other onboard subsystems. MAVLink is used in several commercially available UAV autopilots and facilitates third-party integration with other computation devices. By using an open standard for the platform/autonomy integration layer, the same autonomy systems can be used to command a variety of different platforms with different autopilots and low-level control systems, provided that they all conform to some standard, e.g., MAVLink.
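
As an illustration of the kind of byte-level framing such a protocol relies on, the sketch below packs and validates a simplified frame. This is deliberately not the real MAVLink wire format (MAVLink uses a CRC-16 seeded with a per-message CRC_EXTRA byte, among other differences); real applications should use a library such as pymavlink.

```python
import struct

# Simplified, MAVLink-inspired framing: start byte, payload length,
# sequence number, system id, message id, payload, additive checksum.
STX = 0xFD

def frame(seq, sysid, msgid, payload):
    """Build a frame around an opaque payload."""
    header = struct.pack("<BBBBB", STX, len(payload), seq, sysid, msgid)
    checksum = sum(header[1:] + payload) & 0xFFFF
    return header + payload + struct.pack("<H", checksum)

def unframe(packet):
    """Validate a frame and return its fields, or raise on corruption."""
    stx, length, seq, sysid, msgid = struct.unpack("<BBBBB", packet[:5])
    payload = packet[5:5 + length]
    (checksum,) = struct.unpack("<H", packet[5 + length:7 + length])
    if stx != STX or checksum != sum(packet[1:5 + length]) & 0xFFFF:
        raise ValueError("bad frame")
    return seq, sysid, msgid, payload

pkt = frame(seq=7, sysid=1, msgid=0, payload=b"\x01\x02")
print(unframe(pkt))
```

The system-id field is what lets one GCS address many platforms on a shared link, which is the interoperability property the text highlights.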

7.1.1.2 Media

7.1.1.2.1 STANAG 4607: NATO Ground Moving Target Indicator (GMTI) Format

The NATO Ground Moving Target Indicator Format (GMTIF) [3] defines a standard for the data content and format of the products of ground moving target indicator radar systems, and a recommended mechanism for relaying tasking requests to the radar sensor system.

The format provides a flexible format for target information, such that simple GMTI systems can use a small subset of the format with limited bandwidth channels, while robust systems can encode all aspects of the output data for use with wideband channels, including High Range Resolution (HRR) and pulse Doppler modes.

7.1.1.2.2 STANAG 7023: Air Reconnaissance Primary Imagery Data Standard

STANAG 7023 [4] establishes a standard data format and a standard transport architecture for the transfer of reconnaissance imagery and associated auxiliary data between reconnaissance Collection Systems and Exploitation Systems. STANAG 7023 is a self-describing format: the auxiliary data defines the format of the image data, which enables STANAG 7023 to handle any image from any type of sensor.

7.1.1.2.3 Gstreamer

Gstreamer [5] is a framework for building multimedia pipelines used for processing streams of data. From a distributed detection and tracking perspective, some protocol or framework is needed for streaming data between individual units or between the edge assets and the GCS. Gstreamer is a widely used open-source framework, which is compatible with most other standard media formats and supports integration with custom plugins for dedicated filters and codecs.

7.1.1.2.4 HEVC/H.265

HEVC/H.265 [6] (also known as MPEG-H Part 2) supports videos up to a resolution slightly higher than 8K (7680 x 4320). HEVC introduces many new compression techniques, which improve on the compression performance of its predecessor (AVC/H.264 [7]). Its performance is comparable to the VP9 codec.

The HEVC codec is covered by many patents. The patent owners (among which Apple, Microsoft, Motorola and Samsung) are grouped into two large patent pools (MPEG LA and HEVC Advance), which require licensing fees for the development and use of hardware supporting HEVC. HEVC Advance has recently waived the royalties for the use of software-based HEVC.

Qualcomm, Intel, NVIDIA and ARM provide CPUs, GPUs and SoCs with hardware acceleration support for HEVC. As such, the HEVC codec can be efficiently used in most modern smartphones and computer systems.

If patent licensing is not an issue, HEVC/H.265 could be employed as the main codec for video applications in a DSS. The SoC implementations allow applications to encode/decode HEVC/H.265 with a higher power efficiency than would be attainable with a software-based encoder/decoder.

7.1.1.2.5 Matroska

The Matroska [8] file format is a multimedia container format, designed to support multiple tracks of audio and video, as well as subtitles. Matroska files are serialized using the Extensible Binary Markup Language (EBML), a binary XML-inspired format. This allows the Matroska standard to be easily extended or adapted if required. Other features of Matroska are error resilience, fast seeking, and support for stereoscopic video.
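
EBML stores element sizes (and IDs) as variable-length integers whose total width is signalled by the leading bits of the first byte. A minimal decoder sketch for EBML data-size integers (element IDs conventionally keep the length-marker bit, which is stripped here):

```python
def decode_vint(data):
    """Decode an EBML variable-length integer (as used by Matroska).
    The number of leading zero bits in the first byte determines the
    total width; the first set bit is a length marker that is stripped
    from the value. Returns (value, bytes_consumed)."""
    first = data[0]
    width = 1
    mask = 0x80
    while width <= 8 and not (first & mask):
        width += 1
        mask >>= 1
    if width > 8:
        raise ValueError("invalid VINT")
    value = first & (mask - 1)       # clear the length-marker bit
    for b in data[1:width]:
        value = (value << 8) | b
    return value, width

print(decode_vint(b"\x81"))      # 1-byte VINT
print(decode_vint(b"\x42\x86"))  # 2-byte VINT
```

This self-delimiting size encoding is one reason the format is easy to extend: a parser can always skip an element it does not understand.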

From a streaming perspective, Matroska is not optimized for live-streaming purposes, but is suited to be used in non-live-streaming. As such, it may be used in conjunction with HTTP adaptive streaming technologies.

Matroska is released as an open standard, and its usage does not incur royalties. The libEBML parser, which can be used to parse Matroska files, is released under the LGPL license. The Matroska format is supported on all major operating systems and mobile devices.

In a SS4ISR, the Matroska format can be used as a container for media files. Due to its flexible design, it is able to contain any audio or video format, which makes it suitable for usage in combination with lesser-known formats. Although the format is not suitable for live-streaming, it can be used for regular streaming.

An alternative container format similar to Matroska is the WebM container format, which is defined as a subset of Matroska. As such, it only supports VP8/VP9 encoded video and Opus encoded audio.

7.1.1.2.6 MPEG-4 Part 14

MPEG-4 Part 14 [9] (also known as MP4) is an extension of the ISO Base Media File Format (ISOBMFF) and serves as a multimedia container. MP4 features wide support for many video and audio codecs, as well as multiple audio/video tracks, subtitles and stereoscopic video. The HEVC and H.264 video codecs are commonly used in conjunction with the MP4 container format.

MP4 is a suitable candidate for HTTP adaptive streaming, as its header can be included at the beginning of a file. This allows clients to efficiently skip through a video while fetching only the video data that is needed. For live-streaming, MP4 can be used with a technique known as fragmented streaming, in which the MP4 file format is used to keep appending new data while still serving a valid MP4 file. Although this approach can achieve real live-streaming, it can be considered a workaround. More suitable alternatives for live-streaming would be MPEG-TS (which can contain the same type of streams as MP4) or WebM.
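
The box (atom) structure that makes this header-first indexing possible is simple to walk: each box begins with a 4-byte big-endian size and a 4-byte type code. A minimal parser sketch over a synthetic two-box file:

```python
import struct

def iter_boxes(data, offset=0, end=None):
    """Iterate over top-level ISOBMFF/MP4 boxes: each box starts with a
    4-byte big-endian size (covering the whole box, header included) and
    a 4-byte type code. Because index metadata such as 'moov' sits in
    its own box, a client can locate it without reading the media data."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size == 1:    # 64-bit large-size box
            (size,) = struct.unpack(">Q", data[offset + 8:offset + 16])
        elif size == 0:  # box extends to the end of the file
            size = end - offset
        yield box_type.decode("ascii"), offset, size
        offset += size

# A tiny synthetic file: an 'ftyp' box followed by an empty 'moov' box.
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom" + struct.pack(">I", 512)
moov = struct.pack(">I4s", 8, b"moov")
print(list(iter_boxes(ftyp + moov)))
```

In a real (non-fragmented) MP4, the parser would skip directly to the `moov` box to build the sample index before requesting any `mdat` bytes.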

The MP4 format is likely to be covered by patents. However, the largest patent pool MPEG-LA does not charge royalties to end-users. Therefore, royalties only apply in encoding/decoding software or hardware. Note that the H.264 and HEVC codecs commonly used in conjunction with MP4 are definitely covered by patents from the MPEG-LA patent pool.

In a DSS, the MP4 format can be used as a container format for audio and video data. As a storage format, MP4 can be used to transfer recorded media to and from one or more DSSs. As multiple System on Chips (SoCs) are available which support H.264 / HEVC (and therefore MP4), the MP4 format can be used in scenarios where high-efficiency decoding is required.

7.1.1.2.7 Portable Network Graphics

Portable Network Graphics [10], whose acronym PNG can be pronounced "ping" or "P-N-G," is a compressed raster graphics format. It is commonly used on the Web and is also a popular choice for application graphics.

The PNG format was introduced in 1994, after the GIF and JPEG formats had already been around for several years. Therefore, PNG includes many of the benefits of both formats. For example, PNG images use lossless compression like GIF files, so they do not have any blurring or other artefacts that may appear in JPEG images. The PNG format also supports 24-bit color like the JPEG format, so a PNG image may include over 16 million colors. This is a significant difference between GIF and PNG, since GIF images can include a maximum of 256 colors.

Unlike the JPEG and GIF formats, the PNG format supports an alpha channel, or the “RGBA” color space. The alpha channel is added to the three standard color channels (red, green, and blue, or RGB) and provides 256 levels of transparency. JPEG images do not support transparent pixels and GIF images only support completely transparent (not partially opaque) pixels. Therefore, the PNG format allows Web developers and icon designers to fade an image to a transparent background rather than a specific color. A PNG with an alpha channel can be placed on any color background and maintain its original appearance, even around the edges.
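
The 256-level transparency described above corresponds to the standard "over" compositing operator; a single-pixel sketch:

```python
def over(fg, bg):
    """Composite one RGBA pixel over an opaque RGB background using the
    standard 'over' operator: out = fg * alpha + bg * (1 - alpha)."""
    r, g, b, a = fg
    alpha = a / 255.0
    return tuple(round(f * alpha + bk * (1.0 - alpha))
                 for f, bk in zip((r, g, b), bg))

# A half-transparent white pixel blends with whatever is behind it.
print(over((255, 255, 255, 128), (0, 0, 0)))      # on black
print(over((255, 255, 255, 128), (200, 30, 30)))  # on red
```

This per-pixel blend is why a PNG with an alpha channel keeps smooth edges on any background, whereas GIF's all-or-nothing transparency produces hard fringes.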

While the PNG format has many benefits, it is not suitable for all purposes. For example, digital photos are still usually saved as JPEGs, since PNGs take up far more disk space. GIFs are still used for animations, since PNG images cannot be animated. Additionally, GIFs remained common on many websites because browsers only recently provided full support for the PNG format. Now that most browsers and image editing programs support PNG, it has become a popular file format for web developers and graphic artists.

7.1.1.2.8 Tagged Image File Format

Tagged Image File Format (TIFF) [11] is a graphics file format created in the 1980s to be the standard image format across multiple computer platforms. The TIFF format can handle color depths ranging from 1-bit to 24-bit. Since the original TIFF standard was introduced, many small improvements have been made to the format, so there are now around 50 variations of it. So much for a universal format. More recently, JPEG has become the most popular universal format, because of its small file size and Internet compatibility.

7.1.1.2.9 STANAG 4545: NATO Secondary Imagery Format (NSIF) The NATO Secondary Imagery Format (NSIF) [12] is the standard for formatting digital imagery files and imagery-related products and exchanging them among NATO members. The NSIF is part of a collection of related standards and specifications, known as the NATO ISR Interoperability Architecture (NIIA), developed to provide a foundation for interoperability in the dissemination of intelligence-related products among different computer systems.

Secondary imagery is sensor data that has previously been exploited and/or processed into a human-readable picture. This format enables an operator at one workstation to compose and capture a multimedia image and send it to another workstation, where it can be reproduced exactly as it was composed on the first workstation.

7.1.1.2.10 STANAG 4609: NATO Digital Motion Imagery Standard Motion Imagery (MI) is a valuable asset for commanders that enables them to meet a variety of theatre, operational and tactical objectives for intelligence, reconnaissance and surveillance. STANAG 4609 [13] is intended to provide common methods for the exchange of MI across systems within and among NATO nations.

This standard addresses the applicability of commercial digital video standards and defines the metadata requirements for airborne motion imagery collection. The standard specifies the commercial standards to be used for the military community within NATO.

7.1.1.2.11 STANAG 7085: Interoperable Data Links for Imaging Systems STANAG 7085 [14] provides the interoperability standards for three classes of imagery data link used for primary imagery data transmission: analogue links described in Annex A, point-to-point digital links described in Annex B, and broadcast digital links described in Annex C.

7.1.2 Robotic Platform and Services Swarm technology is enabled by cheaper and smaller robotic platforms, sensors, communication and computing devices. The technology advances rapidly and new devices are continuously being developed and improved. In order to keep up with the technological advances made, multiple open-source standards are commonly used in the robotic and UAV communities to facilitate drop-in replacements for individual components and subsystems.

7.1.2.1 Joint Architecture for Unmanned Systems

7.1.2.1.1 Overview The Joint Architecture for Unmanned Systems (JAUS) is an international standard of the SAE AS-4 Unmanned Systems Steering Committee, which establishes a common set of message formats and communication protocols for supporting interoperability within and between unmanned vehicles and ground control stations.

The main goal of JAUS is to structure communication and interoperation of unmanned systems within a network. A JAUS system is made up of subsystems connected to a common data network. A Subsystem typically represents a physical entity in the system network, such as an unmanned vehicle or operator control unit.

The JAUS network is further subdivided into hierarchical layers. Subsystems are divided into Nodes, which represent a physical computing endpoint in the system. For example, a Node might be a computer or microcontroller within a Subsystem.

Nodes can then host one or more Components, which are commonly applications or threads running on the Node. Finally, Components are made up of one or more Services.

A Service simply provides some useful function for the system. The Service Oriented Architecture (SOA) enables distributed command and control of the unmanned systems. The SOA approach of JAUS attempts to formalize the message format and protocol interaction between system components. This approach is standardized by the JAUS Service Interface Definition Language (JSIDL), an XML-based language that provides the basic structure and syntax for specifying JAUS Services. All of the Services that are standardized by JAUS must be specified in valid JSIDL syntax.
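The Subsystem → Node → Component → Service hierarchy described above can be sketched with a few illustrative data classes (hypothetical Python types, not part of any JAUS SDK; the service URIs follow the urn:jaus:jss style used by the standard, but the IDs and names are invented):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical types mirroring the JAUS hierarchy: a Subsystem contains
# Nodes, Nodes host Components, and Components offer Services.

@dataclass
class Service:
    uri: str                 # e.g., "urn:jaus:jss:core:Transport"
    version: str = "1.0"

@dataclass
class Component:
    component_id: int
    services: List[Service] = field(default_factory=list)

@dataclass
class Node:
    node_id: int
    components: List[Component] = field(default_factory=list)

@dataclass
class Subsystem:
    subsystem_id: int
    name: str
    nodes: List[Node] = field(default_factory=list)

# One UAV subsystem with a single onboard computer (Node) running one
# application (Component) that exposes two core services:
uav = Subsystem(
    subsystem_id=101, name="UAV-1",
    nodes=[Node(node_id=1, components=[
        Component(component_id=1,
                  services=[Service("urn:jaus:jss:core:Transport"),
                            Service("urn:jaus:jss:core:Events")])])])
```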

7.1.2.1.2 JAUS Standard Documents The specifications of the JAUS standard are published as separate documents. The following two documents provide the foundations for the later specification of the JAUS services (Table 7-1).

Table 7-1: JAUS Specification Documents.

JAUS Standards Date Content

JAUS /SDP Transport Specification (SAE AS5669A) [15]

2009.02 Specifications for UDP, TCP, and Serial based data transmission of JAUS messages.

JAUS Service Interface Definition Language (SAE AS5684B) [16]

2015.08 Defines the data structures of services, messages and protocol, formalized as an XML schema.

The JAUS standard is built upon JSIDL which defines an XML schema that enables formal specification of JAUS Services, Messages and Message Protocol. This schema aids robust and reliable interoperability by removing some of the ambiguities often found in hand-written standards.

The JAUS services are grouped in Sets and published as related but separate documents. They describe generic concepts commonly found in unmanned systems. Table 7-2 and Table 7-3 list the most relevant Sets and Reports.

Table 7-2: JAUS Service Sets.

JAUS Standards Date Content

JAUS Core Service Set (SAE AS5710A) [17]

2010.08 Low-level services such as transport and discovery to enable basic interoperation.

JAUS Mobility Service Set (SAE AS6009A) [18]

2017.11 Common mobility services such as global positioning and vehicle platform control by defining abstract services that are agnostic to specific vehicle mobility types (ground vehicles, aircraft, etc.).

JAUS Environment Sensing Service Set (SAE AS6060A) [19]

2019.08

Environment sensing capabilities commonly found across all domains and types of unmanned systems in a platform-independent manner (range, visual, video, etc.).

JAUS Manipulator Service Set (SAE AS6057A) [20]

2014.03

Service definitions for controlling robotic manipulators. Messages are defined generically so they can be applied to many different types of manipulators (arms, grippers, pan/tilt, etc.).

JAUS HMI Service Set [21] (SAE AS6040)

2020.12 Service definitions for HMI interaction that includes drawing, keyboard input, pointing device input, analog and digital user controls.

JAUS Mission Spooling Service Set (SAE AS6062A) [22]

2018.08 Services definition to store mission plans, coordinate mission plans, and parcel out elements of the mission plan for execution.

JAUS Unmanned Ground Vehicle Service Set (SAE AS6091) [23]

2014.07 Represents the platform-specific capabilities commonly found in UGVs, and augments the Mobility Service Set [18], which is platform-agnostic.

Table 7-3: JAUS Relevant Reports.

JAUS Standards Date Content

JAUS Messaging over the OMG Data Distribution Service (DDS) (ARP6227) [24]

2018.08 Defines a standard representation of JAUS AS5684A message data in DDS IDL defined by the Object Management Group (OMG) CORBA 3.2 specification.

JAUS Compliance and Interoperability Policy (ARP6012) [25]

2014.09 Recommends an approach to documenting the complete interface of an unmanned system or component in regard to the application of the standard set.

Architecture Framework for Unmanned Systems (AIR5665B) [26]

2018.08 Describes the Architecture Framework for Unmanned Systems (AFUS). AFUS comprises a Conceptual View, a Capabilities View, and an Interoperability View. The Conceptual View provides definitions and background for key terms and concepts used in the unmanned systems domain. The Capabilities View uses terms and concepts from the Conceptual View to describe capabilities of unmanned systems and of other entities in the unmanned systems domain. The Interoperability View provides guidance on how to design and develop systems in a way that supports interoperability.

Some JAUS services sets are still under development and may become available in the future (Table 7-4).

Table 7-4: JAUS Future Service Sets.

JAUS Standards Content

JAUS USV Service Set (SAE AS6063)

Unmanned surface vehicle specific capabilities that are not supported by the higher level platform-independent services. This service enables interoperability on common elements specific to unmanned surface vehicles.

JAUS Unmanned Underwater Vehicle Service Set (SAE AS6111)

Unmanned underwater vehicle specific capabilities that are not supported by the higher level platform-independent services.

7.1.2.2 Eurobotics Multi-Annual Roadmap (MAR)

The Eurobotics Multi-Annual Roadmap (MAR) [27] is a detailed technical guide that identifies expected progress within the community and provides an analysis of medium- to long-term research and innovation goals. The MAR primarily covers three areas: Domains, System Abilities and Technologies.

7.1.2.2.1 Domains Domains are based on different business models, which in turn capture all parts of the market for robotics technology. The Domain overview moves beyond the simple division of markets into Industrial and Service and acknowledges the wide impact of robotics technologies and the importance of vertical end user markets.

7.1.2.2.2 System Abilities System Abilities provide an application-, domain- and technology-independent way of characterizing whole-system performance and, through the definition of levels, identify the different abilities that robotic systems can possess.

For each ability, the MAR defines a set of System Ability Levels, which provide a progressive characterization of what any system might be required to do for a particular application.

The following abilities have been identified:

• Adaptability, which includes:

• Parameter Adaptability.

• Component Adaptability.

• Task Adaptability.

• Cognitive Ability, which includes:

• Action Ability.

• Interpretative Ability.

• Envisioning Ability.

• Learning Ability.

• Reasoning Ability.

• Configurability.

• Decisional Autonomy.

• Dependability, which includes:

• Failure Dependability.

• Functional Dependability.

• Environment Dependability.

• Interaction Dependability.

• Interaction Ability, which includes:

• Human Robot Interaction.

• Robot Interaction.

• Human Robot Interaction Safety.

• Social Interaction Duration.

• Social Interaction Range.

• Social Interaction Role.

• Manipulation Ability, which includes:

• Grasping Ability.

• Holding Ability.

• Handling Ability.

• Motion Ability, which includes:

• Constrained Motion.

• Unconstrained Motion.

• Perception Ability, which includes:

• Perception Ability.

• Tracking Ability.

• Recognition Ability.

• Scene Ability.

• Location Ability.

7.1.2.2.3 Technologies Technologies are divided into clusters, each characterized by a purpose:

• Systems Development: Better systems and tools;

• Human Robot Interaction: Better interaction;

• Mechatronics: Making better machines; and

• Perception, Navigation and Cognition: Better action and awareness.

Details are given of the underlying individual technical components in each cluster and of metrics and benchmarks that may be used to establish the state-of-the-art and thus future progress.

7.1.2.3 ROS

Robot Operating System (ROS) [28] is a middleware layer for robotic systems. It provides tools, services and hardware abstractions for integrating computing devices, sensors and low-level control systems on a robot. ROS is widely adopted in the robotics community, and many components, such as cameras and thermal imaging sensors, are supported in ROS. ROS provides standard methods for handling image sensors, navigation services, etc., and it is very easy to upgrade individual components to keep up with the rapid advances in sensor and computing technology.

7.1.3 Data Exchange Services

7.1.3.1 Data Delivery Service Protocols

7.1.3.1.1 The Data Distribution Services Standard

7.1.3.1.1.1 Data-Centric Publish-Subscribe

The DDS specification describes a Data-Centric Publish-Subscribe (DCPS) [29] model for distributed application communication and integration. This specification defines both the Application Programming Interfaces (APIs) and the communication semantics (behavior and quality of service) that enable the efficient delivery of information from information producers to matching consumers.

The purpose of the DDS specification can be summarized as enabling the “Efficient and Robust Delivery of the Right Information to the Right Place at the Right Time.”

The expected application domains require DCPS to be high performance and predictable as well as efficient in its use of resources. To meet these requirements, it is important that the interfaces are designed in such a way that they:

• Allow the middleware to pre-allocate resources so that dynamic resource allocation can be reduced to the minimum;

• Avoid properties that may require the use of unbounded or hard-to-predict resources; and

• Minimize the need to make copies of the data.

DDS uses typed interfaces (i.e., interfaces that take into account the actual data types) to the extent possible. Typed interfaces offer the following advantages:

• They are simpler to use: the programmer directly manipulates constructs that naturally represent the data.

• They are safer to use: verifications can be performed at compile time.

• They can be more efficient: the execution code can rely on the knowledge of the exact data type it has in advance, to e.g., pre-allocate resources.

It should be noted that the decision to use typed interfaces implies the need for a generation tool to translate type descriptions into appropriate interfaces and implementations that fill the gap between the typed interfaces and the generic middleware.

QoS (Quality of Service) is a general concept that is used to specify the behavior of a service. Programming service behavior by means of QoS settings offers the advantage that the application developer only indicates ‘what’ is wanted rather than ‘how’ this QoS should be achieved.

Generally speaking, QoS comprises several QoS policies. Each QoS policy is an independent description that associates a name with a value. Describing QoS by means of a list of independent QoS policies gives rise to more flexibility.
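The data-centric idea — producers and consumers matched only by topic name and data type, with behavior steered by named QoS policies — can be sketched with a toy in-process bus (illustrative Python, not a DDS implementation; the topic, type, and policy names are assumptions):

```python
from collections import defaultdict

class Bus:
    """Minimal in-process sketch of data-centric publish-subscribe:
    writers and readers never reference each other; they are matched
    purely by (topic, data type), and QoS is a list of named policies."""

    def __init__(self):
        self._readers = defaultdict(list)   # (topic, type) -> [(callback, qos)]

    def subscribe(self, topic, data_type, callback, qos=None):
        self._readers[(topic, data_type)].append((callback, qos or {}))

    def publish(self, topic, sample, qos=None):
        # Only readers registered for this topic AND this data type match.
        for callback, _reader_qos in self._readers[(topic, type(sample))]:
            callback(sample)

bus = Bus()
received = []
bus.subscribe("SensorTrack", dict, received.append,
              qos={"RELIABILITY": "BEST_EFFORT", "HISTORY": "KEEP_LAST"})
bus.publish("SensorTrack", {"id": 7, "lat": 59.9, "lon": 10.7})
print(received)   # the matching reader got the sample
```

A real DDS middleware additionally enforces the QoS policies (reliability, history depth, deadlines, etc.) and pre-allocates resources per the requirements listed above; here they are carried only as metadata to show the shape of the interface.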

7.1.3.1.1.2 Real-Time Publish-Subscribe

The Real-Time Publish-Subscribe (RTPS) protocol [30] found its roots in industrial automation and was in fact approved by the IEC as part of the Real-Time Industrial Ethernet Suite IEC-PAS-62030. It is a field-proven technology that is currently deployed worldwide in thousands of industrial devices. RTPS was specifically developed to support the unique requirements of data-distribution systems. As one of the application domains targeted by DDS, the industrial automation community defined requirements for a standard publish-subscribe wire protocol that closely match those of DDS. As a direct result, a close synergy exists between DDS and the RTPS wire protocol, both in terms of the underlying behavioral architecture and the features of RTPS. The RTPS protocol is designed to be able to run over multicast and connectionless best-effort transports such as UDP/IP. The main features of the RTPS protocol include:

• Performance and quality-of-service properties to enable best-effort and reliable publish-subscribe communications for real-time applications over standard IP networks.

• Fault tolerance to allow the creation of networks without single points of failure.

• Extensibility to allow the protocol to be extended and enhanced with new services without breaking backwards compatibility and interoperability.

• Plug-and-play connectivity so that new applications and services are automatically discovered and applications can join and leave the network at any time without the need for reconfiguration.

• Configurability to allow balancing the requirements for reliability and timeliness for each data delivery.

• Modularity to allow simple devices to implement a subset of the protocol and still participate in the network.

• Scalability to enable systems to potentially scale to very large networks.

• Type-safety to prevent application programming errors from compromising the operation of remote nodes.

RTPS also supports dynamic discovery among the DDS entities belonging to the same DDS domain.

7.1.3.1.2 DDS for Time-Sensitive Networking (DDS-TSN) DDS for Time-Sensitive Networking specifies a Platform-Specific Model (PSM) of DDSI-RTPS for the Time-Sensitive Networking transport.

This specification will improve the robustness and the effectiveness of Data Exchange Services based on the DDS protocols.

DDS-TSN is currently in draft status.

7.1.3.1.3 Information-Centric Networking Information-Centric Networking (ICN) is an approach to evolve the Internet infrastructure to directly support access to named data, by introducing uniquely named data as a core Internet principle. Data becomes independent from location, application, storage, and means of transportation, enabling in-network caching and replication. The expected benefits are improved efficiency, better scalability with respect to information/bandwidth demand and better robustness in challenging communication scenarios. These concepts are known under different terms, including but not limited to: Network of Information (NetInf), Named Data Networking (NDN) and Publish/Subscribe Networking.

ICN concepts can be applied to different layers of the protocol stack: name-based data access can be implemented on top of the existing IP infrastructure, e.g., by providing resource naming, ubiquitous caching and corresponding transport services, or it can be seen as a packet-level internetworking technology that would cause fundamental changes to Internet routing and forwarding. In summary, ICN is expected to evolve the Internet architecture at different layers.

This family of protocols supports design patterns that are critical for battlefield scenarios involving highly dynamic and disrupted network conditions, where implementations over the TCP/IP architecture have struggled. The most relevant design patterns are:

• Host-independent abstractions;

• Multicast communication;

• Pervasive network-accessible storage;

• Opportunistic communication;

• Namespace synchronization as transport; and

• Data-centric security.

7.1.3.1.4 Message Queue Telemetry Transport Message Queue Telemetry Transport (MQTT) [31] is an open machine-to-machine information exchange protocol that uses the publish/subscribe principle. It was standardized in 2013 by the Organization for the Advancement of Structured Information Standards (OASIS). It is usually used for Internet-of-Things purposes in slow sensor networks.

In contrast to DDS, MQTT introduces a central component, the “broker”, which handles communication between publishers and subscribers. The centralized broker reduces the effort required of publishers, since the broker holds information for late joiners and handles the assured transfer of messages to subscribers. This matches the common usage of MQTT and is applicable to Soldier Systems as well. Since the chosen system buses also introduce a centralized infrastructure themselves, it is possible to host the MQTT broker on these performant system bus hosts and keep the client implementations in the devices minimal.

In case a second MQTT broker needs to be integrated into a system, the bridging function can be used. Bridged brokers share subscribed topics with each other so that information is openly shared between the two.

For low-performance sensor networks, MQTT-SN is specified. While MQTT uses TCP to maintain the connection, MQTT-SN uses UDP. This choice has several motivations: UDP is easier to implement and provides a connectionless communication model, so it is not necessary to keep a connection open and serve the keep-alives required by the lower protocol. This reduces resource usage and communication effort significantly.
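MQTT subscriptions are expressed as topic filters in which “+” matches exactly one topic level and “#” (allowed only as the last level) matches the entire remainder. A minimal matcher sketch of these wildcard rules (the topic names are hypothetical):

```python
def topic_matches(filter_, topic):
    """Check an MQTT topic name against a subscription filter.
    '+' matches exactly one level; '#' (last level only) matches the rest,
    including the parent level itself."""
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":                      # multi-level wildcard: done
            return True
        if i >= len(t_levels):            # topic ran out of levels
            return False
        if f != "+" and f != t_levels[i]: # literal level must match exactly
            return False
    return len(f_levels) == len(t_levels)

print(topic_matches("soldier/+/position", "soldier/alpha1/position"))  # True
print(topic_matches("soldier/#", "soldier/alpha1/status/battery"))     # True
print(topic_matches("soldier/+/position", "soldier/alpha1/status"))    # False
```

The broker evaluates exactly this kind of match for every incoming publication against every stored subscription, which is what makes the centralized design convenient for constrained clients.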

7.1.3.2 Streaming Protocols

7.1.3.2.1 RTP / RTCP / SRTP / SRTCP The Real-time Transport Protocol (RTP) [32] is a protocol designed for the transport of audio and video data over a network. Its purpose is to provide a framework for end-to-end delivery of media, and it has therefore seen use in many Voice-over-IP (VoIP) applications. RTP provides synchronization and jitter compensation by marking packets with timestamps. Due to these features, data is usually sent over UDP, allowing for fast data transfer.

RTP is operated in conjunction with the RTP Control Protocol (RTCP) [33] to allow the peers involved in an RTP session to exchange data on the quality of service of said session. A common use of the RTCP protocol is to facilitate flow control mechanisms in order to prevent network congestion.
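The fixed 12-byte RTP header carries the sequence number and timestamp that make reordering and jitter compensation possible. A minimal sketch of packing and unpacking it per RFC 3550 (payload type 96 is from the dynamic range; all field values here are illustrative):

```python
import struct

def pack_rtp_header(payload_type, seq, timestamp, ssrc, marker=False):
    """Pack the 12-byte fixed RTP header (RFC 3550): version 2,
    no padding, no extension, no CSRC entries."""
    byte0 = 2 << 6                                   # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

def unpack_rtp_header(data):
    """Inverse of pack_rtp_header for the fixed header fields."""
    byte0, byte1, seq, timestamp, ssrc = struct.unpack("!BBHII", data[:12])
    return {"version": byte0 >> 6, "marker": bool(byte1 >> 7),
            "payload_type": byte1 & 0x7F, "seq": seq,
            "timestamp": timestamp, "ssrc": ssrc}

hdr = pack_rtp_header(payload_type=96, seq=1, timestamp=3000, ssrc=0x1234)
print(unpack_rtp_header(hdr))
```

A receiver reorders packets by sequence number and schedules playout from the timestamp, absorbing network jitter in a playout buffer.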

The RTP protocol is an open protocol which is standardized through RFC documents and through the use of RTP audio or video profiles. These profiles specify the parameters used by the RTP protocol, tailored to the audio/video encoding in use.

The RTP and RTCP protocols are insecure by default. To increase security, the SRTP and SRTCP protocols (Secure RTP and Secure RTCP, respectively) have been developed. These protocols provide both an authentication and an encryption layer. AES is used as the default cipher in a stream cipher configuration.

Being a secure alternative to RTP and RTCP, the SRTP [34] and SRTCP protocols are good candidates for audio and video stream transport in a DSS. Due to the end-to-end design, some care should be taken in session setup (e.g., through session protocols such as SIP) to provide a reliable service. The protocols can be used in both broadcasting and streaming scenarios.

7.1.3.2.2 Recommendation H.323 H.323 [35] is a hybrid system constructed of centralized intelligent gatekeepers, multipoint control units (MCU), gateways, and less intelligent endpoints (Figure 7-1).

Figure 7-1: H.323 Architecture.

Endpoints (i.e., terminals) provide point-to-point and multipoint conferencing for audio and, optionally, video and data. Gateways interconnect to Public Switched Telephone Network (PSTN) or ISDN networks for H.323 endpoint interworking. Gatekeepers provide admission control and address translation services for terminals or gateways. MCUs are devices that allow two or more terminals or gateways to conference with audio and/or video sessions.

H.323 call signaling is based on the ITU-T Recommendation Q.931 protocol and is suited for transmitting calls across networks using a mixture of IP, PSTN, ISDN, and QSIG over ISDN.

Although the H.323 standard is more complete in its latest revisions, issues have arisen, such as long call-setup times, overhead of a full-featured conferencing protocol, too many functions required in each gatekeeper, and scalability concerns for gatekeeper call-routed implementations. In this respect, SIP solves some of the problems found in H.323 and offers a good alternative (see relevant section).

7.1.3.2.3 Real-Time Streaming Protocol (RTSP)

The Real Time Streaming Protocol (RTSP) [36] is an application-layer protocol for the setup and control of the delivery of data with real-time properties.

RTSP version 2.0 is published as RFC 7826. It is to be noted that, although based on the now obsolete RTSP 1.0, RTSP 2.0 is not backwards compatible other than in the basic version negotiation mechanism.

RTSP is designed for use in entertainment and communications systems to control streaming media servers. RTSP allows endpoints to establish and control media sessions. Clients of media servers issue common media control commands, such as play, record and pause, to facilitate real-time control of the media streaming from the server to a client (Video on Demand) or from a client to the server (voice recording).

The transmission of streaming data itself is not a task of RTSP. Most RTSP servers use the Real-time Transport Protocol (RTP) in conjunction with Real-time Control Protocol (RTCP) for media stream delivery. However, some vendors implement proprietary transport protocols.

7.1.3.2.4 Session Description Protocol (SDP)

The Session Description Protocol (SDP) [37] is a method to describe the media supported in a session. SDP does not deal with establishing the session, but only with describing the media supported by the endpoints in the call. The IETF published the original specification as a Proposed Standard in April 1998, and subsequently published a revised specification as RFC 4566 in July 2006.

SDP is used for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. SDP is intended to be general purpose so that it can be used in a wide range of network environments and applications. However, SDP does not deliver any media by itself but is used between endpoints for negotiation of media type, format, and all associated properties. The set of properties and parameters is often called a session profile.

An SDP session description consists of a number of lines of text of the form:

<type>=<value>

where <type> must be exactly one case-significant character and <value> is structured text whose format depends on <type>.

In actual use, the call owner sends multicast messages which contain the description of the session, e.g., the name of the owner, the name of the session, the coding, the timing, etc. Depending on this information, the recipients of the advertisement take a decision about participation in the session.
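A hypothetical session description and a minimal parser illustrate the <type>=<value> line format (the addresses are drawn from documentation ranges and every field value is invented):

```python
# An invented SDP body announcing one multicast H.264 video stream.
EXAMPLE_SDP = """\
v=0
o=alice 2890844526 2890844526 IN IP4 198.51.100.1
s=Sensor feed
c=IN IP4 233.252.0.1
t=0 0
m=video 51372 RTP/AVP 96
a=rtpmap:96 H264/90000
"""

def parse_sdp(text):
    """Split an SDP body into its (<type>, <value>) pairs; <type> is a
    single case-significant character such as v, o, s, c, t, m, a."""
    fields = []
    for line in text.splitlines():
        if not line:
            continue
        key, _, value = line.partition("=")
        fields.append((key, value))
    return fields

fields = parse_sdp(EXAMPLE_SDP)
session = dict(fields)          # note: collapses repeated types (e.g., 'a=')
print(session["s"])             # → Sensor feed
```

A recipient of such an announcement reads the connection line (c=) and media line (m=) to decide whether and how to join the session.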

SDP is designed to be extensible to support new media types and formats. SDP can be used in conjunction with Real-time Transport Protocol (RTP), Real-time Streaming Protocol (RTSP), Session Initiation Protocol (SIP) and even as a standalone format for describing multicast sessions.

7.1.3.2.5 Session Initiation Protocol In every communication session, there needs to be a standard for the control of that session. For instance, it is necessary to communicate about how a connection should be set up. The Session Initiation Protocol (SIP) [37] is a protocol designed to provide this type of communication.

SIP identifies two parties: a client, who wishes to set up a connection to some destination, and a server, which responds to the client's requests by indicating whether this connection can be set up. The SIP protocol is similar to the HTTP protocol in that it is text-based and uses response codes to categorize the answers given by the server.

Being a signaling protocol, SIP only provides a method for the orchestration of communication. For other parts of the communication (e.g., the transfer of data), other protocols are needed. For example, in a live video conference call setting, SIP can be used to set up an RTP session over UDP which is used to transmit VP9-encoded video. Although SIP messages are unsecured by default, they can be secured using Transport Layer Security (TLS).

In Voice-over-IP communication, the SIP protocol is often used as the main signaling protocol. This is reflected in SIP terminology, which corresponds to regular telephone terminology. This is also a potential use case for SIP in a DSS, where it could be used to set up communication; for instance, a multicast voice channel between a group of soldiers. Because the SIP protocol sees widespread use in the telephone industry, it might also be used to connect the DSS to external systems if so desired.
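The text-based, HTTP-like shape of SIP can be sketched with a hypothetical 200 OK response and a parser for its status line (the hosts, tags, and identifiers are invented; only the header names and the z9hG4bK branch prefix follow the SIP specification):

```python
# An invented SIP response as it might conclude an INVITE transaction.
EXAMPLE_RESPONSE = (
    "SIP/2.0 200 OK\r\n"
    "Via: SIP/2.0/TLS squad.example.mil;branch=z9hG4bK776asdhds\r\n"
    "From: <sip:leader@example.mil>;tag=1928301774\r\n"
    "To: <sip:squad@example.mil>;tag=a6c85cf\r\n"
    "Call-ID: a84b4c76e66710\r\n"
    "CSeq: 314159 INVITE\r\n"
    "Content-Length: 0\r\n"
    "\r\n")

def parse_status_line(message):
    """Return (version, code, reason) from a SIP response's first line,
    mirroring how HTTP status lines are parsed."""
    first = message.split("\r\n", 1)[0]
    version, code, reason = first.split(" ", 2)
    return version, int(code), reason

print(parse_status_line(EXAMPLE_RESPONSE))  # → ('SIP/2.0', 200, 'OK')
```

The numeric response classes (1xx provisional, 2xx success, etc.) let a client categorize the server's answer without understanding every header.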

7.2 ALGORITHMS

7.2.1 Detection and Tracking Algorithms Detection, tracking and classification are fundamentally the process of balancing the probability of observing the target against the probability of false alarm/mis-classification of the input signal. Detection performance is mainly given by the Signal-to-Noise Ratio (SNR) of the system, which produces detections when input signals are above some defined signal threshold relative to the system noise/clutter level. A gain in SNR can be attained using the optimal detector of the matched filter signal processor [38].
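A toy one-dimensional illustration of the matched filter: correlating a noisy input with the known pulse template concentrates the target's energy at one lag, lifting it above the noise floor (illustrative Python; the signal values are synthetic):

```python
import random

def matched_filter(signal, template):
    """Correlate the input with a known template (the matched filter);
    the peak output marks the most likely target position."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

random.seed(0)
template = [1.0, 1.0, 1.0, 1.0]                    # known pulse shape
signal = [random.gauss(0.0, 1.0) for _ in range(64)]
for j, t in enumerate(template):                   # echo buried at index 30
    signal[30 + j] += 4.0 * t

out = matched_filter(signal, template)
print(out.index(max(out)))                         # peak at the echo position
```

Thresholding the correlator output, rather than the raw samples, is what yields the SNR gain: the template sums the target energy coherently while the noise adds incoherently.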

Tracking is based on the observation that real target detections are correlated in space/time whereas false noise detections are uncorrelated in space/time. Typically, the repeated stochastic detection process is compared to a model for predicting the likelihood of observing the same target detection over consecutive observations for a given space/time localization. There are many classes of models available to increase system tracking performance.
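The space/time-correlation idea can be illustrated with a greedy nearest-neighbour association step between predicted track positions and new detections (a toy sketch, not one of the model classes cited; real systems would use probabilistic prediction, e.g., Kalman filters):

```python
import math

def associate(tracks, detections, gate=5.0):
    """Greedily match each predicted track position to the closest
    detection inside a spatial gate; leftover detections become
    candidate new tracks (or false alarms to be pruned later)."""
    unmatched = list(detections)
    assignments = {}
    for tid, pred in tracks.items():
        if not unmatched:
            break
        best = min(unmatched, key=lambda d: math.dist(pred, d))
        if math.dist(pred, best) <= gate:
            assignments[tid] = best
            unmatched.remove(best)
    return assignments, unmatched

tracks = {1: (10.0, 10.0), 2: (40.0, 40.0)}
detections = [(11.0, 9.5), (80.0, 80.0), (41.0, 39.0)]
matched, leftover = associate(tracks, detections)
print(matched)    # → {1: (11.0, 9.5), 2: (41.0, 39.0)}
print(leftover)   # → [(80.0, 80.0)]
```

Repeating this over consecutive frames is what separates real targets, whose detections stay inside the gate, from uncorrelated false alarms, which fail to accumulate consistent associations.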

System classification is closely linked to the matched filtering processes, i.e., matching input signal to a template of target in question. The spread in observed signals, called features, for a given target determines the likelihood of assigning correct identification to input signals. Typically, features from different types of targets overlap, causing mis-labeling of incoming signals [39].

An algorithm producing detections, tracking and classifications must deliver some basic data properties, e.g., ID of target detection/track, localization in space/time, list of target specific properties (i.e., features) and confidence of estimates.

7.2.2 Swarm Control Algorithms A prioritized sequential graduated optimization framework (Figure 7-2), similar to that in Ref. [40], offers a solution to realize multi-agent trajectory planning. The approach in Ref. [40] leveraged two optimal control algorithms: Level Set (LS) and Pseudospectral (PS) methods. The LS method (global planner) provided the warm start to the PS method (local planner). Both techniques were MATLAB dependent and had to be modified to make them usable for embeddable applications. GPOPS-II [41] provides a robust implementation of PS methods and is available through the University of Florida. GPOPS-II is MATLAB software intended to solve general nonlinear optimal control problems in conjunction with the powerful open-source interior-point sparse nonlinear programming solver IPOPT [42]. Although GPOPS-II has proven to be quite powerful for solving trajectory optimization problems, it remains limited by its MATLAB implementation in terms of portability, embeddability, and computational efficiency. There is a new C/C++ version of GPOPS-II, CGPOPS [43], that may help overcome some of these issues.

Figure 7-2: Multi-Agent Trajectory Planning Architecture.

Other embeddable local planners such as ACADOS [44], PSOPT [45], and OptimTraj [46] offer potential but limited solutions. ACADOS is a fast and embeddable solver for nonlinear optimal control. It is free open-source software from the University of Freiburg, implemented in C++ and Python with a MATLAB interface. However, recent evaluations show that ACADOS is only efficient for small problems, warm starts and suboptimal solutions. PSOPT and OptimTraj present a MATLAB-like problem specification akin to GPOPS-II, and likewise are not directly embeddable onto a board such as the Jetson TX2.

Other techniques to solve trajectory optimization problems include ALTRO [47] and FASTER [48]. Both methods utilize a form of graduated optimization while running onboard the platform. ALTRO comprises two stages: the first stage solves a coarse solution via an improved Augmented Lagrangian-iLQR method that serves as the warm start for the second stage, an active-set projection method that achieves high-precision constraint satisfaction. FASTER ensures safety without sacrificing speed by always having a feasible, safe back-up trajectory in the free-known space at the start of each replanning step. FASTER leverages Mixed Integer Quadratic Programming for choosing the trajectory interval allocation, and the time allocation is found by a line search algorithm initialized with a heuristic computed from the previous replanning iteration. It was tested extensively both in simulation and in real hardware, showing agile flights in unknown cluttered environments with velocities up to 7.8 m/s.

Search-based Motion Planning (SMP) for quadrotor navigation [49] plans a trajectory in a lower-dimensional state space and refines the final trajectory via unconstrained Quadratic Programming (QP). The SMP approach efficiently generates resolution-complete (i.e., optimal in the discretized space), safe, and dynamically feasible trajectories via a Linear Quadratic Minimum Time (LQMT) formulation. Smoothness is guaranteed by imposing quadratic jerk or snap constraints. Heuristics such as minimum time with a speed constraint are used to convert the graph search into an LQMT problem. The method has been extended to planning under motion uncertainty, planning with a limited field of view and moving obstacles [50], and planning in narrow environments [51].
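The refinement step can be sketched in one dimension: given coarse waypoints with fixed endpoints, gradient descent on the sum of squared third differences (a discrete jerk cost) plays the role of the unconstrained QP refinement. The cost, step size, and fixed-endpoint handling are illustrative and far simpler than the LQMT formulation of Ref. [49].

```python
def jerk_cost(x):
    """Sum of squared third differences, a discrete stand-in for jerk."""
    return sum((x[i+2] - 3*x[i+1] + 3*x[i] - x[i-1]) ** 2
               for i in range(1, len(x) - 2))

def smooth(x, lr=0.005, iters=500):
    """Gradient descent on the jerk cost with both endpoints held fixed.

    The cost is a convex quadratic, so a small enough step size gives a
    monotone decrease toward the unconstrained-QP minimizer.
    """
    x = list(x)
    n = len(x)
    for _ in range(iters):
        g = [0.0] * n
        for i in range(1, n - 2):
            e = x[i+2] - 3*x[i+1] + 3*x[i] - x[i-1]
            g[i-1] += -2 * e          # d(e**2)/dx for each term of e
            g[i]   +=  6 * e
            g[i+1] += -6 * e
            g[i+2] +=  2 * e
        for k in range(1, n - 1):     # endpoints stay pinned
            x[k] -= lr * g[k]
    return x
```

In practice the same quadratic would be solved in closed form as a small linear system; iterating is used here only to keep the sketch dependency-free.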

7.2.3 Swarm Networking Algorithms

The main task of swarming networks (or wireless sensor networks in general) is to sense data and transmit it to a base node or other nodes in a multi-hop fashion. Routing is essential to perform such a task over a network infrastructure. Routing computes a path that carries the data efficiently from source to destination through several intermediate nodes. A variety of algorithms perform this computation, taking into account power and resource limitations, the quality of the wireless channel, and possible packet losses and delays.

A robust, high-performance routing protocol should be scalable, stable, and energy efficient, and should utilize the highest available bandwidth with very low latency.

Routing technology provides the mechanisms on which a routing protocol is implemented. Conventional routing technologies can be used to establish a UAV swarm ad hoc network. The six most common technologies are as follows:

• Store-carry-forward technology: When no relay node can be found at a given time, the current node stores and carries the datagram until it finds a forwarding node, at the cost of added delay.

• Greedy forwarding technology: Selects the node closest to the destination. However, that node must be within communication range.

• Path discovery technology: Floods the routing request, maximizing the reachability of a path. However, it consumes considerable bandwidth.

• Single-path technology: Uses a single path for transmission. It is not a robust solution, since it depends on one path and cannot fall back on alternative routes.

• Multi-path technology: Transmits over many links, thereby increasing robustness.

• Predictive routing technology: Predicts the future position of a node from its position, velocity, and direction, then chooses the optimal next hop. It is the more suitable approach for UAV swarm networks.
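The predictive idea can be sketched as follows, assuming each node advertises its position and velocity: future positions are dead-reckoned over a short horizon, and the next hop is the in-range neighbor predicted to end up closest to the destination, falling back to store-carry-forward when no neighbor qualifies. The data layout and parameters are illustrative.

```python
import math

def predict(node, horizon):
    """Dead-reckon a node's future position from its position and velocity."""
    (x, y), (vx, vy) = node["pos"], node["vel"]
    return (x + vx * horizon, y + vy * horizon)

def next_hop(current, neighbors, dest, comm_range, horizon=1.0):
    """Among neighbors predicted to remain in range, pick the one predicted
    closest to the destination; None means store-carry-forward for now."""
    here = predict(current, horizon)
    best, best_d = None, math.dist(here, dest)
    for name, nb in neighbors.items():
        there = predict(nb, horizon)
        if math.dist(here, there) <= comm_range and math.dist(there, dest) < best_d:
            best, best_d = name, math.dist(there, dest)
    return best
```

A real protocol would also weigh link quality, residual energy, and prediction confidence rather than geometric progress alone.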

The routing protocols of traditional ad hoc networks are not suitable for UAV swarm communication: unlike traditional network nodes, which are stationary, UAV communication nodes move rapidly.

A general classification of routing protocols is as follows:

• Topology based Routing Protocols.

• Geographic/position based Routing Protocols.

• Swarm Intelligence based Routing Protocols.

Topology-based routing protocols use Internet Protocol addresses to identify nodes and use existing link information to forward packets along appropriate paths in the network. This class has four subclasses:

1) Static Routing: Uses static, non-updateable routing tables. It suits fixed-topology networks and is therefore not well suited to swarm systems, where every communication node is mobile.

2) Proactive Routing: Updates the routing table periodically to maintain the best communication topology between nodes, bringing the advantage of near real-time communication. However, maintaining that topology requires continuous message exchange.

3) Reactive Routing: An aperiodic, on-demand routing protocol. Because it is aperiodic, it is not suitable for near real-time applications.

4) Hybrid Routing: Uses proactive routing for adjacent nodes in the same zone and reactive routing for nodes in different zones. However, maintaining an optimal hybrid routing network is difficult, because of the real-time behavior and message exchanges required to compute optimal routes under the dynamic mobility of UAV systems.

Geographic/position-based routing protocols overcome problems that topology-based approaches cannot handle. Geographical position data, the optimal number of hops between nodes based on position, the density of nodes within a region, etc., are used to compute an optimal routing scheme for the network. Methods in this class include Greedy Perimeter Stateless Routing, Geographic Load Share Routing, Mobility Prediction-based Geographic Routing, and the Reactive-Greedy-Reactive routing protocol. The downside of this approach is packet loss: since routing relies on correct position estimates of the UAV nodes, in a highly mobile environment packets may be lost if the next-hop computation is not updated in time. Nevertheless, the approach has great potential for tracking, searching, and multitasking with UAV swarms.

Swarm Intelligence-based routing protocols address problems of communication range, network expansion, and information leakage. They are mainly inspired by the swarming patterns of fish, birds, insects, etc. By adjusting the topology of the group, communication security and optimal data bandwidth can be achieved.

As in most engineering problems, there is no single best technology and routing protocol for swarm UAVs. Different UAV task scenarios place different requirements on routing protocols and technologies. For instance, consider a swarm of large surveillance drones that do not fly very close to each other. Here the network must support high bandwidth, since many video sources are streamed to the Command and Control Center, while the latency of the video stream may not be an issue. On the other hand, a group of small quadcopters flying in dense formation needs latency as low as possible, since the vehicles share 3-D positions and other telemetry to avoid collisions. A suitable network technology and protocol should therefore be determined for each specific UAV swarm application.

7.2.4 Localization and Mapping Algorithms in Swarm Systems

Localization and mapping for robotic systems in GNSS-denied environments is an important but very difficult problem. Methods developed for single-agent systems are relatively mature compared to methods for multi-agent systems. Although the problem is a hot research topic, proposed methods have been demonstrated only in limited environments on small robot teams; applying them to real-life scenarios with large swarms remains unsolved. Several methods compiled from recent research papers are presented in this section to give an idea of possible solutions to the localization and mapping problem in GNSS-denied environments.

This section considers only SLAM algorithms that use visual sensors (monocular or stereo); algorithms using other sensors, such as LIDAR, are not covered. Some recently proposed multi-robot collaborative visual SLAM algorithms are mentioned without diving into detail; they are chosen so that both centralized and decentralized methods are represented.

To perform multi-robot collaborative SLAM in a swarm system, each robot must run single-robot SLAM itself and share its results with the other robots and/or the Ground Control Station. Most collaborative SLAM studies in the literature are based on ORB-SLAM [52], which is considered state of the art among SLAM algorithms. ORB-SLAM is a keyframe-based, robust SLAM algorithm usable in both indoor and outdoor applications. ORB-SLAM2 [53] and ORB-SLAM3 [54] are newer variants that add support for new sensor types and multiple maps.

Ref. [55] is a collaborative visual SLAM method with a centralized architecture. It runs ORB-SLAM2 on each agent but does not use the loop-closure feature. Each agent keeps its own local map, limited to 50 keyframes, and sends this map to the server periodically; to reduce bandwidth, only newly added or updated keyframes are sent. The server keeps all keyframes received from the agents. At first, a separate map is maintained for each agent. When a loop closure is detected between agents (two agents observe the same place), their server maps are merged and the previous separate maps are deleted. The method has been demonstrated indoors with two hand-held cameras and outdoors with four UAVs equipped with monocular cameras. Results from these experiments are shown in Figure 7-3.
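The agent- and server-side bookkeeping just described can be sketched as follows. The class names, cap handling, and pose payloads are illustrative, and the pose alignment a real system performs when merging maps is omitted.

```python
class Agent:
    """Per-agent bookkeeping in a centralized scheme like Ref. [55]:
    a bounded local map, with only new/updated keyframes uploaded."""
    def __init__(self, name, cap=50):
        self.name, self.cap = name, cap
        self.keyframes = {}      # kf_id -> pose (illustrative payload)
        self.dirty = set()       # ids not yet sent to the server

    def add_keyframe(self, kf_id, pose):
        self.keyframes[kf_id] = pose
        self.dirty.add(kf_id)
        while len(self.keyframes) > self.cap:       # keep the local map bounded
            oldest = min(self.keyframes)
            self.keyframes.pop(oldest)
            self.dirty.discard(oldest)

    def sync(self, server):
        """Periodic upload: send only the delta since the last sync."""
        delta = {k: self.keyframes[k] for k in self.dirty}
        server.receive(self.name, delta)
        self.dirty.clear()
        return len(delta)

class Server:
    def __init__(self):
        self.maps = {}           # map_id -> {kf_id: pose}; starts per-agent

    def receive(self, agent, delta):
        self.maps.setdefault(agent, {}).update(delta)

    def merge(self, a, b):
        """On an inter-agent loop closure, fuse the two maps and drop the
        previous separate maps (pose alignment omitted in this sketch)."""
        merged = {**self.maps.pop(a), **self.maps.pop(b)}
        self.maps[a + "+" + b] = merged
```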

Figure 7-3: Experimental Results of Ref. [55] with 4 UAVs.

JORB-SLAM [56] is another centralized method for collaborative visual SLAM. It extends ORB-SLAM2 to use data from multiple cameras. The relative localization problem (robot-robot loop closure) is solved using AprilTags placed on the agents. ORB-SLAM2 runs on the agents simultaneously, and their results are fused in a server agent by a graph-based global optimizer back end. In addition to place recognition, which is used to detect inter-agent loop closures, the AprilTags on the agents add robustness to the system. The method has been demonstrated indoors with hand-held cameras. The experimental setup is given in Figure 7-4.

Figure 7-4: Hand-Held Camera System with APRIL Tag Utilized in JORB-SLAM [56]. Image as seen by robot camera.

DOOR-SLAM [57] is a distributed, online, outlier-resilient SLAM method for robotic teams. It does not require maintaining full connectivity between robots. Inter-agent loop closures can be detected without exchanging raw data: sharing the feature points extracted from keyframes suffices. Because outlier rejection is applied to the loop closures, incorrect loop-closure candidates caused by perceptual aliasing, which could corrupt the SLAM result, are removed. The stereo visual odometry algorithm from RTAB-Map [58] runs on the agents; there is no intra-robot loop-closure detection and closing, so the agents do not run full SLAM themselves. The method introduces two key modules:

1) Pose-graph optimizer combined with a distributed pairwise consistent measurement set maximization algorithm; and

2) Distributed SLAM front-end that detects inter-robot loop closures without exchanging raw sensor data.

For place recognition, the NetVLAD [59] method, based on Convolutional Neural Networks (CNNs), is used. The Buzz scripting language, which integrates with ROS, handles the multi-robot communication issues. The method was demonstrated with two DJI Matrice 100 drones equipped with Intel RealSense D435 stereo cameras and NVIDIA Jetson TX2 companion computers. The test setup and results are given in Figure 7-5.
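The outlier-rejection step can be illustrated with a toy version of pairwise consistency maximization: a loop-closure measurement is kept only if it is mutually consistent with the others, here via a greedy approximation. DOOR-SLAM solves the equivalent maximum-clique problem; the scalar "measurements" and tolerance test below are illustrative stand-ins for the real relative-pose consistency checks.

```python
def consistent(m1, m2, tol=0.5):
    """Hypothetical pairwise test: two measurements are deemed consistent
    when they imply similar offsets (a stand-in for the geometric check)."""
    return abs(m1 - m2) <= tol

def max_consistent_set(measurements, tol=0.5):
    """Greedy approximation of the largest pairwise-consistent subset,
    returned as sorted indices into `measurements`."""
    best = []
    for i in range(len(measurements)):
        group = [i]                       # grow a consistent group around seed i
        for j in range(len(measurements)):
            if j != i and all(consistent(measurements[j], measurements[g], tol)
                              for g in group):
                group.append(j)
        if len(group) > len(best):
            best = group
    return sorted(best)
```

Loop closures outside the returned set are the perceptual-aliasing outliers that would otherwise corrupt the pose graph.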

(a) Without Outlier Rejection. (b) With Outlier Rejection.

Figure 7-5: Experimental Setup and Results of DOOR-SLAM [57].

Ref. [60] is another distributed collaborative visual SLAM method, which runs an enhanced version of ORB-SLAM on the agents. The authors enhance ORB-SLAM with two new modules, re-localization and active loop closure. For the multi-robot collaborative SLAM problem, the method addresses two main issues: 1) real-time communication between robots; and 2) relative pose estimation and map merging. Communication is handled through the ROS framework. For relative pose estimation, the DBoW2 [61] place recognition method, also used to detect loops in ORB-SLAM, is employed. Each agent performs enhanced ORB-SLAM for its own localization and mapping and shares the local map it produces with the other robots. Each robot keeps the map data received from others and processes it like its own local map data (applying bundle adjustment, etc.). Relative poses are estimated by finding overlaps between maps (via DBoW2). The method is demonstrated on public datasets and in an indoor application with hand-held camera systems.
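The overlap-detection idea can be sketched with a plain cosine similarity over word histograms. DBoW2 actually uses hierarchical vocabularies of binary words and its own scoring, so the histograms, threshold, and map layout below are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words histograms (dicts)."""
    num = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

def find_overlap(map_a, map_b, threshold=0.8):
    """Return (kf_a, kf_b) pairs whose word histograms are similar enough
    to propose an inter-map loop closure and trigger relative pose
    estimation between the two agents."""
    matches = []
    for ka, ha in map_a.items():
        for kb, hb in map_b.items():
            if cosine(ha, hb) >= threshold:
                matches.append((ka, kb))
    return matches
```

Each proposed pair would then be verified geometrically before the maps are merged.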

Do et al. propose a robust loop-closure method for multi-robot map fusion [62], introducing a new loop-closure quality measure. Selecting the maximum set of valid loop closures satisfying the quality measure is cast as a maximum edge-weight clique problem and solved as such. Each agent builds its own local map, and the agents' maps are merged after the valid loop closures are processed. The method is demonstrated on a dataset obtained from an Autonomous Underwater Vehicle (AUV).

7.3 REFERENCES

[1] STANAG 4586 Ed.3 Nov 2012, Standard Interfaces of UAV Control System (UCS) for NATO UAV Interoperability, NATO Standardization Agency (NSA), November 2012.

[2] MAVLINK Micro Air Vehicle Communication Protocol. MAVLink Developer Guide. https://mavlink.io/en/ Accessed 22 May 2022.

[3] STANAG 4607 Ed.3 Sep 2010, NATO Ground Moving Target Indicator (GMTI) Format, NATO Standardization Agency (NSA), 2010.

[4] STANAG 7023 Ed.4 Oct 2009, Air Reconnaissance Primary Imagery Data Standard, NATO Standardization Agency (NSA), 2009.

[5] GStreamer Open-Source Multimedia Framework 1.20.0. https://gstreamer.freedesktop.org/ Accessed 3 Feb 2022.

[6] ITU H.265 (Version 6) / ISO/IEC 23008-2: High Efficiency Video Coding, 2019.

[7] ITU H.264 (Version 13) Advanced Video Coding for Generic Audiovisual Services, 2019.

[8] Matroska Element Specification, 2005 ‒ 2020. https://www.matroska.org/index.html

[9] ISO/IEC 14496-14 Information Technology ‒ Coding of Audio-Visual Objects ‒ Part 14: MP4 file format, 2003.

[10] International Standard 15948:2003 – Portable Network Graphics (PNG): Functional specification 2003.

[11] TIFF Specification Revision 6.0, 1992.

[12] STANAG 4545 Ed.2 May 2013, NATO Secondary Imagery Format (NSIF), NATO Standardization Agency (NSA), 2013.

[13] STANAG 4609 Ed.3 May 2009, NATO Digital Motion Imagery Standard, NATO Standardization Agency (NSA), 2009.

[14] STANAG 7085 Ed.3 Oct 2011, NATO Interoperable Data Links for ISR Systems, NATO Standardization Agency (NSA), 2011.

[15] AS5669A JAUS / SDP Transport Specification, 22 Apr 2019.

[16] AS5684 JAUS Service Interface Definition Language, 15 May 2020.

[17] AS5710 JAUS Core Service Set, 24 Apr 2015.

[18] AS6009 JAUS Mobility Service Set, 09 Sep 2017.

[19] AS6060A JAUS Environment Sensing Service Set, 28 Oct 2021.

[20] AS6057 JAUS Manipulator Service Set, 24 Apr 2015.

[21] AS6040 JAUS HMI Service Set, 09 Dec 2020.

[22] AS6062 JAUS Mission Spooling Service Set, 16 Aug 2018.

[23] AS6091 JAUS Unmanned Ground Vehicle Service Set, 22 Apr 2019.

[24] ARP6227 JAUS Messaging over the OMG Data Distribution Service (DDS), 23 Aug 2018.

[25] ARP6012 JAUS Compliance and Interoperability Policy, 05 Sep 2014.

[26] AIR5665B Architecture Framework for Unmanned Systems, 23 Aug 2018.

[27] Eurobotics, “Robotics 2020 Multi-Annual Roadmap,” December 2016.

[28] ROS. ROS ‒ Robot Operating System. https://www.ros.org/ Accessed 22 May 2022.

[29] Object Management Group (2015), “Data Distribution Services,” Issue 1.4, April 2015.

[30] OMG, “The Real-Time Publish-Subscribe Wire Protocol DDS Interoperability Wire Protocol Specification – version 2.2,” November 2014.

[31] OASIS: MQTT Version 5.0, 2019.

[32] Schulzrinne, H., Casner, S., Frederick, R. and Jacobson, V., “RTP: a Transport Protocol for Real-Time Applications,” RFC 3550, July 2003.

[33] Huitema, C., “Real Time Control Protocol (RTCP) attribute in Session Description Protocol,” RFC 3605, October 2003.

[34] Baugher, M., McGrew, D., Naslund, M., Carrara, E., and Norrman, K., “The Secure Real-Time Transport Protocol,” RFC 3711, March 2004.

[35] ITU, H.323, “Packet Based Multimedia Communications Systems,” February 1998.

[36] Schulzrinne, H., Rao, A., Lanphier, R. et al., “Real Time Streaming Protocol (RTSP),” RFC 7826, December 2016.

[37] Handley, M., Jacobson, V., and Perkins, C., “SDP: Session Description Protocol,” RFC 4566, July 2006.

[38] Levanon, N., Radar Principles, Wiley & Sons, 1988.

[39] Duda, R., Hart, P., and Stork, D., Pattern Classification, Second edition, Wiley & Sons, 2001.

[40] Robinson, D.R., Mar, R.T., Estabridis, K. and Hewer, G., “An Efficient Algorithm for Optimal Trajectory Generation for Heterogeneous Multi-Agent Systems in Non-Convex Environments,” in IEEE Robotics and Automation Letters, 3(2), pp. 1215-1222, April 2018, doi: 10.1109/LRA. 2018.2794582.

[41] Patterson, M.A. and Rao, A.V., “GPOPS-II: A MATLAB Software for Solving Multiple-Phase Optimal Control Problems Using HP-Adaptive Gaussian Quadrature Collocation Methods and Sparse Nonlinear Programming,” ACM Transactions on Mathematical Software, 41(1), pp. 1-37, October 2014.

[42] Ipopt Documentation. https://coin-or.github.io/Ipopt/ Accessed 22 May 2022.

[43] Agamawi, Y.M. and Rao, A.V., “CGPOPS: A C++ Software for Solving Multiple-Phase Optimal Control Problems Using Adaptive Gaussian Quadrature Collocation and Sparse Nonlinear Programming,” ACM Trans. Math. Software, 6(3), July 2020. doi:10.1145/3390463.

[44] Verschueren, R., Frison, G., Kouzoupis, D., van Duijkeren, N., Zanelli, A., Novoselnik, B., Frey, J., Albin, T., Quirynen, R. and Diehl, M., “ACADOS: A Modular Open-Source Framework for Fast Embedded Optimal Control,” arXiv preprint, 2019. https://arxiv.org/abs/1910.13753.

[45] Becerra, V.M., “Solving Complex Optimal Control Problems at No Cost with PSOPT,” in 2010 IEEE International Symposium on Computer-Aided Control System Design, pp. 1391-1396, 2010.

[46] Kelly, M., “An Introduction to Trajectory Optimization: How to Do Your Own Direct Collocation,” SIAM Review, 59(4), pp. 849-904, 2017. doi: 10.1137/16M1062569.

[47] Howell, T.A., Jackson, B.E. and Manchester, Z., “Altro: A Fast Solver for Constrained Trajectory Optimization,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7674-7679, 2019.

[48] Tordesillas, J., Lopez, B.T., Everett, M. and How, J.P., “FASTER: Fast and Safe Trajectory Planner for Navigation in Unknown Environments,” arXiv:2001.04420, 2020.

[49] Liu, S., Atanasov, N., Mohta, K., and Kumar, V., “Search-Based Motion Planning for Quadrotors Using Linear Quadratic Minimum Time Control,” CoRR, vol. abs/1709.05401, 2017. http://arxiv.org/abs/1709.05401.

[50] Liu, S., Mohta, K., Atanasov, N., and Kumar, V., “Search-Based Motion Planning for Aggressive Flight in SE(3),” IEEE Robotics Autom. Lett., vol. 3, no. 3, pp. 2439-2446, 2018. [Online]. Available: doi: 10.1109/LRA.2018.2795654.

[51] Liu, S., Mohta, K., Atanasov, N., and Kumar, V., “Towards Search-Based Motion Planning for Micro Aerial Vehicles,” CoRR, vol. abs/1810.03071, 2018. http://arxiv.org/abs/1810.03071.

[52] Mur-Artal, R., Montiel, J.M.M. and Tardós, J.D., “ORB-SLAM: A Versatile and Accurate Monocular SLAM System,” IEEE Trans. Robot., vol. 31, no. 5, pp. 1147-1163, 2015.

[53] Mur-Artal, R. and Tardós, J.D., “ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras,” IEEE Trans. Robot., vol. 33, no. 5, pp. 1255-1262, 2017.

[54] Campos, C., Elvira, R., Rodríguez, J.J.G., Montiel, J.M.M. and Tardós, J.D., “ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM,” pp. 1-15, 2020.

[55] Schmuck, P. and Chli, M., “Multi-UAV Collaborative Monocular SLAM,” IEEE International Conference on Robotics and Automation (ICRA), 2017.

[56] Chakraborty, K., Deegan, M., Kulkarni, P., Searle, C., and Zhong, Y., “JORB-SLAM: A Jointly Optimized Multi-Robot Visual SLAM,” 2020.

[57] Lajoie, P., Ramtoula, B., Chang, Y., Carlone, L. and Beltrame, G., “DOOR-SLAM: Distributed, Online, and Outlier Resilient SLAM for Robotic Teams,” IEEE Robotics and Automation Letters, 2019.

[58] Labbe, M. and Michaud, F., “RTAB-Map as an Open-Source LIDAR and Visual Simultaneous Localization and Mapping Library for Large-Scale and Long-Term Online Operation,” Journal of Field Robotics, vol. 36, no. 2, pp. 416-446, 2019.

[59] Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., and Sivic, J., “NetVLAD: CNN Architecture for Weakly Supervised Place Recognition” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 5297-5307, 2016.

[60] Zhang, H., Chen, X., Lu, H. and Xiao, J., “Distributed and Collaborative Monocular SLAM for Multi-Robot System in Large-scale Environments,” International Journal of Advanced Robotic Systems, 2018.

[61] Gálvez-López, D. and Tardós, J.D., “Bags of Binary Words for Fast Place Recognition in Image Sequences,” IEEE Transactions on Robotics, 28(5), pp. 1188-1197, 2012.

[62] Do, H., Hong, S., and Kim, J., “Robust Loop Closure Method for Multi-Robot Map Fusion by Integration of Consistency and Data Similarity,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) October 25 ‒ 29, 2020, Las Vegas, NV, USA (Virtual).

Chapter 8 – INTEROPERABILITY

This chapter describes a possible approach to achieving system-level interoperability as the basis for swarm system adaptability and evolutionary development. The chapter is based on the Interoperable Open Architecture guidelines and their relevant standardization implementations, such as the NATO Generic Vehicle Architecture (NGVA) for land systems (STANAG 4754) [1].

8.1 INTEROPERABLE OPEN ARCHITECTURE

8.1.1 Introduction

Designing systems and systems-of-systems that satisfy the combined attributes of performance, scalability, and reliability is hard. Adding the non-functional requirements of interoperability, flexibility, modularity, and portability makes the problem even more difficult. The goal applies not just to systems of a similar type but also to different mission-specific systems, i.e., systems-of-systems, which need, want, and use similar data.

8.1.2 Open Architecture

An Open Architecture (OA) is meant to deliver specific benefits, of which interoperability is perhaps the most important. For over ten years, government procurement agencies have been asking for Open Architecture solutions, and all they have received is a set of best practices and guidelines for the adoption of open standards, open systems, and COTS technologies. All of these are components of an OA, but they must act together through a common infrastructure to deliver actual benefits to system customers and, more importantly, to system users.

8.1.3 Interoperable Open Architecture Definition

An Interoperable Open Architecture (IOA) is a System-of-Systems Architecture (SoSA), based upon open standards, that delivers interoperability among subsystems and applications built and integrated at different times. To be meaningfully interoperable, systems built at different times, which differ in:

1) Adopted hardware and software architectures; and

2) Technologies and operational goals,

must be readily integrable at the semantic level of the exchanged data.

8.1.4 System-Level Interoperability

The IOA defines a SoSA through its data, via the adoption of a data-centric middleware open standard. A typical open standard for SoS integration addresses both node integrability, via the specification of low-level protocols, and application portability, via the specification of an API (Application Programming Interface).

These system quality factors are addressed by a System Software Bus (SSB) infrastructure. The SSB infrastructure also acts as the run-time data repository and the single authoritative source of state information in the system.

However, while fulfilling these two system quality requirements provides the basis for an interoperable OA, allowing data to exist fully independently of any application or function, it may still be insufficient. System integration best practice recognizes that application portability is a goal, but that system-level interoperability is the higher-order capability that is needed. The communication protocols an SSB can rely on have been openly standardized, but the meaning of the information flow has not. To address this issue, a System Data Model (SDM) can be defined that specifies the content and context of the data exchanged around the SoS. The SDM fully specifies the data and its meaning, allowing the data to be instantiated appropriately on different technologies to exchange information. The SDM includes a set of metadata that defines:

• A Name, which uniquely identifies the semantic information associated with the data;

• A Domain, which provides the context in which that semantics applies; and

• A set of QoS Profiles, each specifying a criticality degree for a given data flow serving that information.
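A minimal sketch of such SDM metadata and a registry over it; the field names, QoS attributes, and lookup keys are illustrative, not drawn from any NATO data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QoSProfile:
    """Criticality settings for one data flow (fields are illustrative)."""
    reliability: str       # e.g., "RELIABLE" or "BEST_EFFORT"
    deadline_ms: int

@dataclass(frozen=True)
class SdmEntry:
    """One SDM item: the Name identifies the semantics, the Domain gives the
    context in which it applies, and the QoS profiles grade the criticality
    of the data flows serving that information."""
    name: str
    domain: str
    qos: tuple = ()

class SystemDataModel:
    """Distributed-repository stand-in: names must be unique per domain."""
    def __init__(self):
        self._entries = {}

    def register(self, entry):
        key = (entry.domain, entry.name)
        if key in self._entries:
            raise ValueError(f"duplicate SDM name {key}")
        self._entries[key] = entry

    def lookup(self, domain, name):
        return self._entries[(domain, name)]
```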

This approach provides for all of the following levels of interoperability:

• Binary-level Interoperability: Bits and bytes are exchanged in an unambiguous manner via a set of standardized communication protocols.

• Syntactic Level Interoperability: A common data format is defined.

• Semantic Level Interoperability: The meaning of exchanged data is specified via a common information model, the SDM.

Concerning information-level interoperability, it is worth noting that publishing data under “Information Names” removes the restriction that packets can only name communication endpoints, e.g., IP nodes. As far as the network is concerned, an Information Name can identify anything: an endpoint, an image, the stream of frames of a movie, a command, etc. This conceptually simple change allows SS4ISR to use almost all of the Internet's well-tested engineering properties to solve not only communication problems but also digital data distribution and control problems; see Section 5.6 for a description of data exchange services based on the data-centric paradigm.

8.1.5 An IOA Implementation: The NATO Generic Vehicle Architecture

NATO has based the specification of the NATO Generic Vehicle Architecture (NGVA) [1] on the IOA. With the adoption of the IOA, NATO has changed the way it manages systems-of-systems integration, defining a new perspective on the relationships between Government Procurement Agencies (GPAs) and System Integrators (SIs).

Communication between two subsystems of any type requires at least two common properties: the production and the consumption of data. NATO has assumed full responsibility for defining and maintaining a Land Data Model, a vehicle System Data Dictionary (SDD) defined on a subsystem-type basis (sensors, C4I, etc.). The specification and ownership of the Land Data Model form the core of NATO's strategically different engineering approach to systems architecture design.

As the vehicle System Software Bus (SSB), NATO mandated the use of the Data Distribution Service for Real-Time Systems (DDS), an open-standard middleware specified by the Object Management Group (OMG) [2], [3].

8.1.6 Design Guidelines for Interoperability

The means of implementing system-level interoperability via data and information flows is a data-centric design approach. The proposed system adopts data-centric interoperability based on the following architectural guidelines:

• All data (that is to be exchanged in the system) are rigorously defined (with semantics), described, documented, and made available via a distributed repository, which acts as the SDM. As described in the following paragraphs, the SDM provides semantic, syntactic, and criticality data for any information of the system segments, i.e., the sensor network segment and the System/Mission Management segment.

• Data distribution is managed by an SSB infrastructure, which also acts as the run-time data repository and the single authoritative source of state information in the system. As described in the following paragraphs, the SSB is based on the DDS specification suite.

According to the IOA principles, the SSB provides system-level interoperability by maintaining the system state in the architecture infrastructure, not within applications or a specific subsystem. The public portion of the system state is made explicit on the SSB by every connected functional subsystem or application. This decouples not only the data exchange among the different SoS nodes, e.g., a sensor network node, the System/Mission Management segment, the DSSs, and the vehicles, but also their state information. Any subsystem or application can thus obtain public state information from the software infrastructure rather than directly from another subsystem or application, reducing coupling and removing stovepipes.
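The SSB's role as state repository can be sketched with a toy publish-subscribe bus that caches the last published sample per topic, so that late-joining subscribers and state readers obtain public state from the infrastructure rather than from the producing subsystem. This is a stand-in for DDS durability and discovery; all names are illustrative.

```python
class SystemSoftwareBus:
    """Minimal data-centric bus: topic-keyed pub/sub plus a last-value cache
    acting as the authoritative run-time state repository."""
    def __init__(self):
        self._state = {}          # topic -> last published sample
        self._subs = {}           # topic -> list of callbacks

    def publish(self, topic, sample):
        self._state[topic] = sample               # state lives on the bus
        for cb in self._subs.get(topic, []):
            cb(sample)

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)
        if topic in self._state:                  # late joiners still see state
            callback(self._state[topic])

    def state(self, topic):
        """Read public state from the infrastructure, not from a subsystem."""
        return self._state.get(topic)
```

A consumer never needs a direct reference to the producer, which is the decoupling the paragraph above describes.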

8.1.7 Expected Benefits

Innovative Approach to SoS Integration

The IOA aims to provide an innovative approach to SoS integration. It will increase SoS effectiveness; enhance adaptability, upgradeability, and re-configurability, i.e., system agility, in the face of ever-changing operational demands; and reduce whole-life SoS costs. The IOA approach also aims to enable the retrofit of a wide range of products with minimal system impact and logistic support in operations.

The System Integration Benefits

The adopted approach does not change the system integration process and organization. The prime System Integrator (SI) continues to bring together the various tier-2 and tier-3 SIs and manages the systems roadmap. However, every SI now works within shared architectural guidelines as specified by the SDM, which is typically specified by the military customer.

Should a subcontractor jeopardize the schedule or deliverables, the prime SI can now seek alternative suppliers. Because the SDM ensures interoperability, the integration phase is no longer a significant risk, and the SI can focus on functionality, usability, price, durability, and so on. The business benefit to industry of mitigating the risk of subcontractor failure should not be underestimated, especially for large programs.

Even legacy subsystems can be brought forward into the proposed architecture: a legacy subsystem can be “wrapped” with a DDS-compliant data gateway to become an integral part of the system. This provides a low-cost legacy transition mechanism for both the GPA and the SIs.

Through-Life Benefits

As the system is deployed, there will be a significant change in through-life maintenance. Smaller maintenance and upgrade activities can be performed at much more regular intervals, avoiding risky and costly ‘big bang’ improvements.

Perhaps the biggest savings will come in Integrated Logistics Support (ILS) [4]. Additionally, the existence of a standard logical data bus makes developing and integrating simulation and training systems orders of magnitude easier, because a range of mission simulation and training systems can leverage the actual mission software components through the same SSB interface, adopting the same data model via the common SDM.

The system will also become easier to improve, because integrating new capabilities is as simple as defining an extension of the SDM. DDS ensures that new data providers and consumers are discovered dynamically by the system in any given domain. A new system function can select the data it needs from the DDS while adding itself as a provider of new data sets to the whole environment.

8.2 SS4ISR DATA MODELING APPROACH

8.2.1 SS4ISR Data Model Rationale

8.2.1.1 SS4ISR Data Model Concepts

The SS4ISR SDM (SSDM) aims to model the standard SS4ISR Concepts, i.e., Rules, Information Elements (IE), Functions, and Services.

The SDM is organized into Domains; each Domain models a specific SS4ISR concept.

Examples of domains are:

• A system resource, such as a video camera, laser range finder, radio equipment;

• A system service, such as Resource Registration Service; and

• A system function, such as power management, monitoring and control.

The SSDM is specified in a formal design language, e.g., the Unified Modelling Language (UML) [5], in order to provide a Platform Independent Model (PIM) of the SS4ISR Concepts in terms of data, operations, events, and interactions.

Starting from the PIM, it is possible to derive a Platform Specific Model (PSM), which allows the SS4ISR Concepts to be implemented on the specific execution platform adopted by a target SS4ISR system.

An automatic translation from the PIM to the PSM greatly increases the SSDM's value and ease of exploitation.

A Domain module is the implementation of a Domain Specification for a given execution platform. The set of Domain modules builds the SS4ISR Domain Layer, which provides standard access to the SS4ISR services, which are strictly related to the SS4ISR Concepts, e.g., Resources, System Services, and System Functions.

As depicted in Figure 8-1, an SS4ISR Application is typically based on one or more SS4ISR Services offered by the SS4ISR Domain Layer.

The SS4ISR Infrastructure provides basic services such as data exchange, repositories, and the Operating System.

The Data Model improves key SS4ISR system quality factors such as interoperability and agility.


Figure 8-1: SS4ISR Application Layered Architecture.

8.2.1.2 How Data Model Improves Interoperability

Interoperability among the SS4ISR Application Components can be achieved at different levels; see also Section 8.1.4. Starting from the most basic level, they are:

• Binary Level, which guarantees that interacting SW Components adopt a compatible interpretation of the exchanged binary packets.

• Syntactic Level, which guarantees that interacting SW Components adopt a compatible interpretation of the syntax of the data structures conveyed by the exchanged binary packets.

• Semantic Level, which guarantees that interacting SW Components adopt a compatible interpretation of the semantics of the data represented by a given syntax.

• Procedural Level, which guarantees that interacting SW Components adopt compatible behaviors during the exchange of IEs implementing a standard SS4ISR procedure.

The SS4ISR Infrastructure provides for Binary-Level interoperability.

The SS4ISR Data Model provides for:

• Syntactic Level interoperability, due to the specification of a standard data syntax for each IE.

• Semantic Level interoperability, due to the specification of standard naming rules for each kind of IE. Semantic interoperability is further improved by the adoption of DDS as the Data Exchange Services protocol, thanks to its concept of Topic Name, which adds semantic value to the data syntax.

• Procedural Level interoperability, due to the specification of standard SS4ISR Procedures, e.g., via Use Case and Sequence Diagrams. It is worth noting that these procedures act at the Application Layer and implement standard SS4ISR Rules, e.g., object creation, or Services, e.g., Resource Registration.
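The combination of the syntactic and semantic levels can be made concrete with a small sketch: each topic couples a meaningful name (the semantic value) with a declared data syntax, and samples are checked against that syntax. The topic and field names below are illustrative, not taken from a real SS4ISR SDM.

```python
# Topic name (semantic value) -> declared field names and types (syntax).
TOPIC_SYNTAX = {
    "LRF_Measurement": {"range_m": float, "timestamp": int},
}

def conforms(topic, sample):
    """Check a sample against the standard syntax declared for its topic."""
    syntax = TOPIC_SYNTAX[topic]
    return (set(sample) == set(syntax)
            and all(isinstance(sample[f], t) for f, t in syntax.items()))

good = {"range_m": 812.0, "timestamp": 1660000000}
bad = {"distance": 812.0}  # same data, but a non-standard (incompatible) syntax

print(conforms("LRF_Measurement", good))  # True
print(conforms("LRF_Measurement", bad))   # False
```

Two components that agree on `TOPIC_SYNTAX` interoperate syntactically; the shared topic name tells them both that the payload is a laser range-finder measurement, which is the semantic contribution.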

(Figure 8-1 layers, from top to bottom: Application HMI Layer; Application Service Logic Layer; SS4ISR Domain Layer; SS4ISR Infrastructure Layer.)


In conclusion, the SS4ISR Data Model guarantees that interacting Application components are able to correctly exchange a given sequence of IEs, and to deal with each IE by assigning compatible syntax and semantics to each element the IE is composed of.

It is worth noting that these degrees of interoperability are independent of any specific agreement among the SW Component manufacturers, e.g., Interface Specification Requirements, and stem directly from the standardization process of the SS4ISR Data Model.

8.2.2 Data Categories A data model represents an information model in a form that is specific to a particular paradigm or theory on the representation, storage and handling of data, often reflecting a certain type of data store or repository technology.

The definition of a common data model facilitates data interchange among systems and subsystems. Relevant modules and categories for the data model are listed below.

A Common Data Model defines a data structure that allows heterogeneous SS4ISR systems to interoperate. A Common Data Model is important to guarantee the standardization of the system architecture in an environment where it is common to have multiple applications and interoperable subsystems. A Common Data Model integrates unrelated data into useful information, giving enhanced capabilities and more effective functionalities to the DSS.

The SS4ISR Data Model captures the information needs of an SS4ISR system and allows specific data interfaces for the architecture to be identified. The data interfaces can be embedded in an SS4ISR system or stated in a technology-independent way.

The data model categories for an SS4ISR are:

• C4I is a core category and one of the most important. This category includes Situational Awareness, Coordination and Interoperability, Planning and Mission Support, Communications, Orders and Requests, and Actions.

• Data processing, with a focus on multimedia data processing (audio, video and imagery), orders, target data, routes, and navigation information.

• Logistics and support for power and system information, maintenance and material control.

• System processing to maintain the general features of the SS4ISR.

• Sensors/Effectors; multiple types of sensors require a complex data model structure.

• Data streaming to exchange audio and video with external systems and subsystems.

• Training and simulation.

• Data Storage to allow the management and persistence of critical information related to the system configuration, log data, C4I data, and the system status during mission execution.

8.2.3 SS4ISR Data Categories vs NGVA Data Model This paragraph describes how the NGVA Data Model could be adopted as a basis to specify the data model for the set of Data Categories described in Section 8.2.2.


Sensor/Effectors Support:

• Tactical Sensor, which models a generic tactical sensor.

• Laser Range Finder, which extends Tactical Sensor to model a Laser Range Finder sensor.

• Laser Warning System, which extends Tactical Sensor to model a Laser Warning Sensor.

• Acoustic Gunshot Detection System, which extends Tactical Sensor to model an Acoustic Gunshot Detection Sensor.

• Meteorological Sensor, which models a Meteorological Sensor Station.

• Tactical Effector, which models a generic tactical effector (offensive or defensive weapon).

• Armament, which models basic and composite armament systems.

• Automatic Weapon, which extends Tactical Effector to model an Automatic Weapon, such as a machine gun, automatic grenade launcher, or cannon.

• Remote Control, which models the basic remote control of an unmanned vehicle.

• Unmanned Air Vehicle, which models the basic control of a UAV.

System Processing Support:

• Data Logging, which models a generic Logging service.

• HMI Input Devices, which models a set of Input Devices.

• HMI Presentation, which models a set of window-based graphical user interfaces. Both input and output graphical items are addressed.

Data Processing Support:

• Sensor Data Fusion, which models the transformation of Sensor Events into Objects of Interest, a.k.a. Fused Tracks.

• Routes, which models a route management service.

• Navigation, which models a Navigation Service.

• Video, which models the video processing plug-in.

Situational Awareness Support:

• Video, which models the control of a video device. Both Optical and Infrared Video devices are supported.

• Video Tracking, which models the tracking of an object of interest via a video device.

• Mount, which models the control of both rotational and linear mount devices.

Coordination Support:

• Arbitration, which models the coordinated control of a shared resource, e.g., a Video Device.

Logistics and Support:

• Inventory, which models a basic inventory service.


• Vehicle Configuration, which models the automatic registration of resources1 to the system. This model is completely applicable to any platform, including a DSS.

• Power, which models a power system for a platform.

C4I:

• Tactical Areas, which models geometric elements related to area description.

• Target and Threats, which models the Target and Threat concepts.

8.2.4 Data Modeling Process Guidelines The set of Software Technologies and Architectures supporting the SS4ISR strongly relies on data as a first-class entity in the software system architecture. The Interoperable Open Architecture framework identifies the Customer as the stakeholder who specifies and manages the System Data Model. The NGVA Data Model [1] is a meaningful example of such an approach, where NATO provides the Data Model for the NGVA Reference Architecture as an integral part of the STANAG.

As a consequence, the Data Modeling process assumes a key role in the design of a System Architecture, and even more so when the goal is to define a Reference System Architecture, which shall act as a standard for many different systems, or even categories of systems (soldier, vehicle, base), as in the case of SS4ISR for the DSS.

A Reference System Architecture shall allow system architects to capture and formalize expertise in a way that is not affected by specific implementation technologies. Such an approach yields key advantages:

• System Architecture Longevity: It ensures that Reference Architecture models are long-lived reusable assets, immune to changes in technological fashion.

• Unambiguous System Architecture: Reference Architecture data models shall provide an unambiguous understanding of the modeled data, in terms of: 1) Meaning; 2) States; and 3) Exchange procedures, so as to avoid incompatible implementations, with their serious impact on interoperability among actual system components provided by different manufacturers.

OMG specified the Model Driven Architecture (MDA) [6], whose strategy is founded upon a combination of abstraction and automation. Abstraction means that modelers construct a set of simple but precise views of the system requirements and architecture, without prejudice to the implementation technologies.

The adoption of MDA as the base upon which to build the data modeling process for a Reference Architecture, such as the SS4ISR Reference Architecture, rests on:

• Top-level architecture, which is defined in terms of Domains, each modeling a set of Reference Architecture abstract functions.

• A set of Domain models, each defined in terms of classes, as abstract models of the data meaning via attributes, relationships, operations and states, and of sequences specifying the Domain behaviors, i.e., the data exchanges.

• Automated translation tools, which translate the abstract models into implementation-specific data and operations. This feature: 1) Eliminates the need to manually maintain technology-specific artefacts; and 2) Guarantees the univocal generation of technology-dependent artefacts, which greatly improves interoperability.
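The abstraction-plus-automation combination can be sketched as follows. The domain, class, and attribute names below are invented for the example; real SSDM content would come from the standardized model.

```python
# A toy platform-independent model: one domain, one class, typed attributes,
# with no reference to middleware, storage, or programming language.
PIM = {
    "domain": "Mount",
    "classes": {
        "MountStatus": {"azimuth_deg": "float64", "elevation_deg": "float64"},
    },
}

def translate_to_psm(pim):
    """PIM -> PSM: map each abstract class onto a named topic data type."""
    return {f"{pim['domain']}::{cls}": dict(attrs)
            for cls, attrs in pim["classes"].items()}

psm = translate_to_psm(PIM)

# Running the translator twice yields identical artefacts: the univocal
# generation that the text credits with improving interoperability.
assert psm == translate_to_psm(PIM)
print(psm)
# {'Mount::MountStatus': {'azimuth_deg': 'float64', 'elevation_deg': 'float64'}}
```

The point of the sketch is that the PIM never mentions the execution platform; only the translator encodes platform knowledge, so changing platforms means changing the translator, not the model.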

1 The term “resource” identifies any element of a SS4ISR, e.g., sensor, effector, application.


Domain

A domain is the primary unit of modularity and represents a single subject matter. The greatest risk in Domain definition is pollution, which occurs when the rules and policies of one domain are mixed up with the rules and policies of another domain. Examples include:

• Polluting an application subject matter with a technology subject matter. For example, building a domain that includes information about GPS positions along with information about how this is distributed by the System Software Bus. This is the kind of pollution that is avoided by separating the PIM from the PSM by means of the automated translation tools;

• Mixing two application subject matters. For example, building a “Routes” domain in which a route knows how to render itself graphically and/or textually on an HMI. This kind of pollution is avoided by separating those two subject matters into two different domains.

Counterparts

A consequence of domain partitioning is that the different aspects of a System Node/Part are often represented as separate classes in different domains. In MDA these classes are referred to as “counterpart classes.” Relationships between these counterpart classes enable navigation from one class to its counterpart(s), and consequently from one domain to any other related domain.

Each counterpart class embodies data (attributes) relevant to that level of abstraction, so a class which models the SS4ISR Route capabilities might have a “distanceRemaining” attribute, while a class which models the SS4ISR Navigation capabilities might have a “currentPosition” attribute.

The links between these counterpart classes are known as “counterpart relationships” and allow SS4ISR applications to navigate from one aspect of the installation platform to its related aspects to extract data at that level of abstraction.
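The counterpart mechanism can be shown in a few lines. The attribute names `distanceRemaining` and `currentPosition` follow the text; the class names, the link attribute, and the values are illustrative only.

```python
class RouteAspect:
    """Counterpart class in the "Routes" domain."""
    def __init__(self, distance_remaining_km):
        self.distanceRemaining = distance_remaining_km
        self.counterpart = None  # counterpart relationship, set after creation


class NavigationAspect:
    """Counterpart class in the "Navigation" domain, modeling the same platform aspect."""
    def __init__(self, current_position):
        self.currentPosition = current_position


# Two views of the same platform aspect, linked by a counterpart relationship.
route = RouteAspect(12.5)
route.counterpart = NavigationAspect((41.9, 12.5))

# An application navigates from one domain to the other to extract data
# at the appropriate level of abstraction.
print(route.distanceRemaining)            # 12.5
print(route.counterpart.currentPosition)  # (41.9, 12.5)
```

Each class carries only the attributes relevant to its own subject matter, which is exactly what keeps the two domains free of pollution.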

8.2.4.1 Platform Independence

A Reference Architecture requires Platform Independence, where the term “Platform” has a manifold meaning:

• A System Node type, e.g., UxV, Swarm.

• A System Node Implementation, e.g., National Swarm Systems.

• An Execution Platform, e.g., DDS Middleware, DBMS, MQTT Middleware.

In the context of Data Modeling, the term “Platform” refers to the “Execution Platform” onto which the PIM will ultimately be mapped, typically by automatic translation, via a Platform Specific Model (PSM). The term “Execution Platform” covers the set of technologies chosen for system deployment.

The MDA process is based upon a number of fundamental concepts, which are summarized in Figure 8-2.


Figure 8-2: Data Modeling Steps and Processors.

Platform Independent Model

It is worth noting that if a Reference Architecture is to be applicable to many deployed systems, and remain valid for several decades, then it must be specified in a platform independent way, in a “(execution) platform independent model.” The term “Platform” in this case refers to:

• Middleware, such as DDS, Micro-Services, or MQTT;

• Bus Technology, such as the USB;

• Message Definition Language, such as IDL in the case of DDS and Protocol Buffers in the case of gRPC;

• Programming Language, such as Java or C++;

• Database Technology, such as MySQL;

• Hardware Architecture, such as single node, multi-node, centralized or distributed; and

• Software Architecture, such as single process, multi-process, cooperative scheduling or pre-emptive scheduling.

This means that the SDM:

• Is simpler than its platform specific equivalent, and therefore easier to understand and cheaper to build and maintain.

• Is unaffected by changes in any of the above technologies, and therefore has greater longevity than an equivalent platform specific model.

• Is reusable across a range of platforms, for which suitable PIM translators are available.

(Figure 8-2 shows the chain Platform Independent Model → Platform Specific Model → Platform Specific Implementation, produced by the PIM-PSM and PSM-PSI translators; example platform-specific targets are Data-Centric Messaging, Micro-Services, and RDBMS.)


In the case of SS4ISR, we also endeavor to be independent of the deployed SS4ISR type (Italian SS4ISR, Turkish SS4ISR, US SS4ISR, etc.).

Platform independence in the SDM domain is achieved using the same principles as high-level languages such as Java and Ada: abstractions that suppress the implementation detail of the underlying platform. In SS4ISR, UML [5] is used to provide abstractions such as:

• Classes to model data but without specifying platform specific data structure or storage medium.

• Operations and signals to model message-based communications but without specifying middleware and transport medium.

• State machines to model concurrent behavior but without specifying operating systems or scheduling strategies.

• Sequences to model data exchange among domain entities, without specifying the data exchange protocols.

The SS4ISR Data Model process could be based on the one adopted by NGVA, which follows the UK MoD Land Data Model (LDM) methodology [7]. The LDM methodology, while exploiting abstraction for constructing the PIM, makes use of automation for generating the PSM and the PSI (Platform Specific Implementation).

Platform Specific Model

The Platform Specific Model is the set of design components needed to implement the PIM on a given deployment platform, i.e., with the protocol suite selected for the SS4ISR Data Exchange Services in a given SS4ISR domain. For example, in NGVA the PSM comprises a set of classes that define the DDS topic data types needed to implement the PIM classes, with the related relationships, operations, signals, and states. In general, the PSM can comprise any set of technologies, each of which has an associated PIM translator; see Figure 8-2.

Platform Specific Implementation

The Platform Specific Implementation is the set of development components that implement the PSM components at the programming level. For example, in the case of NGVA, the language used for DDS data definition is the Interface Definition Language (IDL) [8]. The IDL is generated automatically from the PSM, which was in turn generated from the PIM. Similarly, the domain datasets used to configure certain domains are specified using XML, the schemas for which can also be generated from the PIMs.

The PIM-PSM mappings are quite complex, as they need to map a technology-agnostic specification of the required data onto a platform-specific data specification, e.g., a DDS middleware-specific architecture whose Data Dictionary is a set of named Topics and Data Types. On the other hand, the PSM-PSI mappings are typically relatively simple, e.g., translating each Data Type model into the corresponding IDL data specification.
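The relative simplicity of the PSM-PSI step can be illustrated with a toy emitter that renders one data type model as an IDL struct. The type name, fields, and the flat (module-less) IDL layout are simplified for the example.

```python
def psm_type_to_idl(name, fields):
    """Render one PSM data type as an IDL struct declaration.

    `fields` is a list of (field_name, idl_type) pairs, in declaration order.
    """
    lines = [f"struct {name} {{"]
    lines += [f"    {idl_type} {field};" for field, idl_type in fields]
    lines.append("};")
    return "\n".join(lines)


idl = psm_type_to_idl("MountStatus",
                      [("azimuth_deg", "double"), ("elevation_deg", "double")])
print(idl)
# struct MountStatus {
#     double azimuth_deg;
#     double elevation_deg;
# };
```

Because this step is a mechanical field-by-field translation, it is a natural candidate for full automation, which is exactly how the NGVA tool chain treats it.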

In theory, it is not necessary to preserve a copy of the PSM, as it is used solely as an intermediate representation between the PIM and the PSI. However, an SDM does contain copies of the generated PSMs, so that implementers and other interested stakeholders have an abstract representation of the data structure, e.g., the DDS topic data type syntax, for reference when designing, reviewing and debugging the application code.


8.2.4.2 Model Translation The proposed strategy requires the maintenance of two primary artefacts, namely the PIM and the model translators. The PSM and PSI are automatically generated products: if the generated PSI (in the case of NGVA, the IDL) is not as desired, this is addressed by modifying either the PIM or the PIM translator(s).

It is worth noting that both the PIM and the set of Translators could be Government Furnished Items, as integrated components of the Reference Architecture.

Translator modules are typically plug-ins of the adopted Design Platform, e.g., IBM Rhapsody. This imposes a requirement on the Design Platform: it shall allow the development of third-party components as plug-ins.

8.2.4.3 The System Data Model Each Domain in the SS4ISR Data Model shall be reusable in multiple contexts. A “System Data Model (SDM)” is a parts list that specifies the selected components which instantiate the SS4ISR Data Model for a specific SS4ISR implementation, e.g., a National SS4ISR.

An SDM includes:

• Domain_Module_Packages, which are the set of SS4ISR Domains to be adopted.

• Common_Module_Package, which defines the set of standardized data types used across all domains.

• SDM_Package, which includes the system-implementation-specific counterpart relationships between the Domains' components. These relationships transform a set of “independent” Domains into a collection which defines a specific implementation of the SS4ISR Data Model.

For example, an SDM of a PTZ Camera might contain different domains, such as those shown in Figure 8-3. Note that the dependencies, represented as broken directed arcs, are useful when selecting domains, as they indicate that certain domains will only be able to support their advertised capabilities if other domains are also included. In this case, the uppermost dependency indicates that “DataModel_PTZ_Camera” requires the presence of the following domains: “DD_VideoServices” for the video sensor component(s), “DD_MountServices” for the pan-tilt component(s), “DD_VehicleConfiguration” for the Registration protocol, and “LDM_Common” for the common data types. Dependencies represent usage, not the direction of data or control flow.
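The "parts list" check implied above can be sketched as a dependency walk. The top-level dependencies follow the Figure 8-3 description in the text; the lower-level dependencies on "LDM_Common" are assumed for the sake of the example.

```python
# Domain -> domains it requires (uppermost dependencies from Figure 8-3;
# the remaining entries are assumptions made for this sketch).
DEPENDENCIES = {
    "DataModel_PTZ_Camera": ["DD_VideoServices", "DD_MountServices",
                             "DD_VehicleConfiguration", "LDM_Common"],
    "DD_VideoServices": ["LDM_Common"],
    "DD_MountServices": ["LDM_Common"],
    "DD_VehicleConfiguration": ["LDM_Common"],
    "LDM_Common": [],
}

def missing_domains(selected):
    """Return dependencies of the selected build set that are not themselves selected."""
    required = {dep for d in selected for dep in DEPENDENCIES.get(d, [])}
    return sorted(required - set(selected))


incomplete = ["DataModel_PTZ_Camera", "DD_VideoServices"]
print(missing_domains(incomplete))
# ['DD_MountServices', 'DD_VehicleConfiguration', 'LDM_Common']

print(missing_domains(list(DEPENDENCIES)))  # complete build set
# []
```

A non-empty result flags a build set whose selected domains advertise capabilities they cannot actually support.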

8.2.4.4 The SS4ISR Data Model Process The Data Modeling Process used for developing the SS4ISR Data Model involves constructing a set of SS4ISR Domains, each modeling a set of service/system capabilities to be realized. As depicted in Figure 8-4, it directly stems from the SS4ISR Reference Architecture and includes the following artefacts:

• Domains to represent the layered architecture comprising the separate subject matters that make up the system.

• Interactions to define how the SSDM components collaborate via a sequence of messages to realize the various scenarios.

• Classes to define the data required to support those capabilities.

• Operations to specify the set of messages required to realize those capabilities.

• States to specify the modes to be supported, and to formalize rules about when operations can be invoked.

• Automated Translator Tools to generate the PSM and PSI for SS4ISR Execution Platforms.


Figure 8-3: Example Build Set Components: Domains.


Figure 8-4: A Possible Data Modeling Process for the SS4ISR Data Model.

8.3 REFERENCES

[1] STANAG 4754, “NATO Generic Vehicle Architecture (NGVA) for Land Systems,” Edition 1, January 2018.

[2] Object Management Group, “Data Distribution Service,” Version 1.4, April 2015.

[3] Object Management Group, “The Real-time Publish-Subscribe Protocol DDS Interoperability Wire Protocol Specification – Version 2.2,” November 2014.

[4] UCS Executive Summary 2011: “The Data Distribution Service (Reducing Cost through Agile Integration),” 2011.

[5] Object Management Group. Unified Modeling Language. www.omg.org/uml Accessed 22 May 2022.

[6] Object Management Group. OMG Model Driven Architecture. www.omg.org/mda Accessed 22 May 2022.

[7] Ministry of Defence. Land Open Systems Portal. https://landopensystems.mod.gov.uk/share/page/site/land-data-model/dashboard Accessed 22 May 2022.

[8] Object Management Group, “Interface Definition Language,” Issue 4.2, March 2018.

SS4ISR

STO-TR-SET-263 9 - 1

Chapter 9 – RELATIONSHIPS MATRIXES

9.1 CAPABILITY GOALS vs CAPABILITY MAPPING

Table 9-1 defines the relationships between the SS4ISR Capability Goals (see Section 3.1.2) and the Force Capabilities which support each goal (see Section 3.2).

Table 9-1: Capability Goals to Capability Mapping.

Capability Goals: 1) Continuous Provision of ISR Data about Adversary Actions; 2) Force Protection and Interdiction; 3) Improvement of Anti-Access/Area-Denial Operations.

Command, Control, Communications, Computing (C4)

Multimodal HRI X

Shared Awareness X

Information Sharing X

Fratricide Situation Prevention X X

Human – Robotic and Autonomous System Teaming X X

EW Resilient Robust Navigation Systems X

Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR)

Multimodal Enriched Information X

Pervasive Sensing X

Focused Situational Awareness X X X

Robust Tactical Network X

Effective Engagement

Target Hand-Over Support X

UxV Sensor-Shooter Integration X X

Protection and Survivability

Counter-drone Protection X

Dirty, Dusty, Dangerous Environment Protection X X

Protection from Remote Threats X X

Mobility

Forces and Material Mobility X X


9.2 CAPABILITY vs OPERATIONAL ACTIVITY MAPPING

Each capability requirement area fulfils several operational activities in different operational contexts. Table 9-2 maps the correlation of the Capability Areas, as described in Section 3.2, to the Operational Activities, as described in Chapter 4. This is an indicative, not a comprehensive, list of operational activities, which can vary based on the type and intensity of an operational mission.

Table 9-2: Capability to Operational Activity Mapping.

CAPABILITY OPERATIONAL ACTIVITIES

CG1: Command, Control, Communications and Computing (C4)

Multimodal HRI • Information Gathering • Networking

Shared Awareness • Networking • Self-Protection Electronic Measures • SS4ISR Control Hand-over

Information Sharing • Networking • Self-Protection Electronic Measures

Fratricide Situation Prevention

• Information Gathering

Human – Robotic and Autonomous System Teaming

• SS4ISR Control Hand-over • Networking • Information Gathering

EW Resilient Robust Navigation Systems

• Self-Protection Electronic Measures • Networking

CG2: Intelligence, Surveillance, Targeting, Acquisition and Reconnaissance (ISTAR)

Multimodal Enriched Information

• Information Gathering

Pervasive Sensing • Information Gathering

Focused Situational Awareness

• Information Gathering

Robust Tactical Network

• Networking • Self-Protection Electronic Measures

CG 3: Effective Engagement

Target Hand-Over Support

• Information Gathering • Sense and Response • Self-Protection Electronic Measures • Maintain Area Dominance • SS4ISR Control Hand-over

UxV Sensor-Shooter Integration

• Networking • Self-Protection Electronic Measures


CG 5: Protection and Survivability

Counter-drone Protection

• Surveillance • Airspace Control • Sense and Response • SS4ISR Control Hand-over

Dirty, Dusty, Dangerous Environment Protection

• Networking • Information Gathering • Self-Protection Electronic Measures • Explosives Ordnance Detection

Protection from Remote Threats

• Information Gathering • Surveillance • Explosives Ordnance Detection

CG 6: Mobility

Forces and Material Mobility

• Surveillance • Explosives Ordnance Detection • Airspace Control • Maintain Area Dominance

9.3 CAPABILITY vs SERVICES MAPPING

Each capability requirement area is implemented via several Services. Table 9-3 maps the correlation of the Capabilities, as described in Section 3.2, to the Services, as described in Chapter 5. The listed services have been classified depending on their relevance to the related Capability:

• Key Services, which are critical for the Capability. They are mandatory for the provision of the related Capability.

• Support Services, which are useful to improve the Capability's quality factors, e.g., effectiveness and efficiency, but whose absence does not impact the Capability in a meaningful way. They may also be optional.

For a description of Services, please refer to Chapter 5.
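The Key/Support classification can also be held as a queryable structure, for instance when checking a configuration against the table. The sketch below transcribes two rows of Table 9-3; the function name is invented for the example.

```python
# Two capabilities transcribed from Table 9-3 (Key vs Support services).
SERVICES = {
    "Multimodal HRI": {
        "key": ["Human-Swarm Interaction", "Data Exchange Services"],
        "support": [],
    },
    "Robust Tactical Network": {
        "key": ["Data Exchange Services", "Networking"],
        "support": ["Robot-Robot Interaction"],
    },
}

def mandatory_services(capability):
    """Key Services are mandatory for the provision of the related Capability."""
    return SERVICES[capability]["key"]


print(mandatory_services("Robust Tactical Network"))
# ['Data Exchange Services', 'Networking']
```

Encoding the table this way lets a build tool verify that every mandatory service of a selected capability is actually deployed.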

Table 9-3: Capability to Services Mapping.

CAPABILITY SERVICES

Key Support

CG1: Command, Control, Communications and Computing (C4)

Multimodal HRI • Human-Swarm Interaction • Data Exchange Services

Shared Awareness • Detection and Tracking • Data Exchange Services • Robot-Robot Interaction

• Human-Swarm Interaction • Networking

Information Sharing • Data Exchange Services • Robot-Robot Interaction

• Human-Swarm Interaction • Networking


Fratricide Situation Prevention

• Data Exchange Services • Robot-Robot Interaction • Swarm Navigation and Control

• Networking

Human-Robotic and Autonomous System Teaming

• Human-Swarm Interaction • Data Exchange Services • Robot-Robot Interaction

EW Resilient Robust Navigation Systems

• Swarm Navigation and Control • Data Exchange Services • Robot-Robot Interaction

• Networking

CG2: Intelligence, Surveillance, Targeting, Acquisition and Reconnaissance (ISTAR) Multimodal Enriched Information

• Human-Swarm Interaction • Data Exchange Services • Networking

Pervasive Sensing • Detection and Tracking • Swarm Navigation and Control • Robot-Robot Interaction

• Data Exchange Services • Networking

Focused Situational Awareness

• Detection and Tracking • Swarm Navigation and Control • Robot-Robot Interaction

• Human-Swarm Interaction • Data Exchange Services • Networking

Robust Tactical Network • Data Exchange Services • Networking

• Robot-Robot Interaction

CG 3: Effective Engagement

Target Hand-Over Support

• Swarm Navigation and Control • Data Exchange Services • Robot-Robot Interaction • Human-Swarm Interaction • Networking

UxV Sensor-Shooter Integration

• Swarm Navigation and Control • Data Exchange Services • Robot-Robot Interaction • Networking

CG 5: Protection and Survivability

Counter-drone Protection

• Detection and Tracking • Swarm Navigation and Control • Robot-Robot Interaction • Human-Swarm Interaction

• Data Exchange Services • Networking

Dirty, Dusty, Dangerous Environment Protection

• Detection and Tracking • Swarm Navigation and Control • Robot-Robot Interaction • Human-Swarm Interaction

• Data Exchange Services • Networking


Protection from Remote Threats

• Detection and Tracking • Swarm Navigation and Control • Robot-Robot Interaction

• Human-Swarm Interaction • Data Exchange Services • Networking

CG 6: Mobility

Forces and Material Mobility

• Swarm Navigation and Control • Robot-Robot Interaction

• Networking • Data Exchange Services

9.4 CAPABILITY vs OPERATIONAL SCENARIO MAPPING

Each capability is required to accomplish given phases of several relevant operational scenarios. Table 9-4 maps the correlation of the Capabilities, as described in Section 3.2, to the Operational Scenarios, as described in Chapter 2. This is an indicative and not a comprehensive list of operational scenarios. The Operational Scenario phases addressed are listed below:

• Swarming Operational Scenario:
  • (Stealth) Ubiquitous Sensing.
  • Threat Discovery.
  • Raise Alarm.
  • Sustainable Pulsing.
  • Threat Neutralization.
  • Dispersion (to Stealthy Ubiquity).

• Swarm-Squad Symbiotic Teaming:
  • Coordinated Common Control.
  • Control Hand-Over.
  • Common Situational Awareness.

• Ubiquitous Sensing Operational Scenario:
  • Build-up.
  • Engagement.


Table 9-4: Capability to Operational Scenarios Mapping.

CAPABILITY OPERATIONAL SCENARIOS

Swarming Symbiotic Teaming Ubiquitous Sensing

CG1: Command, Control, Communications and Computing (C4)

Multimodal HRI • (Stealth) Ubiquitous Sensing

• Raise Alarm • Threat Neutralization

• Coordinated Common Control

• Control Hand-Over • Common Situational Awareness

• Build-up • Engagement

Shared Awareness • (Stealth) Ubiquitous Sensing

• Threat Discovery • Threat Neutralization

• Common Situational Awareness

• Build-up • Engagement

Information Sharing • (Stealth) Ubiquitous Sensing

• Threat Discovery • Raise Alarm • Sustainable Pulsing • Threat Neutralization • Dispersion (to Stealthy Ubiquity)

• Coordinated Common Control

• Control Hand-Over • Common Situational Awareness

• Build-up • Engagement

Fratricide Situation Prevention • Sustainable Pulsing • Threat Neutralization

• Coordinated Common Control

• Control Hand-Over

• Build-up • Engagement


Human-Robotic and Autonomous System Teaming

• (Stealth) Ubiquitous Sensing

• Threat Discovery • Raise Alarm • Sustainable Pulsing • Threat Neutralization • Dispersion (to Stealthy

Ubiquity)

• Coordinated Common Control

• Control Hand-Over • Common Situational

Awareness

• Build-up • Engagement

EW Resilient Robust Navigation Systems

• Sustainable Pulsing • Threat Neutralization • Dispersion (to Stealthy

Ubiquity)

• Coordinated Common Control

• Control Hand-Over

• Build-up • Engagement

CG2: Intelligence, Surveillance, Targeting, Acquisition and Reconnaissance (ISTAR)

Multimodal Enriched Information

• (Stealth) Ubiquitous Sensing

• Raise Alarm • Threat Neutralization

• Coordinated Common Control

• Common Situational Awareness

• Build-up • Engagement

Pervasive Sensing • (Stealth) Ubiquitous Sensing

• Common Situational Awareness

• Build-up • Engagement

Focused Situational Awareness • Sustainable Pulsing • Threat Neutralization

• Common Situational Awareness

• Build-up • Engagement

Robust Tactical Network • Raise Alarm • Sustainable Pulsing • Threat Neutralization

• Control Hand-Over • Build-up • Engagement

RELATIONSHIPS MATRIXES

9 - 8 STO-TR-SET-263

CAPABILITY OPERATIONAL SCENARIOS

Swarming Symbiotic Teaming Ubiquitous Sensing

CG 3: Effective Engagement

Target Hand-Over Support • Sustainable Pulsing • Threat Neutralization

• Control Hand-Over • Engagement

UxV Sensor-Shooter Integration • Sustainable Pulsing • Threat Neutralization

• Coordinated Common Control

• Engagement

CG 5: Protection and Survivability

Counter-drone Protection • (Stealth) Ubiquitous Sensing

• Threat Discovery • Raise Alarm • Sustainable Pulsing • Threat Neutralization • Dispersion (to Stealthy

Ubiquity)

• Coordinated Common Control

• Control Hand-Over • Common Situational

Awareness

• Build-up • Engagement

Dirty, Dusty, Dangerous Environment Protection

• (Stealth) Ubiquitous Sensing

• Threat Discovery • Raise Alarm • Sustainable Pulsing • Threat Neutralization • Dispersion (to Stealthy

Ubiquity)

• Coordinated Common Control

• Control Hand-Over • Common Situational

Awareness

• Build-up • Engagement

RELATIONSHIPS MATRIXES

STO-TR-SET-263 9 - 9

CAPABILITY OPERATIONAL SCENARIOS

Swarming Symbiotic Teaming Ubiquitous Sensing

Protection from Remote Threats • (Stealth) Ubiquitous Sensing

• Threat Discovery • Raise Alarm • Sustainable Pulsing • Threat Neutralization • Dispersion (to Stealthy

Ubiquity)

• Coordinated Common Control

• Build-up • Engagement

CG 6: Mobility

Forces and Material Mobility • Sustainable Pulsing • Threat Neutralization • Dispersion (to Stealthy

Ubiquity)

• Coordinated Common Control

• Control Hand-Over

• Build-up • Engagement

RELATIONSHIPS MATRIXES

9 - 10 STO-TR-SET-263

9.5 “SWARM SYSTEM” NODE TO SYSTEM NODE RELATIONSHIPS

This section relates the set of Swarm System Components to the key Mission System Nodes they interact with for ISR Missions, which are listed below:

• Command and Control (C2) Node, which provides command and control services such as Battlespace Management (BSM), Situational Awareness (SA), and System Management (SYS).

• Mission Network Infrastructure (MNI), which provides for networking services during the mission.

• Smart Sensor (SM), which provides Area Surveillance, and Target Detection and Visualization.

• Unmanned Vehicle Platform (UVP), which provides basic services such as motion control, power, data bus, sensors, communications, and the operating system.

Table 9-5 identifies which S4ISR System Function interacts with the listed Mission System Nodes.

Table 9-5: “Swarm System” Node to System Node Mapping.

S4ISR Function / Mission System Node (columns: Mission C2 (BSM, SA, SYS), MNI, SM, UVP)

Detection and Tracking X X X

Human-Swarm Interaction X

Swarm Mission Control X X

Swarm Management X X

Swarm Payload Control X

Robot-Robot Interaction

Cooperative Robot Integration Platform X X X

Localization and Mapping X X X X

Data Exchange Services X X X X X

Swarm Networking X X X X X X

Swarm Control and Navigation X X X X
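An interaction matrix like Table 9-5 can be held as a simple boolean structure. The sketch below is illustrative Python, not part of the report; because the flattened layout above does not always show which column an individual "X" falls in, the per-function entries here are labeled assumptions (only Swarm Networking, marked against every node, is unambiguous):

```python
# Node columns as in Table 9-5: Mission C2 sub-columns (BSM, SA, SYS),
# then MNI, SM, UVP.
NODES = ["BSM", "SA", "SYS", "MNI", "SM", "UVP"]

# S4ISR function -> set of Mission System Nodes it interacts with.
INTERACTIONS = {
    # Swarm Networking is marked against all six node columns in Table 9-5.
    "Swarm Networking": set(NODES),
    # Assumed placement: the table shows three X marks for this row,
    # but the source layout does not identify the columns.
    "Detection and Tracking": {"SA", "SM", "UVP"},
}

def interacts(function: str, node: str) -> bool:
    """True if the S4ISR function interacts with the given node."""
    return node in INTERACTIONS.get(function, set())

def nodes_for(function: str) -> list:
    """Nodes a function touches, in table-column order."""
    return [n for n in NODES if interacts(function, n)]
```

Keeping the matrix as data rather than prose makes cross-checks cheap, e.g. listing every function that touches the Mission Network Infrastructure is a one-line filter over `INTERACTIONS`.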


REPORT DOCUMENTATION PAGE

1. Recipient’s Reference:

2. Originator’s References: STO-TR-SET-263; AC/323(SET-263)TP/1090

3. Further Reference: ISBN 978-92-837-2407-0

4. Security Classification of Document: PUBLIC RELEASE

5. Originator: Science and Technology Organization, North Atlantic Treaty Organization, BP 25, F-92201 Neuilly-sur-Seine Cedex, France

6. Title: Swarm System for Intelligence Surveillance and Reconnaissance

7. Presented at/Sponsored by: Final report.

8. Author(s)/Editor(s): Multiple

9. Date: August 2022

10. Author’s/Editor’s Address: Multiple

11. Pages: 170

12. Distribution Statement: There are no restrictions on the distribution of this document. Information about the availability of this and other STO unclassified publications is given on the back cover.

13. Keywords/Descriptors: Data distribution services; Detection and tracking; High level reference architecture; Human-RAS interaction; Interoperable open architecture; Model driven architecture; Robot-robot interaction; Simultaneous Localization and Mapping (SLAM); Swarm control; Swarm networking; Swarm operations; Swarm system; Symbiotic human-RAS teaming

14. Abstract

Future NATO Joint Forces will incorporate autonomous and semi-autonomous ground, aerial and sea platforms to improve the effectiveness and agility of Forces. These autonomy-enabled systems will deploy as force multipliers at all echelons from the squad to the brigade combat teams. The RTG SET-263 “Swarm System for Intelligence Surveillance and Reconnaissance” analyzed the operational, system, and technological issues of swarm systems that could facilitate their integration into current battlefield tactical systems. This final report provides a High Level Reference Architecture for Swarm-centric Systems for ISR (SS4ISR) which integrates and extends the outcomes of the previous two years of the SET-263 Research Study. The reference architecture addresses Operational issues (operational scenarios, key capability goals and supporting capabilities, and relevant SS4ISR operational activities); System issues (key system services provided by SS4ISR); Technologies (current and foreseen standards and algorithms for achieving the expected system capabilities); and System-level Interoperability design guidelines for the adoption of swarm systems in joint/multinational coalitions and their integration with legacy systems. The report also illustrates the main relationships between Operational and System issues via a set of relationship matrixes and describes the set of research topics addressed by the SET-263 Research Study: Detection and Tracking; Human-Swarm Interaction; Swarm Control and Navigation; Robot-Robot Interaction; Localization and Mapping in Swarm Systems; Data Exchange; and Networking.

STO-TR-SET-263


NORTH ATLANTIC TREATY ORGANIZATION SCIENCE AND TECHNOLOGY ORGANIZATION

BP 25

F-92201 NEUILLY-SUR-SEINE CEDEX • FRANCE Télécopie 0(1)55.61.22.99 • E-mail [email protected]

DISTRIBUTION OF UNCLASSIFIED STO PUBLICATIONS

AGARD, RTO & STO publications are sometimes available from the National Distribution Centres listed below. If you wish to receive all STO reports, or just those relating to one or more specific STO Panels, they may be willing to include you (or your Organisation) in their distribution. STO, RTO and AGARD reports may also be purchased from the Sales Agencies listed below. Requests for STO, RTO or AGARD documents should include the word ‘STO’, ‘RTO’ or ‘AGARD’, as appropriate, followed by the serial number. Collateral information such as title and publication date is desirable. If you wish to receive electronic notification of STO reports as they are published, please visit our website (http://www.sto.nato.int/) from where you can register for this service.

NATIONAL DISTRIBUTION CENTRES

BELGIUM
Royal High Institute for Defence – KHID/IRSD/RHID
Management of Scientific & Technological Research for Defence, National STO Coordinator
Royal Military Academy – Campus Renaissance
Renaissancelaan 30, 1000 Brussels

BULGARIA
Ministry of Defence
Defence Institute “Prof. Tsvetan Lazarov”
“Tsvetan Lazarov” bul no.2, 1592 Sofia

CANADA
DSTKIM 2
Defence Research and Development Canada
60 Moodie Drive (7N-1-F20)
Ottawa, Ontario K1A 0K2

CZECH REPUBLIC
Vojenský technický ústav s.p.
CZ Distribution Information Centre
Mladoboleslavská 944, PO Box 18, 197 06 Praha 9

DENMARK
Danish Acquisition and Logistics Organization (DALO)
Lautrupbjerg 1-5, 2750 Ballerup

ESTONIA
Estonian National Defence College
Centre for Applied Research
Riia str 12, Tartu 51013

FRANCE
O.N.E.R.A. (ISP)
29, Avenue de la Division Leclerc – BP 72
92322 Châtillon Cedex

GERMANY
Streitkräfteamt / Abteilung III
Fachinformationszentrum der Bundeswehr (FIZBw)
Gorch-Fock-Straße 7, D-53229 Bonn

GREECE (Point of Contact)
Defence Industry & Research General Directorate, Research Directorate
Fakinos Base Camp, S.T.G. 1020, Holargos, Athens

HUNGARY
Hungarian Ministry of Defence
Development and Logistics Agency
P.O.B. 25, H-1885 Budapest

ITALY
Ten Col Renato NARO
Capo servizio Gestione della Conoscenza
F. Baracca Military Airport “Comparto A”
Via di Centocelle, 301, 00175, Rome

LUXEMBOURG
See Belgium

NETHERLANDS
Royal Netherlands Military Academy Library
P.O. Box 90.002, 4800 PA Breda

NORWAY
Norwegian Defence Research Establishment
Attn: Biblioteket
P.O. Box 25, NO-2007 Kjeller

POLAND
Centralna Biblioteka Wojskowa
ul. Ostrobramska 109, 04-041 Warszawa

PORTUGAL
Estado Maior da Força Aérea
SDFA – Centro de Documentação
Alfragide, P-2720 Amadora

ROMANIA
Romanian National Distribution Centre
Armaments Department
9-11, Drumul Taberei Street, Sector 6, 061353 Bucharest

SLOVAKIA
Akadémia ozbrojených síl gen. M.R. Štefánika
Distribučné a informačné stredisko STO
Demänová 393, 031 01 Liptovský Mikuláš 1

SLOVENIA
Ministry of Defence
Central Registry for EU & NATO
Vojkova 55, 1000 Ljubljana

SPAIN
Área de Cooperación Internacional en I+D
SDGPLATIN (DGAM)
C/ Arturo Soria 289, 28033 Madrid

TURKEY
Milli Savunma Bakanlığı (MSB)
ARGE ve Teknoloji Dairesi Başkanlığı
06650 Bakanliklar – Ankara

UNITED KINGDOM
Dstl Records Centre
Rm G02, ISAT F, Building 5
Dstl Porton Down, Salisbury SP4 0JQ

UNITED STATES
Defense Technical Information Center
8725 John J. Kingman Road
Fort Belvoir, VA 22060-6218

SALES AGENCIES

The British Library Document Supply Centre
Boston Spa, Wetherby
West Yorkshire LS23 7BQ
UNITED KINGDOM

Canada Institute for Scientific and Technical Information (CISTI)
National Research Council Acquisitions
Montreal Road, Building M-55
Ottawa, Ontario K1A 0S2
CANADA

Requests for STO, RTO or AGARD documents should include the word ‘STO’, ‘RTO’ or ‘AGARD’, as appropriate, followed by the serial number (for example AGARD-AG-315). Collateral information such as title and publication date is desirable. Full bibliographical references and abstracts of STO, RTO and AGARD publications are given in “NTIS Publications Database” (http://www.ntis.gov).

ISBN 978-92-837-2407-0