
Developing Artificial Neural Networks for Safety Critical Systems

Zeshan Kurd
Supervisor: Tim Kelly

Department of Computer Science


Outline

• The problem

• Current approaches

• Safety critical systems

• Safety argumentation

• Suitable ANN model

• Safety lifecycle

• Feasibility issues


Introduction

• Used in many areas of industry
– Defence and medical applications

• Attractive features
– Used when there is little understanding of the relationship between inputs and outputs
– Ability to learn or evolve
– Generalisation, and efficiency in terms of computational resources

• Commonly used in advisory roles (a safety-bag sketch follows below)
– IEC 61508-7 C.3.4 – Safety bag: independent external monitors to ensure the system does not enter an unsafe state

• Why advisory roles?
– Absence of acceptable analytical certification methods
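
As a concrete illustration of the safety-bag pattern above, a minimal sketch in Python; the envelope bounds, fallback value and all function names are hypothetical illustrations, not taken from IEC 61508:

```python
# Safety bag, sketched: an independent external monitor vets the
# advisory ANN output and vetoes anything that could drive the
# system into an unsafe state. Bounds and names are hypothetical.

def safe_envelope(output: float) -> bool:
    """Independent check: is the proposal inside the pre-analysed
    safe operating region?"""
    return 0.0 <= output <= 100.0          # assumed safe bounds

def fallback_action() -> float:
    """Known-safe default used when the monitor rejects the ANN."""
    return 0.0

def monitored_control(ann_predict, sensor_input: float) -> float:
    proposal = ann_predict(sensor_input)   # ANN acts only as an adviser
    if safe_envelope(proposal):
        return proposal                    # accept the advice
    return fallback_action()               # veto: revert to safe default

print(monitored_control(lambda x: 2.0 * x, 30.0))   # 60.0: accepted
print(monitored_control(lambda x: 2.0 * x, 90.0))   # 0.0: vetoed
```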


The Problem

• Justify the use of neural networks in safety-critical systems
• Highly dependable roles
• Derive satisfactory safety arguments
• Argue safety using a Safety Case

A safety case should present a clear, comprehensive and defensible argument that a system is acceptably safe to operate within a particular context

[DEFSTAN 00-55] 36.5.2: Proof obligations shall be constructed to verify that the code is a correct refinement of the Software Design and does nothing that is not specified


Current Approaches

• Diverse Neural Networks (an ensemble sketch follows below)
– Choose a single net that covers the whole target function
– Choose a set of nets that cover the whole target function
– Overall generalisation performance has been shown to be improved
– Black-box approach
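
A minimal sketch of the diverse-nets idea, assuming toy one-hidden-layer models and combination by output averaging; all shapes and names are illustrative:

```python
import numpy as np

# Diverse neural networks: several independently initialised nets
# cover the same target function; combining their outputs tends to
# improve generalisation. Nets here are toy one-hidden-layer models.
rng = np.random.default_rng(0)

def make_net(n_in=3, n_hidden=8):
    W = rng.normal(size=(n_in, n_hidden))
    v = rng.normal(size=n_hidden)
    return lambda x: np.tanh(x @ W) @ v    # toy forward pass

ensemble = [make_net() for _ in range(5)]  # diversity via random initialisation

def ensemble_predict(x):
    # Combine the diverse nets by averaging their outputs
    return np.mean([net(x) for net in ensemble], axis=0)

print(ensemble_predict(rng.normal(size=3)))
```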

• Fault Tolerance (a fault-injection sketch follows below)
– [IEC-50] The attribute of an entity that makes it able to perform a required function in the presence of certain given sub-entity faults
– Fault tolerance assessed by injecting weight faults
– Fault hypothesis unrealistic – does not deal with major potential faults
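
A minimal sketch of weight-fault injection, the assessment technique critiqued above; the stuck-at-zero fault model and the toy network are illustrative assumptions:

```python
import numpy as np

# Fault-tolerance assessment by injecting weight faults: zero one
# weight at a time (a "stuck-at-zero" fault) and measure how far the
# outputs drift from the fault-free baseline on probe inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))              # "trained" weights (illustrative)

def predict(W, X):
    return np.tanh(X @ W)                # toy single-layer forward pass

X = rng.normal(size=(10, 4))             # probe inputs
baseline = predict(W, X)

for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        W_faulty = W.copy()
        W_faulty[i, j] = 0.0             # inject the weight fault
        drift = np.abs(predict(W_faulty, X) - baseline).max()
        print(f"fault at W[{i},{j}]: max output deviation {drift:.3f}")
```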

• ANN Development and Safety Lifecycles
– No provision for dealing with safety concerns


Safety Critical Systems

• Incorrect operation may lead to fatal or severe consequences
– A safety-critical system directly or indirectly contributes to the occurrence of a hazardous system state
– A system-level hazard is a condition that is potentially dangerous to man, society or the environment

• Safety process & techniques
– Identify, analyse and mitigate hazards

• ‘Acceptably’ safe
– Risk of failure assured to a tolerable level (ALARP)

• Software Safety Lifecycle
– Software “hazard” – a software-level condition that could give rise to a system-level hazard
– Hazard Identification
– Functional Hazard Analysis
– Preliminary System Safety Analysis (potential to influence design)
– System Safety Analysis (confirming causes of hazards)
– Safety Case


Types of Safety Arguments

• Process-based vs. product-based arguments
– Process-based: safety is assumed given that certain processes have been performed
– Process-based arguments for ANNs in ‘Process Certification Requirements’ (York)
– Implementation issues (formal methods)
– Team management and other process-based issues

– Product-based: evidence-based arguments about the system, such as functional behaviour, identification of potential hazards, etc.

– Current standards and practices are working towards removing process-based arguments

• Solution: use product-based arguments and process-based only where improvement can be demonstrated


Safety Criteria

• Argue functional properties or behaviour
• Represented as a set of high-level goals
• Analysing aspects of current safety standards
• Key criteria argued in terms of failure modes
• Need to have more white-box style arguments
• Apply to most types of networks
• Leaves open alternative means of compliance

Z. Kurd and T. P. Kelly, “Establishing Safety Criteria for Artificial Neural Networks,” to appear in Seventh International Conference on Knowledge-Based Intelligent Information & Engineering Systems, Oxford, UK, 2003.


Safety Criteria

[GSN diagram: top goal G1 is supported, via strategy S1, by goals G2–G5, in contexts C1–C7]

G1 (Goal): Neural network is acceptably safe to perform a specified function within the safety-critical context
S1 (Strategy): Argument over key safety criteria
C1 (Context): Neural network model definition
C2 (Context): Use of the network in a safety-critical context must ensure specific requirements are met
C3 (Context): ‘Acceptably safe’ will be determined by the satisfaction of the safety criteria
C4 (Context): Function may be total or a subset of the target function
C5 (Context): Known and unknown inputs
C6 (Context): A fault is classified as an input that lies outside the specified input set
C7 (Context): Hazardous output is defined as an output outside a specified set or target function
G2 (Goal): Functions for the neural network have been safely mapped
G3 (Goal): Observable behaviour of the neural network must be predictable and repeatable
G4 (Goal): The neural network tolerates faults in its inputs
G5 (Goal): The neural network does not create hazardous outputs
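
To make contexts C6 and C7 concrete, a minimal sketch of how the fault and hazardous-output definitions could be checked at run time; the interval bounds, the clamping policy and the safe default are illustrative assumptions only:

```python
# C6/C7 made operational (illustrative bounds only): a fault is an
# input outside the specified input set; a hazardous output is an
# output outside the specified output set.

INPUT_LO, INPUT_HI = 0.0, 10.0      # assumed specified input set
OUTPUT_LO, OUTPUT_HI = -1.0, 1.0    # assumed specified output set
SAFE_DEFAULT = 0.0                  # assumed known-safe output

def is_input_fault(x: float) -> bool:
    """C6: an input lying outside the specified input set."""
    return not (INPUT_LO <= x <= INPUT_HI)

def guarded_predict(ann, x: float) -> float:
    # G4: tolerate input faults by clamping into the specified set
    if is_input_fault(x):
        x = min(max(x, INPUT_LO), INPUT_HI)
    y = ann(x)
    # G5/C7: never emit an output outside the specified set
    if not (OUTPUT_LO <= y <= OUTPUT_HI):
        return SAFE_DEFAULT
    return y

print(guarded_predict(lambda x: 0.1 * x, 12.0))  # clamps input, returns 1.0
```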


Suitable ANN Model

• Current ANN models have many problems!
– Determining a suitable ANN structure, training and test sets → influences functional behaviour (dealing with systematic faults)
– ‘Forgetting’ of previously learnt samples & noisy data → introduces new ‘faults’ during training
– Pedagogical approaches to analysing behaviour → black-box style safety arguments

• Objectives
– Preserve the ability to learn or evolve given input-outputs
– Control the learning (refinement) process


‘Hybrid’ ANNs

• ‘Hybrid’ – representing symbolic information in ANN frameworks
– Knowledge represented by the internal structure of the ANN (initial conditions)
– Translation algorithms
– Working towards a specification
– Outperforms many ‘all-symbolic’ systems

Taken from: J. Shavlik, “Combining symbolic and neural learning”, 1992.


‘Hybrid’ ANNs

• Decompositional approach to analysis
– Potential for ‘transparency’ or white-box style analysis

– White-box style analysis which focuses on analysing the internal structure of the ANN

– Potentially result in strong arguments about the knowledge represented by the network

– Potential to control learning

[Diagram: knowledge/data flow through the hybrid ANN process – columns Knowledge/Data, Process, Neural Network]

Initial Symbolic Knowledge → INSERT SYMBOLIC KNOWLEDGE → Initial ANN
Initial ANN + Training Samples → LEARNING & REFINEMENT → Final ANN
Final ANN → EXTRACT SYMBOLIC KNOWLEDGE → Refined Symbolic Knowledge
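
A minimal sketch of this insert–refine–extract loop, in the spirit of the cited Shavlik work on combining symbolic and neural learning; the single logistic unit, the hand-picked initial weights and the magnitude threshold are all simplifying assumptions:

```python
import numpy as np

def insert_rules(antecedents, n_in):
    """INSERT SYMBOLIC KNOWLEDGE: encode 'if x_i then y' rules as
    strong initial weights; the last weight acts as a bias."""
    w = np.zeros(n_in + 1)
    w[-1] = -2.0                      # bias: default to 'no'
    for i in antecedents:
        w[i] = 4.0                    # believed antecedent = strong link
    return w

def refine(w, X, y, lr=0.5, epochs=2000):
    """LEARNING & REFINEMENT: logistic-regression gradient ascent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w = w + lr * Xb.T @ (y - p) / len(y)
    return w

def extract_rules(w, threshold=1.0):
    """EXTRACT SYMBOLIC KNOWLEDGE: antecedents whose links stayed strong."""
    return [i for i, wi in enumerate(w[:-1]) if wi > threshold]

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 5)).astype(float)
y = ((X[:, 0] == 1) & (X[:, 2] == 1)).astype(float)  # true concept: x0 AND x2
w = refine(insert_rules([0], 5), X, y)               # prior rule mentions only x0
print("refined antecedents:", extract_rules(w))      # x2 should now appear too
```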


Safety Lifecycle

• Current software safety lifecycle is inadequate for ANNs
– Relies on conventional software development lifecycle

– Safety processes are not suitable for ANNs

• Existing development & safety lifecycles for ANNs are inadequate
– Focus too much on process-based arguments

– No argumentation on how certain ANN configuration may (or may not) contribute to safety

– Some models assume ‘intentionally’ complete specification

– No attempt to find ways to identify, analyse, control and mitigate potential hazards

• Need a lifecycle for the ‘hybrid’ ANN model

Z. Kurd and T. P. Kelly, “Safety Lifecycle for Developing Safety-critical Artificial Neural Networks,” to appear in 22nd International Conference on Computer Safety, Reliability and Security, 2003.


Safety Lifecycle

Safe PlaformSafety Case

FHA

Delivered Platform

NeuralLearning Level

Translation Level

Symbolic Level

Requirements

Sub-initial SymbolicInformation

Initial SymbolicInformation

Dynamic LearningStatic Learning

(within constraints)

Initial Hazard List

Refined SymbolicInformation

Legend

PHI is Preliminary Hazard IdentificationFHA is Functional Hazard Analysis


Safety Lifecycle

• Adapts techniques used in conventional software development

• Focuses on hazard identification, analysis and mitigation

• Two-tier learning process (a constrained-update sketch follows this list)
– Dynamic Learning
– Static Learning

• Safety processes performed over a meaningful representation
– Things that can go wrong in real-world terms

• Preliminary Hazard Identification (PHI)
– Used to determine the initial conditions of the network (weights)
– Consideration of possible system-level hazards (BB approach)
– Result is a set of rules partially fulfilling the desired function
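
A minimal sketch of what ‘learning within constraints’ might look like, assuming interval safety bounds checked over a set of probe inputs; the bounds, probe set and linear model are illustrative assumptions, not the actual design presented here:

```python
import numpy as np

# Two-tier learning, sketched: a proposed weight update is accepted
# only if the refined function still respects pre-analysed safe
# output bounds on a set of probe inputs; otherwise the last safe
# weights are kept. Bounds and probes are illustrative assumptions.

def within_safety_bounds(w, probes, lo=-1.0, hi=1.0):
    out = probes @ w                    # toy linear model
    return bool(np.all((out >= lo) & (out <= hi)))

def constrained_update(w, grad, probes, lr=0.05):
    candidate = w - lr * grad           # proposed refinement step
    if within_safety_bounds(candidate, probes):
        return candidate                # accept: constraints still hold
    return w                            # reject: stay with safe weights

rng = np.random.default_rng(2)
probes = rng.normal(size=(50, 4))       # inputs spanning the input set
w = np.zeros(4)
w = constrained_update(w, grad=rng.normal(size=4), probes=probes)
print(w)
```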


Feasibility Issues

• Consider the performance vs. safety trade-off
• Any model must have sufficient performance
– ‘Performance’ does not mean output error, but characteristics
– Learning and generalisation
– Learning permissible during development but not whilst deployed

• No compromise in safety
– Controls or features added to ensure strong safety arguments

• SCANN model preserves the ability to learn
– Some analysable representation
– Provides safety assurances in terms of knowledge evolution
– Deals with large input spaces
– Suited to problems whose complete algorithmic specification is not available (at the start of development)
– Involves the two-tier learning process


Summary

• Need for analytical safety arguments for certification

• Current position of ANNs in safety-related systems

• Set out safety criteria for functional behaviour

• Using ‘Hybrid’ ANNs

• Safety lifecycle for the ‘hybrid’ ANN

• Challenge: the performance vs. safety trade-off