PHD Cybersecurity Whitepaper
Date: 23-November-2018
Version: D0.9r01

Copyright © 20xx IEEE. All rights reserved.

IEEE | 3 Park Avenue | New York, NY 10016-5997 | USA




Corresponding Authors

Name               Affiliation               E-mail
Christoph Fischer  Roche Diabetes Care GmbH  [email protected]
Hamming            HMT Consulting            [email protected]

Contributors

Name – Affiliation
Axel Wirth – Symantec
Beth Pumo – Kaiser Permanente
Brian Ondiege – Brunel University
Carsten Mueglitz – Roche Diabetes Care GmbH
Catherine Li – FDA
Chris Gates – Illuminati Engineering
Chris Roberts – ResMed
Craig Carlson – Roche Diabetes Care GmbH
Cristian Pimentel – ResMed
Daidi Zhong – Chongqing University
Daniel Pletea – Philips
Eugene Vasserman – Kansas State University
Isabel Tejero – FDA
Jan Wittenber – IEEE
John Garguilo – NIST
Jordan Hartmann – Nonin
Martha De Cunha Maluf-Burgman – Medtronic
Martin Rosner – Philips
Matthew d’Entremont – Dalhousie University
Melaine Yeung – Centre for Global eHealth Innovation
Michael Kirwan – PCHA
Rick Hampton – Partners Health Care
Scott Thiel – Navigant
William Hagestad – Smiths Medical


Trademarks and Disclaimers

IEEE believes the information in this publication is accurate as of its publication date; such information is subject to change without notice. IEEE is not responsible for any inadvertent errors.

The Institute of Electrical and Electronics Engineers, Inc.

3 Park Avenue, New York, NY 10016-5997, USA

Copyright © 20xx by The Institute of Electrical and Electronics Engineers, Inc.
All rights reserved. Published Month 20xx. Printed in the United States of America.

IEEE is a registered trademark in the U.S. Patent & Trademark Office, owned by The Institute of Electrical and Electronics Engineers, Incorporated.

PDF: ISBN 978-0-7381-xxxx-x STDVxxxxx

Print: ISBN 978-0-7381-xxxx-x STDPDVxxxxx

IEEE prohibits discrimination, harassment, and bullying. For more information, visit http://www.ieee.org/web/aboutus/whatis/policies/p9-26.html.

No part of this publication may be reproduced in any form, in an electronic retrieval system, or otherwise, without the prior written permission of the publisher.

To order IEEE Press Publications, call 1-800-678-IEEE.
Find IEEE standards and standards-related product listings at: http://standards.ieee.org


Notice and Disclaimer of Liability Concerning the Use of IEEE-SA Industry Connections Documents

This IEEE Standards Association (“IEEE-SA”) Industry Connections publication (“Work”) is not a consensus standard document. Specifically, this document is NOT AN IEEE STANDARD. Information contained in this Work has been created by, or obtained from, sources believed to be reliable, and reviewed by members of the IEEE-SA Industry Connections activity that produced this Work. IEEE and the IEEE-SA Industry Connections activity members expressly disclaim all warranties (express, implied, and statutory) related to this Work, including, but not limited to, the warranties of: merchantability; fitness for a particular purpose; non-infringement; quality, accuracy, effectiveness, currency, or completeness of the Work or content within the Work. In addition, IEEE and the IEEE-SA Industry Connections activity members disclaim any and all conditions relating to: results; and workmanlike effort. This IEEE-SA Industry Connections document is supplied “AS IS” and “WITH ALL FAULTS.”

Although the IEEE-SA Industry Connections activity members who have created this Work believe that the information and guidance given in this Work serve as an enhancement to users, all persons must rely upon their own skill and judgment when making use of it. IN NO EVENT SHALL IEEE OR IEEE-SA INDUSTRY CONNECTIONS ACTIVITY MEMBERS BE LIABLE FOR ANY ERRORS OR OMISSIONS OR DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO: PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS WORK, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE AND REGARDLESS OF WHETHER SUCH DAMAGE WAS FORESEEABLE.

Further, information contained in this Work may be protected by intellectual property rights held by third parties or organizations, and the use of this information may require the user to negotiate with any such rights holders in order to legally acquire the rights to do so, and such rights holders may refuse to grant such rights. Attention is also called to the possibility that implementation of any or all of this Work may require use of subject matter covered by patent rights. By publication of this Work, no position is taken by the IEEE with respect to the existence or validity of any patent rights in connection therewith. The IEEE is not responsible for identifying patent rights for which a license may be required, or for conducting inquiries into the legal validity or scope of patents claims. Users are expressly advised that determination of the validity of any patent rights, and the risk of infringement of such rights, is entirely their own responsibility. No commitment to grant licenses under patent rights on a reasonable or non-discriminatory basis has been sought or received from any rights holder. The policies and procedures under which this document was created can be viewed at http://standards.ieee.org/about/sasb/iccom/.

This Work is published with the understanding that IEEE and the IEEE-SA Industry Connections activity members are supplying information through this Work, not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought. IEEE is not responsible for the statements and opinions advanced in this Work.


EXECUTIVE SUMMARY

Users of personal health devices (PHDs) have implicit expectations of convenience, connectivity, accessibility of their data, and security. They expect to connect PHDs to their mobile devices, view their data in the cloud, and easily share it with their clinicians or care providers. In some cases, users themselves build connections between PHDs, mobile devices, and the cloud to create the desired system. While many manufacturers are working to solve PHD connectivity with proprietary solutions, there is no standardized approach to providing secure Plug & Play interoperability.

In this context, “interoperability” is the ability of client components to communicate and share data with service components in an unambiguous and predictable manner, as well as to understand and use the information that is exchanged [1]. Personal Health Device communication standards were developed specifically to address Plug & Play interoperability of PHDs (e.g., pulse oximeters, blood pressure monitors, insulin pumps, and many other device types), with an emphasis on optimized data exchange, typically for small battery-powered devices. “Plug & Play” means in this context that all the user has to do is make the connection: the systems automatically detect, configure, and communicate without any other human interaction [2]. Currently, PHD communication standards do not support application end-to-end information security with multiple access control levels (e.g., restricted read access, restricted write access, full read access, full write access, full control access) over an untrusted transport with limited resources (e.g., processing power, memory, energy). Manufacturers must instead define solutions through proprietary extensions or mechanisms at the transport level. This limits the usage of PHD communication standards and inhibits interoperability. In addition, for regulated medical devices, cybersecurity must also be addressed. Cybersecurity is the process and capability of preventing unauthorized access, modification, misuse, denial of use, or the unauthorized use of information that is stored, accessed, or transferred from a medical device to an external recipient [4].
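The access control levels listed above could, purely for illustration, be modeled as an ordered enumeration. This sketch is not part of any PHD standard: the level names come from the example list in this whitepaper, while the linear ordering, the `AccessLevel` type, and the `is_permitted` helper are assumptions of the sketch (a real policy might instead treat read and write rights as orthogonal).

```python
from enum import IntEnum

class AccessLevel(IntEnum):
    """Illustrative access control levels for PHD data (hypothetical model).

    The linear ordering is a simplification: it assumes each level
    subsumes all lower ones, which a real access control policy for a
    regulated device would need to justify explicitly.
    """
    NONE = 0
    RESTRICTED_READ = 1
    RESTRICTED_WRITE = 2
    FULL_READ = 3
    FULL_WRITE = 4
    FULL_CONTROL = 5

def is_permitted(granted: AccessLevel, required: AccessLevel) -> bool:
    """Return True if the granted level covers the required level."""
    return granted >= required

# A client granted FULL_READ may view data but not reconfigure the device.
assert is_permitted(AccessLevel.FULL_READ, AccessLevel.RESTRICTED_READ)
assert not is_permitted(AccessLevel.FULL_READ, AccessLevel.FULL_CONTROL)
```

The point of such a model is that each peer's granted level must be established during connection setup and enforced end to end by the application protocol, not merely by the underlying transport.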

In order to align the standardization of secure Plug & Play interoperability, the PHD Cybersecurity Workgroup was founded by members of the IEEE 11073 PHD WG, the Bluetooth MedWG, and the PCHA GTC in November 2015. In September 2017, the workgroup was officially approved by the IEEE-SA Standards Board as Industry Connections Activity Initiation IC17-013-01, with participation from regulatory bodies, research, and industry. The mission of the PHD Cybersecurity Workgroup is to achieve a comprehensive understanding of PHD cybersecurity in the PHD community, create a scalable information security toolbox appropriate for PHD communication, and build the basis for a secure interoperability toolbox, including Plug & Play.

A first outcome of this work is this PHD Cybersecurity Whitepaper. The whitepaper contains the background related to PHD cybersecurity; a detailed risk analysis of use cases specific to PHD types (the process part of PHD cybersecurity); a scalable information security toolbox appropriate for PHD communication (the capability part of PHD cybersecurity); and a harmonization of threat modeling, scoring, and principal security controls across device types and organizations. The whitepaper will serve as input for the standardization of secure interoperability in open consensus standards (e.g., by the IEEE 11073 PHD WG), industry standards (e.g., by the Bluetooth MedWG), and guidelines (e.g., by the PCHA GTC).

The standardization of secure Plug & Play interoperability will increase patient confidence that a device will work in a multi-vendor environment and give providers trust that the data exchange is provably secure. It will also reduce the cost of care by providing proven secure interoperability between devices from different manufacturers. In addition, the approval work of regulatory bodies could be streamlined and reduced, because manufacturers could use the same scalable information security toolbox based on trusted open consensus standards.


CONTENTS

1 OVERVIEW.........................................................................................................................4
1.1 Scope.............................................................................................................................4
1.2 Problem Statement.......................................................................................................4
1.3 Purpose.........................................................................................................................5
1.4 Mission..........................................................................................................................5

2 BACKGROUND..................................................................................................................5
2.1 Definitions.....................................................................................................................5
2.2 Acronyms and Abbreviations......................................................................................7
2.3 Personal Health Device..............................................................................................10
2.3.1 Classification.........................................................................................................10

2.3.1.1 US Classification.................................................................................................................................10

2.3.1.2 EU Classification.................................................................................................................................10

2.3.1.3 China Classification............................................................................................................................11

2.3.2 Interfaces............................................................................................................................... 11

2.3.2.1 ISO/IEEE 11073 Personal Health Devices Standards..........................................................................12

2.3.2.2 Bluetooth Special Interest Group Medical Device Specifications.......................................................12

2.3.3 Device Type Examples.........................................................................................................12

2.3.3.1 Physical Activity Monitor...................................................................................................................12

2.3.3.2 Pulse Oximeter..................................................................................................................................13

2.3.3.3 Sleep Apnoea Breathing Therapy Equipment....................................................................................13

2.3.3.4 Insulin Delivery Device.......................................................................................................................13

2.3.3.5 Continuous Glucose Monitor.............................................................................................................14

2.3.3.6 Classification & Standards..................................................................................................................14

2.3.4 Related Publications.............................................................................................................14

2.4 Cybersecurity..............................................................................................................15
2.4.1 Importance of Cybersecurity................................................................................16
2.4.2 Related Publication...............................................................................................16

2.5 Risk Management.......................................................................................................16
2.5.1 Risk Analysis.........................................................................................................17
2.5.2 Risk Evaluation.....................................................................................................17
2.5.3 Risk Control...........................................................................................................17
2.5.4 Related Publications.............................................................................................18

2.6 Information Security...................................................................................................18


2.6.1 Confidentiality.......................................................................................................18
2.6.2 Integrity..................................................................................................................19
2.6.3 Availability.............................................................................................................19
2.6.4 Non-repudiation....................................................................................................19
2.6.5 Related Publications.............................................................................................19

2.7 Safety and Usability...................................................................................................20
2.7.1 Safety relationships..............................................................................................20
2.7.2 Usability relationships..........................................................................................20
2.7.3 Related Publications.............................................................................................21

2.8 Threat Modeling..........................................................................................................21
2.8.1 Common Approaches to Threat Modeling..........................................................22

2.8.1.1 List Known Potential Vulnerabilities..................................................................................................22

2.8.1.2 STRIDE................................................................................................................................................22

2.8.1.3 DREAD................................................................................................................................................22

2.8.1.4 CWSS..................................................................................................................................................22

2.8.1.5 Trike...................................................................................................................................................22

2.8.1.6 OCTAVE..............................................................................................................................................23

2.8.1.7 Attack-Defense Trees.........................................................................................................................23

2.8.1.8 Comparison........................................................................................................................................23

2.8.2 Selected Approach................................................................................................24
2.8.3 Related Publications.............................................................................................25

2.9 Vulnerability Assessment..........................................................................................25
2.9.1 Common Approaches to Quantifying Vulnerabilities........................................26

2.9.1.1 CVSS...................................................................................................................................................26

2.9.1.2 eCVSS.................................................................................................................................................26

2.9.1.3 CWSS..................................................................................................................................................26

2.9.1.4 Comparison........................................................................................................................................27

2.9.2 Selected Approach................................................................................................27
2.9.3 Related Publications.............................................................................................27

2.10 Mitigation.....................................................................................................................27
2.10.1 STRIDE Category and Security Property............................................................27
2.10.2 Design Principles..................................................................................................28

2.10.2.1 Secure by Design and Secure by Default Principles............................................................................28

2.10.2.2 Privacy by Design and Privacy by Default Principles..........................................................................29

2.10.2.3 Ensure Robust Interface Design.........................................................................................................29

2.10.2.4 Limit Access to Trusted Users Only....................................................................................................29


2.10.2.5 Ensure Trusted Content.....................................................................................................................30

2.10.3 Mapping of Mitigation Categories, Security Capabilities, Mitigation Techniques, and Design Principles.................................................................................................................. 30

2.10.4 Related Publications.............................................................................................................31

3 METHODS.........................................................................................................................32
3.1 Device Types...............................................................................................................32
3.2 Iterative Vulnerability Assessment...........................................................................32
3.2.1 System Context.....................................................................................................33

3.2.1.1 Actors.................................................................................................................................................33

3.2.1.2 Assets.................................................................................................................................................35

3.2.1.3 Mapping Actors to Assets..................................................................................................................36

3.2.2 System Decomposition........................................................................................................36

3.2.2.1 System Boundaries............................................................................................................................36

3.2.2.2 Threat Model.....................................................................................................................................36

3.2.2.3 Vulnerability List................................................................................................................................37

3.2.3 Scoring................................................................................................................................... 38

3.2.3.1 eCVSS Metric Guidelines....................................................................................................................38

3.2.3.2 Suggested Collateral Damage............................................................................................................39

3.2.3.3 Device Wide Metrics..........................................................................................................................40

3.2.3.4 Risk Level Thresholds.........................................................................................................................40

3.3 Mitigation.....................................................................................................................41

4 RESULTS..........................................................................................................................42
4.1 Quantified Pre-Mitigation Vulnerabilities.................................................................42
4.2 Identified Attack Vectors...........................................................................................43
4.3 Decomposed Attack Vectors.....................................................................................47
4.4 Post-Mitigation Assessment.....................................................................................48

5 DISCUSSION....................................................................................................................51
5.1 PHD Vulnerability Assessment.................................................................................51
5.1.1 Physical Activity Monitor.....................................................................................51
5.1.2 Pulse Oximeter......................................................................................................52
5.1.3 Sleep Apnoea Breathing Therapy Equipment....................................................53
5.1.4 Insulin Delivery Device.........................................................................................53
5.1.5 Continuous Glucose Monitor...............................................................................54

5.2 Multi-Component System Vulnerability Assessment.............................................55
5.3 Software of Unknown Provenance...........................................................................55


5.4 Threat Modeling Tool.................................................................................................55
5.5 STRIDE........................................................................................................................56
5.6 eCVSS..........................................................................................................................56
5.7 Out of Scope Threat Vectors.....................................................................................57

6 CONCLUSION..................................................................................................................57

7 OUTLOOK.........................................................................................................................58

8 CITATIONS.......................................................................................................................60

9 APPENDIX A – FRAMEWORKS AND METHODOLOGIES............................................66
9.1.1 List Known Potential Vulnerabilities...................................................................66
9.1.2 STRIDE...................................................................................................................66
9.1.3 DREAD...................................................................................................................66
9.1.4 Trike.......................................................................................................................67
9.1.5 OCTAVE.................................................................................................................67
9.1.6 Attack-Defense Trees...........................................................................................68

10 APPENDIX B – STRIDE...................................................................................................69

11 APPENDIX C – CVSS AND ECVSS.................................................................................73
11.1 Common Vulnerability Scoring System...................................................................73
11.1.1 Base Metrics..........................................................................................................73
11.1.2 Temporal Metrics..................................................................................................75
11.1.3 Environmental Metrics..........................................................................................76
11.1.4 Equations...............................................................................................................77
11.1.5 Vector.....................................................................................................................77

11.2 Embedded Common Vulnerability Scoring System................................................77
11.2.1 Impact Safety Efficacy..........................................................................................79
11.2.2 Suggested Collateral Damage Value...................................................................79
11.2.3 Equations...............................................................................................................80
11.2.4 Vector.....................................................................................................................80

12 APPENDIX D – SCORING EQUATIONS.........................................................................82
12.1 Original CVSS v2 Equations......................................................................................82
12.2 eCVSS Equations.......................................................................................................82


13 APPENDIX E – ECVSS METRIC VALUE NUMERIC EQUIVALENT..............................84

14 APPENDIX F – TMT EXPORT MACRO...........................................................................85

15 APPENDIX G – DEVICE TYPE ANALYSIS.....................................................................89
15.1 Physical Activity Monitor...........................................................................................89
15.1.1 System Context.....................................................................................................89

15.1.1.1 Use Case Description.........................................................................................................................89

15.1.1.2 Intended Actors.................................................................................................................................89

15.1.1.3 Exchanged Data.................................................................................................................................89

15.1.1.4 Actors Mapped to Assets...................................................................................................................90

15.1.2 Threat Model.........................................................................................................90
15.1.3 Pre- & Post-Mitigation Vulnerability Assessment..............................................93

15.2 Pulse Oximeter...........................................................................................................95
15.2.1 System Context.....................................................................................................95

15.2.1.1 Use Case Description.........................................................................................................................95

15.2.1.2 Intended Actors.................................................................................................................................95

15.2.1.3 Exchanged Data.................................................................................................................................95

15.2.1.4 Actors Mapped to Assets...................................................................................................................95

15.2.2 Threat Model..........................................................................................................96

15.2.3 Pre- & Post-Mitigation Vulnerability Assessment..............................................98

15.3 Sleep Apnoea Breathing Therapy Equipment.......................................................100

15.3.1 System Context...................................................................................................100

15.3.1.1 Use Case Description.......................................................................................................................100

15.3.1.2 Intended Actors...............................................................................................................................100

15.3.1.3 Exchanged Data...............................................................................................................................100

15.3.1.4 Actors Mapped to Assets.................................................................................................................100

15.3.2 Threat Model........................................................................................................101

15.3.3 Pre- & Post-Mitigation Vulnerability Assessment............................................103

15.4 Insulin Delivery Device............................................................................................106

15.4.1 System Context...................................................................................................106

15.4.1.1 Use Case Description.......................................................................................................................106

15.4.1.2 Intended Actors...............................................................................................................................106

15.4.1.3 Exchanged Data...............................................................................................................................106

15.4.1.4 Actors Mapped to Assets.................................................................................................................106

15.4.2 Threat Model........................................................................................................107

15.4.3 Pre- & Post-Mitigation Vulnerability Assessment............................................109

15.5 Continuous Glucose Monitor..................................................................................111

15.5.1 System Context...................................................................................................111

15.5.1.1 Use Case Description.......................................................................................................................111

15.5.1.2 Intended Actors...............................................................................................................................111

15.5.1.3 Exchanged Data...............................................................................................................................111

15.5.1.4 Actors Mapped to Assets.................................................................................................................111

15.5.2 Threat Model........................................................................................................112

15.5.3 Pre- & Post-Mitigation Vulnerability Assessment............................................114

Tables

Table 2-1 – Definitions....................................................................................................................5

Table 2-2 – Acronyms and abbreviations........................................................................................7

Table 2-3 – Examples of device type classifications and communication standards.....................14

Table 2-4 – Threat modeling framework pros and cons...............................................................23

Table 2-5 – STRIDE threat category and security property...........................................................28

Table 2-6 – Mitigation categories, security capabilities, mitigation techniques, and design principles......................................................................................................................................31

Table 3-1 - Assets..........................................................................................................................35

Table 3-2 – Pre- and post-mitigation assessment guidelines........................................................38

Table 3-3 – Suggested collateral damage value definitions..........................................................39

Table 3-4 – Device wide metrics for device types.........................................................................40

Table 3-5 – Application of mitigations techniques based on STRIDE categories...........................41

Table 4-1 – Number and risk level of STRIDE pre-mitigation vulnerabilities by device type.........42

Table 4-2 – Quantified STRIDE vulnerabilities by interface across all device types.......................47

Table 4-3 – Mitigation techniques selected for post-mitigation analysis......................................48

Table 4-4 – Pre- and Post- mitigation comparison of STRIDE vulnerabilities by device type........50

Table 10-1 – STRIDE categories and security properties...............................................................69

Table 10-2 – Data flow diagram element and STRIDE threat category.........................................70

Table 10-3 – Vulnerability type for STRIDE threat category..........................................................70

Table 10-4 – STRIDE threat category and primary mitigation techniques....................................72

Table 11-1 – CVSS metric group description.................................................................................73

Table 11-2 – Base metrics and values descriptions.......................................................................74

Table 11-3 – Temporal metrics and values descriptions...............................................................75

Table 11-4 – Environmental metrics and values descriptions.......................................................76

Table 11-5 – eCVSS updated definitions of metrics......................................................................78

Table 11-6 – Suggested collateral damage value definitions........................................................79

Table 15-1 – Mapping PAM actors to assets.................................................................................90

Table 15-2 – Description of PAM threat model data flows...........................................................91

Table 15-3 – Mapping pulse oximeter actors to assets.................................................................95

Table 15-4 – Description of pulse oximeter threat model data flows...........................................96

Table 15-5 – Mapping SABTE actors to assets............................................................................100

Table 15-6 – Description of SABTE threat model data flows.......................................................101

Table 15-7 – Mapping insulin delivery device actors to assets...................................................108

Table 15-8 – Description of insulin delivery device threat model data flows.............................109

Table 15-9 – Mapping CGM actors to assets..............................................................................113

Table 15-10 – Description of CGM threat model data flows.......................................................114

Figures

Figure 2-1 ISO/IEEE 11073 PHD communication hierarchy...........................................................12

Figure 2-2 Bluetooth PHD communication hierarchy...................................................................13

Figure 2-3 Security, safety, and usability relationship..................................................................20

Figure 3-1 Vulnerability assessment workflow.............................................................................33

Figure 3-2 Actors...........................................................................................................................35

Figure 3-3 PHD generic threat model...........................................................................................37

Figure 4-1 Physical activity monitor moderate- and high-risk data flows.....................................43

Figure 4-2 Pulse oximeter threat model with moderate- and high-risk data flows......................44

Figure 4-3 Insulin delivery device moderate- and high-risk data flows.........................................45

Figure 4-4 Sleep apnoea breathing therapy equipment moderate- and high-risk data flows......45

Figure 4-5 Continuous glucose monitoring moderate- and high-risk data flows..........................46

Figure 4-6 Common moderate- and high-risk data flows.............................................................47

Figure 7-1 Long-term roadmap.....................................................................................................58

Figure 7-2 Near-term roadmap.....................................................................................................59

Figure 11-1 CVSS metric groups and attributes............................................................................73

Figure 15-1 Physical activity monitor threat model......................................................................91

Figure 15-2 Pulse oximeter threat model.....................................................................................96

Figure 15-3 Sleep apnoea breathing therapy equipment threat model.....................................101

Figure 15-4 Insulin delivery device threat model........................................................................109

Figure 15-5 CGM threat model...................................................................................................114

PHD Cybersecurity Whitepaper

1 Overview

1.1 Scope

Users of personal health devices (PHDs) have implicit expectations of convenience, connectivity, accessibility of their data, and security. They expect to connect PHDs to their mobile devices, view their data in the cloud, and share it easily with their clinicians or care providers. In some cases, users themselves build connections between PHDs, mobile devices, and the cloud to create the desired system. While many manufacturers are solving PHD connectivity with proprietary solutions, a standardized approach to providing secure Plug & Play interoperability is lacking.

The ISO/IEEE 11073 PHD family of standards, the Bluetooth Special Interest Group profiles & services specifications, and the Continua Design Guidelines were developed specifically to address Plug & Play interoperability of PHDs (e.g., physical activity monitor, pulse oximeter, sleep apnoea breathing therapy equipment, insulin delivery device, continuous glucose monitor), with an emphasis on an optimized exchange protocol, typically for small, battery-powered devices. In this context, that means:

“Interoperability” is the ability of client components to communicate and share data with service components in an unambiguous and predictable manner as well as to understand and use the information that is exchanged [1] and

“Plug & Play” is all the user has to do is make the connection – the systems automatically detect, configure, and communicate without any other human interaction [2].

Within the context of “secure” Plug & Play interoperability, cybersecurity is the process and capability of preventing unauthorized access, modification, misuse, denial of use, or the unauthorized use of information that is stored on, accessed from, or transferred to and from a PHD. This PHD Cybersecurity Whitepaper describes cybersecurity for transport-independent applications and information profiles of PHDs. These profiles define data exchange, data representation, and terminology for communication between agents (e.g., a pulse oximeter or sleep apnoea breathing therapy equipment) and connected devices (e.g., health appliances, set-top boxes, cell phones, and personal computers). This whitepaper provides the background related to PHD cybersecurity, a detailed risk analysis of use cases specific to PHD device types, and the recommended controls to be adopted for a future enhancement of the PHD data exchange standards.

This whitepaper is concerned with the machine-to-machine interface to and from the PHD. Currently out of scope are the cybersecurity of the physical device (e.g., physical tampering), the user interface, and the direct-to-cloud interface.

1.2 Problem Statement

At present, the IEEE 11073 PHD communication standards do not provide methods to ensure the information security of data exchange. They assume that data exchange is secured by other means, for example, a secure transport channel. Nevertheless, security is a crucial issue

that needs to be managed in the context of PHD data exchange.

The Bluetooth SIG core specifications support information security of the data exchange over the base transport, but not end-to-end into the applications [8].

1.3 Purpose

The purpose of this whitepaper is to establish common ground on cybersecurity in PHD data exchange and to define an “information security toolbox” appropriate for the PHD data exchange standards. The whitepaper will become the basis for the standardization of secure Plug & Play interoperability in an open consensus standard by the IEEE 11073 PHD Working Group and in secure Profile and Service specifications by the Bluetooth SIG Medical Devices Working Group.

Currently, PHD data exchange does not support application end-to-end information security with multiple access control levels (e.g., restricted read access, restricted write access, full read access, full write access, full control access) over an untrusted transport with limited resources (e.g., processing power, memory, energy). It assumes that data exchange is secured by other means, for example, a secure transport channel. This forces manufacturers to define solutions through proprietary extensions or, for example, mechanisms at the transport level. This limits the usage of the PHD data exchange standards and restricts interoperability. In addition, for regulated medical devices, cybersecurity must also be addressed [4]. Unfortunately, no standard is available that addresses the process and capability of secure Plug & Play interoperability [5].
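Tiered access control levels of the kind listed above can be sketched as a simple deny-by-default policy table. The following minimal Python illustration uses level names taken from the examples in the text; the operations and the level-to-operation mapping are hypothetical, not drawn from any PHD standard:

```python
# Map each access level to the operations it permits. Both the operation
# names and the policy itself are illustrative assumptions.
ALLOWED_OPERATIONS = {
    "restricted_read": {"read_latest_measurement"},
    "restricted_write": {"read_latest_measurement", "annotate_measurement"},
    "full_read": {"read_latest_measurement", "read_history"},
    "full_write": {"read_latest_measurement", "read_history",
                   "annotate_measurement", "change_device_setting"},
    "full_control": {"read_latest_measurement", "read_history",
                     "annotate_measurement", "change_device_setting",
                     "update_firmware"},
}

def is_authorized(level: str, operation: str) -> bool:
    """Deny by default: unknown levels or operations are rejected."""
    return operation in ALLOWED_OPERATIONS.get(level, set())
```

A set-based policy like this avoids assuming that the levels form a strict hierarchy (e.g., that restricted write implies full read), which a single ordered scale would impose.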

1.4 Mission

The mission of the PHD Cybersecurity Workgroup is to achieve a comprehensive understanding of PHD cybersecurity in the PHD community, create a scalable information security toolbox appropriate for PHD communication, and build the basis for a secure interoperability toolbox, including Plug & Play.

2 Background

2.1 Definitions

For the purposes of this document, the following terms and definitions apply. Multiple glossaries were searched and various definitions were reviewed, from which the definitions below were selected [6], [7], [8]. The IEEE Standards Dictionary Online should be consulted for terms not defined in this clause.

Table 2-1 – Definitions

Access – Ability or the means necessary to read, write, modify, or communicate data/information or otherwise make use of any system resource [10].

Access Control – Ensuring that the resources of a data processing system can be accessed only by authorized entities in authorized ways [10].

Asset – Anything that has value to an individual, an organization or a government [11].

Audit Trail – Records of system activity both by system and application processes and by user activity of systems and applications. In conjunction with appropriate tools and procedures, audit trails can assist in detecting security violations, performance problems, and flaws in applications [12].

Authentication – The act of verifying the claimed identity of an entity [13].

Authorization – Granting of rights, which includes the granting of access based on access rights [13].

Availability – Property of data or of resources being accessible and usable on demand by an authorized entity [13].

Confidentiality – The property of information that is not made available or disclosed to unauthorized individuals, entities, or processes [14].

Cybersecurity – Preservation of confidentiality, integrity and availability of information in the Cyberspace [11].

De-Identification – General term for any process of removing the association between a set of identifying data and the data subject [15].

Digital Signatures – Data appended to, or a cryptographic transformation of, a data unit that allows a recipient of the data unit to prove the source and integrity of the data unit and protect against forgery, e.g. by the recipient [16].

Do not Store Secrets – Avoiding the retention of sensitive and/or secret data on a device in order to limit the impact of a cybersecurity event.

Encryption – Function of transforming data by the discipline of cryptography so as to make the data undecipherable to anyone other than the legitimate sender and receiver (i.e., encrypted data in transit) [17].

End-User Signalization – Making the user aware of a cybersecurity event by means of a visible, audible, or tactile signal.

Filtering – Process of accepting or rejecting data flows through a network, according to specified criteria [18].

Harm – The impact of a threat to end-user/patient safety and satisfaction, device functionality, system data integrity and availability, property, or economic loss of productivity or revenue.

Hazard – Potential source of harm [19].

Information Security – Preservation of confidentiality, integrity and availability of information [20]; the capability part of PHD cybersecurity related to digital data.

Input Sanitization – Assure by testing or replacement that a tainted or other input value conforms to the constraints imposed by one or more restricted sinks into which it may flow. If the input value does not conform, either the path is diverted to avoid using the value or a different, known-conforming value is substituted [21].

Input Validation – Process used to determine if inputs are accurate, complete, or meet specified criteria [13].

Integrity – Data, information and software are accurate and complete and have not been altered in an unauthorized manner since they were created, transmitted or stored [18].

Invalidate Compromised Security – Reducing access to a device by invalidating previously established security when a cybersecurity event is detected or considered to have occurred.

Least Privileges – Limiting the access privileges to a set that supports only the intended use while restricting access to all other functions, data, settings, etc.

Message Authentication Code – Fixed-length string of bits used to verify the authenticity of a message, generated by the sender of the message, transmitted together with the message, and verified by the receiver of the message [22].

Multi-Component System – Multiple connected components from potentially various manufacturers, either within a single device or as a system, where at least one is a PHD.

PHD Cybersecurity – Within the context of the PHD, cybersecurity is the process and the capability of preventing unauthorized access, modification, misuse, denial of use, or the unauthorized use of information that is stored on, accessed from, or transferred to and from a PHD.

Physical Tamper Evidence – Observable indication that an attempt has been made to compromise the security of a device [23].

Physical Tamper Resistant – A device designed to make it difficult for attackers to gain access to sensitive information contained in the module [24].

Plug & Play – All the user must do is make the connection – the systems automatically detect, configure, and communicate without any other human interaction [2].

Privacy – The measure used to prevent the content of messages from being read by other than the intended recipients [14].

Protect Secrets & Secret Data – Restricting access to secrets and sensitive and/or secret data stored within a device (i.e., encrypted data at rest) to limit the impact of a cybersecurity event.

Quality of Service – Level of performance for the transport of data. Typically realized by a set of network technologies that enable a network to handle data traffic with a minimum amount of negative effects to the users of the network [25], [26].

Re-Establish Security – After the security state of the device has been determined, allow trusted parties to reconnect, regain access levels, and conduct secure procedures.

Risk – Possibility that a particular threat will exploit a particular vulnerability of a data processing system [13].

Risk Analysis – Systematic use of available information to identify hazards and to estimate the risk [4]; the process part of PHD cybersecurity related to digital data.

Threat – Potential cause of an unwanted incident, which may result in harm to a system, individual or organization [11].

Throttling – A limiting technique where resistance (e.g., time delay, lower bandwidth) is introduced as a means to control data flow and consumption of device resources.

Vulnerability – Weakness of an asset or control that can be exploited by a threat [11].

2.2 Acronyms and Abbreviations

Table 2-2 – Acronyms and abbreviations

AAMI – Association for the Advancement of Medical Instrumentation
AID – Automated Insulin Delivery
ADTree – Attack-Defense Tree
CFR – Code of Federal Regulations
CGM – Continuous glucose monitor
CIA – Confidentiality, integrity, and availability
CM – Conditional Mandatory
CRUD – Create, Read, Update, Delete
CVSS – Common Vulnerability Scoring System

CWSS – Common Weakness Scoring System
DFD – Data flow diagram
DoS – Denial of Service
DREAD – Damage Potential, Reproducibility, Exploitability, Affected Users, Discoverability
eCVSS – Embedded Common Vulnerability Scoring System
EU – European Union
FDA – Food and Drug Administration
GATT – Generic Attribute Profile
GTC – Global Technical Committee
HCP – Health Care Professional
IC – Industry Connections
ID – Identification
IEC – International Electrotechnical Commission
IEEE – Institute of Electrical and Electronics Engineers
IHE – Integrating the Healthcare Enterprise
InfoSec – Information Security
IRDA – Infrared Data Association
ISO – International Organization for Standardization
IT – Information Technology
LM – Lateral Movement
M – Mandatory
MAC – Message Authentication Code
MedWG – Medical Devices Working Group
NFC – Near Field Communication
NIST – National Institute of Standards and Technology
NMPA – National Medical Products Administration
OWASP – Open Web Application Security Project
OXP – Optimized Exchange Protocol
PAM – Physical Activity Monitor
PC – Personal Computer
PCD – Point-of-Care Devices
PCHA – Personal Connected Health Alliance
PHD – Personal Health Device
PHI – Protected Health Information
PHR – Personal Health Records
PwD – Person with Diabetes
R&D – Research and Development
SABTE – Sleep apnoea breathing therapy equipment
SD – Secure Digital
SIG – Special Interest Group
SOUP – Software of Unknown Provenance
SP – Special Publication
SQL – Structured Query Language
Std – Standard
STRIDE – Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privileges
TMT – Threat Modeling Tool
UI – User Interface

UML – Unified Modeling Language
USB – Universal Serial Bus
VBA – Visual Basic for Applications
WG – Working Group

2.3 Personal Health Device

A Personal Health Device (PHD) is a medical device used in personal health applications, where the intended use of the device is patient-centric. Often, a PHD is used outside of the clinical environment, such as at home or on the go. Examples of PHDs include physical activity monitors, pulse oximeters, sleep apnoea breathing therapy equipment, insulin pumps, and continuous glucose monitoring devices [27].

2.3.1 Classification

Personal Health Devices are classified by regulatory authorities based on where the PHD is distributed and on the authority's jurisdiction. Typically, the classification is based on the device design complexity, intended use, and potential for harm. However, classification across jurisdictions is not harmonized. As an illustration, brief descriptions of the US, EU, and China classifications follow.

2.3.1.1 US Classification

The Food and Drug Administration (FDA), an agency within the U.S. Department of Health and Human Services, recognizes three classes of medical devices: Class I, Class II, and Class III. The classification is described under Section 513 of the Federal Food, Drug, and Cosmetic Act (the “Act”) but in brief summary [28]:

Class I: There is information showing that the general controls of the act are sufficient to provide reasonable assurance of safety and effectiveness.

Class II: General controls, by themselves, are insufficient to provide reasonable assurance of safety and effectiveness, but there is sufficient information to establish special controls to provide such assurance.

Class III: There is insufficient information to support classifying a device into Class I or Class II and the device is a life-sustaining or life-supporting device or is for a use which is of substantial importance in preventing impairment of human health, or presents a potential unreasonable risk of illness or injury.

2.3.1.2 EU Classification

The European Union (EU) recognizes four classes of medical devices: Class I, Class IIa, Class IIb, and Class III. The classification is described in the Medical Devices Directive (Council Directive 93/42/EEC) (MDD) under Article IX and in its successor, the Medical Devices Regulation (Council Regulation 2017/745) (MDR), under Annex VIII, but in brief summary [29]:

Class I: Non-invasive devices that are not intended for storing or channelling blood or other body fluids or storing organs in part or in whole. Invasive devices, with respect to body orifices, intended for transient use, and reusable surgical instruments. Devices not intended for connection to a device of Class IIa or higher.

Class IIa: Non-invasive devices intended for storing or channelling blood or other body fluids or storing organs in part or in whole and for modifying biological or chemical composition where treatment consists of filtration, centrifugation or exchanges of gas or heat. Invasive devices intended for short-term use (invasive with respect to body orifices), most surgical devices intended for transient use or short-term use, and implantable devices or long-term surgical devices placed in the teeth. Most active

devices intended for diagnosis or intended to administer and/or remove medicines, body liquids or other substances to or from the body and/or exchange energy in a non-hazardous way.

Class IIb: Non-invasive devices intended for modifying the biological or chemical composition of blood, other body liquids or other liquids intended for infusion into the body. Invasive devices intended for long-term use (invasive with respect to body orifices), for ionising radiation, to have a biological effect or to be wholly or mainly absorbed, to administer medicines by means of a delivery system, and most implantable devices. Active devices intended to control or monitor the performance of active therapeutic devices, for monitoring vital physiological parameters where variations could result in immediate danger to the patient, and for diagnosis or intended to administer and/or remove medicines, body liquids or other substances to or from the body in a potentially hazardous way.

Class III: Invasive devices that are surgical or implantable and intended for direct contact with the heart, central circulatory system, or central nervous system or intended for biological effect or undergo chemical change in the body. All devices incorporating, as an integral part, a human blood derivative.

2.3.1.3 China Classification

The National Medical Products Administration (NMPA), an agency within the China State Administration for Market Regulation, recognizes three classes of medical devices: Class I, Class II, and Class III. The classification is described under Article 4 of the Regulation on the Supervision and Administration of Medical Devices (2017 Revision), but in brief summary [30]:

Class I: Medical devices with low risks, whose safety and effectiveness can be ensured through routine administration.

Class II: Medical devices with moderate risks, which are strictly controlled and administered to ensure their safety and effectiveness.

Class III: Medical devices with relatively high risks, which are strictly controlled and administered through special measures to ensure their safety and effectiveness.

2.3.2 Interfaces

When it comes to data exchange over interfaces, especially between devices from different manufacturers, interoperability is key. Interoperability is the ability of a device to provide data so that it can be accessed from other devices and systems and then made available to other persons, near or far, via electronic means. On top of that, “semantic interoperability” enables target systems to interpret and analyze the data automatically. It allows information to be interpreted without ambiguity, especially across language and cultural boundaries. Interoperability between devices from different vendors can be achieved when all devices and systems involved implement a common set of standards.
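As a toy illustration of semantic interoperability, the sketch below resolves a coded observation against a nomenclature shared by sender and receiver, so the exchanged data is unambiguous without any out-of-band agreement. The numeric codes here are invented for the example; real systems would use a standard nomenclature such as ISO/IEEE 11073-10101:

```python
# Shared nomenclature and unit tables (illustrative codes, not real MDC codes).
NOMENCLATURE = {
    1001: "oxygen saturation",
    1002: "pulse rate",
}
UNITS = {
    544: "%",
    2720: "beats per minute",
}

def interpret(observation: dict) -> str:
    """Resolve a coded observation into a human-readable statement.

    Any receiver holding the same tables derives the same meaning,
    regardless of language or vendor."""
    quantity = NOMENCLATURE[observation["code"]]
    unit = UNITS[observation["unit_code"]]
    return f"{quantity}: {observation['value']} {unit}"
```

Because both sides interpret codes against the same tables, no free-text parsing or vendor-specific translation is needed.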

PHDs support wired and/or wireless interfaces to transfer data between connected devices and the PHD. These interfaces can be proprietary and manufacturer specific, based on a standard designed by an industry-driven body (e.g., the Bluetooth Special Interest Group), or based on an open standard designed by a Standards Development Organization (e.g., the IEEE Standards Association).

Copyright © 2018 IEEE. All rights reserved.

2.3.2.1 ISO/IEEE 11073 Personal Health Devices Standards

The ISO/IEEE 11073 PHD family of standards contributes to interoperability of personal health devices. It is grouped into three domains: disease management, health and fitness, and aging independently. A listing of these standards is provided in Section 2.3.4. These standards are transport agnostic, as long as the transport is reliable (see Figure 2-1). They are also intended to be as lightweight as possible, limiting their functionality to what was agreed to be absolutely necessary. Simple devices only need to implement a subset of all the functionality that these standards define. Data transfer is event driven, which reduces the need to periodically poll devices for new information.

Figure 2-1 ISO/IEEE 11073 PHD communication hierarchy

2.3.2.2 Bluetooth Special Interest Group Medical Device Specifications

The Bluetooth Special Interest Group (SIG) specifications define the communication building blocks that can be used to create interoperable devices. While the Bluetooth SIG specifications cover a large number of domains, the Medical Devices Working Group (MedWG) develops specifications for PHDs. The MedWG specifications work on top of the Bluetooth Core Specification, which defines the lower layers of the OSI model (see Figure 2-2). The MedWG specifications use the Generic Attribute Profile (GATT), which has a client-server model. The PHD is the server that makes data available to connected devices (i.e., clients) through request/response or publish/subscribe interactions. A listing of the MedWG specifications is provided in Section 2.3.4.
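The two GATT interaction styles can be sketched in Python. This is a simplified, hypothetical model of the client-server roles, not a real Bluetooth stack; the class names are illustrative, while "2A18" is the Bluetooth-assigned Glucose Measurement characteristic UUID:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GattServer:
    """Simplified PHD acting as a GATT server (illustrative only)."""
    characteristics: Dict[str, bytes] = field(default_factory=dict)
    subscribers: Dict[str, List[Callable[[bytes], None]]] = field(default_factory=dict)

    def read(self, uuid: str) -> bytes:
        # Request/response interaction: a client reads a characteristic value.
        return self.characteristics[uuid]

    def subscribe(self, uuid: str, callback: Callable[[bytes], None]) -> None:
        # Publish/subscribe interaction: a client enables notifications.
        self.subscribers.setdefault(uuid, []).append(callback)

    def update(self, uuid: str, value: bytes) -> None:
        # New measurement: store it and notify all subscribed clients.
        self.characteristics[uuid] = value
        for cb in self.subscribers.get(uuid, []):
            cb(value)

# Usage: a client subscribes to the Glucose Measurement characteristic.
phd = GattServer()
received = []
phd.subscribe("2A18", received.append)   # 0x2A18 = Glucose Measurement
phd.update("2A18", b"\x07\x42")          # device publishes a new reading
assert phd.read("2A18") == b"\x07\x42"   # request/response still works
assert received == [b"\x07\x42"]         # subscriber was notified
```

Real implementations exchange these values over the Attribute Protocol, but the client-server division of roles is the same.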

2.3.3 Device Type Examples

2.3.3.1 Physical Activity Monitor

A Physical Activity Monitor (PAM) device is intended to track a user’s physical activity during the day and may also monitor a user’s sleeping patterns. It typically measures body movement using an accelerometer and/or a heart rate sensor, and the activity is then classified into categories such as walking, running, and cycling. Based on a user’s characteristics, such as gender and age, it calculates calories burned. A PAM is typically associated with a service that helps users achieve goals such as becoming more active, reducing weight, or gaining insight into their activity level over time. These services may also be of a more clinical nature, where patients with chronic diseases are monitored using the PAM as one of the sensors.

Figure 2-2 Bluetooth PHD communication hierarchy

2.3.3.2 Pulse Oximeter

A Pulse Oximeter device is used to analyze the oxygen saturation of a patient’s arterial blood. This type of device is typically a fingertip-sized model used by patients with chronic conditions in which they may have trouble getting oxygen into their body, such as Chronic Obstructive Pulmonary Disease. Typically, in this usage, the patient checks their oxygenation a few times a day.

Pulse oximeters used in the personal health device space may also be used to study oxygenation during sleep, to detect sleep apnoea. In this case, a night’s worth of data is often stored within the device and retrieved later. Continuous monitoring of a patient using set thresholds to alert a caregiver when a metric has crossed a boundary may also be used in a personal health setting.

2.3.3.3 Sleep Apnoea Breathing Therapy Equipment

Sleep apnoea breathing therapy equipment (SABTE) is intended to alleviate the symptoms of patients who suffer from sleep apnoea by delivering therapeutic breathing pressure support to the patient. Sleep apnoea is a clinically significant intermittent absence of normal respiration during sleep, indicated by apnoea and hypopnea events.

2.3.3.4 Insulin Delivery Device

An insulin delivery device administers defined amounts of insulin to the human body. Insulin delivery devices are primarily used in the continuous subcutaneous insulin infusion (CSII) therapy of type 1 diabetes mellitus. This type of diabetes mellitus is characterized by loss of the insulin-producing beta cells of the islets of Langerhans in the pancreas. Insulin delivery devices typically inject insulin into the subcutaneous layer of fat tissue under the skin through an infusion set. Preferred sites for the cannula are the abdomen, lumbar region, thighs, buttocks, and upper arms.

2.3.3.5 Continuous Glucose Monitor

A continuous glucose monitor (CGM) device uses a tiny sensor inserted under the skin to check glucose levels in interstitial tissue fluid. The sensor stays in place for several days up to a few months and is then replaced. A transmitter sends information about glucose levels via radio waves from the sensor to a pager-like wireless monitor. For most CGMs, the end-user checks capillary blood samples with a glucose meter to calibrate the device. Users are advised to confirm glucose levels with a capillary blood sample using a glucose meter before making a change in treatment. The system aids in the detection of episodes of hyperglycemia and hypoglycemia, facilitating both acute and long-term therapy adjustments.

2.3.3.6 Classification & Standards

Table 2-3 shows examples of PHD device types, their classifications, and data exchange interfaces.

Table 2-3 – Examples of device type classifications and communication standards

Device type | US | EU | China | ISO/IEEE Standard | Bluetooth Specification
Physical Activity Monitor | Class II Exempt | Class IIa | Non-regulated | ISO/IEEE 11073-10441 | N/A
Pulse Oximeter | Class II | Class IIa | II | ISO/IEEE 11073-10404 | Pulse Oximeter Profile and Service
Sleep Apnoea Breathing Therapy Equipment | Class II | Class IIa | II | ISO/IEEE 11073-10424 | N/A
Insulin Delivery | Class II | Class IIb | II | ISO/IEEE 11073-10419 | Insulin Delivery Profile and Service
Continuous Glucose Monitor | Class III | Class IIb | III | ISO/IEEE 11073-10425 | Continuous Glucose Monitoring Profile and Service

2.3.4 Related Publications

The following regulations deal with medical device classification:

- U.S. Code, Title 21, Chapter 9, Subchapter V, Part A, Section 360c
  o Physical Activity Monitor [31]
  o Pulse Oximeter [32]
  o Sleep Apnoea Breathing Therapy Equipment [33]

  o Insulin Delivery Device [34]
  o Continuous Glucose Monitor [35]

- Council Directive 93/42/EEC, Article IX [36]

The following standards are the “core standards” of the IEEE 11073 family of standards:

- ISO/IEEE 11073-10101 Nomenclature [37]
- ISO/IEEE 11073-10201 Domain information model [2]
- ISO/IEEE 11073-20101 Application profiles [38]
- ISO/IEEE 11073-30200 Transport profile - Cable connected [39]
- ISO/IEEE 11073-30300 Transport profile - Infrared wireless [40]

The ISO/IEEE 11073 “core standards” were optimized for PHDs as described in the following IEEE 11073 PHD standards:

- ISO/IEEE 11073-00103 Overview [41]
- ISO/IEEE 11073-20601 Optimized exchange protocol [3]

The IEEE 11073 PHD “optimized standards” describe a toolbox that is further compiled for specific applications in so-called device specializations. The following are the IEEE 11073 PHD device specializations:

- ISO/IEEE 11073-10404 Pulse Oximeter [42]
- ISO/IEEE 11073-10419 Insulin Pump [43]
- ISO/IEEE 11073-10424 Sleep Apnoea Breathing Therapy Equipment (SABTE) [44]
- ISO/IEEE 11073-10425 Continuous Glucose Monitor (CGM) [45]
- ISO/IEEE 11073-10441 Cardiovascular Fitness and Activity Monitor [46]

The following specifications are the “core standards” of the Bluetooth SIG:

- Core Specification version 5.0 [47]
- Core Specification Supplement version 7 [48]
- Core Specification Addendum version 6 [49]

The following are the MedWG GATT specifications:

- Blood Pressure Profile [50] and Service [51]
- Continuous Glucose Monitoring Profile [52] and Service [53]
- Insulin Delivery Profile [54] and Service [55]
- Glucose Profile [56] and Service [57]
- Heart Rate Profile [58] and Service [59]
- Health Thermometer Profile [60] and Service [61]
- Pulse Oximeter Profile [62] and Service [63]
- Weight Scale Profile [64] and Service [65]

2.4 Cybersecurity

In the context of PHDs, cybersecurity is the process and capability of preventing unauthorized access, modification, misuse, denial of use, or the unauthorized use of information that is stored on, accessed from, or transferred to and from a PHD. Cybersecurity is a subset of information security. Information security (also known as InfoSec) ensures that both physical and digital data are protected from unauthorized access, use, disclosure, disruption, modification, inspection, recording, or destruction. Information security differs from cybersecurity in that InfoSec aims to keep data in any form secure, whereas cybersecurity protects only digital data [11], [20].

This whitepaper addresses the process part of cybersecurity with risk management (see Section 2.5), the capability part of cybersecurity with information security, both related to digital data (see Section 2.6), and the relationships to safety and usability (see Section 2.7). Our approach is described in detail in Section 2.10.

2.4.1 Importance of Cybersecurity

Most PHDs provide vital support for those living with chronic disease. In the recent past, some manufacturers have been impacted by cybersecurity attacks and threats (see Section 2.4.2).

The growth of the Internet of Things will increase the number of connected PHDs in the market creating systems of systems. Detecting and assessing vulnerabilities for a PHD alone requires careful analysis. Augmenting this with connectivity to another device, system or the Internet significantly increases the potential threats. To ensure that PHDs succeed with their primary function of improving the quality of life for those affected by disease, security of the PHD has to be considered.

2.4.2 Related Publications

The following regulations, standards, and guidance deal with PHD cybersecurity:

- FDA Guidance Content of Premarket Submissions for Management of Cybersecurity in Medical Devices [4]
- FDA Guidance Postmarket Management of Cybersecurity in Medical Devices [66]
- ISO 31000:2018 Risk management [67]

Examples of cybersecurity attacks and threats:

- Medical device vendor disables internet updates over hacking risk, FDA alerts [68]
- The $250 Biohack That’s Revolutionizing Life With Diabetes [69]
- FDA recalls 5,000 Abbott heart devices worldwide [70]
- DHS warns of security flaws in GE, Philips, Silex medical devices [71]
- Medical Devices are the Next Security Nightmare [72]
- Hospital survival guide for a world overflowing with unsecured medical devices [73]

2.5 Risk Management

Various regulations, standards, and guidelines address the subject of risk and risk management. In some cases, the application of specific standards may be mandated by regulations, contracts, or customer expectations. Rather than trying to define an appropriate risk-management process in this whitepaper, a manufacturer’s risk management process needs to comply with the regulations, standards, and contracts for its disease domain.

This whitepaper provides guidance for the risk management of interfaces using PHD communication standards. This risk management activity minimizes all reasonably foreseeable risks associated with PHD communication. In the PHD domain there are fitness devices with low information security concerns and disease management devices with higher information security concerns. Therefore, in this whitepaper the risk management process is based on the intended use cases for each domain, which represent a wide variance in which medical devices represent the upper limit. As such, the risk evaluation of a PHD with fewer information security concerns may use only a subset of the risk controls identified for medical devices.

The described risk management process is based on ISO 14971 and thus is divided into three major parts: risk analysis, risk evaluation, and risk control.

2.5.1 Risk Analysis

The risk analysis process clarifies the intended use and identifies characteristics related to the safety of the PHD as well as potential hazards. This is commonly achieved using concepts like threat modeling (see Section 2.8). It also estimates the risks for each hazardous situation, drawing on published standards, scientific technical data, usability tests, field data from similar devices already in use, and so on. A hazard that can be expected to occur with minimal additional effort and is known to cause severe harm will result in a high risk. On the other hand, a hazard that only occurs in complex scenarios that require detailed system knowledge and does not cause harm will result in a low risk.
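The qualitative estimation described above can be illustrated with a simple likelihood × severity scheme. This is an illustrative convention only; ISO 14971 leaves the actual scales and acceptability criteria to the manufacturer's risk management plan:

```python
# Illustrative 3x3 risk estimation scheme; the scale labels, weights,
# and thresholds are hypothetical, not mandated by ISO 14971.
LIKELIHOOD = {"rare": 1, "occasional": 2, "frequent": 3}
SEVERITY = {"negligible": 1, "serious": 2, "severe": 3}

def estimate_risk(likelihood: str, severity: str) -> str:
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "high"      # e.g., easy-to-exploit hazard causing severe harm
    if score >= 3:
        return "medium"
    return "low"           # e.g., complex attack with no real harm

# A hazard exploitable with minimal effort that causes severe harm -> high risk.
assert estimate_risk("frequent", "severe") == "high"
# A hazard needing deep system knowledge that causes no harm -> low risk.
assert estimate_risk("rare", "negligible") == "low"
```

The point of the sketch is the shape of the analysis (estimate likelihood and severity per hazardous situation, then map to a risk level), not the specific numbers.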

2.5.2 Risk Evaluation

Risk evaluation decides for each hazardous situation whether mitigation measures are required based on the criteria defined as part of the risk management plan. If the risk is deemed acceptable, risk reduction is not required. Otherwise a risk control has to be put in place to reduce unacceptable risks to an acceptable level.

2.5.3 Risk Control

When risk reduction is required, risk control activities are performed. There are several steps for risk control and reduction activities, including but not limited to:

a) Risk control option analysis: The manufacturer identifies measures that reduce the risk to an acceptable level through inherent safety by design, protective measures, or safety information.

b) Implementation of these risk-control measures identified from the risk control option analysis.

c) Residual risk evaluation: After the risk control measures are applied, any residual risk needs to be evaluated to determine whether it is now acceptable.

d) Risk/benefit analysis: If the residual risk is still not judged acceptable and further risk control is not practicable, then the manufacturer may review data and literature to determine if the benefits outweigh the residual risk.

e) Risk arising from risk-control measures: The effects from the risk-control measures need to be analyzed for the introduction of new hazards and whether the estimated risks for previously identified hazardous situations are affected by this introduction.

f) Completeness of risk control: The manufacturer minimizes the risks from all identified hazardous situations.

Once all the risk control activities are completed, the manufacturer evaluates the overall residual risk acceptability. If the overall residual risk is still not deemed acceptable, the manufacturer may measure the medical benefits of the intended use against the overall residual risk. If the medical benefits outweigh the overall residual risk, it can be judged acceptable.

2.5.4 Related Publications

The following regulations, standards, and guidance deal with risk management in the PHD domain:

Worldwide

- ISO 13485 Medical devices - Quality management systems [74]
- ISO 14971 Medical devices - Application of risk management to medical devices [75]
- IEC 61010-1 Safety requirements for electrical equipment for measurement, control and laboratory use - Part 1: General requirements [76]
- IEC 62304 Medical device software - Software life-cycle processes [77]
- IEC TR 80001-2-2 Guidance for the disclosure and communication of medical device security needs, risks and controls [78]

Europe

- In Vitro Diagnostic Medical Devices Directive, Council Directive 98/79/EC, Annex III Clause 3, “Results in Risk Analysis” [79]

- In Vitro Diagnostic Medical Devices Regulation, Regulation (EU) 2017/746, Annex II Clause 5, “Benefit-Risk Analysis and Risk Management” [80]

USA

- 21 CFR 820.30 Design Controls [81]
- NIST SP 800-30: Guide for Conducting Risk Assessments [82]
- NIST SP 800-39: Managing Information Security Risk [83]

2.6 Information Security

In connection with PHD communication, security is most frequently used to mean information security: it describes characteristics of information-processing and information-storing systems that maximize confidentiality (see Section 2.6.1), integrity (see Section 2.6.2), and availability (see Section 2.6.3). These three core principles of information security are called the CIA triad. Information security serves the protection from dangers and/or threats, the avoidance of damage, and the minimization of risks. The extended CIA triad additionally includes non-repudiation (see Section 2.6.4) as a principle.

2.6.1 Confidentiality

Confidentiality has been defined by the International Organization for Standardization in ISO/IEC 27002 as “ensuring that information is accessible only to those authorized to have access.” Minimizing disclosure of information to unauthorized individuals or systems is one cornerstone of information security. A confidentiality breach can take many forms, even if no information technology is involved, for example: eavesdropping on conversations of others, looking over the shoulder to read information, looking into secret documents, or a computer virus or Trojan horse that sends information to another person. In the context of PHDs, a confidentiality breach primarily means eavesdropping on information somewhere between the source (e.g., sensor) and the sink (e.g., PC, physician’s computer, hospital server). To enforce confidentiality, the information can be encrypted during transmission and storage, and authentication and/or authorization can be required before transmission.
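The principle of encrypting data in transit so an eavesdropper sees only ciphertext can be shown with a deliberately simplified sketch. This toy one-time-pad XOR is for illustration only; a real PHD should use a vetted cipher (e.g., AES-GCM) from an audited cryptographic library, never hand-rolled code:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad: the key must be truly random, as long as the
    # data, and never reused. Illustrative only -- not production crypto.
    return bytes(d ^ k for d, k in zip(data, key))

reading = b"glucose=6.1 mmol/L"               # hypothetical PHD measurement
key = secrets.token_bytes(len(reading))       # shared secret (illustrative)

ciphertext = xor_bytes(reading, key)          # what an eavesdropper sees
plaintext = xor_bytes(ciphertext, key)        # authorized receiver recovers it
assert plaintext == reading
```

The mechanism differs in real systems, but the confidentiality property is the same: without the key, the intercepted bytes reveal nothing about the measurement.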

2.6.2 Integrity

In information security, integrity means that data cannot be modified or deleted without authorization. Integrity is violated when information is changed by someone who is not authorized to do so. A security breach related to integrity might occur directly on the devices (e.g., because of a virus) or on the way from information source to recipient.

Authentication technologies help ensure that the original data is not altered or deleted during transfer. They also provide technological means to check whether the data comes from the right sender and not from someone who only pretends to be the sender. This is achieved, for example, through electronic signatures and certificates.
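The combined integrity-and-origin check described above can be sketched with a keyed hash (HMAC) from the Python standard library. The key and message contents are illustrative, and real deployments would provision keys securely rather than hard-coding them:

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # illustrative only; never hard-code real keys

def tag(message: bytes) -> bytes:
    # An HMAC binds the message to the shared key: only a holder of the
    # key can produce a tag that verifies, so the receiver can check both
    # integrity (not altered) and origin (from the right sender).
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"spo2=97"                      # hypothetical pulse oximeter reading
t = tag(msg)
assert verify(msg, t)                 # genuine message accepted
assert not verify(b"spo2=99", t)      # tampered message rejected
```

Digital signatures provide the same guarantees asymmetrically (and add non-repudiation), at the cost of heavier computation, which matters on constrained PHD hardware.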

2.6.3 Availability

Information security availability means the information is available when it is needed. The computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it have to be functioning correctly and reliably.

2.6.4 Non-repudiation

Non-repudiation means certainty about where received information comes from, where it is going, and who requested it. An example is the clear association of biometric values with a specific user/patient. Context information about who gathered the values (e.g., the nurse) and which device was used (e.g., device ID) also falls into this category and sometimes needs to be recorded and communicated in a way that no person or device can deny what their contribution was to the overall activity. This is achieved through electronic signatures and audit trails.
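A simple hash-chained audit trail illustrates how such contributions can be recorded so that later denial or silent alteration is detectable. This is an illustrative sketch; the field names are hypothetical, and production systems would additionally sign entries:

```python
import hashlib
import json

def append_entry(trail: list, who: str, device_id: str, action: str) -> None:
    # Each entry embeds the hash of the previous one, so any later
    # modification or deletion breaks the chain and is detectable.
    prev = trail[-1]["hash"] if trail else "0" * 64
    body = {"who": who, "device": device_id, "action": action, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    trail.append(body)

def chain_intact(trail: list) -> bool:
    prev = "0" * 64
    for entry in trail:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

trail = []
append_entry(trail, "nurse_a", "PHD-001", "glucose measurement taken")
append_entry(trail, "nurse_a", "PHD-001", "data uploaded")
assert chain_intact(trail)
trail[0]["action"] = "nothing happened"   # an after-the-fact denial attempt...
assert not chain_intact(trail)            # ...is detected
```

Hash chaining alone shows that the record was altered; pairing each entry with an electronic signature additionally proves who made it.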

2.6.5 Related Publications

The following regulations, standards, and guidance deal with information security in the PHD domain:

Worldwide

- ISO/IEC 27032 Information technology - Security techniques - Guidelines for cybersecurity [11]
- AAMI TIR57/Ed. 1, Principles for medical device information security risk management [84]
- IHE PCD MEM Cybersecurity Work Group Whitepaper - Medical Device Cyber Security – Best Practice Guide [85]

USA

- FDA Guidance Content of Premarket Submissions for Management of Cybersecurity in Medical Devices [4]
- FDA Guidance Postmarket Management of Cybersecurity in Medical Devices [66]
- NIST SP 800-39: Managing Information Security Risk [83]

2.7 Safety and Usability

Safety and usability play key roles as part of risk management (Section 2.5) and information security (Section 2.6). Risk management views the PHD holistically by considering the PHD, users, intended use, interfaces, applied security, and the environment to identify, describe, and reduce risks of the system. When viewing the PHD as a black box, the relationship between the PHD, security, safety, and usability is depicted in Figure 2-3 and is as follows: security is keeping what’s inside the box secure, safety is keeping what’s outside the box safe, and usability ensures interaction with what’s inside the box is as intended.

Figure 2-3 Security, safety, and usability relationship

2.7.1 Safety relationships

A paramount concern in the design of a PHD is safety during intended use. Safety is achieved by avoiding potential harm to the end-user, HCP, or operator, and their environment. Regulatory authorities approve the sale and distribution of medical devices based on evidence that the benefit of the regulated PHD outweighs the risks to safety. A PHD is designed and manufactured to be safe. The manufacturer takes reasonable measures to identify the risks inherent in the device. If the risks can be eliminated, the manufacturer should eliminate them. If the risks cannot be eliminated, the manufacturer should reduce them as far as possible and provide protection appropriate to those risks (e.g., alarms).

It is important to distinguish safety and information security. Both are of great importance in PHD design and intended use, and while these terms may be coupled, they are distinct (see Usability relationships). Safety is the protection of the end-user and the environment from the system. Information security is the protection of the system from the influence of an end-user and the environment, to ensure confidentiality, integrity, and availability. However, in order to maintain safety, one should also ensure security.

2.7.2 Usability relationships

A PHD is intended to focus on end-user and HCP needs in an effort to ease their day-to-day burden while improving their quality of life. This has been a challenge for many, particularly since the “technological revolution” entered the medical sector. When introducing a PHD, positive motivation and compliance are major issues. Even the most advanced technology cannot achieve benefits if potential end-users cannot see how they benefit.

Usability plays a crucial role in the design of a PHD. A PHD user interface (UI) that establishes effectiveness, efficiency, ease of end-user learning, and user satisfaction is considered highly usable [87]. For example, some end-users might be physically handicapped or visually impaired, and thus the system must conform with accessibility guidelines. An HCP UI that provides a clear and concise summary of measured vitals at a critical moment could be lifesaving. Fitness and wellness trainers, along with HCPs, generally focus on a very particular part of the data, so personalized interfaces and menus greatly improve usability. Before deploying a PHD system in the “real world,” various usability studies with representative target groups maximize consistent positive end-user and HCP engagement.

Usability is a tool that can identify the risks associated with using the PHD. However, it is very difficult to determine use errors until PHD use is simulated and observed. Usability studies can be conducted to assess intended use cases and determine whether there is any risk, for example, of harm to the user and their environment or impediment to the medical treatment. Controls can be added to the PHD to mitigate these potential risks such that they are eliminated or reduced to the extent possible. Regulatory bodies consider usability testing a valuable component of product development and recommend that manufacturers consider usability testing of a PHD as part of a robust design control [88].

Usability is also important from an information security point of view. Experience shows that a weak point in security is typically human misuse or human limitation. For example, a user may continuously forget to log off after completing their tasks, leaving the system open to attack through impersonation of an authorized user. By understanding the workflow of the system and the behaviors of the user, attacks like this can be blocked. The system should defend the user from potential attacks by understanding the user workflow and making the user aware when potential security risks exist. On the other hand, information security should not unreasonably hinder access to device data or its intended use.

2.7.3 Related Publications

The following regulations, standards, and guidance deal with safety and usability in the PHD domain:

- IEC 60601-1 Medical electrical equipment - Part 1: General requirements for basic safety and essential performance [86]
- IEC 61010-1 Safety requirements for electrical equipment for measurement, control and laboratory use - Part 1: General requirements [76]
- ISO 9241-210 Human-centred design for interactive systems [87]
- FDA Applying Human Factors and Usability Engineering to Medical Devices [88]
- ISO/IEC 62366-1 Application of usability engineering to medical devices [87]

2.8 Threat Modeling

Threat modeling is an approach to analyzing the security of a device, application, or system in a structured way such that vulnerabilities can be identified, enumerated, and prioritized. Threat modeling typically employs a systematic approach to identify attack vectors and the assets most desired by an attacker. This leads to decomposition of the system to look at each attack vector and asset individually and determine to which kinds of attacks they are vulnerable. From this, a list of vulnerabilities can be created for the system and ordered in terms of risk, potential to cause harm, or any other criteria deemed appropriate.

2.8.1 Common Approaches to Threat Modeling

There are various approaches to creating a threat model, ranging from making a list of known vulnerabilities to adopting a framework. The following is a list of approaches analyzed by this effort, including a pros-and-cons comparison. For additional details, please see Appendix A – Frameworks and Methodologies.

2.8.1.1 List Known Potential Vulnerabilities

One may attempt to list all the vulnerabilities that could affect a system. While it is impossible to list all potential vulnerabilities, one should concentrate on those vulnerabilities that could be exercised by known threats.

2.8.1.2 STRIDE

STRIDE is a classification scheme, useful for system decomposition, for characterizing known threats according to the kinds of exploits used by the attacker. The STRIDE acronym is formed from the first letter of each of the following threat categories: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. STRIDE does not include a scoring system.

2.8.1.3 DREAD

DREAD is a classification scheme for quantifying, comparing, and prioritizing the amount of risk presented by each evaluated threat. DREAD modeling influences the thinking behind setting the risk rating and is also used directly to sort the risks. The DREAD algorithm computes a risk value as the average of all five categories. The DREAD acronym is formed from the first letter of each of the following attributes: Damage Potential, Reproducibility, Exploitability, Affected Users, and Discoverability.
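The DREAD averaging can be shown directly. The ratings below are hypothetical, and the rating scale (commonly 0–10 or 1–10) varies between DREAD variants:

```python
def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> float:
    """DREAD risk value: the plain average of the five category ratings."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    return sum(ratings) / len(ratings)

# Hypothetical threat: eavesdropping on an unencrypted PHD radio link,
# rated on an illustrative 0-10 scale.
score = dread_score(damage=6, reproducibility=8, exploitability=5,
                    affected_users=7, discoverability=9)
assert score == 7.0   # (6 + 8 + 5 + 7 + 9) / 5
```

Threats can then be sorted by this value to decide which ones to mitigate first, which is precisely the prioritization role DREAD plays.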

2.8.1.4 CWSS

The Common Weakness Scoring System (CWSS) both identifies vulnerabilities and provides a scoring system to prioritize those vulnerabilities. It is a collaborative, community-based effort that focuses on analyzing software and reported bugs to determine the relative importance of the detected weaknesses.

2.8.1.5 Trike

Trike is a threat modeling framework with similarities to the STRIDE and DREAD threat modeling processes. However, Trike differs because it uses a risk-based approach with distinct implementation, threat, and risk models, instead of using the STRIDE/DREAD aggregated threat model (attacks, threats, and weaknesses).

2.8.1.6 OCTAVE

OCTAVE is a heavyweight risk methodology approach originating from Carnegie Mellon University’s Software Engineering Institute (SEI) in collaboration with CERT. OCTAVE focuses on organizational risk, not technical risk.

2.8.1.7 Attack-Defense Trees

An Attack-Defense Tree (ADTree) is a node-labeled rooted tree describing the measures an attacker might take to attack a system and the defenses that a defender can employ to protect the system.
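The evaluation of such a tree can be sketched as a recursive function. This is a simplified model of ADTree semantics (boolean feasibility only; real ADTree analyses also support refinement of defenses and quantitative attributes), and the example scenario is hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """Simplified ADTree node: leaves carry a feasibility flag, internal
    nodes combine children with AND/OR, and an attached countermeasure,
    if itself in place, defeats the node it protects."""
    name: str
    gate: str = "LEAF"                  # "AND", "OR", or "LEAF"
    feasible: bool = False              # leaves: can the attacker do this?
    children: List["Node"] = field(default_factory=list)
    defense: Optional["Node"] = None    # attached countermeasure, if any

def attack_succeeds(node: Node) -> bool:
    if node.gate == "LEAF":
        result = node.feasible
    elif node.gate == "AND":
        result = all(attack_succeeds(c) for c in node.children)
    else:  # "OR"
        result = any(attack_succeeds(c) for c in node.children)
    # An effective defense defeats an otherwise successful attack.
    if result and node.defense is not None and attack_succeeds(node.defense):
        return False
    return result

# Hypothetical PHD example: obtain readings by sniffing the radio link OR
# by stealing an unlocked device; link encryption defends the sniff branch.
sniff = Node("sniff radio link", feasible=True,
             defense=Node("link-layer encryption", feasible=True))
theft = Node("steal unlocked device", feasible=False)
root = Node("obtain health readings", gate="OR", children=[sniff, theft])
assert attack_succeeds(root) is False   # both branches are blocked
```

Flipping `theft.feasible` to `True` makes the root attack succeed again, which is how such trees expose the weakest undefended path.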

2.8.1.8 Comparison

Table 2-4 describes the pros and cons of the threat modeling frameworks described above.

Table 2-4 – Threat modeling framework pros and cons

List of known potential vulnerabilities
  Pros: Easy to conceptualize and train; multitude of tools (Excel, Word, Notepad, etc.)
  Cons: Requires domain expertise; discovery limited to the knowledge base of the user; no indication of completeness of results; no relationship between vulnerabilities exposed (i.e., chain attacks); no built-in scoring system; not repeatable; not auditable; not scalable

STRIDE
  Pros: Easy to conceptualize and train; tooling available; does not require domain expertise; repeatable process; auditable; scalable
  Cons: No direct support for embedded systems; no built-in scoring system; no relationship between vulnerabilities exposed (i.e., chain attacks)

DREAD
  Pros: Easy to conceptualize and train; built-in scoring system; follows best practices per NIST 800-30
  Cons: No tools available; no direct support for embedded systems; no relationship between vulnerabilities exposed (i.e., chain attacks); requires domain expertise; discovery limited to the knowledge base of the user; no indication of completeness of results; not repeatable; not auditable; not scalable; deprecated

CWSS
  Pros: Potential to automate detection of vulnerabilities; built-in scoring system; ability to include specific contextual details; tooling available; repeatable process; scalable
  Cons: Depends on software to analyze; difficult to use at the design phase; no relationship between vulnerabilities exposed (i.e., chain attacks)

Trike
  Pros: Extensive involvement of stakeholders; auditable; communication of risks to the entire business and in each stakeholder domain; tooling available; shows vulnerability relationships via attack trees
  Cons: Extremely complex; performance requires involvement with most of the business; extensive work effort; project appears to be abandoned in 2012

OCTAVE
  Pros: Extensive involvement of stakeholders; auditable; communication of risks to the entire business and in each stakeholder domain
  Cons: Large and complex (consists of 18 volumes); no direct support for embedded systems; performance requires involvement with most of the business; extensive work effort

Attack-Defense Trees
  Pros: Easy to conceptualize and train; shows vulnerability relationships via attack trees; can be applied to embedded systems; tooling available
  Cons: Requires domain expertise; discovery limited to the knowledge base of the user; no indication of completeness of results; not repeatable; not auditable; no involvement of stakeholders; highly technical report, not business friendly; no built-in scoring system; extensive work effort

2.8.2 Selected Approach

For this work, STRIDE was the selected approach for threat modeling because it is a repeatable, scalable, and auditable process with available tooling. Please see Appendix B – STRIDE for additional details related to this threat modeling framework. For tooling, the Microsoft Threat Modeling Tool (TMT) 2016 was used.

In the context of PHD communication and this work, each device type is modeled based on the use case description and focuses on data flow between processes and external actors. These models are generalized such that the model of a specific device type should be applicable to all devices of that type. The model identifies the threat surfaces of the device, which are used for system decomposition.

The Microsoft TMT uses data flow diagrams (DFDs) to model the system. Data flow diagrams are typically used to graphically represent a system, but a different representation (such as a UML or SysML diagram) can be used as long as the same basic method is applied: decompose the system into parts and show that each part is not susceptible to relevant threats.


The DFDs used for threat modeling consist of four elements: data flows, data stores, processes, and interactors. Data flows represent data in motion over system interfaces. Data stores represent data at rest within the system. Processes create, read, update, or delete data and are typically applications run within the system. Interactors are the end points of the system (e.g., end-user) and generally are providers and consumers that are outside the scope of the system. Trust boundaries represent the borders between trusted and untrusted elements of the DFD.
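The four DFD element types and the trust-boundary rule can be sketched as simple data structures. This is a minimal illustrative model (the class and element names are hypothetical, not part of the Microsoft TMT), showing how a data flow whose endpoints sit in different trust zones is flagged as untrusted:

```python
from dataclasses import dataclass

# Hypothetical, minimal DFD model: processes, data stores, and interactors,
# each assigned to a trust zone; data flows connect them.
@dataclass(frozen=True)
class Element:
    name: str
    kind: str        # "process", "data_store", or "interactor"
    trust_zone: str  # e.g., "device" (inside the PHD) or "external"

@dataclass(frozen=True)
class DataFlow:
    name: str
    source: Element
    target: Element

    def crosses_trust_boundary(self) -> bool:
        # A flow between different trust zones crosses a trust boundary
        # and must therefore be treated as untrusted.
        return self.source.trust_zone != self.target.trust_zone

# Example: a PHD process reading internal storage and reporting outward.
controller = Element("Delivery Controller", "process", "device")
storage = Element("Therapy Settings", "data_store", "device")
gateway = Element("Mobile Gateway", "interactor", "external")

flows = [
    DataFlow("read settings", storage, controller),       # internal, trusted
    DataFlow("report observations", controller, gateway), # crosses boundary
]

untrusted = [f.name for f in flows if f.crosses_trust_boundary()]
print(untrusted)  # ['report observations']
```

Only the flow leaving the device's trust zone generates vulnerabilities for assessment; internal flows remain trusted, matching the scoping described in Section 3.2.2.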

2.8.3 Related Publications

The following guidance deals with threat modeling:

NIST Framework for Improving Critical Infrastructure Cybersecurity [87]
NIST Special Publication 1800-8 – Securing Wireless Infusion Pumps In Healthcare Delivery Organizations [91]
MSDN Magazine – Uncover Security Design Flaws Using The STRIDE Approach [92]
Threat Modeling – Design for Security, Wiley [93]
OWASP Threat Risk Modeling [94]
Foundations of Attack-Defense Trees [95]
Trike Methodology [96]

2.9 Vulnerability Assessment

Vulnerability assessment is the process of identifying, quantifying, and mitigating the vulnerabilities in a system.

Identifying the vulnerabilities of a system can be achieved by cataloging the assets and capabilities of the system or by decomposing a model of the system. A variety of frameworks and methodologies exist, some of which are described in Appendix A – Frameworks and Methodologies. Identification of vulnerabilities can be conducted at any stage of the system lifecycle. However, identifying vulnerabilities early in the lifecycle, especially at the design stage, will assist in improving the security of a system.

Quantifying the vulnerabilities is typically achieved via a scoring system, which provides a rank and priority to each of the vulnerabilities based on specific criteria. While various scoring systems exist, the Common Vulnerability Scoring System (CVSS) has become a widely accepted method for quantifying the severity of vulnerabilities. More details are provided in Section 2.9.1.1.

Mitigation of a cybersecurity vulnerability is achieved by introducing an information security control into the system designed specifically to strengthen the system against an attack using that vulnerability. Often the vulnerabilities are not removed but reduced to an acceptable level of risk.

2.9.1 Common Approaches to Quantifying Vulnerabilities

There are various approaches to quantifying vulnerabilities identified from a system threat model. Typically, these approaches provide a scoring system to prioritize the vulnerabilities, such as DREAD, described in Appendix A – Frameworks and Methodologies. The following approaches were analyzed by this effort, including a comparison.


2.9.1.1 CVSS

The Common Vulnerability Scoring System (CVSS) is an open industry standard for normalized scoring of vulnerabilities across disparate hardware and software platforms. Assigning scores to vulnerabilities allows prioritization to guide which vulnerabilities should be mitigated first.

The CVSS assessment is composed of metrics for three areas of concern:

Base Metrics: Represent the intrinsic and fundamental characteristics of a vulnerability that are constant over time and user environments.

Temporal Metrics: Represent the characteristics of a vulnerability that change over time but not among user environments.

Environmental Metrics: Represent the characteristics of a vulnerability that are relevant and unique to a particular user’s environment.
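As an illustration, the base metrics combine an exploitability submetric (access vector, access complexity, authentication) with a confidentiality/integrity/availability impact submetric. The sketch below implements the CVSS v2.0 base equation with the standard v2.0 metric weights:

```python
# CVSS v2.0 base score, per the published v2.0 base equation.
# Metric weights are the standard values from the v2.0 specification.
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}     # local/adjacent/network
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}  # high/medium/low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}    # multiple/single/none
IMPACT_WEIGHT = {"N": 0.0, "P": 0.275, "C": 0.660}     # none/partial/complete

def cvss2_base_score(av, ac, au, c, i, a):
    # Impact and exploitability submetrics, then the base equation.
    impact = 10.41 * (1 - (1 - IMPACT_WEIGHT[c])
                        * (1 - IMPACT_WEIGHT[i])
                        * (1 - IMPACT_WEIGHT[a]))
    exploitability = (20 * ACCESS_VECTOR[av]
                         * ACCESS_COMPLEXITY[ac]
                         * AUTHENTICATION[au])
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# AV:N/AC:L/Au:N/C:P/I:P/A:P -- a common remotely exploitable profile.
print(cvss2_base_score("N", "L", "N", "P", "P", "P"))  # 7.5
```

Temporal and environmental metrics then adjust this base score; the eCVSS variant described next modifies those groups rather than the base equation.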

2.9.1.2 eCVSS

The original version of CVSS was designed to address software only systems and create scores after the software systems were in the field. For this effort, changes to CVSS are needed to support physical medical devices and create scores at design time to guide the development of systems.

This resulted in the embedded Common Vulnerability Scoring System (eCVSS) being created as a slightly modified branch of CVSS 2.0. This is not an effort of the Forum of Incident Response and Security Teams (FIRST); it was instead proposed by members of this work.

The eCVSS modifications to CVSS are as follows:

The Temporal Group was effectively removed by forcing the three attributes to a neutral value, since the scoring is conducted at design time.

The three “Requirement” attributes in the Environmental group (i.e., Confidentiality, Integrity, Availability) were recognized to be system-wide attributes. These are only set once for the system and inform all the identified vulnerabilities.

The Target Distribution attribute was removed, as it refers to distribution of the system and this scoring is conducted at design time. Instead, a new Awareness attribute replaced it.

2.9.1.3 CWSS

Similar to CVSS, the Common Weakness Scoring System (CWSS) provides a standard scoring system to prioritize software weaknesses in a consistent, flexible, open manner. Unlike CVSS, CWSS is a collaborative, community-based effort addressing needs from government, academia, and industry. CWSS is distinct from CVSS, but not a competitor, and both solutions can be leveraged together.

The CWSS assessment is organized into three metric groups:

Base Finding Metrics: Capture the inherent risk of the weakness, confidence in the accuracy of the finding, and strength of controls.

Attack Surface Metrics: Represent the barriers that an attacker must overcome in order to exploit the weakness.

Environment Metrics: Represent the characteristics of the weakness that are specific to an environment or operational context.


2.9.1.4 Comparison

While CVSS and eCVSS are only scoring systems, CWSS can both detect weaknesses and score them. CWSS depends on software being available and is thus difficult to apply at the design phase of a system. By contrast, both CVSS and eCVSS can be used to prioritize vulnerabilities independent of how they were identified.

As discussed previously, and similar to CWSS, CVSS was designed to address software-only systems and to create scores after the software systems were in the field. The modifications made to CVSS to better support physical medical devices and to create scores at design time that guide system development brought forth eCVSS.

2.9.2 Selected Approach

For this work, eCVSS was the selected approach for quantifying vulnerabilities identified by the system threat model. As stated in Section 2.9.1.2, eCVSS includes modifications specific to this work, lacking in other scoring systems, that allow scoring of embedded devices at design time.

2.9.3 Related Publications

The following guidance deals with vulnerability assessment:

Common Vulnerability Scoring System [97]
Common Weakness Scoring System [99]

2.10 Mitigation

In the context of PHD cybersecurity, mitigation is the act of introducing security controls into a device or system to prevent the attacker from causing harm or to reduce the impact of an attack. Failure to maintain PHD cybersecurity can result in compromised device functionality; loss of (medical or personal) data, i.e., loss of availability; loss of integrity; or exposure of other connected devices or networks to security threats. This in turn may have the potential to result in patient illness, injury, or death.

System vulnerabilities identify the areas subject to potential threats and determine the need for and type of mitigation. Specific frameworks used to decompose a system and quantify vulnerabilities are designed to identify threats based on common security properties. Generally, the protection of these common security properties uses general mitigation techniques, which are then realized by specific security controls.

2.10.1 STRIDE Category and Security Property

STRIDE is a classification scheme, useful for system decomposition typically using a system threat model, for characterizing known threats according to the kinds of exploit that are used by the attacker. See Section 2.8 and Appendix B – STRIDE for additional information related to threat modeling and STRIDE.

The core principles of information security are called the CIA triad, as described in Section 2.6. The CIA triad can be extended to include authentication, authorization, and non-repudiation.

The STRIDE categories were defined based on the common security properties. Table 2-5 provides the mapping between the STRIDE category and the underlying security property.


Table 2-5 – STRIDE threat category and security property

Threat Category – Security Property
Spoofing – Authentication
Tampering – Integrity
Repudiation – Non-repudiation
Information Disclosure – Confidentiality
Denial of Service – Availability
Elevation of Privilege – Authorization
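This one-to-one mapping lends itself to a direct encoding; a minimal sketch (the function name is illustrative):

```python
# STRIDE threat category -> security property it undermines (Table 2-5).
STRIDE_TO_PROPERTY = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def property_at_risk(threat_category: str) -> str:
    """Return the security property undermined by a STRIDE threat."""
    return STRIDE_TO_PROPERTY[threat_category]

print(property_at_risk("Tampering"))  # Integrity
```

Such a lookup is how a tool can route each vulnerability found during system decomposition to the mitigation techniques that protect the corresponding security property.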

2.10.2 Design Principles

The hostile environment that can result from a connected PHD requires the development of design principles to produce robust systems. Manufacturers are responsible for assuring PHD cybersecurity and maintaining the intended device functionality and safety. Thus, manufacturers should address cybersecurity during the design and development of the PHD, as this can result in more robust and efficient mitigation of user risks. This can be achieved by establishing cybersecurity design principles for their device that address the following:

Identification of assets, threats, and vulnerabilities.
Assessment of the impact of threats and vulnerabilities on device functionality and end-users/patients.
Assessment of the likelihood of a threat and of a vulnerability being exploited.
Determination of risk levels and suitable mitigation strategies.
Assessment of residual risk and risk acceptance criteria.

2.10.2.1 Secure by Design and Secure by Default Principles

Secure by Design means that the software has been designed from the ground up to be secure. Under Secure by Design principles, manufacturers may assume malicious practices, and take care to minimize the impact when an attempt is made to exploit a system.

Secure by Default is the concept of designing a minimal required set of functionality with a secure configuration. This includes but is not limited to:

Least privilege: All components and users operate with the fewest possible permissions.

Defense in depth: Design does not rely on a single threat mitigation solution alone for protection; rather, layers of protection are implemented.

Secure default settings: Based on the known attack surfaces for the system, the design minimizes the attack surfaces in the default configuration.

Avoidance of insecure operating system changes: Applications do not make or require any default changes to the operating system or security settings that reduce security for the host computer without consideration of possible risks.

Services off by default: If a feature of a system is rarely used, that feature is deactivated by default.

2.10.2.2 Privacy by Design and Privacy by Default Principles

Privacy by Design means that privacy and data protection are embedded throughout the entire system lifecycle, from the early design stage through deployment, use, and ultimate disposal. This includes but is not limited to:

28

Copyright © 2018 IEEE. All rights reserved.

1053

1054

105510561057105810591060

106110621063106410651066

1067

106810691070

10711072

1073107410751076107710781079108010811082

1083

108410851086

Provide notice of privacy practices to users: Provide appropriate notice to users about data that is collected, stored, or shared so that users can make informed decisions about how their personal information is used and disclosed.

Do not Store Secrets: Collect the minimum amount of data that is required for a particular purpose, and use the least sensitive form of that data.

Protect Secrets & Secret Data; De-Identification: Encrypt sensitive data in transfer, limit access to stored data, and ensure that data usage complies with the system’s intended use. If encryption is not possible, anonymize patient data (e.g., log and trace files).

Privacy by Default means that privacy and data protection are embedded as default configuration settings in a system. This includes but is not limited to:

Least Privilege: Ship with secure default privacy settings and prevent unauthorized access through technical controls.

Do not Store Secrets: Process and store only minimum necessary data. Retain data for the shortest possible time.

Protect Secrets & Secret Data: Protect any sensitive data at rest with access controls and encryption.

2.10.2.3 Ensure Robust Interface Design

Ensure robust interface design means that the system maintains the ability to function as intended in a hostile operating context. This includes but is not limited to:

Input Validation; Input Sanitization: Protect against input tampering (e.g., fuzzing, SQL injection, a malicious SD card).

Message Authentication Code; Encryption: Require all wireless communication interfaces to be robust against the occurrence of eavesdropping, injection and replay attacks.

Protect Secrets & Secret Data: Employ protection of technical secrets, keeping any objects safe that are used to secure data through such mechanisms as encryption keys, passwords, and tokens.
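For example, a message authentication code combined with a monotonically increasing counter is one common way to resist injection and replay on a wireless link. This sketch uses Python's standard hmac module; the frame layout, key handling, and function names are illustrative assumptions, not a prescribed PHD protocol:

```python
import hmac
import hashlib
import struct

KEY = b"shared-device-key"  # illustrative only; provision real keys securely

def protect(counter: int, payload: bytes) -> bytes:
    # Frame = 8-byte big-endian counter || payload || HMAC-SHA256 tag.
    header = struct.pack(">Q", counter)
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify(frame: bytes, last_counter: int):
    header, payload, tag = frame[:8], frame[8:-32], frame[-32:]
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    # Constant-time comparison defeats timing side channels on the tag.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered or injected frame")
    counter = struct.unpack(">Q", header)[0]
    if counter <= last_counter:
        raise ValueError("replayed frame")  # counter must strictly increase
    return counter, payload

frame = protect(1, b"glucose=102 mg/dL")
counter, payload = verify(frame, last_counter=0)  # accepted
try:
    verify(frame, last_counter=counter)  # re-sending the same frame
except ValueError as err:
    print(err)  # replayed frame
```

The MAC covers both the counter and the payload, so an attacker can neither alter the data nor roll the counter back without invalidating the tag; confidentiality would additionally require encryption, which this sketch omits.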

2.10.2.4 Limit Access to Trusted Users Only

Limit access to trusted users only means the system requires authentication and restricts requests to authorized functions. This includes but is not limited to:

Authentication: Limit access to devices through the authentication of users (e.g., user ID and password, smartcard, biometric). Use appropriate authentication (e.g., multi-factor authentication to permit privileged device access to system administrators, service technicians, maintenance personnel). Require authentication or other appropriate controls before permitting software or firmware updates, including those affecting the operating system, applications, and anti-malware;

Quality of Service: Use automatic timed methods to terminate sessions within the system where appropriate for the use environment;

Authorization; Least Privilege: Where appropriate, employ a layered authorization model by differentiating privileges based on the user role (e.g., caregiver, system administrator) or device role;

Do not Store Secrets; Protect Secrets & Secret Data: Strengthen password protection by avoiding “hardcoded” passwords or common words (i.e., passwords which are the same for each device, difficult to change, and vulnerable to public disclosure), and limit public access to passwords used for privileged device access;

Physical Tamper Resistance; Physical Tamper Evidence: Where appropriate, provide physical locks on devices and their communication ports to minimize tampering.

2.10.2.5 Ensure Trusted Content

Ensure trusted content means the system employs security measures to determine the integrity and origin of the content it provides to the user. This includes but is not limited to:

Message Authentication Code; Authentication; Digital Signature: Restrict software or firmware updates to authenticated code. One authentication method manufacturers may consider is code signature verification;

Systematic Procedures; Authorization: Use systematic procedures for authorized users to download version-identifiable software and firmware from the manufacturer;

Encryption; Message Authentication Code; Digital Signature: Ensure capability of secure data transfer to and from the device, and when appropriate, use methods for encryption.

2.10.3 Mapping of Mitigation Categories, Security Capabilities, Mitigation Techniques, and Design Principles

Each security property has primary mitigation techniques to address the vulnerabilities the STRIDE threat is taking advantage of. For example, OWASP has defined primary mitigation techniques for the STRIDE threat categories. Table 2-6 provides a list of mitigations grouped into the following categories defined by the NIST Cybersecurity Framework:

Identify: Process of recognizing the attributes that identify the object. Within the NIST Cybersecurity Framework, the identify category is intended to limit access to trusted users only and ensure integrity of trusted content.

Protect: The ability to limit or contain the impact of a potential cybersecurity event.
  o Prevent: Measures that avoid, preclude or limit the impact of a cybersecurity event.
  o Limit: Measures intended to reduce the impact of a cybersecurity event.

Detect: Security controls intended to detect a cybersecurity event.

Respond: Appropriate activities to take action regarding a detected cybersecurity event.

Recover: Appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.

Also provided in Table 2-6 is a mapping to ISO 80001-2-2 security capabilities. The security capabilities are broad categories of technical, administrative or organizational controls to manage risks to confidentiality, integrity, availability and accountability of data and systems. The capabilities are intended to support health delivery organizations, PHD manufacturers and information technology vendors. Among the 19 security capabilities described in [78], the “Third-party components in product lifecycle roadmap” is not mapped in Table 2-6 since it is not related to the interfaces to and from the PHD.

Table 2-6 – Mitigation categories, security capabilities, mitigation techniques, and design principles
(Mitigation categories are based on the NIST Framework; security capabilities are based on ISO/IEC 80001-2-2.)

Identify
  Security Capability: Node Authentication; Personal Authentication
  Mitigation Technique & Design Principle: Authentication; Digital Signatures

Protect – Prevent
  Security Capability: Authorization; Health Data De-identification; Health Data Storage and Confidentiality; Health Data Integrity and Authenticity; Physical Locks on Devices; Automatic Logoff; Configuration of security features
  Mitigation Technique & Design Principle: Authorization; De-Identification; Do not Store Secrets; Encryption; Filtering; Message Authentication Code; Physical Tamper Resistant; Protect Secrets & Secret Data

Protect – Limit
  Security Capability: Software and Application Hardening; Security Guidelines
  Mitigation Technique & Design Principle: Input Sanitization; Input Validation; Quality of Service; Least Privilege; Throttling

Detect
  Security Capability: Audit; Physical Locks on Devices
  Mitigation Technique & Design Principle: Audit Trail; Physical Tamper Evidence

Respond
  Security Capability: Malware Detection and Protection; Emergency Access
  Mitigation Technique & Design Principle: End-User Signalization; Invalidate Compromised Security

Recover
  Security Capability: Data Backup and Disaster Recovery; Cybersecurity product updates
  Mitigation Technique & Design Principle: Re-Establish Security

2.10.4 Related Publications

The following guidance deals with mitigations:

NIST Framework for Improving Critical Infrastructure Cybersecurity [87]
ISO/IEC 80001-2-2 Application of risk management for IT-networks incorporating medical devices [78]
FDA – Content of Premarket Submissions for Management of Cybersecurity in Medical Devices [4]
Threat Modeling – Design for Security, Wiley [93]

3 Methods

3.1 Device Types

This work analyzes five device types, representing a spectrum from non-regulated to the highest regulated medical device classification described in Section 2.3.1.2. The chosen device types include the following, with US/EU/China classification, respectively:

Physical activity monitor representing Class II exempt/IIa/non-regulated medical device.


Pulse oximeter representing a Class II/IIa/II medical device.
Sleep apnoea breathing therapy equipment representing a Class II/IIa/II medical device.
Insulin delivery device representing a Class II/IIb/II medical device.
Implantable continuous glucose monitor representing a Class III/IIb/III medical device.

3.2 Iterative Vulnerability Assessment

The device type analysis is an iterative approach to vulnerability assessment, which includes the following steps and is depicted in Figure 3-4:

1. System context: Represented by a device use case description.
2. System decomposition: Modeling of the system using a data flow diagram that is analyzed using STRIDE to generate a list of vulnerabilities.
3. Scoring (pre-mitigation): Quantifying each identified vulnerability using the eCVSS scoring system.
4. Mitigation: Mitigating the high-risk vulnerabilities.
5. Scoring (post-mitigation): Quantifying the mitigated vulnerabilities using the eCVSS scoring system to determine if the risk was reduced to an acceptable level.
6. Repeat from step 4 until all vulnerabilities have been reduced to an acceptable level of risk.

Figure 3-4 Vulnerability assessment workflow

For each device type, this work completes steps 1-6: determining the pre-mitigation vulnerabilities of a device with no information security controls, identifying common vulnerabilities across all device types, defining a set of mitigations based on the identified vulnerabilities, and producing the post-mitigation assessment.
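The mitigate-and-rescore loop (steps 4-6) can be sketched as follows. The vulnerability names, scores, acceptance threshold, and the assumption that each mitigation pass halves the residual score are all illustrative, not values from this work:

```python
# Illustrative mitigate-and-rescore loop for steps 4-6 of the workflow.
ACCEPTABLE = 4.0  # hypothetical eCVSS acceptance threshold

def mitigate(score: float) -> float:
    # Stand-in for applying a security control and rescoring with eCVSS;
    # here each mitigation pass simply halves the residual score.
    return round(score / 2, 1)

def reduce_to_acceptable(scores: dict) -> dict:
    # Step 4: mitigate high-risk vulnerabilities. Step 5: rescore.
    # Step 6: repeat until every residual risk is acceptable.
    while any(s > ACCEPTABLE for s in scores.values()):
        scores = {name: (mitigate(s) if s > ACCEPTABLE else s)
                  for name, s in scores.items()}
    return scores

pre_mitigation = {"spoofed sensor": 8.2, "replayed command": 9.4, "log leak": 3.1}
print(reduce_to_acceptable(pre_mitigation))
```

Already-acceptable vulnerabilities pass through unchanged, while high-risk ones are revisited until their post-mitigation score falls below the threshold, mirroring the loop in Figure 3-4.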

3.2.1 System Context

System context is step 1 in the iterative vulnerability assessment. Each device type's system context is provided in a use case description, which includes an introduction to and the system context of the device. This document contains the name, a brief description, a block diagram, a list of intended actors, a list of assets, and a mapping between intended actors and assets, along with any relevant references.

3.2.1.1 Actors

Personal health devices may involve up to fourteen actor roles in seven categories; an individual may take on one or more of these roles when interacting with a PHD (see Figure 3-5).

1. Manufacturer: Trained individuals or groups working for an enterprise (e.g., legally responsible distributing company, supplier) which handle the complete systems' life-cycle, or participate in a part of it, from conception to development to maintenance to end of life.

a. R&D Engineer: Plans, designs, specifies, develops and/or tests the system components as part of research and development (R&D).

b. Supplier: Builds the system components out of raw materials and/or sub-components in a factory.

c. Technical Support: Helps customers with complaints about system components and creates a bridge to R&D (e.g., service and investigation).

d. Seller: Demonstrates the system to individuals (e.g., patient, payer) who decide or have influence over whether the system is used or not.

2. Operator: Trained individuals or groups working for an enterprise (e.g., clinic, medical practice, diabetes center) supporting the work of the Health Care Provider by managing the equipment.

a. IT Network Professional: Responsible for the enterprise network and computing facilities.

b. IT Security Professional: Responsible for securing the enterprise network and computing facilities.

c. Biomedical Engineer: Responsible for configuring, testing and maintaining the system components owned by or used in the perimeter of the enterprise.

3. Business User: Trained individuals or groups working for an enterprise (e.g., health insurance company, governmental organization, health care supply store) participating in the supply chain.

a. Payer: Pays for the system components and services. Depending on the local rules this actor decides or is part of the decision if the system is used or not.

b. Distributor: Brings the system components from the manufacturer to the patient. Depending on the local rules this actor decides or is part of the decision if the system is used or not.

4. Health Care Provider (HCP): Trained individuals or groups working for an enterprise (e.g., clinic, medical practice, diabetes center) interacting with the patient and operating the system components for demonstration purposes or to adapt the therapy.

a. Counselor: Typically looks after the patient on a short-cycle basis and helps them to manage their disease via training and consulting (e.g., Diabetes Nurse Educator). Depending on the local rules this actor decides or is part of the decision if the system is used or not.

b. Nurse: Supports the physician and helps the patient during their stay in the HCP perimeter. Depending on the local rules this actor decides or is part of the decision if the system is used or not.

c. Physician: Diagnoses disease and sets up the initial therapy. Authorizes systems which are subject to prescription. Once therapy is running, typically looks after the patient on a long-cycle basis in order to adapt the therapy. Depending on the local rules this actor decides or is part of the decision if the system is used or not.

5. End-User: Individuals that typically use the system.

a. Patient: Uses the system components in order to treat the disease (e.g., person with diabetes). Patients should generally have elementary knowledge about the disease and knowledge about personal hygiene. Depending on the local rules this actor decides or is part of the decision if the system is used or not.

b. Caregiver: Cares for the patient who is sick or disabled. Children, elderly persons, or handicapped patients may need assistance provided by a reliable caregiver (e.g., parents). These caregivers need to have knowledge equivalent to the level required for patients.

[Figure 3-5 is a UML use-case style diagram of the actors. It groups the roles listed above under the Manufacturer, Operator, Business User, Health Care Professional (HCP), and End User categories, adds the Third-party User (e.g., Patient Visitor) and Malicious Agent (e.g., Hacker, Attacker) categories, and marks which actors are bad actors or potentially bad actors.]

Figure 3-5 Actors

3.2.1.2 Assets

For the intended actors, the PHD controls, stores, and transmits various assets (i.e., the data asset inventory). However, these assets may also be of interest to a bad actor. The assets of interest are listed in Table 3-7.

Table 3-7 – Assets

Credentials: Login credentials such as usernames and passwords, tokens, PINs, wireless secure codes, etc.

Therapy data: Therapy-relevant data such as treatment settings, measurements, etc.

Device data: Non-therapy-relevant data (e.g., language selection).

Logs: History of actions executed by the PHD, which provides support to the manufacturer, HCP, etc.

Indication: Information from a PHD telling the user that the PHD needs attention; for example status, reminder, error, warning, and maintenance.

Device control: Control over the device functionalities.

Firmware/Software Application: The application running on the device.

Intellectual Property: The intellectual property within the medical device.

The following assets are not in scope:

Protected Health Information (PHI): Protected health information includes any information about health status, provision of health care, or payment for health care that can be linked to a specific individual. Note: PHD communication standards do not define storage and transmission of PHI, which falls under privacy regulations. Instead, data is only linked to an anonymous System-ID.

Personal Health Records (PHR): Personal health records are electronic records with individually identifiable health information that can typically be drawn from multiple sources and that is managed, shared, and controlled primarily by the individual. Note: PHD communication standards do not define storage and transmission of PHR, which falls under privacy regulations. Instead, data is only linked to an anonymous System-ID.

3.2.1.3 Mapping Actors to Assets

The mapping between the actors and assets uses the four basic actions one can conduct on an asset: Create, Read, Update, and Delete, also known as CRUD. The mapping describes intended actions that a specific actor would typically perform. For example, the manufacturer may "Create" device information (e.g., serial number, regional configurations, etc.) while the health care professional or end-user may "Read" the device information.
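A CRUD mapping like this can be represented as a simple permission matrix. The actors, assets, and granted actions below are illustrative, following the device-information example, and are not the full mapping from this work:

```python
# Hypothetical actor/asset CRUD matrix, following the device-information
# example: the manufacturer creates it, the HCP and end-user read it.
CRUD = {
    ("Manufacturer", "Device data"): {"Create", "Read", "Update"},
    ("HCP", "Device data"): {"Read"},
    ("Patient", "Device data"): {"Read", "Update"},
    ("Patient", "Therapy data"): {"Create", "Read"},
}

def is_intended(actor: str, asset: str, action: str) -> bool:
    """True if the action is an intended one for this actor on this asset."""
    return action in CRUD.get((actor, asset), set())

print(is_intended("HCP", "Device data", "Read"))    # True
print(is_intended("HCP", "Device data", "Delete"))  # False
```

Any actor/asset/action combination absent from the matrix is, by definition, an unintended action, which is exactly the kind of access a threat model should flag.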

3.2.2 System Decomposition

System decomposition is step 2 in the iterative vulnerability assessment. The data flow of each device type is modeled and threat surfaces are identified. The model of the data flow then undergoes a system decomposition to create a list of vulnerabilities.

3.2.2.1 System Boundaries

As part of modeling the device type, boundaries are used to define areas of inherent trust. A trust boundary is depicted as a dotted box; when used in a model, it indicates that neither side of the boundary trusts the other. As such, when a data flow crosses a trust boundary, it becomes an untrusted data flow and generates applicable vulnerabilities. The PHD and connected device use trust boundaries to show contained processes, actuators, sensors, data flows, and data stores.
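The trust-boundary rule described above can be sketched minimally, assuming each DFD element is tagged with the boundary it belongs to (element and boundary names are illustrative):

```python
from dataclasses import dataclass

# Illustrative sketch only: element and boundary names are hypothetical.
@dataclass(frozen=True)
class Element:
    name: str
    boundary: str  # trust boundary this element belongs to

def is_untrusted(src: Element, dst: Element) -> bool:
    """A data flow crossing a trust boundary is untrusted and generates
    applicable vulnerabilities; flows inside one boundary are trusted."""
    return src.boundary != dst.boundary

phd_ctrl = Element("PHD controller", "PHD")
sensor = Element("PHD sensor", "PHD")
cd_ctrl = Element("Connected device controller", "Connected device")
```

Under this rule the PHD-controller-to-sensor flow stays trusted, while the PHD-controller-to-connected-device flow is untrusted and enters the vulnerability list.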

3.2.2.2 Threat Model

Threat modeling of a device type was realized by creating a DFD, which identifies the external actors, system processes, interactors, and internal storage, and the data flow interfaces between those elements. External actors are typically the human users of the system (e.g., the patient, health care professional). System processes are controllers of the system (e.g., the delivery controller of an insulin delivery device, which determines how much insulin should be delivered and when, and then controls the pump drive). Interactors are the sensors and actuators of the system (e.g., the pump drive of an insulin delivery device). Internal storage is the data storage within the device (e.g., stored therapy settings, observations). The threat model of each device type was reviewed, compared, and harmonized. The harmonization effort provided a common nomenclature and layout, and led to a generic DFD for a personal health device, which is depicted in Figure 3-6.

The Microsoft Threat Modeling Tool (TMT) was used to create the DFD of a device type. In addition to the elements described above, the TMT allows for definition of trust boundaries that depict regions within the DFD where the system has inherent trust (see Section 3.2.2.1). Since this phase of this work does not consider physical attacks, internal data flows are trusted and out-of-scope for vulnerability assessment. This trust boundary is depicted with a dotted line box, as shown in Figure 3-6.

Figure 3-6 PHD generic threat model

3.2.2.3 Vulnerability List

Once the device type has been modeled and the threat surfaces identified, it is possible to decompose the system using a specific framework to generate a list of vulnerabilities. For this work, STRIDE was chosen.


The TMT can generate a list of vulnerabilities based on the DFD. Each category of STRIDE generates vulnerabilities based on the elements of the data flow. See Table 10-17 for a mapping of DFD element to STRIDE category. Each of these STRIDE categories can be further broken down into vulnerability types, as listed in Table 10-18. While most of these vulnerability types are applicable to this work, the following vulnerability types were defined as out-of-scope:

Elevation of Privilege: Cross-Site Request Forgery
Spoofing: Spoofing of Source Data Store
Spoofing: Spoofing of Destination Data Store
Information Disclosure: Weak Access Control for a Resource
Denial of Service: Data Flow is Potentially Interrupted, when the data flow is internal to the device
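The STRIDE-per-element idea behind the mapping in Table 10-17 can be sketched as a lookup. The mapping below is the classic one used by threat-modeling tools and is an assumption here, since Table 10-17 itself is not reproduced in this section; function and variable names are illustrative:

```python
# Classic STRIDE-per-element mapping used by threat-modeling tools;
# the whitepaper's own mapping (Table 10-17) may differ.
STRIDE_PER_ELEMENT = {
    "external interactor": {"S", "R"},
    "process":             {"S", "T", "R", "I", "D", "E"},
    "data flow":           {"T", "I", "D"},
    "data store":          {"T", "R", "I", "D"},
}

def candidate_vulnerabilities(elements):
    """Yield (element name, STRIDE category) pairs to review for a DFD.

    `elements` is an iterable of (name, kind) tuples, where `kind` is one
    of the keys of STRIDE_PER_ELEMENT.
    """
    for name, kind in elements:
        for category in sorted(STRIDE_PER_ELEMENT[kind]):
            yield name, category
```

Each yielded pair is a candidate vulnerability; the out-of-scope types listed above would then be filtered out before scoring.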

This vulnerability list was exported from the TMT and imported into Excel using a VBA macro (see Appendix F – TMT Export Macro).

3.2.3 Scoring

Scoring is steps 3 and 5 in the iterative vulnerability assessment. This work selected eCVSS as the scoring system to quantify identified vulnerabilities, both as part of the pre-mitigation and post-mitigation assessments (see Appendix C – CVSS and eCVSS for additional details and a comparison between CVSS and eCVSS). For the pre-mitigation assessment, scoring is used to detect vulnerabilities that are common across all device types, as well as vulnerabilities that are unique to a device type. These pre-mitigation vulnerabilities are categorized as low-, moderate-, or high-risk based on their assigned score. For the post-mitigation assessment, scoring is used to determine the reduction of risk for the mitigated vulnerabilities. eCVSS assigns a score between 0 and 10 using the equations described in Appendix D – Scoring Equations. Microsoft Excel was used to set the values of the eCVSS metrics and calculate the score for each vulnerability.

3.2.3.1 eCVSS Metric Guidelines

To assist with harmonizing the quantification of pre- and post-mitigation vulnerabilities across device types and organizations, we used the guidelines described in Table 3-8 to clarify the interpretation of the eCVSS metric definitions when setting the metric values.

Table 3-8 – Pre- and post-mitigation assessment guidelines

Category: Guideline

General:
For pre-mitigation assessment, assume no security controls have been implemented.
When in doubt, assess the worst-case scenario.

Impacts Safety/Efficacy:
Identifies threats/vulnerabilities that impact the safety of the patient.
This metric does not have a direct effect on the assessment scoring, but is input to other metrics (e.g., Collateral Damage Potential).

Access Vector:
Do not make any assumptions about the transport the device typically uses.

Access Complexity:
Score higher if:
- it requires insider knowledge of the device design;
- it depends on a narrow time window;
- the vulnerable configuration is very rarely seen in practice;
- specific actions or information are required before a successful attack can be launched.

Authentication:
For pre-mitigation analysis, this should be set to None, unless the device type use case requires authentication.

Confidentiality Impact:
Complete means the attacker can access any data. For example, set to Complete when spoofing/impersonating a device/user making read requests, or for remote code execution with read functionality.
Partial means the attacker can access data, but has no control over the type of data it can access. For example, set to Partial when tampering with data in transit or spoofing/impersonating a device/user receiving CUD requests.
None means the attacker cannot access any data. For example, set to None for denial of service (e.g., blocking a data flow) or spoofing/impersonating a device/user sending data.

Integrity Impact:
Complete means the attacker can modify any data. For example, set to Complete when spoofing/impersonating a device/user sending data requests, or for remote code execution with CUD functionality.
Partial means the attacker can modify data, but has no control over the type of data it can modify. For example, set to Partial when tampering with data in transit or for repudiation (e.g., modifying log data).
None means the attacker cannot modify any data. For example, set to None for denial of service (e.g., blocking a data flow) or spoofing/impersonating a device/user receiving data.

Availability Impact:
Complete means the attacker can shut down or stop principal functionality of the targeted device. For example, set to Complete when spoofing/impersonating a device/user controlling the device, or for denial of service (e.g., crashing or stopping the device).
Partial means the attacker can interrupt or reduce performance of the targeted device. For example, set to Partial for denial of service (e.g., blocking a data flow) or tampering with data in transit (i.e., denial of service).
None means the attacker cannot affect the availability of the system. For example, set to None when spoofing/impersonating a device/user receiving data, or for information disclosure.

Collateral Damage Potential:
When in doubt, use the Suggested Collateral Damage Value approach (see Section 3.2.3.2).
Set the value to greater than None when:
- Impacts Safety/Efficacy is set to Yes;
- there are impacts to Confidentiality (e.g., potential legal damage);
- there are impacts to Integrity (e.g., potential business damage);
- there are impacts to Availability (e.g., potential patient damage).

Awareness:
Set to User when:
- the Access Vector is Local and the device is attached to the user (e.g., reading or modifying data from the device display);
- the device is attached to the user and the device stops or crashes.
Set to Automatic when a device can no longer communicate with the device under attack.
Set to Complete when the denial of service is related to user interaction.

3.2.3.2 Suggested Collateral Damage

This work found that the eCVSS Environmental metric Collateral Damage Potential can be subjective and difficult to score repeatably. To improve this situation, we have included a Suggested Collateral Damage Value within our assessment. This suggested value is determined by considering, for every vulnerability, the potential for business damage, legal damage, and patient damage. The Suggested Collateral Damage Value is only a suggestion, and the assessment may set any value for the Collateral Damage Potential. Table 3-9 provides a definition for each type of damage value.

Table 3-9 – Suggested collateral damage value definitions

Type of Damage Potential: Business, Legal, or Patient

Value: Value Description

None: There is no damage related to business, legal, or the patient.

Low: There is potential for a minor level of damage related to business, legal, or the patient. For example: the attack only affects a single instance of the product; some patients/physicians lose confidence in the product; minor patient harm.

Medium: There is potential for a major level of damage related to business, legal, or the patient. For example: a regulator forces a recall; the attack may affect one or more batches of the product but not all products; moderate patient harm (excluding life-threatening harm); the public loses confidence in the product.

High: There is potential for a catastrophic level of damage related to business, legal, or the patient. For example: a regulator forces a stop of sale; the attack may affect all instances of the product; legal action that threatens business viability; severe patient harm (including death); the public loses confidence in the brand/company.
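One plausible way to derive the Suggested Collateral Damage Value from these three damage assessments is to take the worst of the business, legal, and patient levels. The max rule and the function name are assumptions for illustration, not the normative method of this work:

```python
# Ordered damage levels from Table 3-9, least to most severe.
LEVELS = ["None", "Low", "Medium", "High"]

def suggested_collateral_damage(business: str, legal: str, patient: str) -> str:
    """Assumed rule: the suggested value is the worst of the three levels."""
    return max((business, legal, patient), key=LEVELS.index)
```

The assessment remains free to override this suggestion when setting the actual Collateral Damage Potential.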

3.2.3.3 Device Wide Metrics

One of the modifications that eCVSS introduced to CVSS is the use of the Environmental group "Requirement" metrics (i.e., Confidentiality, Integrity, and Availability) as device-wide. As such, these metrics are set only once for the system, and each identified vulnerability is scored using these values. Table 3-10 describes the harmonized values of the device-wide metrics for each device type.

The Confidentiality Requirement for all device types was scored as Medium because these devices do not store PHI, which removes the need for High.

The Integrity Requirement for all devices was scored as High because data from these devices inform therapy decisions, except for the Physical Activity Monitor, which was scored as Medium.

The Availability Requirement for all devices was scored as Medium because unavailability of these devices is not life-threatening, except for the Physical Activity Monitor and Pulse Oximeter, whose unavailability does no harm and which were therefore scored as Low.

Table 3-10 – Device wide metrics for device types

Device Type                                 Confidentiality  Integrity    Availability
                                            Requirement      Requirement  Requirement
Physical Activity Monitor                   Medium           Medium       Low
Pulse Oximeter                              Medium           High         Low
Sleep Apnoea Breathing Therapy Equipment    Medium           High         Medium
Insulin Delivery Device                     Medium           High         Medium
Continuous Glucose Monitor                  Medium           High         Medium


3.2.3.4 Risk Level Thresholds

To determine the level of risk for each scored vulnerability, thresholds were defined. The threshold definitions were harmonized across the device types and organizations through domain-expert interpretation of each vulnerability and its pre-mitigation assessment score, to ensure the thresholds are appropriate. These thresholds distinguish low-, moderate-, and high-risk vulnerabilities and are:

Low-Risk: eCVSS score < 3.5
Moderate-Risk: 3.5 ≤ eCVSS score < 7
High-Risk: 7 ≤ eCVSS score
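These thresholds translate directly into a small classification function (the function name is ours; the boundary handling follows the inequalities above):

```python
def risk_level(ecvss_score: float) -> str:
    """Map an eCVSS score (0-10) to the defined risk levels."""
    if not 0 <= ecvss_score <= 10:
        raise ValueError("eCVSS scores range from 0 to 10")
    if ecvss_score < 3.5:
        return "Low"
    if ecvss_score < 7:
        return "Moderate"
    return "High"
```

Note that the boundary scores 3.5 and 7.0 fall into the higher of the two adjacent levels, matching the inequalities.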

3.3 Mitigation

Mitigation is step 4 in the iterative vulnerability assessment. The mitigation techniques defined in Section 2.10.3 will be investigated and potentially applied to the vulnerabilities identified through system threat modeling and decomposition (Section 3.2.2) and quantified using a scoring system (Section 3.2.3). As such, only mitigation techniques that address the specific vulnerabilities will be included in step 4 and will affect step 5 post-mitigation scoring.

This work will only apply generic security controls based on the mitigation technique to ensure the security control addresses the vulnerability. Thus, this work is not biased by a specific security control or algorithm. Any robust, industry proven security control that uses the mitigation technique may be implemented.

A pre-assessment of the mitigation techniques was conducted to guide the device type analysis. Table 3-11 provides the list of mitigation techniques and their mapping to STRIDE categories. Using this mapping, moderate- and high-risk vulnerabilities for each STRIDE category will be mitigated by one or more of the mapped mitigation techniques. First, the mitigation technique is investigated to ensure it addresses the vulnerability for a specific device type; then harmonization across all device types is conducted. The harmonization effort ensures that any mitigation technique that addresses a threat for a specific device type may also be used across all device types. In cases where multiple mitigation techniques can address the moderate- and/or high-risk vulnerabilities, an effort is made to reduce the mitigation to a single technique usable across all device types.

Table 3-11 – Application of mitigation techniques based on STRIDE categories. The columns are defined as: S for spoofing, T for tampering, R for repudiation, I for information disclosure, D for denial of service, and E for elevation of privileges. The 'X' indicates potential application of the identified mitigation to the corresponding STRIDE category.

Mitigation Technique                S T R I D E

Identify:
  Authentication                    X X
  Digital Signatures                X X
Protect – Prevent:
  Authorization                     X X X X
  De-Identification                 X
  Do not Store Secrets              X X X X
  Encryption                        X
  Filtering                         X
  Message Authentication Code       X
  Physical Tamper Resistant
  Protect Secrets & Secret Data     X X X X
Protect – Limit:
  Input Sanitization                X
  Input Validation                  X
  Quality of Service                X
  Least Privileges                  X
  Throttling                        X
Detect:
  Audit Trail                       X
  Physical Tamper Evidence
Respond:
  End-User Signalization            X X X X X
  Invalidate Compromised Security
Recover:
  Re-Establish Security
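As a sketch, the mapping can be inverted into a lookup from STRIDE category to candidate mitigation techniques. Only pairings stated explicitly elsewhere in this document (the conditionally mandatory definitions in Section 4.4 and the spoofing/authentication example) are included below; the authoritative mapping is Table 3-11 itself:

```python
# Partial reverse lookup from STRIDE category to candidate mitigation
# techniques; limited to pairings stated explicitly in the text.
MITIGATIONS_BY_CATEGORY = {
    "Spoofing": ["Authentication"],
    "Tampering": ["Message Authentication Code", "Digital Signatures"],
    "Repudiation": ["Audit Trail", "Digital Signatures"],
    "Information Disclosure": ["De-Identification", "Encryption"],
    "Elevation of Privilege": ["Least Privileges"],
}

def candidate_mitigations(category: str) -> list:
    """Return the candidate mitigation techniques for a STRIDE category."""
    return MITIGATIONS_BY_CATEGORY.get(category, [])
```

For each moderate- or high-risk vulnerability, its STRIDE category selects the candidate techniques to investigate in step 4.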

4 Results

The results provided in this section come from steps 1-6 of the initial iteration of the iterative vulnerability assessment described in Section 3.2. See the following sections for additional information related to specific steps.

System context
o Section 3.1 for a list of device types
o Sections 15.1.1, 15.2.1, 15.3.1, 15.4.1, and 15.5.1 for the system context of specific device types

System decomposition
o Appendix B – STRIDE for a description of STRIDE
o Section 3.2.2.2 for the generic threat model
o Sections 15.1.2, 15.2.2, 15.3.2, 15.4.2, and 15.5.2 for specific device type threat models

Scoring (pre-mitigation & post-mitigation)
o Section 3.2.3.1 for a description of eCVSS
o Section 3.2.3.3 for the definition of system metrics of specific device types
o Section 3.2.3.4 for the definition of risk levels for device types
o Sections 15.1.3, 15.2.3, 15.3.3, 15.4.3, and 15.5.3 for specific device type vulnerability scores

Mitigation
o Sections 2.10.3 and 3.3 for a description of mitigation techniques

4.1 Quantified Pre-Mitigation Vulnerabilities

To better understand the vulnerabilities and related risk for each device type and across device types, the quantified pre-mitigation vulnerabilities were grouped by STRIDE category and risk level. A list of all the analyzed device types, along with the associated number and risk level of the quantified vulnerabilities identified by STRIDE, is provided in Table 4-12. The risk levels are low, moderate, and high, and are color-coded green, orange, and red, respectively. The bottom row of the table is the aggregate of the STRIDE-categorized vulnerabilities for each risk level across all device types.

Table 4-12 – Number and risk level of STRIDE pre-mitigation vulnerabilities by device type. The columns are defined as: S for spoofing, T for tampering, R for repudiation, I for information disclosure, D for denial of service, and E for elevation of privileges. Risk levels have the following color code: green for low-risk, orange for moderate-risk, and red for high-risk.

Device Type                          Risk Level    S   T   R   I   D   E  Total
Physical Activity Monitor (89)       Low           1   -   9   -  21   3     34
                                     Moderate     12   3   4   1   -   8     28
                                     High         11   1   1   3   -  11     27
Pulse Oximeter (60)                  Low           4   -  12   4   9   -     29
                                     Moderate     10   4   -   2   1   7     24
                                     High          4   -   -   -   -   3      7
Sleep Apnoea Breathing Therapy       Low           7   7  10  11  19   6     60
Equipment (120)                      Moderate      9   5   1   -   9  11     35
                                     High          7   6   1   -   -  11     25
Insulin Delivery Device (84)         Low           3   -   7   5  11   3     29
                                     Moderate      8   5   7   4   -  12     36
                                     High          7   -   -   -   -  12     19
Continuous Glucose Monitor (79)      Low           4   2   2   3  10   4     25
                                     Moderate      9   4  10   4   -  16     43
                                     High          4   -   -   1   -   6     11
All Device Types (432)               Low          19   9  40  23  70  16    177
                                     Moderate     48  21  22  11  10  54    166
                                     High         33   7   2   4   -  43     89

Elevation of privilege produced the most high-risk vulnerabilities, followed closely by spoofing; the same two categories also produced the most moderate-risk vulnerabilities. Tampering produced the third most high-risk vulnerabilities. The remaining STRIDE categories produced few high-risk vulnerabilities, but a number of moderate-risk vulnerabilities comparable to tampering. Low-risk vulnerabilities were deemed acceptable and as such received no further analysis.

4.2 Identified Attack Vectors

To identify attack vectors, the moderate- and high-risk vulnerabilities were added to the threat model of each device type. The following figures depict the threat model for each device type, with moderate- and high-risk vulnerabilities overlaying the data flows between the affected DFD elements. These DFD elements represent the attack vector and the asset under attack. Data flows related to moderate-risk vulnerabilities are colored orange and data flows related to high-risk vulnerabilities are colored red. Where a data flow has both a moderate- and a high-risk vulnerability, both an orange and a red data flow are depicted. Data flows internal to the PHD and connected device, as well as the internal data stores, are out of scope for this phase of this work and are therefore colored grey and not considered in this analysis.


Figure 4-7 Physical activity monitor moderate- and high-risk data flows. The risk level of the data flow has the following color code: orange for moderate-risk and red for high-risk.

The physical activity monitor threat model in Figure 4-7 includes the PHD controller and connected device controller, the PHD sensors, and the external actors patient, manufacturer, caregiver, physician, and nurse. The data flows between the PHD and connected device controllers incurred both moderate- and high-risk vulnerabilities. These data flows are typically implemented as a wireless interface. The data flows between the external actors and the system also incurred both moderate- and high-risk vulnerabilities. The risk level of vulnerabilities on these data flows depends on the type of external actor and the intended use of the system. The patient, caregiver, physician, and nurse data flows are typically implemented as a UI, and the manufacturer data flows could be implemented as a UI, wired, or wireless interface.

The pulse oximeter threat model in Figure 4-8 includes the PHD controller, the connected device controller, the pulse oximeter sensor, and external actors of patient, HCP, and manufacturer. The data flows between the PHD and connected device controllers incurred both moderate- and high-risk vulnerabilities. These data flows are typically implemented as a wired or wireless interface. The data flows between the HCP and connected device controller as well as the manufacturer and PHD controller incurred both moderate- and high-risk vulnerabilities. The HCP data flows are typically implemented as a UI and the manufacturer data flows could be implemented as a UI, wired, or wireless interface.


Figure 4-8 Pulse oximeter threat model with moderate- and high-risk data flows. The risk level of the data flow has the following color code: orange for moderate-risk and red for high-risk.

Figure 4-9 Insulin delivery device moderate- and high-risk data flows. The risk level of the data flow has the following color code: orange for moderate-risk and red for high-risk.

The insulin delivery device threat model in Figure 4-9 includes the PHD controller, the connected device controller, the insulin pump drive actuator, and external actors of patient, HCP, and manufacturer. The data flows between the PHD and connected device controllers incurred both moderate- and high-risk vulnerabilities. These data flows are typically implemented as a wired or wireless interface. The data flows between the HCP and connected device controller as well as the manufacturer and PHD controller incurred both moderate- and high-risk vulnerabilities. The data flow from the HCP to PHD controller incurred moderate-risk vulnerabilities. The HCP data flows are typically implemented as a UI and the manufacturer data flows could be implemented as a UI, wired, or wireless interface.


Figure 4-10 Sleep apnoea breathing therapy equipment moderate- and high-risk data flows. The risk level of the data flow has the following color code: orange for moderate-risk and red for high-risk.

The SABTE threat model in Figure 4-10 includes the PHD controller, the connected device controller, the humidifier and flow pump actuators, external actors of patient, HCP, and manufacturer, and external data store memory card. The data flows between the PHD and connected device controllers incurred both moderate- and high-risk vulnerabilities. These data flows are typically implemented as a wired or wireless interface. The data flows between the memory card and both the PHD controller and connected device controller incurred moderate-risk vulnerabilities. These data flows are typically implemented as a wired interface. The data flows between the HCP and connected device controller as well as the manufacturer and PHD controller incurred moderate-risk vulnerabilities. The data flows between the HCP and PHD controller incurred moderate-risk vulnerabilities. The data flows between the patient and both PHD controller and connected device controller incurred moderate- and high-risk vulnerabilities. The HCP and patient data flows are typically implemented as a UI and the manufacturer data flows could be implemented as a UI, wired, or wireless interface.

The CGM threat model in Figure 4-11 includes the PHD controller, PHD sensor controller, the connected device controller, the PHD sensor, and external actors of patient, HCP, and manufacturer. The data flows between the PHD and connected device controllers incurred both moderate- and high-risk vulnerabilities. These data flows are typically implemented as a wired or wireless interface. The data flows between the HCP and connected device controller as well as the manufacturer and both the PHD controller and PHD sensor controller incurred moderate- and high-risk vulnerabilities. The data flows between the HCP and PHD controller incurred moderate-risk vulnerabilities. The HCP data flows are typically implemented as a UI and the manufacturer data flows could be implemented as a UI, wired, or wireless interface.


Figure 4-11 Continuous glucose monitoring moderate- and high-risk data flows. The risk level of the data flow has the following color code: orange for moderate-risk and red for high-risk.

Figure 4-12 depicts the generic threat model with common moderate- and high-risk vulnerabilities across all device types, and also across all but one device type. In the case of a common data flow and risk level across all but one device type (see Figure 4-12 b)), the following states the device type that was excluded:

the data flow between the PHD controller and manufacturer incurred high-risk vulnerabilities, excluding SABTE, and

the data flow between the HCP, or similar external actor, and the connected device controller incurred high-risk vulnerabilities, excluding SABTE.

Figure 4-12 Common moderate- and high-risk data flows a) across all device types and b) across all but one device type.


4.3 Decomposed Attack Vectors

In order to determine the appropriate mitigating information security control, as described in Table 10-19, the quantified vulnerabilities for each attack vector were decomposed into the corresponding STRIDE category. Table 4-13 describes the STRIDE category and risk level of the quantified vulnerabilities by interface between the DFD external actors and controllers across all analyzed device types. The color code of the risk level is consistent with the other results provided in this section. In cases where an interface is affected by both a moderate- and a high-risk vulnerability, the table cell is split and both colors are included. Included as sub-text to the source DFD element are examples of the type of interface that may be used.

Table 4-13 – Quantified STRIDE vulnerabilities by interface across all device types. The columns are defined as: S for spoofing, T for tampering, R for repudiation, I for information disclosure, D for denial of service, and E for elevation of privileges. Risk levels have the following color code: green for low-risk, orange for moderate-risk, and red for high-risk.

Source                                     Destination       S  T  R  I  D  E
PHD (UI, wireless, wired)                  Connected Device
                                           Manufacturer
                                           Patient
                                           HCP
Connected Device (UI, wireless, wired)     PHD
                                           Manufacturer
                                           Patient
                                           HCP
Manufacturer (UI, wireless, wired)         PHD
                                           Connected Device
Patient (UI)                               PHD
                                           Connected Device
Health Care Professional (UI)              PHD
                                           Connected Device

The appropriateness of the mitigating information security control for a given interface can be determined from Table 4-13. In some cases, such as the data flow from the PHD controller or connected device controller to the patient, only controls related to a specific STRIDE category (i.e., spoofing) are appropriate. In other cases, controls appropriate for multiple STRIDE categories must be considered. Only in a few cases, such as the data flows between the PHD and connected device controllers, should the complete set of information security controls be considered.

4.4 Post-Mitigation Assessment

Analysis of the moderate- and high-risk pre-mitigation vulnerabilities, along with the mapping between mitigation techniques and STRIDE categories (see Table 3-11), was conducted to identify the specific mitigation techniques to be used as part of the post-mitigation assessment (steps 4 and 5 of the iterative vulnerability assessment described in Section 3.2).

Section 2.10 provides the identified mitigation techniques along with their mapping to the NIST Cybersecurity Framework and STRIDE categories, presenting the relevance of each mitigation to the identified moderate- and high-risk vulnerabilities. As part of the post-mitigation analysis, the mitigation techniques were further reduced to those most applicable to the selected PHD device types and secure data exchange. Note that physical tamper resistance/evidence is considered out of scope for this work. Also note that some mitigation techniques, such as input sanitization and validation, are not applicable to secure data exchange.

Table 4-14 presents the mitigation techniques that were employed to reduce moderate- and high-risk vulnerabilities across all device types. Mitigations marked as mandatory (M) shall be included if a high-risk vulnerability was identified for that STRIDE category, and should be considered to mitigate moderate-risk vulnerabilities. Mitigations marked as conditionally mandatory (CM) shall be included if a high-risk vulnerability was identified for that STRIDE category and a specific condition or requirement exists, and should be considered to mitigate moderate-risk vulnerabilities. Given these criteria, all mitigations included in the post-mitigation assessment were considered for all types of vulnerabilities. This means that if an authentication mitigation was included due to a high-risk spoofing vulnerability over a wireless interface, all moderate- and high-risk vulnerabilities over this interface were reassessed considering the authentication mitigation.

Table 4-14 – Mitigation techniques selected for post-mitigation analysis. M indicates that the mitigation technique is mandatory if this STRIDE category identified a vulnerability to be mitigated. CM indicates that the mitigation technique is mandatory only if, in addition, a specific condition is met.

Mitigation Technique                S T R I D E

Identify:
  Authentication                    M M
  Digital Signatures                CM CM
Protect – Prevent:
  Authorization                     M M M M
  De-Identification                 CM
  Do not Store Secrets
  Encryption                        CM
  Filtering
  Message Authentication Code       CM
  Physical Tamper Resistant
  Protect Secrets & Secret Data
Protect – Limit:
  Input Sanitization
  Input Validation
  Quality of Service
  Least Privileges                  CM
  Throttling
Detect:
  Audit Trail                       CM
  Physical Tamper Evidence
Respond:
  End-User Signalization
  Invalidate Compromised Security
Recover:
  Re-Establish Security

The definition of conditionally mandatory mitigation techniques is as follows.

Tamperingo Message Authentication Code (MAC): If the integrity and/or authentication of

the data exchange is required, MAC is mandatory.o Digital Signature: If the manufacturer requires identity attached to the data

exchange, digital signature is mandatory

48

Copyright © 2018 IEEE. All rights reserved.

155615571558

15591560156115621563156415651566156715681569

1570157115721573

1574

15751576157715781579

- Repudiation
  - Audit Trail: When the PHD has an intended use case where it receives commands from a connected device of another legal manufacturer, an audit trail is mandatory. When the PHD only connects to devices from its own manufacturer, an audit trail is not mandatory, since that manufacturer is wholly responsible.
  - Digital Signature: If the manufacturer requires an identity attached to the data exchange, a digital signature is mandatory. If an audit trail is mandatory and the manufacturer wants an immutable audit trail, digital signatures are mandatory.
- Information Disclosure
  - De-Identification: If there is any personal information in the data exchange, de-identification is mandatory, such that the personal information is either removed or protected in such a way that passive listeners cannot access it.
  - Encryption: If the data itself or the data exchange is intended to be confidential, encryption is mandatory.
- Elevation of Privileges
  - Least Privileges: When the device has an intended use case including multiple authorization levels, running with least privileges is mandatory.
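To make the MAC condition concrete, the following is a minimal sketch using Python's standard hmac module. The key, payload, and function names are illustrative; a real device would provision the shared secret during secure pairing rather than hard-coding it.

```python
import hashlib
import hmac

# Illustrative only: a real shared secret comes from a secure pairing or
# provisioning step and is never hard-coded in firmware.
SHARED_KEY = b"example-provisioned-key"

def tag(message: bytes) -> bytes:
    """HMAC-SHA-256 tag protecting the integrity/authenticity of a message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Constant-time comparison guards against timing side channels."""
    return hmac.compare_digest(tag(message), received_tag)

measurement = b'{"glucose_mmol_l": 5.6}'
t = tag(measurement)
assert verify(measurement, t)                      # genuine payload accepted
assert not verify(b'{"glucose_mmol_l": 9.9}', t)   # tampered payload rejected
```

A MAC alone provides integrity and data-origin authentication between the two key holders; when an identity attributable to one party is required, the digital-signature condition above applies instead.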

While not identified in Table 4-14, the device-type analysis was harmonized such that End-User Signalization is a conditional mitigation, limited to the last line of defense in cases where other mitigation techniques cannot address a moderate- or high-risk vulnerability. Even then, End-User Signalization should be limited to critical functions/values whose compromise can produce adverse events for the patient.

The mitigation techniques identified in Table 4-14 were included in the post-mitigation assessment for all device types. The results of the post-mitigation assessment are presented in Table 4-15 along with the pre-mitigation assessment for comparison.

Table 4-15 – Pre- and post-mitigation comparison of STRIDE vulnerabilities by device type. The columns are defined as: S for spoofing, T for tampering, R for repudiation, I for information disclosure, D for denial of service, and E for elevation of privileges. Counts are given per row for low-, moderate-, and high-risk vulnerabilities.

Device Type                  Risk Level   Pre-mitigation          Post-mitigation
                                          S   T   R   I   D   E   S   T   R   I   D   E
Physical Activity Monitor    Low          1   -   9   -   21  3   3   4   10  2   21  4
                             Moderate     12  3   4   1   -   8   21  -   4   2   -   18
                             High         11  1   1   3   -   11  -   -   -   -   -   -
Pulse Oximeter               Low          4   -   12  4   9   -   17  4   12  6   10  10
                             Moderate     10  4   -   2   1   7   1   -   -   -   -   -
                             High         4   -   -   -   -   3   -   -   -   -   -   -
Sleep Apnoea Breathing       Low          7   7   10  11  19  6   14  13  10  11  19  18
Therapy Equipment            Moderate     9   5   1   -   9   11  9   5   2   -   9   10
                             High         7   6   1   -   -   11  -   -   -   -   -   -
Insulin Delivery Device      Low          3   -   7   5   11  3   12  -   11  7   11  17
                             Moderate     8   5   7   4   -   12  6   5   3   2   -   10
                             High         7   -   -   -   -   12  -   -   -   -   -   -
Continuous Glucose Monitor   Low          4   2   2   3   10  4   10  6   10  6   10  20
                             Moderate     9   4   10  4   -   16  7   -   2   2   -   6
                             High         4   -   -   1   -   6   -   -   -   -   -   -
All Device Types             Low          19  9   40  23  70  16  56  27  53  32  71  69
                             Moderate     48  21  22  11  10  54  44  10  11  6   9   44
                             High         33  7   2   4   -   43  -   -   -   -   -   -

The physical activity monitor high-risk vulnerabilities were mitigated down to moderate or low risk, and some moderate-risk vulnerabilities were reduced to low risk. Risk reduction was achieved using authentication, authorization, encryption, message authentication codes, and least privileges. Most of the remaining post-mitigation moderate-risk vulnerabilities were related to implementation bugs (elevation of privileges), spoofing of the sensors, and repudiation. These vulnerabilities can be further mitigated using digital signatures, audit logging, and user signalization.

The pulse oximeter moderate- and high-risk vulnerabilities were reduced to low risk using at least one of authentication, authorization, encryption, message authentication codes, and user signalization. The remaining moderate-risk vulnerability was spoofing the manufacturer, which may lead to loading improper firmware.

The insulin pump high-risk vulnerabilities were mitigated to low or moderate risk, and most of the moderate-risk vulnerabilities were reduced to low risk. Risk reduction was achieved using authentication, authorization, encryption, message authentication codes, and least privileges. The residual post-mitigation moderate-risk vulnerabilities were related to wireless control of the insulin pump from the connected device and insulin pump firmware updates on the service and investigation interface.

The CGM high-risk vulnerabilities were mitigated to low or moderate risk, and most of the moderate-risk vulnerabilities were reduced to low risk. Risk reduction was achieved using authentication, authorization, encryption, message authentication codes, and least privileges. The residual post-mitigation moderate-risk vulnerabilities were related to control of the CGM from the connected device and the manufacturer interface to the CGM.

5 Discussion

5.1 PHD Vulnerability Assessment

The risk analysis for all device types has been broken into "pre" and "post" mitigation assessments to demonstrate the efficacy of mitigations in reducing the overall risk of the device. Security threats and vulnerabilities may result in breaches of data or reduced effectiveness of the device; in addition, patient safety concerns arise from exploitation of the identified vulnerabilities. This is the case even for these non-life-support devices.

The device types included in this work are to be considered baseline devices: they have the common features, connectivity, and data exchange of most devices of the same or similar type currently available on the market. Obviously, if a specific system includes a novel or different feature set, the analysis should be updated to include these details and any


new vulnerabilities identified should be assessed to ensure the suggested mitigations reduce the related risk to acceptable levels.

In addition, the device use case impacts the risk analysis and the mitigations to consider. This includes the sensitivity of information stored on the device, the typical interfaces to the device, and the requirements on confidentiality, integrity, and availability (CIA). The pre-mitigation scoring reflects these details through the assigned vulnerability risk level. As well, specific mitigations may be included based on requirements of the manufacturer or use case (see the conditionally mandatory mitigations in Section 4.4). Again, specific systems should be evaluated using similar methods.

As shown in Figure 4-12, there is commonality of vulnerabilities across device types. Not surprisingly, these common vulnerabilities can be addressed with the use of common mitigation techniques, as shown in Table 4-14 (see Table 4-15). However, some vulnerabilities could be further mitigated using techniques not applicable to secure data exchange. For example, input sanitization and validation, avoiding storing secrets and otherwise protecting stored secrets, physical tamper evidence and resistance, and end-user signalization are tried-and-true mitigation techniques used in industry, and some are discussed in [100]. This work is focused on identifying mitigation techniques to support secure data exchange.
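As a small illustration of the input-validation technique mentioned above, the sketch below rejects malformed or out-of-range records before they are used. The record format, field names, and bounds are hypothetical, not taken from any standard.

```python
# Hypothetical bounds check applied to an incoming measurement record before
# it is stored or displayed; field names and ranges are illustrative only.

def valid_spo2_record(record: dict) -> bool:
    """Reject malformed or physiologically impossible pulse-oximeter data."""
    try:
        spo2 = float(record["spo2_percent"])
        pulse = float(record["pulse_bpm"])
    except (KeyError, TypeError, ValueError):
        return False
    return 0.0 <= spo2 <= 100.0 and 20.0 <= pulse <= 300.0

assert valid_spo2_record({"spo2_percent": 97, "pulse_bpm": 72})
assert not valid_spo2_record({"spo2_percent": 250, "pulse_bpm": 72})
assert not valid_spo2_record({"pulse_bpm": 72})  # missing field rejected
```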

It should be noted that while the UI was identified as an interface with high- and moderate-risk vulnerabilities, this work is focused on securing data exchange. While some of the UI vulnerabilities can be mitigated using the design principles and mitigation techniques described in Table 4-14 (e.g., user authorization by passcode), these vulnerabilities require additional consideration not discussed herein.

5.1.1 Physical Activity Monitor

Physical activity monitors are wearable personal health devices, usually used by a single patient, and generally contain less sensitive information than the other device types. A PAM is not life-threatening and offers only measurements, which are verified by an HCP: counselor, nurse, or physician.

With this scoping in mind, a threat model was created and then analyzed using the recommended STRIDE mitigation techniques. Threat vectors that were impossible or irrelevant were considered out of scope, and 89 valid threat vectors were analyzed for risk level in a pre-mitigation assessment.

Physical activity monitors store a limited amount of personal health information and sensitive data related to the patient. The following post-mitigation assumptions were made:

- For the authentication mitigation, the attack is limited to one instance, which reduced the collateral damage potential metric.
- For the authorization mitigation, the integrity and confidentiality metrics may be reduced based on the interface and the users of that interface.
- Encryption and message authentication codes protect the confidentiality and integrity of the data, respectively.
- The physical activity monitor is not life-threatening, and as such the collateral damage potential was limited to Medium-High.


In our vulnerability assessment we used an inheritance model for the actors, in which actors inherit access levels to the patient's data from each other. This was done on two levels: the End User and the Health Care Professional.

The physical activity monitor (PAM) device context, threat model, pre-mitigation vulnerability assessment, and post-mitigation vulnerability assessment are included in Appendix G – Device Type Analysis, Section 15.1.

5.1.2 Pulse Oximeter

Since pulse oximeters may include the concept of thresholds as a way to signal to caregivers that measurements have passed a certain pre-determined value, the threat model included the possibility of configuring these thresholds from the connected device. Threshold levels are uncommon on pulse oximeters of the PHD style, but both the 11073 PHD family of standards and the Medical Devices Working Group standards allow for clinical-level monitoring use cases, so thresholds were included to be more rigorous. The only foreseeable application of data exchange with a pulse oximeter is to retrieve stored measurements or configure these threshold levels.

Threshold-related threats are prevalent throughout this analysis. Four of the seven high-risk vulnerabilities are directly caused by the ability to configure these thresholds remotely. A simple solution is to not allow remote configuration, or to not support thresholds on the pulse oximeter at all. Other threshold-related mitigations, such as displaying the threshold levels on the pulse oximeter at all times, are relatively impractical for small personal devices.

Pulse oximeters do not usually store protected health information (PHI), and for the security analysis it was assumed that no PHI confidentiality would be violated by access to the data generated by the pulse oximeter. The connected device may contain PHI, and thus it was assumed an attack on it could violate PHI confidentiality.

It was also assumed that if the authentication mitigation was applied to a threat vector, the user could detect an authentication failure. For example, if a pulse oximeter is talking to a connected device and authentication fails, the devices will simply not communicate, which is user-detectable. This was generally the reason that threat vectors such as spoofing were mitigated down to safe levels. In some cases, it should be ensured that the user actually knows that authentication has failed.

Another assumption was that patient death is possible if false measurements were created, thresholds were improperly configured, or measurements past a threshold were not properly articulated to caregivers; thus the collateral damage potential metric was ranked High. This represents a worst-case scenario and applies only in some use cases, such as clinical-level monitoring; most personal health devices will be used in much lower-criticality settings, such as a home healthcare spot-check device for disease management or wellness. The risk of patient harm should be adjusted based on the use case, and cybersecurity controls should be applied as needed to mitigate risk to acceptable levels.

The pulse oximeter system context, threat model, pre-mitigation vulnerability assessment, and post-mitigation vulnerability assessment are included in Appendix G – Device Type Analysis, Section 15.2.


5.1.3 Sleep Apnoea Breathing Therapy Equipment

The sleep apnoea breathing therapy equipment (SABTE) device (also called a sleep therapy device) in this analysis is hypothetical, with the primary functions found in today's SABTE models. The analysis is based on knowledge of such devices, and we assume that increased connectivity in the future will only expand the threat vectors. Therefore, the risk analysis of SABTE in the context of communication with PHDs provides a solid basis for preparation against these future threats.

While the SABTE is not considered life-supporting, we acknowledge that exploitation of known and unknown vulnerabilities may result in control, or overwriting, of functions of the on-board logic devices. In such cases, scenarios that would previously be considered "hardware mitigated" may still be exploitable with complex attacks. Note that, for this analysis, we do not consider the chaining of attacks. It may be argued that a properly chained attack could remove hardware mitigations and thus increase the risk of other lower-scored vulnerabilities.

For this reason, our focus in this analysis is the high-risk items that might lead to severe adverse events. We target the high-risk vulnerabilities to validate that the inclusion of our mitigations will reduce the risk to acceptable levels; in almost all high-risk cases, this was true. The remaining moderate-risk items may be debated for risk acceptance or possibly even considered out of scope. We find the scenario of repudiation by an outside actor (beyond the trust boundary) a rather difficult risk to fully control without defining expectations of message delivery.

The SABTE system context, threat model, pre-mitigation vulnerability assessment, and post-mitigation vulnerability assessment are included in Appendix G – Device Type Analysis, Section 15.3.

5.1.4 Insulin Delivery Device

An insulin delivery device (also called an insulin pump) is capable of directly harming the person with diabetes (PwD) with an incorrect dose of insulin. This harm becomes life-threatening only with the elapse of time during which the PwD does not react to the situation (e.g., by increasing the intake of sugars). This was considered in the collateral damage potential metric of the assessment. While an over-dose or under-dose of insulin may cause adverse effects on the PwD, PwDs are trained for these occurrences and have other means to prevent these effects. As such, the maximum value for the collateral damage potential metric was limited.

Insulin pumps do not usually store PHI, and for the security analysis it was assumed that no PHI confidentiality would be violated by access to the data generated by the insulin pump. The connected device may contain other confidential PHI, so in the case of attacks on the connected device it was assumed that confidentiality of PHI could be violated.

Most of the residual post-mitigation moderate-risk vulnerabilities were related to wireless control of the insulin pump from the connected device and may be deemed acceptable, as these vulnerabilities are a typical tradeoff for the benefit of this PHD. However, these vulnerabilities could be further mitigated with user signalization (e.g., haptic or audible signalization on the insulin pump) for critical functions (e.g., delivery of insulin as a bolus). The other residual post-mitigation moderate-risk vulnerabilities were lack of input validation and spoofing of the manufacturer on the service and investigation interface of the insulin pump. This interface is


typically a wired interface to this wearable device, and the user would typically be aware that this interface is in use.

The following assumptions were made for the analysis of the insulin pump:

- For the authentication mitigation, the attack is limited to one instance, which reduced the collateral damage potential metric value.
- For the authorization mitigation, the integrity and confidentiality metric values may be reduced based on the interface and the users of that interface.
- Since an insulin pump is physically attached to the PwD, any attack that uses a wired interface of the insulin pump is detectable by the PwD.

The insulin pump system context, threat model, pre-mitigation vulnerability assessment, and post-mitigation vulnerability assessment are included in Appendix G – Device Type Analysis, Section 15.4.

5.1.5 Continuous Glucose Monitor

A CGM is incapable of directly harming a PwD; this is reflected in the collateral damage potential metric, where many vulnerabilities have a value of Low. However, a CGM provides key information to inform therapy decisions, so the integrity of that information is of the utmost importance, followed by availability. As well, some CGMs require calibration, which improves the accuracy of the measurements. Allowing calibration from untrusted sources may greatly affect the quality of the measurements, so this action should be limited to authorized sources.
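Restricting calibration to authorized sources can be sketched as a simple authorization check. This is a hypothetical illustration: the roles, method names, and glucose values are invented, not taken from the whitepaper or any device specification.

```python
# Illustrative sketch: restricting CGM calibration to authorized sources,
# per the least-privilege principle. Roles, names, and values are hypothetical.

from enum import Enum, auto

class Role(Enum):
    READER = auto()        # may read measurements
    CONTROLLER = auto()    # may also calibrate

class CGM:
    BASELINE = 5.6  # dummy uncalibrated reading, mmol/L

    def __init__(self):
        self._calibration_offset = 0.0

    def read_glucose(self, role: Role) -> float:
        # Any authenticated role may read measurements.
        return self.BASELINE + self._calibration_offset

    def calibrate(self, role: Role, reference_value: float) -> None:
        # Calibration changes measurement quality, so it is limited to
        # sources holding CONTROLLER authorization.
        if role is not Role.CONTROLLER:
            raise PermissionError("calibration requires CONTROLLER authorization")
        self._calibration_offset = reference_value - self.BASELINE

cgm = CGM()
cgm.calibrate(Role.CONTROLLER, 6.0)   # authorized source: accepted
try:
    cgm.calibrate(Role.READER, 9.9)   # untrusted source: rejected
except PermissionError:
    pass
```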

A CGM does not usually store PHI, and for the security analysis it was assumed that no PHI confidentiality would be violated by access to the data generated by the CGM.

The analysis assumed that the CGM would apply the least-privilege design principle to data access, such that only authorized users could modify the generated data, thus reducing the impact on data integrity and confidentiality. Since the CGM is typically worn on the PwD and the controller is in the possession of the PwD, the user is aware of any direct attacks using a wired interface against the CGM or the connected device.

The CGM device context, threat model, pre-mitigation vulnerability assessment, and post-mitigation vulnerability assessment are included in Appendix G – Device Type Analysis, Section 15.5.

5.2 Multi-Component System Vulnerability Assessment

As the connectivity of PHDs increases, multi-component, heterogeneous systems become more widespread. A multi-component system includes multiple connected components from potentially various manufacturers, either within a single device or as a system (e.g., a system of systems), where at least one component is a PHD. Examples of multi-component systems include an automated insulin delivery (AID) system, a patient monitor, and a cloud service that gathers data from PHDs to support consumers, providers, or payers. An AID connects a CGM and an insulin pump to an automated dosing controller, and a patient monitor typically includes devices such as a thermometer, blood pressure monitor, and pulse oximeter.

The vulnerability assessment described in this work is applicable to a multi-component system. In both a standalone PHD and as part of a multi-component system, one must assume a hostile


environment, and the PHD does not know the inner workings of the connected device(s); thus the PHD should not trust the connected device(s) implicitly. Instead, it is the responsibility of the manufacturer to ensure the PHD interfaces are secure, well described, and without any hidden functionality. Also, the security of one component of the system should not depend on the security of another. Each component of the system must by itself provide sufficient security to protect against direct attacks or chaining of attacks. Assessing each component individually for vulnerabilities, without any inherent trust between the components of the system, greatly improves the security of the system as a whole (i.e., defense in depth, as in Section 2.10.2.1). This assessment should not be omitted even when the multiple components were intended to work together.

5.3 Software of Unknown Provenance

The development of a PHD is similar to the development of any system: manufacturers implement within their domains of specialty and otherwise include third-party solutions. If these third-party or off-the-shelf solutions are software, they are known as software of unknown provenance (SOUP). Since the medical device manufacturer is held liable for any harm that occurs from intended use of the device, the manufacturer is also responsible for SOUP within the system.

One way of handling SOUP within a system is for the manufacturer to validate each version of the SOUP included with the system. Alternatively, the manufacturer could include additional controls within the system to protect it from potential vulnerabilities of the SOUP. This work of developing an application end-to-end security toolbox addresses the latter. Example SOUP that would be mitigated is a wireless or wired interface-related solution (e.g., a Bluetooth stack or USB stack). Including additional controls to handle potential vulnerabilities of the SOUP reduces the burden on the manufacturer when validating the SOUP.

5.4 Threat Modeling Tool

The Microsoft TMT comes with a standard set of stencils intended for modeling software systems, specifically web services. It is possible to create custom stencils specific to the desired application; however, for this work the standard stencils of the TMT were deemed sufficient. Of the standard stencils, the generic element was used in all cases except when modeling a human external actor. The benefit of using the generic element is that the TMT makes no assumptions about the element, and all element properties are set to their default values, which is either no value set or not applicable. Again, for this work, the default values were sufficient, and only specific properties were given alternative values (i.e., setting destination authenticated to a value of 'No', since it was assumed that the PHD and connected device had no information security controls in the pre-mitigation state).

5.5 STRIDE

STRIDE is a threat classification model developed by Microsoft for thinking about computer information security threats (see Section 2.8.1.2 and Appendix B – STRIDE for additional details). The main use of STRIDE is to decompose a system threat model into a threat list by analyzing each component for susceptibility to the threats. By definition, STRIDE considers the intended use of the system to identify vulnerabilities. As an example, an elevation of privilege attack that changes the execution flow of a process allows the attacker to redirect the program execution within the process to the attacker's choosing. However, only the flow is changed; the underlying behavior is still the same. If the device were an insulin pump, this attack could not override a hard threshold limiting the maximum bolus, but could instead deliver a bolus under the


maximum limit when it is not expected by the PwD. Similarly, an attack elevating privileges via remote code execution uses the existing programming interface of the device within the allowed parameters.

Also by definition, STRIDE does not consider chaining of attacks. A chained attack is when a relationship between vulnerabilities is identified and the attacker plans the attack knowing this relationship, i.e., an attack on vulnerability B is simplified by first successfully attacking vulnerability A, or a successful attack on vulnerabilities A and B creates a new vulnerability C. Instead, STRIDE looks at each vulnerability independently. Some threat modeling systems that do consider chained attacks are Attack-Defense Trees and Trike.

A variant of STRIDE exists that includes lateral movement (LM) as a threat category [97]. Lateral movement refers to the technique attackers use to move through a network looking for desirable assets. Considering that the focus of this work is on the interface to and from the PHD, STRIDE-LM is not a strong candidate. However, when considering a direct-to-cloud interface, LM has an important role.

5.6 eCVSS

The eCVSS scoring system has no explicit metric for the likelihood that an attack would occur and be successful; instead, if an attack can be successful, it is assumed it will be. Mitigations that reduce the probability of a successful attack therefore often result in the same post-mitigation risk score. In some cases (e.g., the pulse oximeter analysis), it was argued the threat could not possibly happen, and it was mitigated to a score of zero. In other cases, such as when using authorization that could possibly be sidestepped, the Awareness metric usually had to reach at least a score of User to be considered mitigated in this exercise. Some risk assessment methodologies may consider the likelihood of an event to be reduced after mitigation, e.g., by reducing the window of opportunity of the attack.

Within the eCVSS scoring system, various metrics require the analyst to set a value of None, Partial, or Complete. For those who wish to apply this approach within their organization, it is important to note the difference between Partial and Complete. In short, Partial means that the attacker can successfully attack the system, but not with the granularity desired; as an example, the attack may gather data from the system, but the attacker cannot specify which data is received. Complete, on the other hand, means that the attacker has complete control of the attack.
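The metric names above (including collateral damage potential, discussed below) suggest a CVSS v2 lineage, where None/Partial/Complete carry standard numeric weights. The sketch below reproduces only the published CVSS v2 impact subscore; the eCVSS weighting itself is not reproduced here.

```python
# Standard CVSS v2 numeric weights for the None/Partial/Complete impact
# values; the eCVSS-specific weighting is NOT reproduced here.
IMPACT = {"None": 0.0, "Partial": 0.275, "Complete": 0.660}

def impact_subscore(conf: str, integ: str, avail: str) -> float:
    """CVSS v2 impact subscore: 10.41 * (1 - (1-C) * (1-I) * (1-A))."""
    c, i, a = IMPACT[conf], IMPACT[integ], IMPACT[avail]
    return round(10.41 * (1 - (1 - c) * (1 - i) * (1 - a)), 2)

# A Partial confidentiality breach alone scores far below full compromise:
assert impact_subscore("Partial", "None", "None") == 2.86
assert impact_subscore("Complete", "Complete", "Complete") == 10.0
```

The gap between the Partial and Complete weights is why the distinction matters when applying this approach within an organization.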

A vulnerability that may include direct harm to the patient is identified in eCVSS in two ways. First, the system has a yes/no entry for each vulnerability to identify those that impact the safety or effectiveness of the system. This allows filtering of the analysis so these vulnerabilities can be included in the risk management process, as they may be subject to non-cybersecurity mitigations. Second, the collateral damage potential metric considers the patient along with business and legal damages. The weighting of the collateral damage potential is significant, such that even seemingly basic vulnerabilities with direct harm to the patient will receive a higher risk score.

While user awareness is not a core mitigation identified in this work, but rather a last line of defense, it should be noted that eCVSS weights awareness heavily. While this work agrees that user awareness is a powerful mitigation, the focus here is on secure data exchange. Nevertheless, manufacturers should evaluate the effective use of increased user awareness to further reduce vulnerabilities to an acceptable risk level.


This work harmonized the analysis of the various device types, including the low-, moderate-, and high-risk threshold levels. Those who wish to follow this approach should evaluate and assign appropriate threshold levels for the system under assessment.

5.7 Out of Scope Threat Vectors

Physical attacks on the PHD and connected device are currently out of scope for this work, as are the internet and intranet connectivity of the PHD and the connected device, data flowing into and out of internal storage, and PIN attacks. The current iteration of this work is focused on the interfaces of the PHD and the connected device within the intended use cases. This scoping was necessary to reduce the variables to a reasonable level. However, it is the intention of the group to conduct another iteration of this work in which physical attacks, internal data flows and storage, and internet/intranet connectivity are in scope.

Finally, the authenticity of the user from whom the physiological measure is taken or to whom the therapy is given is out of scope for this work. This last item is not necessarily a cybersecurity attack, but it may cause an adverse outcome for the therapy of the patient.

6 Conclusion

Personal Health Device (PHD) cybersecurity is the process and the capability of preventing unauthorized access, unauthorized modification, misuse, denial of use, or the unauthorized use of any information that is stored on, accessed from, or transferred to and from a PHD. Here, risk analysis is the process part and information security the capability part of PHD cybersecurity.

The PHD Cybersecurity Whitepaper provides a comprehensive overview of the background, describes the methods analyzed and used, provides the results of a manufacturer-independent detailed risk analysis of use cases covering PHDs from non-regulated to class III (the process part of PHD cybersecurity), and provides generic recommendations for a scalable information security toolbox appropriate for PHD communication (the capability part of PHD cybersecurity). With that, the whitepaper could become the basis for the standardization of secure interoperability in open consensus standards (e.g., by the IEEE 11073 PHD WG), specifications (e.g., by the Bluetooth MedWG), and guidelines (e.g., by the PCHA GTC).

Such standardization of secure plug-and-play interoperability has several benefits:

- It increases patient confidence that a device will work in a multi-vendor environment.
- It will enable care providers in remote environments to closely monitor patient needs while allowing adjustments of therapy settings.
- It gives providers trust that the data exchange is proven secure.
- The approval work of regulatory bodies could be streamlined and reduced because the various manufacturers will combine their PHD information security using a scalable information security toolbox based on trusted open consensus standards.


- For payers, the cost of care will be reduced by providing proven secure interoperability between different devices and manufacturers.
- It provides manufacturers the ability to build devices that work in a multi-vendor environment.
- The re-use of the scalable information security toolbox will help manufacturers save development time and decrease time to market.

7 Outlook

This work is the first step of a multi-step roadmap. Figure 7-13 shows the long-term roadmap, which consists of three steps. Step 1 focuses on the PHD interface, enabling standardized plug-and-play application end-to-end information security between the PHD and a personal health gateway. Step 2 connects the security of the PHD interface with the services interface inside the personal health gateway. Step 3 scales standardized secure plug-and-play interoperability from the PHD to the cloud.

Figure 7-13 Long-term roadmap

Step 1 of the long-term roadmap has three phases, as shown in Figure 7-14, of which this work is the first. This whitepaper, which describes mitigation techniques to address common vulnerabilities of PHDs, will lead to a recommended practice defining specific, industry-proven information security controls. Phase 3 is the adoption of these information security controls within standardized PHD communication used for secure data exchange.


Figure 7-14 Near-term roadmap


8 Citations

[1] Personal Connected Health Alliance, "Continua Design Guidelines," December 2017 (www)

[2] ISO/IEEE 11073-10201:2004, “Domain information model”, December 2004 (www)

[3] ISO/IEEE 11073-20601:2016, “Optimized exchange protocol”, June 2016 (www)

[4] FDA, “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices”, October 2014 (www)

[5] M. Rozenfeld, the institute, “IEEE Standards on Cybersecurity,” Issue 1, vol. 39, p. 13, March 2015 (www)

[6] ISO Online Browsing Platform (www)

[7] Collaborative Research for Effective Diagnosis, Joint Initiative for Global Standards Harmonization Health Informatics Document Registry and Glossary – Standards Knowledge Management Tool (www)

[8] Bluetooth Core Specification (www)

[9] IEEE Standards Dictionary (www)

[10] ISO/TR 18638:2017, “Health Informatics – Guidance on health information privacy education in healthcare organizations”, June 2017 (www)

[11] ISO/IEC 27032:2012, “Information Technology – Security Techniques – Guidelines for cybersecurity”, July 2012 (www)

[12] NIST SP 800-12 Rev. 1, “An Introduction to Computer Security”, June 2017 (www)

[13] ISO/IEC 2382:2015, “Information technology – Vocabulary”, May 2015 (www)

[14] IEEE Standards Dictionary Online (www)

[15] ISO 25237:2017, “Health informatics – Pseudonymization”, January 2017 (www)

[16] ISO/IEC 9798-1:2010, “Information technology – Security techniques – Entity authentication – Part 1 : General”, July 2010 (www)


[17] ISO 24100:2010, “Intelligent transport systems – Basic principles for personal data protection in probe vehicle information systems”, May 2010 (www)

[18] ISO/IEC 27033-1:2015, “Information technology – Security techniques – Network security – Part 1: Overview and concepts”, August 2015 (www)

[19] ISO 14971:2007, “Medical Devices – Application of risk management to medical devices”, March 2007 (www)

[20] ISO/TS 14441:2013, "Health informatics – Security and privacy requirements of EHR systems for use in conformity assessment", December 2013 (www)

[21] ISO/IEC TS 17961:2013, “Information technology – Programming languages, their environments and system software interfaces – C secure coding rules”, November 2013 (www)

[22] ISO 16609:2012, “Financial services – Requirements for message authentication using symmetric techniques”, March 2012 (www)

[23] ISO/IEC TS 30104:2015, "Information technology – Security techniques – Physical Security Attacks, Mitigation Techniques and Security Requirements", May 2015 (www)

[24] ISO/IEC 29180:2012, "Information technology – Telecommunications and information exchange between systems – Security framework for ubiquitous sensor networks", December 2012 (www)

[25] ISO/IEC TR 26927:2011, "Information technology – Telecommunications and information exchange between systems – Corporate telecommunication networks – Mobility for enterprise communications", September 2011 (www)

[26] ISO/TR 16056-2:2004, "Health informatics – Interoperability of telehealth systems and networks – Part 2: Real-time systems", July 2004 (www)

[27] IEEE 11073 Personal Health Device Working Group (www)

[28] FDA Classify Your Medical Device (www)

[29] European Commission Medical Devices Regulatory Framework (www)

[30] China Food and Drug Administration, Regulations for the Supervision and Administration of Medical Devices (www)

[31] FDA Product Classification, Physical Activity Monitor (www)


[32] FDA Product Classification, Pulse Oximeter (www)

[33] FDA Product Classification, Sleep Apnoea Breathing Therapy Equipment (www)

[34] FDA Product Classification, Insulin Delivery Device (www)

[35] FDA Product Classification, Continuous Glucose Monitor (www)

[36] European Commission Medical Devices Directive 93/42/EEC (www)

[37] ISO/IEEE 11073-10101, "Health informatics – Point-of-care medical device communication – Part 10101: Nomenclature" (www)

[38] ISO/IEEE 11073-20101, "Health informatics – Point-of-care medical device communication – Part 20101: Application profiles – Base standard" (www)

[39] ISO/IEEE 11073-30200, "Health informatics – Point-of-care medical device communication – Part 30200: Transport profile – Cable connected" (www)

[40] ISO/IEEE 11073-30300, "Health informatics – Point-of-care medical device communication – Part 30300: Transport profile – Infrared wireless" (www)

[41] ISO/IEEE 11073-00103, "Health informatics – Personal health device communication – Part 00103: Overview" (www)

[42] ISO/IEEE 11073-10404, "Health informatics – Personal health device communication – Part 10404: Device specialization – Pulse oximeter" (www)

[43] ISO/IEEE 11073-10419, "Health informatics – Personal health device communication – Part 10419: Device specialization – Insulin Pump" (www)

[44] ISO/IEEE 11073-10424, "Health informatics – Personal health device communication – Part 10424: Device specialization – Sleep apnoea breathing therapy equipment (SABTE)" (www)

[45] ISO/IEEE 11073-10425, "Health informatics – Personal health device communication – Part 10425: Device specialization – Continuous glucose monitor (CGM)" (www)

[46] ISO/IEEE 11073-10441, "Health informatics – Personal health device communication – Part 10441: Device specialization – Cardiovascular fitness and activity monitor" (www)


[47] Bluetooth Core Specification 5.0 (www)

[48] Bluetooth Core Specification Supplement 7 (www)

[49] Bluetooth Core Specification Addendum 6 (www)

[50] Bluetooth Blood Pressure Profile (www)

[51] Bluetooth Blood Pressure Service (www)

[52] Bluetooth Continuous Glucose Monitoring Profile (www)

[53] Bluetooth Continuous Glucose Monitoring Service (www)

[54] Bluetooth Insulin Delivery Profile (www)

[55] Bluetooth Insulin Delivery Service (www)

[56] Bluetooth Glucose Profile (www)

[57] Bluetooth Glucose Service (www)

[58] Bluetooth Heart Rate Profile (www)

[59] Bluetooth Heart Rate Service (www)

[60] Bluetooth Health Thermometer Profile (www)

[61] Bluetooth Health Thermometer Service (www)

[62] Bluetooth Pulse Oximeter Profile (www)

[63] Bluetooth Pulse Oximeter Service (www)

[64] Bluetooth Weight Scale Profile (www)

[65] Bluetooth Weight Scale Service (www)

[66] FDA, “Postmarket Management of Cybersecurity in Medical Devices”, December 2016 (www)

[67] ISO 31000:2018, “Risk management – Guidelines” (www)

[68] Healthcare IT News, “Medical device vendor disables internet updates over hacking risk, FDA alerts” (www)

[69] Bloomberg Businessweek, “The $250 biohack that’s revolutionizing life with diabetes” (www)

[70] Healthcare IT News, “FDA recalls 5,000 Abbott heart” (www)


[71] Healthcare IT News, “DHS warns of security flaws in GE, Philips, Silex medical devices” (www)

[72] Wired, “Medical devices are the next security nightmare” (www)

[73] Healthcare IT News, “Hospital survival guide for a world overflowing with unsecured medical devices” (www)

[74] ISO 13485, “Medical devices – Quality management systems – Requirements for regulatory purposes” (www)

[75] ISO 14971, “Medical devices – Application of risk management to medical devices” (www)

[76] IEC 61010-1, "Safety requirements for Electrical Equipment for Measurement, Control, and Laboratory Use – Part 1: General Requirements" (www)

[77] IEC 62304, “Medical device software – Software life cycle processes” (www)

[78] IEC TR 80001-2-2, “Application of risk management for IT-networks incorporating medical devices – Part 2-2: Guidance for the disclosure and communication of medical device security needs, risks and controls.” (www)

[79] European Commission, “Directive 98/79/EC of the European parliament and of the council” (www)

[80] European Commission, “Regulation (EU) 2017/746 of the European parliament and of the council” (www)

[81] FDA CFR 21 820.30 “Design Controls” (www)

[82] NIST SP 800-30, “Guide for Conducting Risk Assessments” (www)

[83] NIST SP 800-39, “Managing Information Security Risk: Organization, Mission, and Information System View” (www)

[84] AAMI TIR57, “Principles for medical device security – Risk management” (www)

[85] IHE Patient Care Device Whitepaper, “Medical Equipment Management (MEM): Cybersecurity” (www)

[86] IEC 60601-1, “Medical electrical equipment – Part 1: General requirements for basic safety and essential performance” (www)


[87] ISO 9241-210, “Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems” (www)

[88] FDA, “Applying Human Factors and Usability Engineering to Medical Devices” (www)

[89] ISO/IEC 62366-1, “Medical devices – Part 1: Application of usability engineering to medical devices” (www)

[90] NIST, “Framework for improving critical infrastructure cybersecurity” (www)

[91] NIST Special Publication 1800-8, “Securing wireless infusion pumps in healthcare delivery organizations” (www)

[92] MSDN Magazine, “Uncover security design flaws using the STRIDE approach” (www)

[93] Wiley, “Threat modeling: Designing for security” (www)

[94] OWASP, “Threat risk modeling” (www)

[95] Barbara Kordy et al, “Foundations of Attack-Defense trees” (www)

[96] Octotrike, “Trike v.1 methodology document” (www)

[97] Lockheed Martin, “A Threat-Driven Approach to Cyber Security” (www)

[98] FIRST, “A Complete Guide to the Common Vulnerability Scoring System”, June 2007 (www)

[99] Mitre, “Common Weakness Enumeration: A community-developed list of software weakness types” (www)

[100] IEEE Cybersecurity, “Building Code for Medical Device Software Security” (www)

[101] NCCoE, “Use Case: WIRELESS MEDICAL INFUSION PUMPS”, Draft, December 2014 (www)

[102] HHS, “Summary of the HIPAA Privacy Rule” (www)


9 Appendix A – Frameworks and Methodologies

See the following references for additional details:

https://www.owasp.org/index.php/Threat_Risk_Modeling#Identify_Threats
http://people.irisa.fr/Barbara.Kordy/papers/adt.pdf
http://octotrike.org/papers/Trike_v1_Methodology_Document-draft.pdf

9.1.1 List Known Potential Vulnerabilities

One could make a simple list of all the vulnerabilities that could affect a system. While it is impossible to list all potential vulnerabilities, it is also unlikely that new threats will be created to exploit new vulnerabilities within a custom system. Therefore, one should concentrate on known threats.

9.1.2 STRIDE

STRIDE is a classification scheme, useful for system decomposition, for characterizing known threats according to the kinds of exploit that are used by the attacker. The STRIDE acronym is formed from the first letter of each of the following threat categories: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. STRIDE does not include a scoring system.

Spoofing – A key risk for systems that have many users but provide a single execution context at the application and database level. An example of identity spoofing is illegally accessing and then using another user's authentication information, such as username and password.

Tampering – Data tampering involves the malicious modification of data. Examples include unauthorized changes made to persistent data, such as that held in a database, and the alteration of data as it flows between two computers over an open network, such as the Internet.

Repudiation – Repudiation threats are associated with users who deny performing an action without other parties having any way to prove otherwise—for example, a user performs an illegal operation in a system that lacks the ability to trace the prohibited operations.

Information Disclosure – Information disclosure threats involve the exposure of information to individuals who are not supposed to have access to it—for example, the ability of users to read a file that they were not granted access to, or the ability of an intruder to read data in transit between two computers.

Denial of Service – Denial of Service (DoS) attacks deny service to valid users—for example, by making a Web server temporarily unavailable or unusable. Protection against certain types of DoS threats improves system availability and reliability.

Elevation of Privilege – An unprivileged user gains privileged access and thereby has sufficient access to compromise or destroy the entire system. Elevation of privilege threats include those situations in which an attacker has effectively penetrated all system defenses and become part of the trusted system itself, a dangerous situation indeed.


9.1.3 DREAD

DREAD is a classification scheme for quantifying, comparing, and prioritizing the amount of risk presented by each evaluated threat. DREAD modeling influences the thinking behind setting the risk rating, and is also used directly to sort the risks. The DREAD algorithm is used to compute a risk value, which is the average of all five categories. The DREAD acronym is formed from the first letter of each of the following attributes:

Damage Potential – the amount of damage caused by a successful exploit
Reproducibility – the ease of reproducing the successful exploit
Exploitability – the resources required to conduct the successful exploit
Affected Users – the number of users affected by a successful exploit
Discoverability – the ease of discovering that an attack occurred

The DREAD score is the sum of the five attribute values, each assigned an integer value from 0 to 10, divided by 5. The resulting score is interpreted as follows:

0 to <3 = low,
3 to <6 = moderate, and
6 to 10 = high.
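As a sketch, the scoring rule above can be written out directly; the function and parameter names are illustrative, not taken from the whitepaper:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average of the five DREAD attributes, each an integer from 0 to 10."""
    attrs = (damage, reproducibility, exploitability, affected_users, discoverability)
    if not all(0 <= a <= 10 for a in attrs):
        raise ValueError("each DREAD attribute must be in the range 0..10")
    return sum(attrs) / 5

def dread_rating(score):
    """Map a DREAD score onto the low/moderate/high bands defined above."""
    if score < 3:
        return "low"
    if score < 6:
        return "moderate"
    return "high"

# Example: a remotely exploitable flaw affecting all users of a device
score = dread_score(damage=8, reproducibility=8, exploitability=7,
                    affected_users=10, discoverability=7)
# 40 / 5 = 8.0, which falls in the "high" band
```

Because the score is a plain average, a single high attribute (e.g., Damage Potential = 10) can be diluted by low values elsewhere, which is a known criticism of DREAD.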

9.1.4 Trike

Trike is a threat modeling framework with similarities to the STRIDE and DREAD threat modeling processes. However, Trike differs in that it uses a risk-based approach with distinct implementation, threat, and risk models, instead of the STRIDE/DREAD aggregated threat model (attacks, threats, and weaknesses). From the Trike paper, Trike's goals are:

1. With assistance from the system stakeholders, to ensure that the risk this system entails to each asset is acceptable to all stakeholders.
2. Be able to tell whether we have done this.
3. Communicate what we've done and its effects to the stakeholders.
4. Empower stakeholders to understand and reduce the risks to them and other stakeholders implied by their actions within their domains.

9.1.5 OCTAVE

OCTAVE is a heavyweight risk methodology approach originating from Carnegie Mellon University’s Software Engineering Institute (SEI) in collaboration with CERT. OCTAVE focuses on organizational risk, not technical risk.

OCTAVE comes in two versions: Full OCTAVE, for large organizations, and OCTAVE-S, for small organizations, both of which have specific catalogs of practices, profiles, and worksheets to document the modeling outcomes.

OCTAVE is useful when:


- Implementing an organizational culture of risk management and controls becomes necessary.
- Documenting and measuring business risk becomes timely.
- Documenting and measuring the overall IT security risk, particularly as it relates to corporate IT risk management, becomes necessary.
- Documenting risks surrounding complete systems becomes necessary.
- A fundamental reorganization must be accommodated, such as when an organization does not have a working risk methodology in place and requires a robust risk management framework to be put in place.

The limitations of OCTAVE are:

- OCTAVE mandates Likelihood = 1 (i.e., it assumes a threat will always occur) and this is inappropriate for many organizations. OCTAVE-S makes the inclusion of this probability optional, but this is not part of the more comprehensive OCTAVE standard.
- Consisting of 18 volumes, OCTAVE is large and complex, with many worksheets and practices to implement.
- It does not provide a list of "out of the box" practices for assessing and mitigating security risks.

9.1.6 Attack-Defense Trees

An Attack-Defense Tree (ADTree) is a node-labeled rooted tree describing the measures an attacker might take in order to attack a system and the defenses that a defender can employ to protect the system. The two key features of an ADTree are the representation of refinements and countermeasures. Every node may have one or more children of the same type, representing a refinement of the node's goal into sub-goals. If a node does not have any children of the same type, it represents a basic action. Every node may also have one child of the opposite type, representing a countermeasure. Thus, an attack node may have several children which refine the attack and one child which defends against the attack. The defending child in turn may have several children which refine the defense and one child that is an attack node and counters the defense.

Attack-Defense Trees are conceptual diagrams showing how an asset, or target, might be attacked, which details all possible attack methods to the lowest level of the attack. Attack-Defense Trees support a wide variety of approaches.
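A minimal sketch of the structure just described, with illustrative class and field names (the example attack goals are hypothetical, not drawn from the whitepaper):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ADNode:
    """A node in an Attack-Defense Tree: an attack goal or a defense goal."""
    label: str
    kind: str                                                  # "attack" or "defense"
    refinements: List["ADNode"] = field(default_factory=list)  # children of the SAME kind
    countermeasure: Optional["ADNode"] = None                  # at most one child of the OPPOSITE kind

def basic_actions(node: ADNode) -> List[str]:
    """Nodes with no same-kind children represent basic actions."""
    if not node.refinements:
        return [node.label]
    actions: List[str] = []
    for child in node.refinements:
        actions.extend(basic_actions(child))
    return actions

# An attack goal refined into two sub-goals; one sub-goal is countered by a defense.
steal_data = ADNode("steal health data", "attack", refinements=[
    ADNode("sniff BLE traffic", "attack",
           countermeasure=ADNode("encrypt the link", "defense")),
    ADNode("extract from stolen gateway", "attack"),
])
```

The countermeasure child could itself carry refinements and a counter-attack child, mirroring the alternating attack/defense structure described above.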


10 Appendix B – STRIDE

STRIDE is a threat classification model developed by Microsoft for thinking about computer information security threats. STRIDE is a mnemonic for information security threats in six categories, as listed in Table 10-16.

Table 10-16 – STRIDE categories and security properties

Spoofing (security property: Authentication) – A key risk for systems that have many users but provide a single execution context at the application and database level. An example of identity spoofing is illegally accessing and then using another user's authentication information, such as username and password.

Tampering (security property: Integrity) – Data tampering involves the malicious modification of data. Examples include unauthorized changes made to persistent data, such as that held in a database, and the alteration of data as it flows between two computers over an open network, such as the Internet.

Repudiation (security property: Non-repudiation) – Repudiation threats are associated with users who deny performing an action without other parties having any way to prove otherwise—for example, a user performs an illegal operation in a system that lacks the ability to trace the prohibited operations.

Information Disclosure (security property: Confidentiality) – Information disclosure threats involve the exposure of information to individuals who are not supposed to have access to it—for example, the ability of users to read a file that they were not granted access to, or the ability of an intruder to read data in transit between two computers.

Denial of Service (security property: Availability) – DoS attacks deny service to valid users—for example, by making a Web server temporarily unavailable or unusable. Protection against certain types of DoS threats improves system availability and reliability.

Elevation of Privilege (security property: Authorization) – An unprivileged user gains privileged access and thereby has sufficient access to compromise or destroy the entire system. Elevation of privilege threats include those situations in which an attacker has effectively penetrated all system defenses and become part of the trusted system itself, a dangerous situation indeed.

The main use of STRIDE is to decompose a system threat model into a threat list by analyzing each component for susceptibility to the threats. The resulting threat list can be used as an input into vulnerability assessment (see Sections 2.9 and 3.2.3), where the vulnerability behind each threat is scored to determine its risk, and vulnerabilities of unacceptable risk are mitigated by adding information security controls to the system. The resulting system with additional information security controls becomes input for the next vulnerability assessment, and the scoring and mitigation process is repeated. This iterative approach continues until the remaining vulnerabilities are determined to be of an acceptable level of risk. The rationale for this approach is that mitigation of all high-risk vulnerabilities identified by each threat for each component of the system produces a secure system. In this work, a threat model is used to decompose the system (see Sections 2.8 and 3.2.2).

Table 10-17 lists each element of the DFD and shows the STRIDE threat categories to which it is susceptible.

Table 10-17 – Data flow diagram element and STRIDE threat category

Data Flows – Tampering, Information Disclosure, Denial of Service
Data Stores – Tampering, Information Disclosure, Denial of Service
Processes – Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege
Interactors – Spoofing, Repudiation
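The decomposition step can be sketched as a simple lookup, with the mapping taken from Table 10-17; the element names and helper function below are illustrative, not part of the whitepaper:

```python
# STRIDE categories to which each DFD element type is susceptible (per Table 10-17).
SUSCEPTIBILITY = {
    "data_flow":  {"Tampering", "Information Disclosure", "Denial of Service"},
    "data_store": {"Tampering", "Information Disclosure", "Denial of Service"},
    "process":    {"Spoofing", "Tampering", "Repudiation", "Information Disclosure",
                   "Denial of Service", "Elevation of Privilege"},
    "interactor": {"Spoofing", "Repudiation"},
}

def enumerate_threats(elements):
    """Produce a raw threat list: one (element, category) pair per susceptibility."""
    return [(name, category)
            for name, element_type in elements
            for category in sorted(SUSCEPTIBILITY[element_type])]

# A toy PHD data-flow diagram: device process, BLE measurement flow, gateway process.
dfd = [("PHD application", "process"),
       ("BLE measurement flow", "data_flow"),
       ("Gateway service", "process")]
threats = enumerate_threats(dfd)
```

Each pair in the resulting list is a candidate threat to be scored in the vulnerability assessment described above.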

Table 10-18 lists the possible vulnerability types for each STRIDE category.

Table 10-18 – Vulnerability type for STRIDE threat category

Spoofing – Spoofing of Source External Actor: An external actor may be spoofed by an attacker and this may lead to unauthorized access to a process or data store. Consider using a standard authentication mechanism to identify the external actor.

Spoofing – Spoofing of Destination External Actor: An external actor may be spoofed by an attacker and this may lead to data being sent to the attacker's target instead of the external actor. Consider using a standard authentication mechanism to identify the external actor.

Spoofing – Spoofing of Source Process: A process may be spoofed by an attacker and this may lead to unauthorized access to another process or data store. Consider using a standard authentication mechanism to identify the source process.

Spoofing – Spoofing of Destination Process: A process may be spoofed by an attacker and this may lead to information disclosure by an external actor, another process, or data store. Consider using a standard authentication mechanism to identify the destination process.

Spoofing – Spoofing of Source Data Store: A data store may be spoofed by an attacker and this may lead to incorrect data delivered to the requesting process or external actor. Consider using a standard authentication mechanism to identify the source data store.

Spoofing – Spoofing of Destination Data Store: A data store may be spoofed by an attacker and this may lead to data being written to the attacker's target instead of the data store. Consider using a standard authentication mechanism to identify the destination data store.

Tampering – Potential Lack of Input Validation for a Process: Data flowing across an interface may be tampered with by an attacker. This may lead to a denial of service attack against a process or data store, an elevation of privilege attack against a process or data store, or an information disclosure by a process or data store. Failure to verify that input is as expected is a root cause of a very large number of exploitable issues. Consider all paths and the way they handle data. Verify that all input is verified for correctness using an approved list input validation approach.

Repudiation – Potential Data Repudiation by Process: A process claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data.

Repudiation – External Actor Potentially Denies Receiving Data: An external actor claims that it did not receive data from a process on the other side of the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data.

Information Disclosure – Data Flow Sniffing: Data flowing across an interface may be sniffed by an attacker. Depending on what type of data an attacker can read, it may be used to attack other parts of the system or simply be a disclosure of information leading to compliance violations. Consider encrypting the data flow.

Information Disclosure – Weak Access Control for a Resource: Improper data protection of a data store can allow an attacker to read information not intended for disclosure. Review authorization settings.

Denial of Service – Data Flow Is Potentially Interrupted: An external agent interrupts data flowing across a trust boundary in either direction.

Denial of Service – Potential Process Crash or Stop: A process crashes, halts, stops, or runs slowly; in all cases violating an availability metric.

Denial of Service – Potential Excessive Resource Consumption for a Process or a Data Store: Does a process or a data store take explicit steps to control resource consumption? Resource consumption attacks can be hard to deal with, and there are times when it makes sense to let the OS do the job. Be careful that your resource requests don't deadlock, and that they do time out.

Elevation of Privilege – Cross Site Request Forgery: Cross-site request forgery (CSRF or XSRF) is a type of attack in which an attacker forces a user's browser to make a forged request to a vulnerable site by exploiting an existing trust relationship between the browser and the vulnerable web site. In a simple scenario, a user is logged in to web site A using a cookie as a credential. The user then browses to web site B. Web site B returns a page with a hidden form that posts to web site A. Since the browser will carry the user's cookie to web site A, web site B now can take any action on web site A, for example, adding an admin to an account. The attack can be used to exploit any requests that the browser automatically authenticates, e.g., by session cookie, integrated authentication, or IP whitelisting. The attack can be carried out in many ways, such as by luring the victim to a site under the control of the attacker, getting the user to click a link in a phishing email, or hacking a reputable web site that the victim will visit. The issue can only be resolved on the server side by requiring that all authenticated state-changing requests include an additional piece of secret payload (canary or CSRF token) which is known only to the legitimate web site and the browser and which is protected in transit through SSL/TLS. See the Forgery Protection property on the flow stencil for a list of mitigations.

Elevation of Privilege – Elevation by Changing the Execution Flow in a Process: An attacker may pass data into a process in order to change the flow of program execution within the process to the attacker's choosing.

Elevation of Privilege – Elevation Using Impersonation: A process or external actor may be able to impersonate the context of another process or external actor in order to gain additional privilege.

Elevation of Privilege – A Process May Be Subject to Elevation of Privilege Using Remote Code Execution: A process may be able to remotely execute code for another process.

OWASP has taken the STRIDE threat categories and assigned appropriate primary mitigation techniques to address the vulnerabilities that each STRIDE threat exploits. Table 10-19 lists the suggested primary mitigation techniques.

Table 10-19 – STRIDE threat category and primary mitigation techniques

Spoofing – Authentication; protect secrets and secret data; do not store secrets.

Tampering – Authorization; MACs; digital signatures; input validation and sanitization; physical tamper resistance; physical tamper evidence.

Repudiation – Digital signatures; audit trails.

Information Disclosure – Authorization; privacy-enhanced protocols; encryption; protect secrets and secret data; do not store secrets.

Denial of Service – Authentication; authorization; filtering; throttling; quality of service.

Elevation of Privilege – Authorization; run with least privilege; end-user signalization.


11 Appendix C – CVSS and eCVSS

The Common Vulnerability Scoring System (CVSS) is an open framework for normalized scoring of vulnerabilities across disparate hardware and software platforms and can be used to prioritize vulnerabilities that pose the greatest risk to the system for remediation [97].

CVSS is composed of three metric groups: Base, Temporal, and Environmental. Each metric group is composed of a set of metrics as shown in Figure 11-15.

Figure 11-15 CVSS metric groups and attributes

Using the 14 separate attributes from the three metric groups of CVSS, one can score a vulnerability from 0 to 10 with a granularity of tenths of a unit, where a score of 0 means no issue and a score of 10 means the highest level of concern. The CVSS metric groups are described in Table 11-20.

Table 11-20 – CVSS metric group description

Base – Represents the intrinsic and fundamental characteristics of a vulnerability that are constant over time and across user environments.

Temporal – Represents the characteristics of a vulnerability that change over time but not among user environments.

Environmental – Represents the characteristics of a vulnerability that are relevant and unique to a particular user's environment.
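As a sketch of how such a score is derived, the base-score equation can be written out directly. The numeric weights and formula below are assumptions taken from the publicly published CVSS v2 specification, not from this whitepaper:

```python
# CVSS v2 metric weights (assumed from the public CVSS v2 specification).
ACCESS_VECTOR  = {"L": 0.395, "A": 0.646, "N": 1.0}    # Local / Adjacent / Network
ACCESS_COMPLEX = {"H": 0.35, "M": 0.61, "L": 0.71}     # High / Medium / Low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}    # Multiple / Single / None
IMPACT         = {"N": 0.0, "P": 0.275, "C": 0.660}    # None / Partial / Complete (shared by C, I, A)

def cvss2_base_score(av, ac, au, c, i, a):
    """CVSS v2 base score from the six base metrics, rounded to one decimal."""
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEX[ac] * AUTHENTICATION[au]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Remotely exploitable, low complexity, no authentication, complete C/I/A loss.
worst_case = cvss2_base_score("N", "L", "N", "C", "C", "C")
```

Note that CVSS v3 replaces these weights and metrics with a different equation, so this sketch applies to v2 scoring only.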

11.1 Common Vulnerability Scoring System

The Common Vulnerability Scoring System (CVSS) is an open industry standard for normalized scoring of vulnerabilities across disparate hardware and software platforms. Assigning scores to vulnerabilities allows prioritization to guide which vulnerabilities should be mitigated first. The CVSS assessment is composed of metrics in three areas of concern: Base Metrics, Temporal Metrics, and Environmental Metrics.

11.1.1 Base Metrics

The base metric group captures the characteristics of a vulnerability that are constant over time and across user environments. The Access Vector, Access Complexity, and Authentication metrics capture how the vulnerability is accessed and whether or not extra conditions are required to exploit it. The Confidentiality, Integrity, and Availability Impact metrics measure how a vulnerability, if exploited, will directly affect an asset.

Table 11-21 provides a definition for each base metric as well as the possible values.

Table 11-21 – Base metrics and values descriptions

Base Metric: Access Vector (AV)
Description: How the vulnerability is exploited. The more remote the attacker can be to attack a system, the greater the score.
    Local (L): Attacker requires physical access to the device.
    Adjacent (A): Attacker requires access to a broadcast or very short-range communications channel.
    Network (N): Attacker requires access to a WAN or the internet.

Base Metric: Access Complexity (AC)
Description: The complexity of the attack required to exploit the vulnerability once an attacker has gained access to the system. The lower the required complexity, the higher the vulnerability score.
    High (H): Specialized access conditions exist.
    Medium (M): The access conditions are somewhat specialized.
    Low (L): Specialized access conditions or extenuating circumstances do not exist.

Base Metric: Authentication (Au)
Description: The number of times an attacker has to authenticate to a system in order to exploit a vulnerability. The fewer authentication instances that are required, the higher the vulnerability score.
    Multiple (M): Exploiting the vulnerability requires that the attacker authenticate two or more times.
    Single (S): One instance of authentication is required to access and exploit the vulnerability.
    None (N): Authentication is not required to access and exploit the vulnerability.

Base Metric: Confidentiality Impact (C)
Description: The impact on confidentiality of a successfully exploited vulnerability.
    None (N): There is no impact to the confidentiality of the system.
    Partial (P): There is considerable information disclosure. Access to some system files is possible, but the attacker does not have control over what is obtained, or the scope of the loss is constrained.
    Complete (C): There is total information disclosure, resulting in all system files being revealed.

Base Metric: Integrity Impact (I)
Description: The impact to integrity of a successfully exploited vulnerability.
    None (N): There is no impact to the integrity of the system.
    Partial (P): Modification of some system files or information is possible, but the attacker does not have control over what can be modified, or the scope of what the attacker can affect is limited.
    Complete (C): There is a total compromise of system integrity.

Base Metric: Availability Impact (A)
Description: The impact to availability of a successfully exploited vulnerability.
    None (N): There is no impact to the availability of the system.
    Partial (P): There is reduced performance or interruptions in resource availability.
    Complete (C): There is a total shutdown of the target system, rendering the system's principal functionality non-operational.
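To make the scoring concrete, the C/I/A impact values above feed the base Impact sub-score defined in Appendix D. A minimal, illustrative sketch, using the numeric equivalents tabulated in Appendix E (None = 0.0, Partial = 0.275, Complete = 0.660):

```python
# Illustrative sketch: the base Impact sub-score from Appendix D,
# using the numeric equivalents tabulated in Appendix E.
def impact(c, i, a):
    # c, i, a are the numeric C/I/A impact values
    # (None = 0.0, Partial = 0.275, Complete = 0.660).
    return 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))

# Partial impact on all of confidentiality, integrity, and availability:
print(round(impact(0.275, 0.275, 0.275), 2))  # → 6.44
```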

11.1.2 Temporal Metrics

The temporal metrics capture the characteristics of a vulnerability that change over time. Three such factors that CVSS captures are: confirmation of the technical details of the vulnerability, the remediation status of the vulnerability, and the availability of exploit code or techniques. Since temporal metrics are optional, each includes a metric value (Not Defined) that has no effect on the score. This value is used when the user feels the particular metric does not apply and wishes to “skip over” it.

Table 11-22 provides a definition for each temporal metric as well as the possible values.

Table 11-22 – Temporal metrics and values descriptions

Temporal Metric: Exploitability (E)
Description: The current state of exploit techniques or code availability. The more easily a vulnerability can be exploited, the higher the vulnerability score.
    Unproven (U): No exploit code is available, or an exploit is entirely theoretical.
    Proof-of-Concept (POC): Proof-of-concept exploit code, or an attack demonstration that is not practical for most systems, is available.
    Functional (F): Functional exploit code is available.
    High (H): Either the vulnerability is exploitable by functional mobile autonomous code, or no exploit is required (manual trigger) and details are widely available.
    Not Defined (ND): Assigning this value to the metric will not influence the score.

Temporal Metric: Remediation Level (RL)
Description: The level of the fix available for the vulnerability. The less official and permanent the fix, the higher the vulnerability score.
    Official Fix (OF): A complete vendor solution is available.
    Temporary Fix (TF): There is an official but temporary fix available.
    Workaround (W): There is an unofficial, non-vendor solution available.
    Unavailable (U): There is either no solution available or it is impossible to apply.
    Not Defined (ND): Assigning this value to the metric will not influence the score.

Temporal Metric: Report Confidence (RC)
Description: The degree of confidence in the existence of the vulnerability and the credibility of the known technical details.
    Unconfirmed (UC): There is a single unconfirmed source or possibly multiple conflicting reports.
    Uncorroborated (UR): There are multiple non-official sources, possibly including independent security companies or research organizations.
    Confirmed (C): The vulnerability has been acknowledged by the vendor or author of the affected technology.
    Not Defined (ND): Assigning this value to the metric will not influence the score.

11.1.3 Environmental Metrics

The environmental metrics capture the characteristics of a vulnerability that are associated with a user’s IT environment. Since environmental metrics are optional, each includes a metric value (Not Defined) that has no effect on the score. This value is used when the user feels the particular metric does not apply and wishes to “skip over” it.

Table 11-23 provides a definition for each environmental metric as well as the possible values.

Table 11-23 – Environmental metrics and values descriptions

Environmental Metric: Collateral Damage Potential (CDP)
Description: The potential for loss of life or physical assets through damage or theft of property or equipment. This metric may also measure economic loss of productivity or revenue. The greater the damage potential, the higher the vulnerability score.
    None (N): There is no potential for loss of life, physical assets, productivity, or revenue.
    Low (L): A successful exploit of this vulnerability may result in slight physical damage, property damage, or loss of revenue or productivity.
    Low-Medium (LM): A successful exploit of this vulnerability may result in moderate physical damage, property damage, or loss of revenue or productivity.
    Medium-High (MH): A successful exploit of this vulnerability may result in significant physical damage, property damage, or loss of revenue or productivity.
    High (H): A successful exploit of this vulnerability may result in catastrophic physical damage, property damage, or loss of revenue or productivity.
    Not Defined (ND): Assigning this value to the metric will not influence the score.

Environmental Metric: Target Distribution (TD)
Description: The proportion of vulnerable systems; the percentage of systems that could be affected by the vulnerability. The greater the proportion of vulnerable systems, the higher the score.
    None (N): No target systems exist, or targets are so highly specialized that they only exist in a laboratory setting.
    Low (L): Targets exist inside the environment, but on a small scale.
    Medium (M): Targets exist inside the environment, but on a medium scale.
    High (H): Targets exist inside the environment on a considerable scale.
    Not Defined (ND): Assigning this value to the metric will not influence the score.

Environmental Metric: Confidentiality Requirement (CR), Integrity Requirement (IR), and Availability Requirement (AR)
Description: Enables the analyst to customize the score depending on the importance of the affected IT asset to a user’s organization, measured in terms of confidentiality, integrity, and availability.
    Low (L): Loss of [confidentiality | integrity | availability] is likely to have only a limited adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).
    Medium (M): Loss of [confidentiality | integrity | availability] is likely to have a serious adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).
    High (H): Loss of [confidentiality | integrity | availability] is likely to have a catastrophic adverse effect on the organization or individuals associated with the organization (e.g., employees, customers).
    Not Defined (ND): Assigning this value to the metric will not influence the score.

11.1.4 Equations

Scoring equations and algorithms for the base, temporal and environmental metric groups are described in Appendix D – Scoring Equations. Further discussion of the origin and testing of these equations is available at www.first.org/cvss.

11.1.5 Vector

The values of the CVSS metrics are typically described in a compressed format called a vector. A vector consists of metric/value pairs, with each metric separated from its value by a colon. Both the metric and its value are represented by letter codes (see Table 11-21, Table 11-22, and Table 11-23). The following is an example of a vector:

AV:L AC:H Au:N C:N I:P A:C E:U RL:TF RC:UC CDP:L TD:L CR:M IR:L AR:M
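Because the vector is a simple sequence of Metric:Value pairs, it is straightforward to split programmatically. A minimal Python sketch (the helper name `parse_vector` is illustrative, not part of CVSS):

```python
# Minimal sketch: split a space-separated CVSS vector (as in the
# example above) into a metric -> value-code dictionary.
def parse_vector(vector):
    return dict(pair.split(":") for pair in vector.split())

metrics = parse_vector("AV:L AC:H Au:N C:N I:P A:C E:U RL:TF RC:UC CDP:L TD:L CR:M IR:L AR:M")
print(metrics["AV"], metrics["RL"])  # → L TF
```

Note that the full vector carries exactly the 14 attributes described in the three metric groups.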

11.2 Embedded Common Vulnerability Scoring System

The original version of CVSS was designed to address software-only systems and to create scores after those systems were in the field. For this effort, changes to CVSS were needed to support physical medical devices and to create scores at design time that guide the development of systems.


This resulted in the embedded Common Vulnerability Scoring System (eCVSS) being created as a slightly modified branch of CVSS 2.0. This is not an effort of the Forum of Incident Response and Security Teams (FIRST); it was instead proposed by members of this work.

Changes to CVSS are as follows:

- The Temporal group was effectively removed by forcing its three attributes to a neutral value, since scoring is conducted at design time.

- The three “Requirement” attributes in the Environmental group (i.e., Confidentiality, Integrity, and Availability) were recognized to be system-wide attributes. They are set only once for the system and inform all identified vulnerabilities.

- The Target Distribution attribute was removed because it refers to the distribution of the system, and this scoring is conducted at design time. A new Awareness attribute replaces it.

Based on these changes, eCVSS has the following 8 attributes to evaluate each vulnerability:

- Access Vector
- Access Complexity
- Authentication
- Confidentiality Impact
- Integrity Impact
- Availability Impact
- Collateral Damage Potential
- Awareness

Table 11-24 describes the new Awareness attribute and any changes in the meaning of existing metrics. Otherwise, see Section 11.1 for the original definitions provided in CVSS. While the definitions may have been altered from their original CVSS meaning, the original intent has been preserved in eCVSS.

Table 11-24 – eCVSS updated definitions of metrics

Metric: Authentication (Au)
Description: The strength of the authentication process that must be defeated to exploit the vulnerability.
    Multiple (M): Authentication employs industry best practice for vetting the authenticity of the user or device. Examples include: storing hashed credentials only; multiple levels of authentication; enforced unique credentials.
    Single (S): Authentication is easily defeated or uses a weak method for vetting. Examples include: storing or transmitting credentials in plain text; fixed (i.e., hard-coded) credentials; automatic trust based on device type.
    None (N): Authentication is not required to access and exploit the vulnerability.

Metric: Awareness (Aw)
Description: The ability of an exploit of the vulnerability to be detected by the system or its user. It is meant as an environment-specific indicator that lowers the score when an exploit would be detected.
    None (N): The exploit cannot be detected by the user or the device.
    User (U): The exploit is detectable by the user; for example, the device case shows obvious alterations or tamper evidence.
    Automatic (A): The exploit is detectable by the device (either software or hardware).
    Complete (C): The exploit is detectable by both the user and the device.
    Not Defined (ND): Assigning this value to the metric will not influence the score.

Metric: Confidentiality Requirement (CR), Integrity Requirement (IR), and Availability Requirement (AR)
Description: Enables the analyst to customize the score depending on the importance of the affected target device to an organization, measured in terms of confidentiality, integrity, and availability.
    Low (L): Loss of [confidentiality | integrity | availability] is likely to have only a limited adverse effect on the organization or users of the device.
    Medium (M): Loss of [confidentiality | integrity | availability] is likely to have a serious adverse effect on the organization or users of the device.
    High (H): Loss of [confidentiality | integrity | availability] is likely to have a catastrophic adverse effect on the organization or users of the device.
    Not Defined (ND): Assigning this value to the metric will not influence the score.

11.2.1 Impact Safety Efficacy

Since our approach is conducted against PHDs, we also consider impacts to patient safety. Any vulnerability that impacts safety is marked and should be addressed as part of patient risk management. While safety and security may be coupled and could be commonly mitigated, this work is not intended to address impacts to safety. Note, however, that the CVSS Environmental metric Collateral Damage Potential does consider patient harm.

11.2.2 Suggested Collateral Damage Value

This work found that the CVSS Environmental metric Collateral Damage Potential may be affected by subjectivity, making repeatable scoring difficult. To improve this situation, we have included a Suggested Collateral Damage Value within our assessment. This suggested value is determined by considering, for every vulnerability, the potential for business damage, legal damage, and patient damage. The Suggested Collateral Damage Value is only a suggestion, and the assessor may set any value for the CVSS Environmental metric Collateral Damage Potential. Table 11-25 provides a definition for each type of damage value.

Table 11-25 – Suggested collateral damage value definitions


Type of Damage Potential: Business, Legal, or Patient

    None: There is no damage related to business, legal, or the patient.
    Low: There is potential for a minor level of damage related to business, legal, or the patient. For example, the attack only affects a single instance of the product, some patients/physicians lose confidence in the product, or there is minor patient harm.
    Medium: There is potential for a major level of damage related to business, legal, or the patient. For example, a regulator forces a recall, the attack may affect one or more batches of the product but not all products, there is moderate patient harm (excluding life-threatening harm), or the public loses confidence in the product.
    High: There is potential for a catastrophic level of damage related to business, legal, or the patient. For example, a regulator forces a stop of sale, the attack may affect all instances of the product, legal action threatens the business’s viability, there is severe patient harm (including death), or the public loses confidence in the brand/company.

The Suggested Collateral Damage Value is computed using the following definitions and logic:

BDP = Business Damage Potential

LDP = Legal Damage Potential

PDP = Patient Damage Potential

SCDV = Suggested Collateral Damage Value

If (no values entered in BDP, LDP, and PDP) {
    SCDV = ""
} Else If (all damage potential values == none) {
    SCDV = "None"
} Else If (any damage potential value == high) {
    SCDV = "High"
} Else If (2 of the damage potential values == medium) {
    SCDV = "Medium-High"
} Else If (only 1 damage potential value == medium AND any other damage potential value == low) {
    SCDV = "Low-Medium"
} Else If (any damage potential value == low) {
    SCDV = "Low"
} Else {
    SCDV = "None"
}
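The pseudocode above can be sketched in Python as follows. This is a minimal, illustrative implementation: the “2 of the damage potential values == medium” branch is interpreted as “at least two”, and `None` represents a value that was not entered.

```python
def suggested_collateral_damage(bdp, ldp, pdp):
    # Each argument is "none", "low", "medium", "high", or None (not entered).
    values = [bdp, ldp, pdp]
    if all(v is None for v in values):
        return ""
    if all(v == "none" for v in values):
        return "None"
    if any(v == "high" for v in values):
        return "High"
    if sum(v == "medium" for v in values) >= 2:  # "2 of the values == medium"
        return "Medium-High"
    if sum(v == "medium" for v in values) == 1 and any(v == "low" for v in values):
        return "Low-Medium"
    if any(v == "low" for v in values):
        return "Low"
    return "None"
```

For example, `suggested_collateral_damage("medium", "low", "none")` yields `"Low-Medium"`.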

11.2.3 Equations

Scoring equations and algorithms for the base and environmental metric groups are described in Appendix D – Scoring Equations.

11.2.4 Vector


Similar to CVSS, eCVSS metrics and their values can be represented in a compressed format known as a vector. An eCVSS vector uses letter codes from Table 11-21, Table 11-23, and Table 11-24. See Section 11.1.5 for additional details.


12 Appendix D – Scoring Equations

CVSS was originally designed to be used after release of a software system, whereas eCVSS is a design-time (i.e., “pre-release”) analysis tool. Accordingly, attributes that are not relevant for a design-time assessment were removed: the eCVSS equations omit the Temporal score and the temporal terms (Exploitability, RemediationLevel, ReportConfidence) that appear in the original CVSS v2 equations.

The other change is in the EnvironmentalScore equation, where Awareness replaces TargetDistribution.

12.1 Original CVSS v2 Equations

If Base Section completed
    Impact = 10.41 * (1 - (1 - ConfidentialityImpact) * (1 - IntegrityImpact) * (1 - AvailabilityImpact))
    If Impact == 0 then
        fImpact = 0
    else
        fImpact = 1.176
    ExploitabilityScore = 20 * AccessComplexity * Authentication * AccessVector
    BaseScore = Round_To_Tenths(((0.6 * Impact) + (0.4 * ExploitabilityScore) - 1.5) * fImpact)

If Temporal Section completed Then
    TemporalScore = Round_To_Tenths(BaseScore * Exploitability * RemediationLevel * ReportConfidence)

If Environment Section completed Then
    AdjustedImpact = Round_To_Tenths(Minimum(10, 10.41 * (1 - (1 - ConfidentialityImpact * ConfidentialityRequirement) *
        (1 - IntegrityImpact * IntegrityRequirement) *
        (1 - AvailabilityImpact * AvailabilityRequirement))))
    AdjustedBaseScore = Round_To_Tenths(((0.6 * AdjustedImpact) + (0.4 * ExploitabilityScore) - 1.5) * fImpact)
    AdjustedTemporalScore = Round_To_Tenths(AdjustedBaseScore * Exploitability * RemediationLevel * ReportConfidence)
    EnvironmentalScore = Round_To_Tenths((AdjustedTemporalScore + (10 - AdjustedTemporalScore) * CollateralDamagePotential) * TargetDistribution)

If EnvironmentalScore == 0 Then
    OverallScore = BaseScore
Else
    OverallScore = EnvironmentalScore
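The base-score equation above can be sketched directly in Python, using the standard CVSS v2 numeric equivalents (the same values listed in Appendix E). This is an illustrative implementation, not the FIRST reference code:

```python
# Illustrative sketch of the CVSS v2 base equation above, using the
# standard v2 numeric equivalents (see also Appendix E).
AV = {"L": 0.395, "A": 0.646, "N": 1.0}      # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}       # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}      # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}      # C/I/A impact

def round_to_tenths(x):
    # The small bias guards against binary floating-point edge cases.
    return round(x + 1e-9, 1)

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    f_impact = 0 if impact == 0 else 1.176
    exploitability = 20 * AC[ac] * AU[au] * AV[av]
    return round_to_tenths((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact)

print(base_score("N", "L", "N", "P", "P", "P"))  # → 7.5
```

For example, AV:N/AC:L/Au:N/C:P/I:P/A:P yields the familiar v2 base score of 7.5.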


12.2 eCVSS Equations

If Base Section completed
    Impact = 10.41 * (1 - (1 - ConfidentialityImpact) * (1 - IntegrityImpact) * (1 - AvailabilityImpact))
    If Impact == 0 then
        fImpact = 0
    else
        fImpact = 1.176
    ExploitabilityScore = 20 * AccessComplexity * Authentication * AccessVector
    BaseScore = Round_To_Tenths(((0.6 * Impact) + (0.4 * ExploitabilityScore) - 1.5) * fImpact)

If Environment Section completed Then
    AdjustedImpact = Round_To_Tenths(Minimum(10, 10.41 * (1 - (1 - ConfidentialityImpact * ConfidentialityRequirement) *
        (1 - IntegrityImpact * IntegrityRequirement) *
        (1 - AvailabilityImpact * AvailabilityRequirement))))
    AdjustedBaseScore = Round_To_Tenths(((0.6 * AdjustedImpact) + (0.4 * ExploitabilityScore) - 1.5) * fImpact)
    AdjustedTemporalScore = Round_To_Tenths(AdjustedBaseScore)
    EnvironmentalScore = Round_To_Tenths((AdjustedTemporalScore + (10 - AdjustedTemporalScore) * CollateralDamagePotential) * Awareness)

If EnvironmentalScore == 0 Then
    OverallScore = BaseScore
Else
    OverallScore = EnvironmentalScore
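As with the CVSS equations, the eCVSS environmental adjustment can be sketched in Python. This is illustrative only, assuming the Appendix E numeric equivalents, with the base score and exploitability sub-score passed in from the base equations:

```python
# Illustrative sketch of the eCVSS environmental equations above,
# assuming the numeric equivalents listed in Appendix E.
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}                       # C/I/A impact
REQ = {"L": 0.5, "M": 1.0, "H": 1.51, "ND": 1.0}              # CR/IR/AR
CDP = {"ND": 0.0, "N": 0.0, "L": 0.1, "LM": 0.3, "MH": 0.4, "H": 0.5}
AW = {"ND": 1.0, "C": 0.84, "A": 0.68, "U": 0.51, "N": 0.0}   # Awareness

def round_to_tenths(x):
    return round(x + 1e-9, 1)

def ecvss_overall(base, exploitability, c, i, a, cr, ir, ar, cdp, aw):
    adj_impact = round_to_tenths(min(10, 10.41 * (1 - (1 - CIA[c] * REQ[cr])
                                                    * (1 - CIA[i] * REQ[ir])
                                                    * (1 - CIA[a] * REQ[ar]))))
    f_impact = 0 if adj_impact == 0 else 1.176
    adj_base = round_to_tenths((0.6 * adj_impact + 0.4 * exploitability - 1.5) * f_impact)
    adj_temporal = adj_base  # temporal terms are neutral in eCVSS
    env = round_to_tenths((adj_temporal + (10 - adj_temporal) * CDP[cdp]) * AW[aw])
    return base if env == 0 else env
```

Note that when Awareness is None (0.0), the environmental score collapses to 0 and the overall score falls back to the base score, per the equations above.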


13 Appendix E – eCVSS Metric Value Numeric Equivalent

Metric Group: Base
    Access Vector: Not Defined 0.000, Local 0.395, Adjacent Network 0.646, Network 1.000
    Access Complexity: Not Defined 0.000, High 0.350, Medium 0.610, Low 0.710
    Authentication: Not Defined 0.000, None 0.704, Single Instance 0.560, Multiple Instances 0.450
    Confidentiality/Integrity/Availability Impact: Not Defined 0.000, None 0.000, Partial 0.275, Complete 0.660

Metric Group: Environmental
    Collateral Damage Potential: Not Defined 0.000, None 0.000, Low 0.100, Low-Medium 0.300, Medium-High 0.400, High 0.500
    Awareness: Not Defined 1.000, Complete 0.840, Automatic 0.680, User 0.510, None 0.000
    Confidentiality/Integrity/Availability Requirement: Not Defined 1.000, Low 0.500, Medium 1.000, High 1.510


14 Appendix F – TMT Export Macro

Public Sub TMTAutoImport()

Dim xmlFilePath As Variant

Dim prefixID As Variant

Dim iRet As Integer

Dim oList As ListObject

Dim i As Integer

Dim RowCount As Integer

Dim sdltmtWorksheet As Worksheet

Dim vulAssPreWorksheet As Worksheet

Dim key As Variant

Dim value As Variant

'Column indexes and row-tracking variables used below
Dim colID As Integer, colKey As Integer, colValue As Integer
Dim currentID As Variant, tempID As Variant
Dim currentVulAssWorksheetRow As Long, currentPrefixIDCount As Long

iRet = MsgBox("Would you like to import the data from the Microsoft SDL Threat Modeling Tool?" & Chr(10) & Chr(10) & "Note that this will overwrite the 'Vulnerability Assessment (pre)' sheet, 'Potential Vulnerability' section.", vbOKCancel, "Notify User")

If iRet = vbCancel Then Exit Sub

'Get the Device Type ID Prefix from the user

prefixID = InputBox("Enter device type ID prefix (e.g. Insulin Pump = 'IP')")

'Let the user select the correct TM4/TM7 file to load

xmlFilePath = Application.GetOpenFilename _

(FileFilter:="TMT 2016,*.TM7,TMT 2014,*.TM4", _

Title:="Open SDL Threat Modeling Tool File", MultiSelect:=False)

If TypeName(xmlFilePath) = "Boolean" Then Exit Sub

'Add the SDLTMT sheet

Application.DisplayAlerts = False

If sheetExists("SDLTMT") Then

Worksheets("SDLTMT").Visible = xlSheetVisible

Worksheets("SDLTMT").Delete


End If

Set sdltmtWorksheet = Worksheets.Add(After:=Worksheets(Worksheets.Count))

sdltmtWorksheet.Name = "SDLTMT"

Set sdltmtWorksheet = Worksheets("SDLTMT")

sdltmtWorksheet.Activate

'Load the XML file into the table

ActiveWorkbook.XmlImport URL:=xmlFilePath, ImportMap:=Nothing, Overwrite:=True, Destination:=Range("$A$1")

Worksheets("SDLTMT").ListObjects(Worksheets("SDLTMT").ListObjects.Count).Name = "XMLtable"

Application.DisplayAlerts = True

' Clear up the xml table by deleting all the graphic related entries

Application.ScreenUpdating = False

Set oList = Worksheets("SDLTMT").ListObjects("XMLtable")

RowCount = oList.DataBodyRange.Rows.Count

For i = RowCount To 1 Step -1

Debug.Print oList.DataBodyRange.Cells(i, 2)

If oList.DataBodyRange.Cells(i, 2) = "DRAWINGSURFACE" Then

oList.ListRows(i).Delete

End If

Next

'Kill the first 68 columns that are blank

For i = 1 To 68

oList.ListColumns(1).Delete

Next

' populate the Vulnerability Assessment (Pre) sheet

Set vulAssPreWorksheet = Worksheets("Vulnerability Assessment (pre)")

RowCount = oList.DataBodyRange.Rows.Count

colID = 5


colKey = 10

colValue = 11

currentID = oList.DataBodyRange.Cells(1, colID)

currentVulAssWorksheetRow = 9

currentPrefixIDCount = 1

vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 5) = currentID

vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 1) = prefixID & currentPrefixIDCount

For i = 1 To RowCount Step 1

Debug.Print oList.DataBodyRange.Cells(i, colID)

tempID = oList.DataBodyRange.Cells(i, colID)

If IsEmpty(tempID) Then

' end of threat list

Exit For

ElseIf tempID <> currentID Then

' move to the next vulnerability assessment (pre) row

currentID = tempID

currentVulAssWorksheetRow = currentVulAssWorksheetRow + 1

currentPrefixIDCount = currentPrefixIDCount + 1

vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 5) = currentID

vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 1) = prefixID & currentPrefixIDCount

End If

Debug.Print oList.DataBodyRange.Cells(i, colKey)

key = oList.DataBodyRange.Cells(i, colKey)

Debug.Print oList.DataBodyRange.Cells(i, colValue)

value = oList.DataBodyRange.Cells(i, colValue)

If key = "Title" Then

vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 2) = value

ElseIf key = "UserThreatCategory" Then


vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 3) = value

ElseIf key = "UserThreatShortDescription" Then

vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 4) = value

ElseIf key = "UserThreatDescription" Then

vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 4) = vulAssPreWorksheet.Cells(currentVulAssWorksheetRow, 4) & vbNewLine & vbNewLine & value

End If

Next

vulAssPreWorksheet.Activate

sdltmtWorksheet.Visible = xlSheetVeryHidden

Application.ScreenUpdating = True

End Sub


15 Appendix G – Device Type Analysis

15.1 Physical Activity Monitor

15.1.1 System Context

15.1.1.1 Use Case Description

A physical activity monitor (PAM) device is intended to track a user’s physical activity during the day. A PAM may also monitor the sleeping patterns of the user. It typically measures body movement using an accelerometer and/or a heart rate sensor and classifies it into categories such as walking, running, or cycling. Using this information, and based on the user’s characteristics such as gender and age, it calculates the number of calories burned. It is typically associated with a service that supports users in achieving goals such as becoming more active, reducing weight, or gaining insight into their activity level over time. These services may also be of a more clinical nature, where patients with chronic diseases are monitored using the PAM as one of the sensors.

15.1.1.2 Intended Actors

The intended actors of the PAM are the manufacturer, nurse, physician, counselor, patient, and caregiver.

15.1.1.3 Exchanged Data

The exchanged data of the PAM include:

- Device settings:
  o Identification: information identifying the device
  o Firmware
  o Technical settings (e.g., Bluetooth pairing/connection)
  o Patient information (e.g., weight, height)
  o User account(s)/PIN code(s)
  o Privacy settings (e.g., labeling of sensitive information)
- Monitoring configuration:
  o Configuration of the Physical Activity Monitor behavior and usage (e.g., goals, reminders)
  o Configuration of the different activity monitoring modes (e.g., running, cooking, sleeping) provided by the Physical Activity Monitor
- Observations:
  o Live or store-and-forward data related to heart rate, accelerometer data (e.g., step count, sleep data), and other sensor data, which splits into (depending on the privacy settings defined above in device settings):


    - Sensitive activity data: very granular sensor data
    - Shareable activity data
    - Monitored/aggregated activity data (e.g., achieved goals)

15.1.1.4 Actors Mapped to Assets

Table 15-26 depicts the intended actors’ access to assets.

Table 15-26 – Mapping PAM actors to assets

Asset \ Actor             Patient  Caregiver  Counselor  Nurse  Physician  Manufacturer

Device Settings
  Identification          -        -          -          -      -          C
  Firmware                U        U          -          -      -          C
  Technical Settings      CRUD     CRUD       -          -      -          -
  Patient Information     CRUD     CRUD       -          -      -          -
  User Account            CRUD     CRUD       -          -      -          -
  Privacy Settings        CRUD     -          -          -      -          -
Monitoring Configuration  CRUD     CRUD       CRUD       R      CRUD       -
Observations
  Sensitive sensor data   RD       -          -          -      R          -
  Shareable sensor data   RD       RD         R          R      R          -
  Aggregated data         CRUD     CRUD       R          R      R          -

C = Create, R = Read, U = Update, D = Delete

15.1.2 Threat Model

The PAM threat model is provided in Figure 15-16, and Table 15-27 describes the data flows.


Figure 15-16 Physical activity monitor threat model

Table 15-27 – Description of PAM threat model data flows

Data Flow ID  Description
 1  Patient - Read - Connected Device - Therapy Setting/Observation
 2  Patient - Create/Update/Delete - Connected Device - Therapy Setting/Observation
 3  Caregiver - Read - Physical Activity Monitor - Observation without Sensitive Data
 4  Caregiver - Update/Delete - Physical Activity Monitor - Device Configuration/Therapy Setting
 5  Patient - Create - Vital Signs (e.g., Heart Rate, Movement)
 6  Patient - Read - Sensor - Sensitive Data
 7  Physician - Read - Connected Device - Device Configuration/Therapy Setting/Observation with Sensitive Data
 8  Counselor - Create/Update/Delete - Connected Device - Device Configuration
 9  Nurse - Read - Connected Device - Device Configuration/Observation without Sensitive Data
10  Caregiver - Create/Update/Delete - Connected Device - Device Configuration/Therapy Setting without Privacy/Observation without Sensitive Data
11  Caregiver - Read - Connected Device - Device Configuration/Therapy Setting without Privacy/Observation without Sensitive Data
12  Connected Device - Wireless - Read - Physical Activity Monitor - Device Configuration/Therapy Setting/Observation
13  Connected Device - Wireless - Create/Update/Delete - Physical Activity Monitor - Device Configuration/Therapy Setting/Observation
14  Manufacturer - Create - Physical Activity Monitor - Device Configuration/Therapy Setting/Firmware
15  Manufacturer - Update - Connected Device - Firmware


15.1.3 Pre- & Post-Mitigation Vulnerability Assessment

See Section 11.2.4 for additional details on eCVSS pre- or post-mitigation vectors.

Device Type Scoring

Name: Physical Activity Monitor
Classification: Class II exempt
Confidentiality Requirement: Medium
Integrity Requirement: Medium
Availability Requirement: Low
Moderate-Risk Threshold: 3.5
High-Risk Threshold: 7

Potential Vulnerability Assessment

Name | Category | Pre-Mitigation Vector | Pre-Score | Post-Mitigation Vector | Post-Score
Spoofing the Collector & Configuring device Process | Spoofing | ISE:N AV:A AC:L Au:N C:P I:N A:N CDP:L Aw:N | 4.0 | ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N | 2.1
Spoofing the Manufacturer External Entity | Spoofing | ISE:N AV:A AC:L Au:N C:C I:C A:N CDP:MH Aw:N | 8.7 | ISE:N AV:A AC:H Au:M C:P I:P A:N CDP:MH Aw:N | 5.6
Potential Lack of Input Validation for Collector & Configuring device | Tampering | ISE:N AV:A AC:L Au:N C:C I:C A:P CDP:MH Aw:U | 4.3 | ISE:N AV:A AC:H Au:M C:N I:N A:P CDP:MH Aw:U | 2.0
Potential Data Repudiation by Collector & Configuring device | Repudiation | ISE:N AV:A AC:L Au:N C:N I:P A:N CDP:L Aw:N | 4.0 | ISE:N AV:A AC:H Au:M C:N I:P A:N CDP:L Aw:N | 2.1
Data Flow Sniffing | Information Disclosure | ISE:N AV:A AC:L Au:N C:P I:N A:N CDP:LM Aw:N | 5.3 | ISE:N AV:A AC:H Au:M C:N I:N A:N CDP:LM Aw:N | 3.0
Potential Process Crash or Stop for Collector & Configuring device | Denial Of Service | ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:L Aw:U | 2.1 | ISE:N AV:A AC:H Au:M C:N I:N A:C CDP:L Aw:U | 1.2
Data Flow Update firmware Is Potentially Interrupted | Denial Of Service | ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:L Aw:U | 2.1 | ISE:N AV:A AC:H Au:M C:N I:N A:C CDP:L Aw:U | 1.2
Elevation Using Impersonation | Elevation Of Privilege | ISE:N AV:A AC:L Au:N C:N I:P A:N CDP:L Aw:N | 4.0 | ISE:N AV:A AC:H Au:M C:N I:P A:N CDP:L Aw:N | 2.1
Collector & Configuring device May be Subject to Elevation of Privilege Using Remote Code Execution | Elevation Of Privilege | ISE:N AV:A AC:L Au:N C:C I:C A:N CDP:MH Aw:N | 8.7 | ISE:N AV:A AC:H Au:M C:P I:P A:N CDP:MH Aw:N | 5.6
Elevation by Changing the Execution Flow in Collector & Configuring device | Elevation Of Privilege | ISE:N AV:A AC:L Au:N C:C I:C A:N CDP:MH Aw:N | 8.7 | ISE:N AV:A AC:H Au:M C:P I:P A:N CDP:MH Aw:N | 5.6
Spoofing of the Nurse External Destination Entity | Spoofing | ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:LM Aw:N | 6.5 | ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:LM Aw:N | 3.6
External Entity Nurse Potentially Denies Receiving Data | Repudiation | ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N | 1.0 | ISE:N AV:L AC:H Au:M C:N I:N A:N CDP:L Aw:N | 1.0
Data Flow R configuration, observations (w/o sensor) Is Potentially Interrupted | Denial Of Service | ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:L Aw:U | 0.5 | ISE:N AV:L AC:H Au:M C:N I:N A:P CDP:L Aw:U | 0.4
Spoofing the Counselor (inherits Nurse) External Entity | Spoofing | ISE:N AV:L AC:H Au:N C:C I:C A:N CDP:MH Aw:N | 7.4 | ISE:N AV:L AC:H Au:M C:P I:P A:N CDP:MH Aw:N | 5.4

Potential Data Repudiation by Collector & Configuring device Repudiation ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:U 1.0 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:U 0.9

Data Flow Wireless Input Link Is Potentially Interrupted Denial Of Service ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:U 2.7 ISE:N AV:A AC:H Au:M C:N I:N A:C CDP:LM Aw:U 2.0

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:A AC:L Au:N C:N I:C A:N CDP:LM Aw:N 7.3 ISE:N AV:A AC:H Au:M C:N I:P A:N CDP:LM Aw:N 3.8

Spoofing the Patient (inherits Caregiver) External Entity Spoofing ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:LM Aw:N 5.8 ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:LM Aw:N 5.8

Potential Process Crash or Stop for Collector & Configuring device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 1.2 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 1.0

Data Flow R privacy settings all sensor data Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:LM Aw:U 1.5 ISE:N AV:L AC:H Au:M C:N I:N A:P CDP:LM Aw:U 1.4

Spoofing the Collector & Configuring device Process Spoofing ISE:N AV:L AC:H Au:N C:C I:N A:N CDP:H Aw:N 7.0 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:H Aw:N 5.5

Spoofing the Collector & Configuring device Process Spoofing ISE:N AV:L AC:H Au:S C:C I:C A:N CDP:MH Aw:N 7.3 ISE:N AV:L AC:H Au:M C:P I:P A:N CDP:MH Aw:N 5.4

Spoofing the Patient (is Caregiver) External Entity Spoofing ISE:N AV:L AC:H Au:S C:C I:C A:N CDP:H Aw:N 7.8 ISE:N AV:L AC:H Au:M C:P I:P A:N CDP:H Aw:N 6.2

Data Flow R observations Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:L Aw:U 0.5 ISE:N AV:L AC:H Au:M C:N I:N A:P CDP:L Aw:U 0.4

Elevation by Changing the Execution Flow in Physical Activity Monitor Elevation Of Privilege ISE:N AV:L AC:H Au:N C:P I:P A:N CDP:MH Aw:N 5.6 ISE:N AV:L AC:H Au:M C:P I:P A:N CDP:MH Aw:N 5.4

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:H Au:N C:P I:P A:N CDP:LM Aw:N 4.8 ISE:N AV:L AC:H Au:M C:P I:P A:N CDP:LM Aw:N 4.6

Potential Process Crash or Stop for Physical Activity Monitor Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 1.2 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 1.0

Potential Data Repudiation by Physical Activity Monitor Repudiation ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:N 2.1 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:N 1.8

Potential Data Repudiation by Collector & Configuring device Repudiation ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:MH Aw:N 6.4 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:MH Aw:N 4.5

Potential Process Crash or Stop for Collector & Configuring device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.0 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:LM Aw:U 1.9

Potential Data Repudiation by Collector & Configuring device Repudiation ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:LM Aw:N 5.8 ISE:N AV:L AC:H Au:M C:N I:C A:N CDP:LM Aw:N 5.6

Potential Process Crash or Stop for Collector & Configuring device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.0 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:LM Aw:U 1.9

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:LM Aw:N 5.8 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:N 3.6

Elevation by Changing the Execution Flow in Collector & Configuring device Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:MH Aw:N 6.4 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:MH Aw:N 4.5

Spoofing of the Caregiver External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:LM Aw:N 6.5 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:LM Aw:N 3.6

External Entity Caregiver Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:H Au:M C:N I:N A:N CDP:L Aw:N 1.0

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:A AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:A AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0

Potential Process Crash or Stop for Sensors (HR, accelerometer, temperature, ...) Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.0 ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.0

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:LM Aw:N 5.8 ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:LM Aw:N 5.8

Elevation by Changing the Execution Flow in Sensors (HR, accelerometer, temperature, ...) Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:N 2.1 ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:N 2.1

Spoofing of the Caregiver External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:LM Aw:N 6.5 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:LM Aw:N 3.6

External Entity Caregiver Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:H Au:M C:N I:N A:N CDP:L Aw:N 1.0

Elevation by Changing the Execution Flow in Collector & Configuring device Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:H Aw:N 7.0 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:H Aw:N 5.5

Spoofing the Physical Activity Monitor Process Spoofing ISE:N AV:A AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:A AC:H Au:M C:N I:N A:N CDP:N Aw:N 0.0

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:H Au:N C:C I:C A:N CDP:MH Aw:N 7.4 ISE:N AV:L AC:H Au:M C:P I:P A:N CDP:MH Aw:N 5.4

Spoofing the Caregiver External Entity Spoofing ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:LM Aw:N 4.5 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:N 3.6

Spoofing the Physical Activity Monitor Process Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:LM Aw:N 6.5 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:LM Aw:N 3.6

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:MH Aw:N 6.4 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:MH Aw:N 4.5

Potential Lack of Input Validation for Physical Activity Monitor Tampering ISE:N AV:A AC:L Au:N C:C I:C A:P CDP:H Aw:U 4.4 ISE:N AV:A AC:L Au:N C:N I:N A:P CDP:H Aw:U 3.0

Potential Data Repudiation by Physical Activity Monitor Repudiation ISE:N AV:A AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0 ISE:N AV:A AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:MH Aw:N 7.7 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:MH Aw:N 4.7

Data Flow Generic Data Flow Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:LM Aw:U 1.5 ISE:N AV:L AC:H Au:M C:N I:N A:P CDP:LM Aw:U 1.4

Potential Process Crash or Stop for Physical Activity Monitor Denial Of Service ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:L Aw:U 2.1 ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:L Aw:U 2.1

Data Flow R settings (w/o privacy) observations (w/o sensitive sensor) Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:L Aw:U 0.5 ISE:N AV:L AC:H Au:M C:N I:N A:P CDP:L Aw:U 0.4

Data Flow Wireless Output Link Is Potentially Interrupted Denial Of Service ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:U 2.7 ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:U 2.7

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:LM Aw:N 7.3 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:LM Aw:N 3.8

Collector & Configuring device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:MH Aw:N 7.7 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:MH Aw:N 4.7

Elevation by Changing the Execution Flow in Collector & Configuring device Elevation Of Privilege ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:MH Aw:N 7.7 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:MH Aw:N 4.7

Spoofing the Collector & Configuring device Process Spoofing ISE:N AV:A AC:L Au:N C:N I:C A:N CDP:L Aw:N 6.5 ISE:N AV:A AC:H Au:M C:N I:C A:N CDP:L Aw:N 4.7

Spoofing the Physical Activity Monitor Process Spoofing ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:LM Aw:N 7.3 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:LM Aw:N 3.8

Potential Lack of Input Validation for Physical Activity Monitor Tampering ISE:N AV:A AC:L Au:N C:N I:C A:P CDP:LM Aw:U 3.7 ISE:N AV:A AC:H Au:M C:N I:N A:P CDP:LM Aw:U 1.5

Potential Data Repudiation by Physical Activity Monitor Repudiation ISE:N AV:A AC:L Au:N C:N I:C A:N CDP:H Aw:N 8.1 ISE:N AV:A AC:H Au:M C:N I:P A:N CDP:H Aw:N 5.6

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:MH Aw:N 7.7 ISE:N AV:A AC:H Au:M C:N I:N A:N CDP:MH Aw:N 4.0

Potential Process Crash or Stop for Physical Activity Monitor Denial Of Service ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:U 2.7 ISE:N AV:A AC:H Au:M C:N I:N A:C CDP:LM Aw:U 2.0

Spoofing the Physical Activity Monitor Process Spoofing ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:LM Aw:N 7.3 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:LM Aw:N 3.8

Spoofing the Collector & Configuring device Process Spoofing ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:LM Aw:N 7.3 ISE:N AV:A AC:H Au:M C:C I:N A:N CDP:LM Aw:N 5.9

Potential Lack of Input Validation for Collector & Configuring device Tampering ISE:N AV:A AC:L Au:N C:C I:C A:P CDP:H Aw:N 9.0 ISE:N AV:A AC:L Au:N C:N I:N A:P CDP:H Aw:U 3.0

Potential Data Repudiation by Collector & Configuring device Repudiation ISE:N AV:A AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:A AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0

Data Flow Sniffing Information Disclosure ISE:N AV:N AC:L Au:N C:C I:N A:N CDP:LM Aw:N 8.5 ISE:N AV:N AC:H Au:M C:N I:N A:N CDP:LM Aw:N 3.0

Potential Process Crash or Stop for Collector & Configuring device Denial Of Service ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:U 2.7 ISE:N AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:U 2.7

External Entity Physician (inherits Counselor) Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0

Data Flow R sensitive sensor data Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:LM Aw:U 1.5 ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:LM Aw:U 1.5

15.2 Pulse Oximeter

15.2.1 System Context

15.2.1.1 Use Case Description

A pulse oximeter device measures the oxygen saturation of a patient’s arterial blood. In the personal health setting it is typically a fingertip-sized device used by patients with chronic conditions that impair oxygen uptake, such as COPD. In this usage, the patient typically checks their oxygenation a few times a day.

Pulse oximeters used in the personal health device space may also be used to study oxygenation during sleep, to detect sleep apnoea. In this case, a night’s worth of data is often stored within the device and retrieved later. Continuous monitoring against pre-set thresholds, alerting a caregiver when a metric crosses a boundary, may also be used in a personal health setting.
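The threshold-based alerting described above can be sketched as follows. All metric names and boundary values here are hypothetical, chosen only to illustrate the check; a real device would take its thresholds from clinician-set configuration:

```python
# Illustrative sketch of the continuous-monitoring use case: alert a
# caregiver when a metric crosses a pre-set boundary. Thresholds below
# are placeholders, not clinical recommendations.

def check_thresholds(spo2_percent: float, pulse_bpm: float,
                     spo2_low: float = 90.0,
                     pulse_low: float = 40.0, pulse_high: float = 120.0) -> list[str]:
    """Return alert messages for any metric outside its boundaries."""
    alerts = []
    if spo2_percent < spo2_low:
        alerts.append(f"SpO2 {spo2_percent}% below threshold {spo2_low}%")
    if not (pulse_low <= pulse_bpm <= pulse_high):
        alerts.append(f"Pulse {pulse_bpm} bpm outside {pulse_low}-{pulse_high} bpm")
    return alerts

print(check_thresholds(87.0, 130.0))  # both metrics out of range -> two alerts
print(check_thresholds(97.0, 70.0))  # all in range -> no alerts
```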

15.2.1.2 Intended Actors

The intended actors of the pulse oximeter are

Manufacturer, Health care professional, and End-user (i.e., patient and caregiver).

15.2.1.3 Exchanged Data

The exchanged data of the pulse oximeter include

Device identification and information: Identifying the device, e.g., serial number,

Settings: Options for pulse oximeter behavior, connection information such as PIN keys, etc.,

Firmware: Firmware moved into the device during manufacturing,

Physiological observations: Live or store-and-forward measurements related to oxygenation, pulse rate, pulse occurrence, etc. Also includes whether any physiological metrics have gone outside pre-set boundaries (“thresholds”) if supported by the device, and

Device status: Information about the measurement and device status.

15.2.1.4 Actors Mapped to Assets

Table 15-28 depicts the intended actor access to assets.

Table 15-28 – Mapping pulse oximeter actors to assets

Asset \ Actor | End-User (i.e., patient, caregiver) | Health Care Professional | Manufacturer
Device Identification | R | R | CRU
Settings | RU | RU | -
Firmware | U | U | CRUD
Physiological Observations | RD | RD | D
Device Status | R | R | -

C = Create, R = Read, U = Update, D = Delete
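Table 15-28 is, in effect, an access-control matrix. A minimal sketch of how it could be encoded and checked; the dictionary mirrors the table, while the actor key "HCP" and the helper function are illustrative assumptions, not from the whitepaper:

```python
# Hypothetical encoding of Table 15-28: each asset maps each actor to the
# CRUD operations it is permitted ("" = no access).

PULSE_OX_ACCESS = {
    "Device Identification":      {"End-User": "R",  "HCP": "R",  "Manufacturer": "CRU"},
    "Settings":                   {"End-User": "RU", "HCP": "RU", "Manufacturer": ""},
    "Firmware":                   {"End-User": "U",  "HCP": "U",  "Manufacturer": "CRUD"},
    "Physiological Observations": {"End-User": "RD", "HCP": "RD", "Manufacturer": "D"},
    "Device Status":              {"End-User": "R",  "HCP": "R",  "Manufacturer": ""},
}

def is_permitted(actor: str, operation: str, asset: str) -> bool:
    """True if the actor may perform the CRUD operation (C/R/U/D) on the asset."""
    return operation in PULSE_OX_ACCESS.get(asset, {}).get(actor, "")

print(is_permitted("Manufacturer", "D", "Firmware"))  # True
print(is_permitted("End-User", "C", "Settings"))      # False
```

Any request outside the matrix (an unknown actor, asset, or operation) is denied by default, which matches the deny-unless-granted reading of the table.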

15.2.2 Threat Model

The pulse oximeter threat model is provided in Figure 15-17 and Table 15-29 describes the data flows.

Figure 15-17 Pulse oximeter threat model

Table 15-29 – Description of pulse oximeter threat model data flows

Data Flow ID Description

1 Patient - Read - Connected Device - Device Configuration/Therapy Setting/Observation

2 Patient - Update/Delete - Connected Device - Device Configuration/Therapy Setting/Observation

3 Patient - Create - Physiological Signals

4 Patient - Read - Pulse Oximeter - Therapy Setting/Observation

5 HCP - Read - Pulse Oximeter - Status/Observation

6 HCP - Update/Delete - Connected Device - Device Configuration/Observation/Firmware

7 HCP - Read - Connected Device - Device Configuration/Observation/Firmware

8 Manufacturer - Read - Pulse Oximeter - Device Configuration/Therapy Setting/Firmware

9 Manufacturer - Create/Update - Pulse Oximeter - Device Configuration/Therapy Setting/Firmware

10 Connected Device - Wired/Wireless - Read - Pulse Oximeter - Device Configuration/ Observation

11 Connected Device - Wired/Wireless - Create/Update/Delete - Pulse Oximeter - Device Configuration/Therapy Setting/Observation

15.2.3 Pre- & Post-Mitigation Vulnerability Assessment

See Section 11.2.4 for additional details on eCVSS pre- or post-mitigation vectors.

Device Type Scoring

Name | Classification | Confidentiality Requirement | Integrity Requirement | Availability Requirement | Moderate-Risk Threshold | High-Risk Threshold
Pulse Oximeter | Class II | Medium | High | Low | 3.5 | 7

Potential Vulnerability Assessment

Name Category Pre-Mitigation Vector Pre-Score Post-Mitigation Vector Post-Score

Potential Process Crash or Stop for Controller Pulse Oximeter Denial Of Service ISE:Y AV:A AC:M Au:N C:N I:N A:C CDP:LM Aw:C 0.8 ISE:Y AV:A AC:M Au:N C:N I:N A:C CDP:LM Aw:C 0.8

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5

Potential Data Repudiation by Controller Pulse Oximeter Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0

Potential Lack of Input Validation for Controller Pulse Oximeter Tampering ISE:Y AV:L AC:H Au:N C:N I:P A:P CDP:H Aw:N 6.4 ISE:Y AV:L AC:H Au:S C:N I:N A:P CDP:L Aw:N 1.0

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:H Aw:N 5.0 ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:N Aw:N 0.0

Potential Data Repudiation by Controller Pulse Oximeter Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:A 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:A 0.0

Spoofing the Manufacturer External Entity Spoofing ISE:Y AV:L AC:H Au:N C:N I:C A:P CDP:H Aw:N 8.1 ISE:Y AV:L AC:H Au:S C:N I:C A:P CDP:H Aw:U 3.9

Spoofing the Controller Pulse Oximeter Process Spoofing ISE:N AV:L AC:H Au:N C:N I:N A:N CDP:H Aw:N 5.0 ISE:N AV:L AC:H Au:N C:N I:N A:N CDP:N Aw:A 0.0

External Entity Manufacturer Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:N Aw:N 0.0

Elevation by Changing the Execution Flow in Controller Pulse Oximeter Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:N I:P A:P CDP:H Aw:N 6.7 ISE:Y AV:A AC:H Au:S C:N I:P A:P CDP:H Aw:U 3.2

Controller Pulse Oximeter May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:N I:P A:P CDP:H Aw:N 6.7 ISE:Y AV:A AC:H Au:S C:N I:P A:P CDP:H Aw:U 3.2

Data Flow CD Update/Delete Wireless PO Device Configuration/Observation Is Potentially Interrupted Denial Of Service ISE:Y AV:A AC:M Au:N C:N I:P A:P CDP:H Aw:U 3.6 ISE:Y AV:A AC:M Au:N C:N I:P A:P CDP:H Aw:A 2.3

Potential Process Crash or Stop for Controller Pulse Oximeter Denial Of Service ISE:Y AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:C 0.9 ISE:Y AV:A AC:L Au:N C:N I:N A:C CDP:LM Aw:C 0.9

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0

Potential Data Repudiation by Controller Pulse Oximeter Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0

Potential Lack of Input Validation for Controller Pulse Oximeter Tampering ISE:Y AV:A AC:H Au:N C:N I:P A:P CDP:H Aw:N 6.7 ISE:N AV:A AC:H Au:S C:N I:N A:P CDP:L Aw:N 1.4

Spoofing the Controller Pulse Oximeter Process Spoofing ISE:Y AV:A AC:H Au:N C:N I:P A:N CDP:H Aw:N 6.4 ISE:Y AV:A AC:H Au:S C:N I:P A:N CDP:H Aw:U 3.1

Spoofing the Connected Device Process Spoofing ISE:Y AV:A AC:L Au:N C:N I:P A:P CDP:H Aw:N 7.5 ISE:Y AV:A AC:H Au:S C:N I:P A:P CDP:H Aw:U 3.2

Data Flow CD Read Wireless PO Device Configuration/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:A AC:M Au:N C:N I:N A:P CDP:L Aw:U 1.3 ISE:N AV:A AC:M Au:N C:N I:N A:P CDP:L Aw:U 1.3

Potential Process Crash or Stop for Connected Device Denial Of Service ISE:N AV:A AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.5 ISE:N AV:A AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.5

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0

Spoofing the Controller Pulse Oximeter Process Spoofing ISE:Y AV:L AC:H Au:N C:N I:P A:N CDP:H Aw:N 6.1 ISE:Y AV:L AC:H Au:S C:N I:P A:N CDP:H Aw:U 2.9

Spoofing the Connected Device Process Spoofing ISE:Y AV:L AC:H Au:N C:N I:P A:N CDP:H Aw:N 6.1 ISE:Y AV:L AC:H Au:S C:N I:P A:N CDP:H Aw:U 2.9

Potential Data Repudiation by Connected Device Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:P CDP:L Aw:N 2.0 ISE:N AV:L AC:L Au:N C:N I:N A:P CDP:L Aw:N 2.0

Potential Lack of Input Validation for Connected Device Tampering ISE:Y AV:A AC:H Au:N C:N I:P A:N CDP:H Aw:N 6.4 ISE:N AV:A AC:H Au:S C:N I:N A:P CDP:L Aw:N 1.4

Spoofing the Connected Device Process Spoofing ISE:N AV:A AC:L Au:N C:N I:N A:P CDP:L Aw:N 3.1 ISE:N AV:A AC:L Au:S C:N I:N A:P CDP:L Aw:A 0.8

Spoofing the Controller Pulse Oximeter Process Spoofing ISE:Y AV:A AC:H Au:N C:N I:C A:P CDP:H Aw:N 8.4 ISE:Y AV:A AC:H Au:S C:N I:C A:P CDP:H Aw:A 2.7

Data Flow CD Read Wired PO Device Configuration/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:L Au:N C:N I:N A:P CDP:LM Aw:U 1.9 ISE:N AV:L AC:L Au:N C:N I:N A:P CDP:LM Aw:U 1.9

Spoofing of the Health Care Professional External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:N Aw:U 0.0

Potential Process Crash or Stop for Connected Device Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.2 ISE:N AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.2

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5

Potential Data Repudiation by Connected Device Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:P CDP:L Aw:N 2.0 ISE:N AV:L AC:L Au:N C:N I:N A:P CDP:L Aw:N 2.0

Potential Lack of Input Validation for Connected Device Tampering ISE:Y AV:L AC:H Au:N C:N I:P A:N CDP:H Aw:N 6.1 ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:L Aw:N 1.0

Spoofing the Connected Device Process Spoofing ISE:N AV:L AC:L Au:N C:N I:N A:P CDP:L Aw:N 2.0 ISE:N AV:L AC:M Au:S C:N I:N A:P CDP:L Aw:A 0.5

Spoofing the Controller Pulse Oximeter Process Spoofing ISE:Y AV:A AC:H Au:N C:N I:C A:P CDP:H Aw:N 8.4 ISE:Y AV:A AC:H Au:S C:N I:C A:P CDP:H Aw:A 2.7

Elevation by Changing the Execution Flow in Controller Pulse Oximeter Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:N I:P A:P CDP:H Aw:N 6.4 ISE:Y AV:L AC:H Au:S C:N I:P A:P CDP:H Aw:U 3.1

Controller Pulse Oximeter May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:N I:P A:P CDP:H Aw:N 6.4 ISE:Y AV:L AC:H Au:S C:N I:P A:P CDP:H Aw:U 3.1

Data Flow CD Update/Delete Wired PO Device Configuration/Observation Is Potentially Interrupted Denial Of Service ISE:Y AV:L AC:L Au:N C:N I:P A:P CDP:H Aw:U 3.4 ISE:Y AV:L AC:L Au:N C:N I:N A:P CDP:H Aw:U 2.7

External Entity Health Care Professional Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0

Spoofing of the Health Care Professional External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:L AC:M Au:S C:C I:N A:N CDP:L Aw:A 1.6

Elevation by Changing the Execution Flow in Connected Device Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:P A:P CDP:H Aw:N 7.6 ISE:Y AV:L AC:H Au:S C:P I:P A:P CDP:H Aw:U 3.3

Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:P A:P CDP:H Aw:N 7.6 ISE:Y AV:L AC:H Au:S C:P I:P A:P CDP:H Aw:U 3.3

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:P A:P CDP:H Aw:N 7.6 ISE:Y AV:L AC:H Au:S C:P I:P A:P CDP:H Aw:U 3.3

Potential Process Crash or Stop for Connected Device Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.2 ISE:N AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.2

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:H Aw:N 5.0 ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:N Aw:N 0.0

Potential Data Repudiation by Connected Device Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5

Spoofing the Health Care Professional External Entity Spoofing ISE:Y AV:L AC:L Au:N C:N I:P A:N CDP:H Aw:N 6.6 ISE:Y AV:L AC:H Au:S C:N I:P A:N CDP:H Aw:A 1.9

Spoofing the Connected Device Process Spoofing ISE:Y AV:L AC:L Au:N C:N I:P A:P CDP:H Aw:N 6.9 ISE:Y AV:L AC:M Au:S C:N I:P A:P CDP:H Aw:U 3.2

External Entity Health Care Professional Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:U 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:U 0.0

Spoofing of the Patient External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:N Aw:N 0.0

External Entity Patient Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5

Spoofing the Controller Connected Device Process Spoofing ISE:Y AV:L AC:L Au:N C:N I:P A:P CDP:H Aw:N 6.9 ISE:Y AV:L AC:M Au:S C:N I:P A:N CDP:H Aw:U 3.1

Spoofing the Patient External Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:L AC:M Au:S C:P I:N A:N CDP:L Aw:A 0.8

Potential Data Repudiation by Controller Connected Device Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0

Potential Process Crash or Stop for Controller Connected Device Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.2 ISE:N AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:U 2.2

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:A AC:H Au:N C:C I:N A:P CDP:LM Aw:N 6.5 ISE:N AV:A AC:H Au:S C:P I:N A:P CDP:L Aw:N 3.0

Controller Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:A AC:H Au:N C:C I:N A:P CDP:LM Aw:N 6.5 ISE:N AV:A AC:H Au:S C:P I:N A:P CDP:L Aw:N 3.0

Elevation by Changing the Execution Flow in Controller Connected Device Elevation Of Privilege ISE:N AV:A AC:H Au:N C:C I:N A:P CDP:LM Aw:N 6.5 ISE:N AV:A AC:H Au:S C:P I:N A:P CDP:L Aw:N 3.0

Spoofing of the Patient External Destination Entity Spoofing ISE:Y AV:L AC:L Au:N C:N I:P A:P CDP:H Aw:N 6.9 ISE:Y AV:L AC:M Au:S C:N I:P A:P CDP:H Aw:A 2.1

External Entity Patient Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:U 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:U 0.0

15.3 Sleep Apnoea Breathing Therapy Equipment

15.3.1 System Context

15.3.1.1 Use Case Description

Sleep apnoea breathing therapy equipment (SABTE), also called a sleep therapy device, is intended to alleviate the symptoms of patients who suffer from sleep apnoea by delivering therapeutic breathing pressure support to the patient. Sleep apnoea is the clinically significant, intermittent absence of normal respiration during sleep, indicated by apnoea and hypopnoea events.

15.3.1.2 Intended Actors

The intended actors of the SABTE are

manufacturer, technical support, distributor, patient, counselor, and physician.

15.3.1.3 Exchanged Data

The exchanged data of the SABTE include

Device identification: Information identifying the device,

Compliance monitoring: Accumulated data provided as a set of information on an offline basis, providing evidence of patient compliance with the sleep apnoea therapy,

Efficacy monitoring: In addition to compliance monitoring data, parameters related to the effectiveness of the treatment,

Service monitoring: Indicators relating to preventative and corrective maintenance of the device and its accessories,

Device settings: Configuration of the SABTE behavior and usage,

Therapy settings: Configuration of the different therapy modes provided by the SABTE, and

Firmware: Software that operates the SABTE.

15.3.1.4 Actors Mapped to Assets

Table 15-30 depicts the intended actor access to assets.

Table 15-30 – Mapping SABTE actors to assets

Asset \ Actor | Patient | Counselor | Distributor | Technical Support | Physician | Manufacturer
Device Identification | R | R | R | R | R | CRUD
Compliance Monitoring | - | R | RD | - | R | -
Efficacy Monitoring | R | R | RD | - | R | -
Service Monitoring | - | - | R | RU | - | RUD
Device Settings | RU# | RU# | RU | RU | RU | RU
Therapy Settings | R^ | R^ | RU | RUD | RU | RU
Device Firmware | - | - | - | R | - | CRUD

C = Create, R = Read, U = Update, D = Delete
# = The device settings that can be changed by some actors may be restricted
^ = The therapy settings that can be viewed by some actors may be restricted

15.3.2 Threat Model

The SABTE threat model is provided in Figure 15-18 and Table 15-31 describes the data flows.

Figure 15-18 Sleep apnoea breathing therapy equipment threat model

Table 15-31 – Description of SABTE threat model data flows

Data Flow ID Description

1 Patient - Create/Update/Delete - Connected Device - Device Configuration/Therapy Setting/Observation

2 Patient - Create/Update/Delete - SABTE - Device Configuration/Therapy Setting/Observation

3 Patient - Receive - Airflow

4 Non-Patient - Create/Update/Delete - SABTE - Device Configuration/Therapy Setting/Observation

5 Non-Patient - Create/Update/Delete - Connected Device - Device Configuration/Therapy Setting/Observation

6 Manufacturer - Read - SABTE - Device Configuration/Therapy Setting/Firmware

7 Manufacturer - Create/Update/Delete - SABTE - Device Configuration/Therapy Setting/Firmware

8 Connected Device - Wired/Wireless - Read - SABTE - Device Configuration/Therapy Setting/Observation

9 Connected Device - Wired/Wireless - Create/Update/Delete - SABTE - Device Configuration/Therapy Setting/Observation

10 SABTE - Read - Memory Card - Therapy Setting

11 SABTE - Create/Update/Delete - Memory Card - Device Configuration/Therapy Setting/Observation

12 Connected Device - Read - Memory Card - Device Configuration/Therapy Setting/Observation

13 Connected Device - Create/Update/Delete - Memory Card - Device Configuration/Therapy Setting

15.3.3 Pre- & Post-Mitigation Vulnerability Assessment

See Section 11.2.4 for additional details on eCVSS pre- or post-mitigation vectors.

Device Type Scoring

Name | Classification | Confidentiality Requirement | Integrity Requirement | Availability Requirement | Moderate-Risk Threshold | High-Risk Threshold
SABTE | Class II | Medium | High | Medium | 3.5 | 7

Potential Vulnerability Assessment

Name Category Pre-Mitigation Vector Pre-Score Post-Mitigation Vector Post-Score

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:A AC:H Au:N C:N I:P A:N CDP:LM Aw:N 5.0 ISE:N AV:A AC:H Au:S C:N I:N A:N CDP:LM Aw:U 1.5

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:A AC:M Au:N C:N I:C A:N CDP:LM Aw:N 8.5 ISE:N AV:A AC:H Au:S C:N I:P A:N CDP:LM Aw:U 2.4

Spoofing the Patient External Entity Spoofing ISE:N AV:L AC:L Au:N C:P I:N A:N CDP:N Aw:N 2.1 ISE:N AV:L AC:L Au:N C:P I:N A:N CDP:N Aw:N 2.1

Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:L AC:H Au:N C:P I:C A:C CDP:LM Aw:N 7.3 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:LM Aw:U 2.7

Controller Connected Device Wireless Process Memory Tampered Tampering ISE:N AV:A AC:H Au:N C:N I:C A:C CDP:H Aw:N 8.4 ISE:N AV:A AC:H Au:S C:N I:N A:P CDP:H Aw:U 2.8

Replay Attacks Tampering ISE:N AV:A AC:H Au:N C:N I:C A:N CDP:LM Aw:N 3.0 ISE:N AV:A AC:H Au:N C:N I:C A:N CDP:LM Aw:N 3.0

Collision Attacks Tampering ISE:N AV:A AC:H Au:N C:N I:C A:N CDP:LM Aw:N 3.0 ISE:N AV:A AC:H Au:N C:N I:C A:N CDP:LM Aw:N 3.0

Replay Attacks Tampering ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:LM Aw:N 5.2 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:LM Aw:N 5.2

Collision Attacks Tampering ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:LM Aw:N 5.2 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:LM Aw:N 5.2

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:L Au:N C:N I:C A:N CDP:LM Aw:N 8.0 ISE:N AV:L AC:L Au:S C:N I:N A:P CDP:LM Aw:U 2.1

Controller Connected Device Wired/Memory Card Process Memory Tampered Tampering ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:LM Aw:N 7.3 ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:L Au:N C:N I:C A:C CDP:LM Aw:N 8.0 ISE:N AV:L AC:L Au:S C:N I:N A:P CDP:LM Aw:U 2.1

Elevation by Changing the Execution Flow in Controller Flow Generator Elevation Of Privilege ISE:N AV:A AC:H Au:N C:N I:C A:C CDP:H Aw:N 8.4 ISE:N AV:A AC:H Au:S C:N I:N A:P CDP:H Aw:U 2.8

Controller Flow Generator May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:A AC:H Au:N C:N I:C A:C CDP:H Aw:N 8.4 ISE:N AV:A AC:H Au:S C:N I:N A:P CDP:H Aw:U 2.8

Data Flow CD Create/Update/Delete Wireless SABTE Device Configuration/Therapy Setting/Firmware Is Potentially Interrupted Denial Of Service ISE:N AV:A AC:L Au:N C:N I:P A:N CDP:L Aw:U 2.4 ISE:N AV:A AC:L Au:N C:N I:P A:N CDP:L Aw:U 2.4

Potential Process Crash or Stop for Controller Flow Generator Denial Of Service ISE:N AV:A AC:H Au:N C:N I:P A:C CDP:MH Aw:U 3.6 ISE:N AV:A AC:H Au:N C:N I:P A:C CDP:MH Aw:U 3.6

Spoofing the Patient External Entity Spoofing ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:N Aw:N 3.1 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:N Aw:N 3.1

Elevation by Changing the Execution Flow in Controller Flow Generator Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:L Aw:N 6.6 ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:L Aw:N 6.6

Controller Flow Generator May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:L Aw:N 6.6 ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:L Aw:N 6.6

Data Flow Patient Controls SABTE Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:MH Aw:U 3.8 ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:MH Aw:U 3.8

Potential Process Crash or Stop for Controller Flow Generator Denial Of Service ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:MH Aw:U 3.8 ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:MH Aw:U 3.8

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:U 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:U 0.0

Potential Data Repudiation by Controller Flow Generator Repudiation ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:L Aw:U 1.0 ISE:N AV:L AC:H Au:N C:N I:N A:P CDP:L Aw:U 1.0

Potential Lack of Input Validation for Controller Flow Generator Tampering ISE:N AV:L AC:H Au:N C:N I:P A:P CDP:L Aw:U 2.0 ISE:N AV:L AC:H Au:N C:N I:P A:P CDP:L Aw:U 2.0


Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:L AC:H Au:S C:P I:C A:C CDP:LM Aw:N 7.2 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:LM Aw:U 2.7

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0

Potential Data Repudiation by Controller Flow Generator Repudiation ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:LM Aw:U 2.8 ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:LM Aw:U 2.8

Potential Lack of Input Validation for Controller Flow Generator Tampering ISE:N AV:A AC:H Au:N C:N I:C A:C CDP:MH Aw:N 8.1 ISE:N AV:A AC:H Au:S C:N I:N A:P CDP:MH Aw:U 2.4

Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:LM Aw:N 5.7 ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:LM Aw:N 5.7

Spoofing the Controller Connected Device Wireless Process Spoofing ISE:N AV:A AC:M Au:N C:N I:C A:C CDP:H Aw:N 9.0 ISE:N AV:A AC:M Au:S C:N I:N A:P CDP:H Aw:U 3.0

Elevation by Changing the Execution Flow in Controller Connected Device Wireless Elevation Of Privilege ISE:N AV:A AC:H Au:N C:N I:C A:N CDP:MH Aw:N 8.1 ISE:N AV:A AC:H Au:S C:N I:N A:N CDP:MH Aw:U 2.0

Controller Connected Device Wireless May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:A AC:H Au:N C:N I:C A:N CDP:MH Aw:N 8.1 ISE:N AV:A AC:H Au:S C:N I:N A:N CDP:MH Aw:U 2.0

Data Flow CD Read Wireless SABTE Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:A AC:L Au:N C:N I:C A:N CDP:L Aw:U 4.2 ISE:N AV:A AC:L Au:N C:N I:C A:N CDP:L Aw:U 4.2

Potential Process Crash or Stop for Controller Connected Device Wireless Denial Of Service ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:U 0.5

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:A AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0

Potential Data Repudiation by Controller Connected Device Wireless Repudiation ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:U 2.2 ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:U 2.2

Potential Lack of Input Validation for Controller Connected Device Wireless Tampering ISE:N AV:A AC:H Au:N C:N I:C A:N CDP:MH Aw:N 8.1 ISE:N AV:A AC:H Au:S C:N I:N A:P CDP:MH Aw:U 2.4

Authenticated Data Flow Compromised Tampering ISE:N AV:L AC:H Au:S C:N I:P A:N CDP:LM Aw:U 2.2 ISE:N AV:L AC:H Au:S C:N I:P A:N CDP:LM Aw:U 2.2

Spoofing the Controller Connected Device Wireless Process Spoofing ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:N 4.5 ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:N 4.5

Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:A AC:M Au:N C:N I:C A:N CDP:LM Aw:N 8.5 ISE:N AV:A AC:M Au:S C:N I:P A:P CDP:LM Aw:U 3.0

Data Flow SABTE Create/Update/Delete MC Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0

Data Store Denies Memory Card Potentially Writing Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5

The Memory Card Data Store Could Be Corrupted Tampering ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5

Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0

Spoofing the Manufacturer External Entity Spoofing ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:N 6.3 ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:LM Aw:N 6.3

Spoofing the Counsellor/Distributor/Technical Support/Physician External Entity Spoofing ISE:N AV:L AC:L Au:S C:P I:P A:N CDP:N Aw:N 3.9 ISE:N AV:L AC:L Au:S C:P I:P A:N CDP:N Aw:N 3.9

Elevation by Changing the Execution Flow in Controller Flow Generator Elevation Of Privilege ISE:N AV:L AC:H Au:S C:N I:C A:C CDP:L Aw:N 6.4 ISE:N AV:L AC:H Au:S C:N I:C A:C CDP:L Aw:N 6.4

Controller Flow Generator May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:H Au:S C:N I:C A:C CDP:L Aw:N 6.4 ISE:N AV:L AC:H Au:S C:N I:C A:C CDP:L Aw:N 6.4

Data Flow Non-Patient Controls SABTE Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8 ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8

Potential Process Crash or Stop for Controller Flow Generator Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:MH Aw:U 3.1 ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:MH Aw:U 3.1

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:S C:N I:N A:N CDP:N Aw:U 0.0 ISE:N AV:L AC:L Au:S C:N I:N A:N CDP:N Aw:U 0.0

Potential Data Repudiation by Controller Flow Generator Repudiation ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8 ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8

Potential Lack of Input Validation for Controller Flow Generator Tampering ISE:N AV:L AC:H Au:S C:N I:C A:P CDP:LM Aw:U 3.5 ISE:N AV:L AC:H Au:S C:N I:C A:P CDP:LM Aw:U 3.5

Elevation by Changing the Execution Flow in Controller Flow Generator Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:LM Aw:N 7.3 ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8

Controller Flow Generator May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:LM Aw:N 7.3 ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8

Data Flow CD Create/Update/Delete Wired SABTE Device Configuration/Therapy Settings/Firmware Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:L Aw:U 1.9 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:L Aw:U 1.9

Potential Process Crash or Stop for Controller Flow Generator Denial Of Service ISE:N AV:L AC:M Au:N C:N I:C A:P CDP:LM Aw:U 3.8 ISE:N AV:L AC:M Au:N C:N I:C A:P CDP:LM Aw:U 3.8

1

Copyright © 2018 IEEE. All rights reserved.

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:P I:N A:N CDP:L Aw:N 2.9 ISE:N AV:L AC:L Au:N C:P I:N A:N CDP:L Aw:N 2.9

Potential Data Repudiation by Controller Flow Generator Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:LM Aw:U 2.5 ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:LM Aw:U 2.5

Potential Lack of Input Validation for Controller Flow Generator Tampering ISE:N AV:L AC:L Au:N C:N I:C A:C CDP:MH Aw:N 8.3 ISE:N AV:L AC:L Au:S C:N I:N A:P CDP:MH Aw:U 2.5

Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:N Aw:N 3.1 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:N Aw:N 3.1

Spoofing the Controller Connected Device Wired/Memory Card Process Spoofing ISE:N AV:L AC:L Au:N C:N I:C A:C CDP:MH Aw:N 8.3 ISE:N AV:L AC:L Au:S C:N I:N A:P CDP:MH Aw:U 2.5

Elevation by Changing the Execution Flow in Controller Connected Device Wired/Memory Card Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:LM Aw:N 7.3 ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8

Controller Connected Device Wired/Memory Card May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:C A:N CDP:MH Aw:N 7.7 ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:MH Aw:U 2.0

Data Flow CD Read Wired SABTE Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:M Au:N C:N I:C A:N CDP:LM Aw:U 3.8 ISE:N AV:L AC:M Au:N C:N I:C A:N CDP:LM Aw:U 3.8

Potential Process Crash or Stop for Controller Connected Device Wired/Memory Card Denial Of Service ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:LM Aw:U 2.5 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:LM Aw:U 2.5

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:P I:N A:N CDP:L Aw:N 2.9 ISE:N AV:L AC:L Au:N C:P I:N A:N CDP:L Aw:N 2.9

Potential Data Repudiation by Controller Connected Device Wired/Memory Card Repudiation ISE:N AV:L AC:L Au:N C:N I:C A:N CDP:LM Aw:N 8.0 ISE:N AV:L AC:L Au:N C:N I:C A:N CDP:LM Aw:U 3.9

Potential Lack of Input Validation for Controller Connected Device Wired/Memory Card Tampering ISE:N AV:L AC:L Au:N C:N I:C A:N CDP:LM Aw:N 8.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:LM Aw:N 3.0

Spoofing the Controller Connected Device Wired/Memory Card Process Spoofing ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0

Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:L AC:L Au:N C:N I:C A:N CDP:LM Aw:N 8.0 ISE:N AV:L AC:L Au:S C:N I:P A:N CDP:LM Aw:U 2.4

Elevation by Changing the Execution Flow in Controller Flow Generator Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:P A:P CDP:L Aw:N 4.1 ISE:N AV:L AC:H Au:N C:N I:P A:P CDP:L Aw:N 4.1

Controller Flow Generator May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:P A:P CDP:L Aw:N 4.1 ISE:N AV:L AC:H Au:N C:N I:P A:P CDP:L Aw:N 4.1

Data Store Inaccessible Denial Of Service ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:L Aw:N 3.8 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:L Aw:N 3.8

Data Flow SABTE Read MC Therapy Setting Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:L Aw:N 3.8 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:L Aw:N 3.8

Potential Process Crash or Stop for Controller Flow Generator Denial Of Service ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:MH Aw:U 3.8 ISE:N AV:L AC:H Au:N C:N I:C A:C CDP:MH Aw:U 3.8

Potential Data Repudiation by Controller Flow Generator Repudiation ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:N 3.0 ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:N 3.0

Spoofing the Controller Flow Generator Process Spoofing ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:N Aw:N 2.2 ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:N Aw:N 2.2

Data Store Inaccessible Denial Of Service ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:LM Aw:N 3.0 ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:LM Aw:N 3.0

Spoofing the Counsellor/Distributor/Technical Support/Physician External Entity Spoofing ISE:N AV:L AC:L Au:S C:P I:P A:N CDP:N Aw:N 3.9 ISE:N AV:L AC:L Au:S C:P I:P A:N CDP:N Aw:N 3.9

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0

Potential Data Repudiation by Controller Connected Device Wireless Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6

Potential Lack of Input Validation for Controller Connected Device Wireless Tampering ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6

Spoofing the Controller Connected Device Wireless Process Spoofing ISE:N AV:L AC:H Au:N C:P I:C A:N CDP:MH Aw:N 7.7 ISE:N AV:L AC:H Au:M C:N I:N A:N CDP:MH Aw:U 2.0

Potential Process Crash or Stop for Controller Connected Device Wireless Denial Of Service ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:U 1.5 ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:U 1.5

Controller Connected Device Wireless May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:H Au:N C:N I:N A:N CDP:L Aw:N 1.0

Elevation by Changing the Execution Flow in Controller Connected Device Wireless Elevation Of Privilege ISE:N AV:L AC:H Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:H Au:N C:N I:N A:N CDP:L Aw:N 1.0

Spoofing the Controller Connected Device Wireless Process Spoofing ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:MH Aw:N 4.6 ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:MH Aw:N 4.6

Potential Lack of Input Validation for Controller Connected Device Wireless Tampering ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:LM Aw:U 1.5 ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:LM Aw:U 1.5

Potential Data Repudiation by Controller Connected Device Wireless Repudiation ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:L Aw:U 0.5


Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:S C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:S C:N I:N A:N CDP:N Aw:N 0.0

Potential Process Crash or Stop for Controller Connected Device Wireless Denial Of Service ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:L Aw:U 0.5

Data Flow Non-Patient Controls Connected Device Wireless Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:U 0.5

Elevation by Changing the Execution Flow in Controller Connected Device Wireless Elevation Of Privilege ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:H Au:S C:N I:N A:N CDP:L Aw:N 1.0

Data Flow CD Create/Update/Delete MC Device Configuration/Therapy Setting Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0

The Memory Card Data Store Could Be Corrupted Tampering ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:N Aw:N 3.1 ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:N Aw:N 3.1

Spoofing the Controller Connected Device Wired/Memory Card Process Spoofing ISE:N AV:L AC:M Au:N C:N I:P A:P CDP:N Aw:N 4.1 ISE:N AV:L AC:M Au:N C:N I:P A:P CDP:N Aw:N 4.1

Elevation by Changing the Execution Flow in Controller Connected Device Wired/Memory Card Elevation Of Privilege ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:LM Aw:N 3.0 ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:LM Aw:N 3.0

Controller Connected Device Wired/Memory Card May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:LM Aw:N 3.0 ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:LM Aw:N 3.0

Data Flow Uses Controller Connected Device Wired/Memory Card Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8 ISE:N AV:L AC:H Au:S C:N I:N A:P CDP:LM Aw:U 1.8

Potential Process Crash or Stop for Controller Connected Device Wired/Memory Card Denial Of Service ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:LM Aw:U 1.5 ISE:N AV:L AC:M Au:S C:N I:N A:N CDP:LM Aw:U 1.5

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:S C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:S C:N I:N A:N CDP:N Aw:N 0.0

Potential Data Repudiation by Controller Connected Device Wired/Memory Card Repudiation ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:L Aw:U 0.9 ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:L Aw:U 0.9

Potential Lack of Input Validation for Controller Connected Device Wired/Memory Card Tampering ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:MH Aw:N 4.6 ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:MH Aw:N 4.6

Spoofing the Controller Connected Device Wired/Memory Card Process Spoofing ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:MH Aw:N 4.6 ISE:N AV:L AC:H Au:S C:P I:N A:N CDP:MH Aw:N 4.6

Data Store Inaccessible Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0

Spoofing the Controller Connected Device Wired/Memory Card Process Spoofing ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:N Aw:N 0.0

Potential Data Repudiation by Controller Connected Device Wired/Memory Card Repudiation ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0

Potential Process Crash or Stop for Controller Connected Device Wired/Memory Card Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:L Aw:N 1.0

Data Flow CD Read MC Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0

Data Store Inaccessible Denial Of Service ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0

Controller Connected Device Wired/Memory Card May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0

Elevation by Changing the Execution Flow in Controller Connected Device Wired/Memory Card Elevation Of Privilege ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0 ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:MH Aw:N 4.0

Spoofing of the Manufacturer External Destination Entity Spoofing ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0 ISE:N AV:L AC:M Au:N C:N I:N A:N CDP:LM Aw:N 3.0

Potential Process Crash or Stop for Controller Flow Generator Denial Of Service ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:MH Aw:U 3.3 ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:MH Aw:U 3.3

Controller Flow Generator May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:MH Aw:N 6.8 ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:MH Aw:N 6.8

Elevation by Changing the Execution Flow in Controller Flow Generator Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:MH Aw:N 6.8 ISE:Y AV:L AC:M Au:N C:N I:N A:C CDP:MH Aw:N 6.8



15.4 Insulin Delivery Device

15.4.1 System Context

15.4.1.1 Use Case Description

An insulin delivery device (also called an insulin pump) administers defined amounts of insulin to the human body. Insulin delivery devices are primarily used in continuous subcutaneous insulin infusion (CSII) therapy for type 1 diabetes mellitus. This type of diabetes mellitus is characterized by loss of the insulin-producing beta cells of the islets of Langerhans in the pancreas. Insulin delivery devices typically inject insulin into the subcutaneous layer of fat tissue under the skin through an infusion set. Preferred sites for the cannula are the abdomen, lumbar region, thighs, buttocks, and upper arms.

15.4.1.2 Intended Actors

The intended actors of the insulin delivery device are Manufacturer, Distributor, Health care professional, and End-user (i.e., patient and caregiver).

15.4.1.3 Exchanged Data

The exchanged data of the insulin delivery device include:

- Device identification: Information identifying the device,
- Delivery history: Parameters related to the effectiveness of the infusion (i.e., basal delivered, bolus delivered),
- Service monitoring: Indicators relating to preventative and corrective maintenance of the device and its accessories,
- Device settings: Configuration of the behaviour and usage (i.e., I:C ratio),
- Therapy settings: Configuration of the different therapies (i.e., basal rate settings, bolus settings, etc.), and
- Firmware: The application running on the device.
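The exchanged-data categories above can be sketched as a simple data structure. This is purely illustrative: the field names and types below are assumptions made for the sketch, not a normative data model from this whitepaper.

```python
from dataclasses import dataclass, field

@dataclass
class InsulinDeliveryDeviceData:
    """Illustrative grouping of the exchanged-data categories listed
    above; every field name and type here is an assumption."""
    device_identification: str                               # information identifying the device
    delivery_history: list = field(default_factory=list)     # e.g., basal/bolus delivered
    service_monitoring: list = field(default_factory=list)   # maintenance indicators
    device_settings: dict = field(default_factory=dict)      # e.g., I:C ratio
    therapy_settings: dict = field(default_factory=dict)     # basal rate, bolus settings
    firmware_version: str = ""                               # identifies the application on the device
```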

15.4.1.4 Actors Mapped to Assets

Table 15-32 depicts the intended actor access to assets.

Table 15-32 – Mapping insulin delivery device actors to assets

Asset \ Actor | End-User (i.e., patient, caregiver) | Health Care Professional | Manufacturer | Distributor

Device Identification | R | R | CR | R
Device History | R | R | RD | -
Service Monitoring | - | - | R | -
Device Settings | RU# | RU# | RU# | RU#
Therapy Settings | RU | RU | RU | -
Device Firmware | - | - | CRUD | -

C = Create, R = Read, U = Update, D = Delete
# = The device settings that can be updated by some actors may be restricted
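One way to read Table 15-32 in code is as a CRUD access matrix. The sketch below encodes the table's cells directly; the dictionary layout and the `is_permitted` helper are illustrative assumptions ("HCP" abbreviates Health Care Professional), and the "#" restriction on device settings is not modeled.

```python
# Table 15-32 encoded as a CRUD access matrix. Each cell lists the
# operations (C/R/U/D) the actor may perform on the asset; an empty
# string corresponds to "-" in the table. Illustrative only.
ACCESS_MATRIX = {
    "Device Identification": {"End-User": "R",  "HCP": "R",  "Manufacturer": "CR",   "Distributor": "R"},
    "Device History":        {"End-User": "R",  "HCP": "R",  "Manufacturer": "RD",   "Distributor": ""},
    "Service Monitoring":    {"End-User": "",   "HCP": "",   "Manufacturer": "R",    "Distributor": ""},
    "Device Settings":       {"End-User": "RU", "HCP": "RU", "Manufacturer": "RU",   "Distributor": "RU"},
    "Therapy Settings":      {"End-User": "RU", "HCP": "RU", "Manufacturer": "RU",   "Distributor": ""},
    "Device Firmware":       {"End-User": "",   "HCP": "",   "Manufacturer": "CRUD", "Distributor": ""},
}

def is_permitted(actor: str, operation: str, asset: str) -> bool:
    """True if the actor may perform the CRUD operation on the asset."""
    return operation in ACCESS_MATRIX.get(asset, {}).get(actor, "")
```

For instance, only the Manufacturer may delete device firmware, matching the CRUD cell in the table.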


15.4.2 Threat Model

The insulin delivery device threat model is provided in Figure 15-19, and Table 15-33 describes the data flows.

Figure 15-19 – Insulin delivery device threat model

Table 15-33 – Description of insulin delivery device threat model data flows

Data Flow ID | Description

1 | Patient - Read - Connected Device - Therapy Setting/Observation
2 | Patient - Update/Delete - Connected Device - Therapy Setting/Observation
3 | Patient - Update/Delete - Insulin Delivery Device - Therapy Setting/Observation
4 | Patient - Read - Insulin Delivery Device - Therapy Setting/Observation
5 | Patient - Receive - Insulin Delivery
6 | HCP - Read - Insulin Delivery Device - Therapy Setting/Observation
7 | HCP - Update/Delete - Insulin Delivery Device - Therapy Setting/Observation
8 | HCP - Read - Connected Device - Therapy Setting/Observation
9 | HCP - Update/Delete - Connected Device - Therapy Setting/Observation
10 | Manufacturer - Create/Update/Delete - Insulin Delivery Device - Device Configuration/Therapy Setting/Observation
11 | Manufacturer - Read - Insulin Delivery Device - Device Configuration/Therapy Setting/Firmware
12 | Connected Device - Wired/Wireless - Read - Insulin Delivery Device - Device Configuration/Therapy Setting/Observation
13 | Connected Device - Wired/Wireless - Create/Update/Delete - Insulin Delivery Device - Device Configuration/Therapy Setting/Observation


15.4.3 Pre- & Post-Mitigation Vulnerability Assessment

See Section 11.2.4 for additional details on eCVSS pre- or post-mitigation vectors.
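The vectors in the tables below are eCVSS-style metric strings (e.g., `ISE:N AV:A AC:H Au:N C:N I:P A:N CDP:LM Aw:N`), and each score is compared against the moderate- and high-risk thresholds from the Device Type Scoring table. As a minimal sketch, assuming space-separated `Metric:Value` fields and inclusive thresholds, such a string can be parsed and a score banded as follows; the metric semantics and scoring weights themselves are defined in Section 11.2.4 and are not reproduced here.

```python
def parse_ecvss_vector(vector: str) -> dict:
    """Split an eCVSS-style vector such as
    'ISE:N AV:A AC:H Au:N C:N I:P A:N CDP:LM Aw:N'
    into a {metric: value} mapping. Parsing only; the metric
    semantics and weights are defined in Section 11.2.4."""
    return dict(part.split(":", 1) for part in vector.split())

def risk_band(score: float, moderate: float = 3.5, high: float = 7.0) -> str:
    """Band a score against the moderate/high thresholds from the
    Device Type Scoring table (inclusive thresholds assumed)."""
    if score >= high:
        return "high"
    if score >= moderate:
        return "moderate"
    return "low"
```

For example, against the (3.5, 7) thresholds used below, a pre-mitigation score of 8.5 bands as high risk, while a post-mitigation score of 2.4 bands as low.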

Device Type Scoring

Name | Classification | Confidentiality Requirement | Integrity Requirement | Availability Requirement | Moderate-Risk Threshold | High-Risk Threshold

Insulin Delivery Device | Class IIb | Medium | High | Medium | 3.5 | 7

Potential Vulnerability Assessment

Name Category Pre-Mitigation Vector Pre-Score Post-Mitigation Vector Post-Score

Spoofing the Person with Diabetes External Entity Spoofing ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:U 4.1 ISE:Y AV:L AC:M Au:M C:P I:P A:P CDP:LM Aw:U 2.9

Elevation by Changing the Execution Flow in Connected Device Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:N I:C A:N CDP:MH Aw:N 8.3 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:N 2.7

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:U 4.1 ISE:Y AV:L AC:M Au:M C:P I:P A:P CDP:LM Aw:U 2.9

Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:C I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:A AC:H Au:M C:P I:P A:P CDP:L Aw:N 4.9

Data Flow CD Read Wireless IP Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:A AC:L Au:N C:N I:N A:P CDP:L Aw:C 0.6 ISE:N AV:A AC:L Au:M C:N I:N A:P CDP:L Aw:C 0.5

Potential Process Crash or Stop for Connected Device Denial Of Service ISE:N AV:A AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.5 ISE:N AV:A AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.3

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:L Aw:N 6.5 ISE:N AV:A AC:M Au:M C:C I:N A:N CDP:L Aw:N 5.3

Potential Lack of Input Validation for Connected Device Tampering ISE:Y AV:A AC:M Au:N C:P I:P A:N CDP:LM Aw:N 6.6 ISE:Y AV:A AC:M Au:M C:P I:P A:N CDP:LM Aw:N 5.9

Spoofing the Connected Device Process Spoofing ISE:N AV:A AC:M Au:N C:C I:N A:N CDP:L Aw:N 6.1 ISE:N AV:A AC:H Au:M C:C I:N A:N CDP:L Aw:N 4.7

Spoofing the Controller Insulin Pump Process Spoofing ISE:Y AV:A AC:M Au:N C:N I:C A:N CDP:MH Aw:N 8.7 ISE:Y AV:A AC:H Au:M C:N I:C A:N CDP:L Aw:U 3.2

Elevation by Changing the Execution Flow in Controller Insulin Pump Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:N I:C A:N CDP:MH Aw:U 4.1 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:U 1.3

Controller Insulin Pump May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:U 3.8 ISE:Y AV:L AC:H Au:M C:N I:P A:P CDP:LM Aw:U 2.5

Data Flow CD Create/Update/Delete Wireless IP Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:Y AV:A AC:L Au:N C:N I:N A:P CDP:LM Aw:C 0.8 ISE:Y AV:A AC:L Au:M C:N I:N A:P CDP:L Aw:C 0.5

Potential Process Crash or Stop for Controller Insulin Pump Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.1

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:L Aw:N 6.5 ISE:N AV:A AC:M Au:M C:C I:N A:N CDP:L Aw:N 5.3

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:U 4.1 ISE:Y AV:L AC:M Au:M C:C I:C A:P CDP:LM Aw:U 3.6

Potential Data Repudiation by Controller Insulin Pump Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:U 1.8 ISE:N AV:L AC:M Au:M C:N I:P A:N CDP:L Aw:U 1.5

External Entity Person with Diabetes Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:M C:N I:N A:N CDP:L Aw:U 0.5

Spoofing of the Person with Diabetes External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:M Au:M C:P I:N A:N CDP:L Aw:U 1.1

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:M Au:M C:C I:N A:N CDP:L Aw:U 2.3

External Entity Health Care Professional Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:L Au:M C:N I:N A:N CDP:L Aw:N 1.0

Spoofing of the Health Care Professional External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:M Au:M C:P I:N A:N CDP:L Aw:U 1.1

Elevation by Changing the Execution Flow in Controller Insulin Pump Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:N I:C A:C CDP:MH Aw:U 4.0 ISE:Y AV:L AC:H Au:M C:N I:P A:C CDP:LM Aw:U 3.1

Potential Lack of Input Validation for Controller Insulin Pump Tampering ISE:Y AV:A AC:M Au:N C:P I:P A:N CDP:LM Aw:N 6.6 ISE:Y AV:A AC:M Au:M C:P I:P A:N CDP:LM Aw:N 5.9


Controller Insulin Pump May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:U 3.8 ISE:Y AV:L AC:H Au:M C:P I:P A:P CDP:LM Aw:U 2.8

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:U 4.1 ISE:Y AV:L AC:M Au:M C:P I:P A:P CDP:LM Aw:U 2.9

Potential Process Crash or Stop for Controller Insulin Pump Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.1

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:L Au:M C:P I:N A:N CDP:L Aw:U 1.2

Potential Data Repudiation by Controller Insulin Pump Repudiation ISE:Y AV:L AC:M Au:N C:N I:C A:C CDP:MH Aw:U 4.0 ISE:Y AV:L AC:M Au:M C:N I:C A:C CDP:LM Aw:U 3.6

Elevation by Changing the Execution Flow in Controller Insulin Pump Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:N I:C A:N CDP:MH Aw:U 4.1 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.1

Controller Insulin Pump May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:U 3.8 ISE:Y AV:L AC:H Au:M C:P I:P A:P CDP:LM Aw:U 2.8

Potential Data Repudiation by Connected Device Repudiation ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:N 4.5 ISE:N AV:A AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.6

Potential Process Crash or Stop for Controller Insulin Pump Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:M C:N I:N A:C CDP:LM Aw:U 2.7

Spoofing the Health Care Professional External Entity Spoofing ISE:Y AV:L AC:L Au:N C:N I:C A:N CDP:MH Aw:N 8.3 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:N 4.3

Spoofing the Health Care Professional External Entity Spoofing ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:U 4.1 ISE:Y AV:L AC:M Au:M C:P I:P A:P CDP:LM Aw:U 2.9

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:L Au:M C:P I:N A:N CDP:L Aw:U 1.2

Potential Data Repudiation by Controller Insulin Pump Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:U 1.8 ISE:N AV:L AC:M Au:M C:N I:P A:N CDP:L Aw:U 1.5

Spoofing the Controller Insulin Pump Process Spoofing ISE:N AV:L AC:M Au:N C:P I:N A:N CDP:L Aw:N 2.7 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:L Aw:N 1.8

Spoofing the Connected Device Process Spoofing ISE:Y AV:L AC:M Au:N C:N I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:L AC:H Au:M C:N I:C A:P CDP:LM Aw:U 3.5

External Entity Health Care Professional Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:N Aw:N 0.0 ISE:N AV:L AC:L Au:M C:N I:N A:N CDP:L Aw:N 1.0

Spoofing of the Health Care Professional External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:L AC:M Au:M C:P I:N A:N CDP:L Aw:N 2.2

Elevation by Changing the Execution Flow in Connected Device Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:N I:C A:N CDP:MH Aw:N 8.3 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:N 2.7

Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:N 7.7 ISE:Y AV:L AC:H Au:M C:P I:P A:P CDP:L Aw:N 4.5

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:N 8.3 ISE:Y AV:L AC:M Au:M C:P I:P A:P CDP:LM Aw:N 6.0

Potential Process Crash or Stop for Connected Device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.3 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.1

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:L AC:L Au:M C:P I:N A:N CDP:L Aw:N 2.4

Potential Data Repudiation by Connected Device Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.1

Elevation by Changing the Execution Flow in Connected Device Elevation Of Privilege ISE:N AV:L AC:M Au:N C:C I:N A:N CDP:L Aw:U 2.5 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:L Aw:U 0.9

Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:MH Aw:U 3.1 ISE:Y AV:L AC:H Au:M C:N I:N A:P CDP:LM Aw:U 1.8

Elevation Using Impersonation Elevation Of Privilege ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:L Aw:N 6.5 ISE:N AV:A AC:H Au:M C:C I:N A:N CDP:L Aw:N 4.7

Potential Process Crash or Stop for Connected Device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.3 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.1

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:M Au:M C:C I:N A:N CDP:L Aw:U 2.3

Potential Data Repudiation by Connected Device Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.1

Potential Lack of Input Validation for Connected Device Tampering ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:N 5.9 ISE:Y AV:L AC:M Au:M C:P I:P A:N CDP:LM Aw:N 5.5

Spoofing the Connected Device Process Spoofing ISE:N AV:L AC:M Au:N C:C I:N A:N CDP:L Aw:N 5.2 ISE:N AV:L AC:H Au:M C:C I:N A:N CDP:L Aw:N 4.3

Spoofing of the Person with Diabetes External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:L AC:M Au:M C:P I:N A:N CDP:L Aw:N 2.2

Elevation by Changing the Execution Flow in Connected Device Elevation Of Privilege ISE:N AV:A AC:H Au:N C:C I:N A:N CDP:L Aw:N 5.1 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N 2.1

Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:N A:C CDP:MH Aw:N 7.4 ISE:Y AV:L AC:H Au:M C:P I:N A:P CDP:LM Aw:N 4.6

Copyright © 2018 IEEE. All rights reserved.

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:N 8.3 ISE:Y AV:L AC:M Au:M C:P I:P A:P CDP:LM Aw:N 6.0

Potential Process Crash or Stop for Connected Device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.3 ISE:N AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.1

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:L AC:L Au:M C:P I:N A:N CDP:L Aw:N 2.4

Potential Data Repudiation by Connected Device Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.1

Spoofing the Person with Diabetes External Entity Spoofing ISE:Y AV:L AC:L Au:N C:N I:C A:N CDP:MH Aw:N 8.3 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:N 4.3

Spoofing the Controller Insulin Pump Process Spoofing ISE:N AV:L AC:M Au:N C:N I:C A:N CDP:L Aw:N 7.2 ISE:N AV:L AC:H Au:M C:N I:C A:N CDP:L Aw:U 3.1

Elevation by Changing the Execution Flow in Controller Insulin Pump Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:N I:C A:N CDP:MH Aw:U 4.0 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.1

Controller Insulin Pump May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:A AC:M Au:N C:C I:C A:C CDP:MH Aw:N 8.7 ISE:Y AV:A AC:H Au:M C:N I:P A:P CDP:LM Aw:N 5.4

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:A AC:M Au:N C:C I:P A:P CDP:MH Aw:N 8.3 ISE:Y AV:A AC:H Au:M C:C I:P A:P CDP:LM Aw:N 6.9

Potential Process Crash or Stop for Controller Insulin Pump Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.1

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:M Au:M C:C I:N A:N CDP:L Aw:U 2.3

Potential Data Repudiation by Controller Insulin Pump Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.1

Potential Lack of Input Validation for Controller Insulin Pump Tampering ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:N 5.9 ISE:Y AV:L AC:M Au:M C:P I:P A:N CDP:LM Aw:N 5.5

Spoofing the Controller Insulin Pump Process Spoofing ISE:N AV:A AC:M Au:N C:P I:N A:N CDP:L Aw:N 3.6 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N 2.1

Spoofing the Connected Device Process Spoofing ISE:Y AV:A AC:M Au:N C:N I:C A:C CDP:MH Aw:N 8.7 ISE:Y AV:A AC:H Au:M C:N I:P A:P CDP:LM Aw:N 5.4

Elevation by Changing the Execution Flow in Controller Insulin Pump Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:N I:C A:N CDP:MH Aw:N 8.1 ISE:Y AV:A AC:H Au:M C:N I:P A:C CDP:LM Aw:N 6.5

Controller Insulin Pump May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:N 7.7 ISE:Y AV:L AC:H Au:M C:P I:P A:P CDP:LM Aw:U 2.8

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:C I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:L AC:H Au:M C:P I:P A:C CDP:LM Aw:U 3.2

Potential Process Crash or Stop for Controller Insulin Pump Denial Of Service ISE:Y AV:A AC:H Au:N C:N I:N A:C CDP:LM Aw:U 3.0 ISE:Y AV:A AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.3

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:LM Aw:U 3.2 ISE:N AV:L AC:M Au:M C:P I:N A:N CDP:L Aw:U 1.1

Potential Data Repudiation by Controller Insulin Pump Repudiation ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:N 4.5 ISE:N AV:A AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.6

Potential Lack of Input Validation for Controller Insulin Pump Tampering ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:N 5.9 ISE:Y AV:L AC:M Au:M C:P I:P A:N CDP:LM Aw:N 5.5

Spoofing the Manufacturer External Entity Spoofing ISE:Y AV:L AC:M Au:N C:C I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:L AC:H Au:M C:P I:P A:C CDP:LM Aw:U 3.2

External Entity Person with Diabetes Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:N 1.0 ISE:N AV:L AC:L Au:M C:N I:N A:N CDP:L Aw:N 1.0

Spoofing of the Manufacturer External Destination Entity Spoofing ISE:N AV:L AC:M Au:N C:C I:N A:N CDP:L Aw:N 5.2 ISE:N AV:L AC:M Au:M C:P I:N A:N CDP:L Aw:N 2.2

External Entity Manufacturer Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:N A:N CDP:L Aw:U 0.5 ISE:N AV:L AC:L Au:M C:N I:N A:N CDP:L Aw:U 0.5


15.5 Continuous Glucose Monitor

15.5.1 System Context

15.5.1.1 Use Case Description

A CGM device uses a tiny sensor inserted under the skin to check glucose levels in interstitial fluid. The sensor stays in place for several days to a few months before it is replaced. A transmitter sends information about glucose levels via radio waves from the sensor to a pager-like wireless monitor. For most CGMs, the end-user checks capillary blood samples with a glucose meter to calibrate the device. Users are advised to confirm glucose levels with a capillary blood sample using a glucose meter before making a change in treatment. The system aids in the detection of episodes of hyperglycemia and hypoglycemia, facilitating both acute and long-term therapy adjustments.

15.5.1.2 Intended Actors

The intended actors of the CGM are the Manufacturer, the Health Care Professional, and the End-User (i.e., patient or caregiver).

15.5.1.3 Exchanged Data

The exchanged data of the CGM include:

- Device identification and information: identifies the device, e.g., serial number
- Settings: options for CGM controller behavior (bi-hormonal control, single-hormonal control, threshold suspend)
- Firmware: firmware loaded onto the device during manufacturing
- Physiological monitoring: Patch - live or store-and-forward data related to the glucose signal, time, etc.; Controller - glucose result, time, date, etc. Also includes whether any physiological metrics have gone outside pre-set boundaries ("thresholds"), if supported by the device
- Device status: information about the measurement and device status

15.5.1.4 Actors Mapped to Assets

Table 15-34 depicts the intended actor access to assets.

Table 15-34 – Mapping CGM actors to assets

Asset \ Actor              End-User (i.e., patient, caregiver)   Health Care Professional   Manufacturer
Device Identification      R                                     R                          CRU
Settings                   RU                                    RU                         -
Firmware                   U                                     U                          CRUD
Physiological Monitoring   RD                                    RD                         D
Device Status              R                                     R                          -

C = Create, R = Read, U = Update, D = Delete
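The mapping in Table 15-34 can be read as a simple access-control matrix. The sketch below is illustrative only: the actor labels ("End-User", "HCP", "Manufacturer"), the encoding of "-" as an empty string, and the `may` helper are assumptions for this example, not names defined by the whitepaper.

```python
# Illustrative access-control matrix transcribed from Table 15-34.
# A "-" cell in the table is encoded as an empty rights string.
ACCESS = {
    "Device Identification":    {"End-User": "R",  "HCP": "R",  "Manufacturer": "CRU"},
    "Settings":                 {"End-User": "RU", "HCP": "RU", "Manufacturer": ""},
    "Firmware":                 {"End-User": "U",  "HCP": "U",  "Manufacturer": "CRUD"},
    "Physiological Monitoring": {"End-User": "RD", "HCP": "RD", "Manufacturer": "D"},
    "Device Status":            {"End-User": "R",  "HCP": "R",  "Manufacturer": ""},
}

def may(actor: str, operation: str, asset: str) -> bool:
    """Return True if the actor holds the given C/R/U/D right on the asset."""
    return operation in ACCESS[asset][actor]
```

For example, `may("Manufacturer", "D", "Firmware")` is true, while `may("End-User", "C", "Settings")` is false.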

15.5.2 Threat Model


The CGM threat model is provided in Figure 15-20, and Table 15-35 describes the data flows.

Figure 15-20 – CGM threat model

Table 15-35 – Description of CGM threat model data flows

Data Flow ID   Description

1    Patient - Read - Connected Device - Therapy Setting/Observation
2    Patient - Update/Delete - Connected Device - Therapy Setting/Observation
3    Patient - Update/Delete - CGM - Therapy Setting/Observation
4    Patient - Read - CGM - Therapy Setting/Observation
5    Patient - Create - Physiological Signals
6    HCP - Read - Connected Device - Device Configuration/Therapy Setting/Observation
7    HCP - Update/Delete - Connected Device - Device Configuration/Therapy Setting/Observation/Firmware
8    HCP - Update/Delete - CGM - Therapy Setting/Observation
9    HCP - Read - CGM - Therapy Setting/Observation
10   Manufacturer - Read - CGM Sensor - Device Configuration/Therapy Setting/Observation/Firmware
11   Manufacturer - Create/Update/Delete - CGM Sensor - Device Configuration/Therapy Setting/Firmware
12   Manufacturer - Create/Update - CGM - Device Configuration/Therapy Setting/Firmware
13   Manufacturer - Read - CGM - Device Configuration/Therapy Setting/Firmware
14   Connected Device - Wired/Wireless - Read - CGM - Device Configuration/Therapy Setting/Observation
15   Connected Device - Wired/Wireless - Create/Update/Delete - CGM - Device Configuration/Therapy Setting/Observation
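Data flows of this shape can be handled as structured records so they can be filtered per actor or per target during threat analysis. This is a minimal sketch assuming nothing beyond Table 15-35 itself; the `DataFlow` type, its field names, and the `flows_from` helper are hypothetical, and only a subset of rows is transcribed.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    flow_id: int      # the "Data Flow ID" column
    source: str       # originating actor or component
    operations: str   # CRUD operations performed
    target: str       # component the flow terminates at
    data: str         # asset categories carried by the flow

# A subset of Table 15-35, transcribed for illustration.
FLOWS = [
    DataFlow(1, "Patient", "Read", "Connected Device",
             "Therapy Setting/Observation"),
    DataFlow(9, "HCP", "Read", "CGM", "Therapy Setting/Observation"),
    DataFlow(12, "Manufacturer", "Create/Update", "CGM",
             "Device Configuration/Therapy Setting/Firmware"),
]

def flows_from(actor: str) -> list[DataFlow]:
    """All data flows originating at the given actor."""
    return [f for f in FLOWS if f.source == actor]
```

With the subset above, `flows_from("Manufacturer")` returns only flow 12.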


15.5.3 Pre- & Post-Mitigation Vulnerability Assessment

See Section 11.2.4 for additional details on eCVSS pre- or post-mitigation vectors.
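Each eCVSS-style vector tabulated below is a space-separated list of metric:value pairs. A minimal parsing sketch follows; the `parse_vector` name is hypothetical, and the scoring arithmetic itself is defined in Section 11.2.4 and is not reproduced here.

```python
def parse_vector(vector: str) -> dict:
    """Split an eCVSS-style vector string such as
    'ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:U'
    into a metric -> value mapping. Metric names (ISE, AV, AC, Au,
    C, I, A, CDP, Aw) are taken as-is; no scoring is performed."""
    return dict(part.split(":", 1) for part in vector.split())

metrics = parse_vector("ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:U")
```

Here `metrics["AV"]` is `"L"` and `metrics["Aw"]` is `"U"`.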

Device Type Scoring

Name   Classification   Confidentiality Requirement   Integrity Requirement   Availability Requirement   Moderate-Risk Threshold   High-Risk Threshold

CGM    Class III        Medium                        High                    Medium                     3.5                       7
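The two thresholds partition scores into low-, moderate-, and high-risk bands. A sketch of that bucketing, assuming scores at or above a threshold fall into the higher band (the boundary inclusivity is an assumption, as is the `risk_band` helper name):

```python
def risk_band(score: float, moderate: float = 3.5, high: float = 7.0) -> str:
    """Bucket an eCVSS score using the CGM thresholds from the Device Type
    Scoring table. Boundary handling (>= vs >) is assumed, not specified."""
    if score >= high:
        return "High"
    if score >= moderate:
        return "Moderate"
    return "Low"
```

Under this assumption, a pre-mitigation score of 8.1 is high risk, while a post-mitigation score of 2.1 is low risk.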

Potential Vulnerability Assessment

Name Category Pre-Mitigation Vector Pre-Score Post-Mitigation Vector Post-Score

External Entity Manufacturer Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:H Au:N C:N I:P A:N CDP:L Aw:U 1.5 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:U 1.3

Spoofing of the Manufacturer External Destination Entity Spoofing ISE:N AV:L AC:M Au:N C:C I:N A:N CDP:L Aw:N 5.2 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:L Aw:N 1.8

Elevation by Changing the Execution Flow in Controller CGM Sensor Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:N CDP:MH Aw:N 7.7 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.1

Controller CGM Sensor May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:N 7.7 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.1

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:N 7.7 ISE:Y AV:L AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.1

Controller Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:P A:C CDP:L Aw:U 3.1 ISE:Y AV:L AC:H Au:N C:C I:P A:C CDP:L Aw:U 3.1

Controller Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:C I:P A:C CDP:L Aw:N 6.9 ISE:Y AV:L AC:H Au:N C:C I:P A:C CDP:L Aw:N 6.4

Potential Process Crash or Stop for Controller CGM Sensor Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:LM Aw:N 7.3 ISE:N AV:A AC:H Au:M C:C I:N A:N CDP:L Aw:N 4.7

Potential Data Repudiation by Controller CGM Sensor Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:H Au:M C:N I:P A:P CDP:L Aw:A 1.2

Potential Lack of Input Validation for Controller CGM Sensor Tampering ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:N 5.9 ISE:Y AV:A AC:H Au:M C:P I:P A:P CDP:MH Aw:U 3.2

Spoofing the Manufacturer External Entity Spoofing ISE:Y AV:L AC:M Au:N C:C I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:L AC:H Au:M C:P I:P A:C CDP:LM Aw:U 3.2

Spoofing the Controller CGM Process Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:P CDP:MH Aw:U 3.6 ISE:N AV:L AC:L Au:N C:C I:N A:P CDP:MH Aw:U 3.6

Elevation by Changing the Execution Flow in Controller CGM Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:P CDP:MH Aw:U 4.1 ISE:Y AV:A AC:H Au:M C:N I:P A:P CDP:MH Aw:U 2.9

Spoofing the Controller CGM Process Spoofing ISE:N AV:A AC:M Au:N C:N I:C A:N CDP:MH Aw:N 8.7 ISE:N AV:L AC:H Au:M C:N I:C A:N CDP:L Aw:U 3.1

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:C I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:A AC:H Au:M C:P I:P A:C CDP:LM Aw:U 3.4

Potential Process Crash or Stop for Controller CGM Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:LM Aw:N 6.5 ISE:N AV:L AC:H Au:M C:P I:N A:N CDP:LM Aw:N 3.6

Spoofing the Manufacturer External Entity Spoofing ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:N 8.3 ISE:Y AV:L AC:H Au:M C:P I:P A:C CDP:MH Aw:U 3.5

Potential Data Repudiation by Controller CGM Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:H Au:M C:N I:P A:P CDP:L Aw:A 1.2

Elevation by Changing the Execution Flow in Controller Connected Device Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:C I:C A:N CDP:L Aw:N 7.1 ISE:Y AV:A AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.2

Controller Connected Device May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:C I:C A:C CDP:MH Aw:U 4.0 ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:U 3.8

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:C I:C A:C CDP:MH Aw:U 4.0 ISE:Y AV:A AC:H Au:M C:P I:P A:P CDP:MH Aw:U 3.2

Potential Lack of Input Validation for Controller CGM Tampering ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:U 2.9 ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:U 2.9

Potential Process Crash or Stop for Controller Connected Device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.3 ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.3

Elevation by Changing the Execution Flow in Controller CGM Elevation Of Privilege ISE:Y AV:A AC:H Au:N C:C I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:A AC:H Au:M C:N I:P A:N CDP:MH Aw:U 2.6

Controller CGM May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:A AC:M Au:N C:C I:C A:C CDP:MH Aw:U 4.3 ISE:Y AV:A AC:H Au:M C:P I:P A:P CDP:MH Aw:U 3.2

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:LM Aw:U 3.6 ISE:N AV:A AC:M Au:M C:P I:N A:N CDP:LM Aw:U 2.2

Potential Data Repudiation by Controller Connected Device Repudiation ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:N 4.5 ISE:N AV:A AC:H Au:M C:N I:P A:N CDP:L Aw:A 1.0

Spoofing the Health Care Professional External Entity Spoofing ISE:N AV:A AC:L Au:N C:C I:C A:P CDP:MH Aw:U 4.4 ISE:N AV:A AC:M Au:M C:P I:P A:P CDP:LM Aw:U 3.2

Potential Process Crash or Stop for Controller CGM Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8

Potential Data Repudiation by Controller CGM Repudiation ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:N 4.5 ISE:N AV:A AC:H Au:M C:N I:P A:P CDP:L Aw:A 1.3

Potential Lack of Input Validation for Controller CGM Tampering ISE:Y AV:A AC:M Au:N C:P I:P A:N CDP:LM Aw:U 3.2 ISE:Y AV:A AC:H Au:M C:P I:P A:P CDP:LM Aw:U 2.9

Spoofing the Controller CGM Process Spoofing ISE:Y AV:L AC:M Au:N C:P I:N A:P CDP:L Aw:N 4.0 ISE:Y AV:L AC:M Au:N C:P I:N A:P CDP:L Aw:N 4.0

External Entity Health Care Professional Potentially Denies Receiving Data Repudiation ISE:N AV:A AC:L Au:N C:N I:P A:N CDP:L Aw:N 4.9 ISE:N AV:A AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.6

Spoofing of the Health Care Professional External Destination Entity Spoofing ISE:N AV:A AC:M Au:N C:C I:N A:N CDP:L Aw:N 6.1 ISE:N AV:A AC:M Au:M C:P I:P A:N CDP:L Aw:N 4.7


Elevation by Changing the Execution Flow in Controller CGM Elevation Of Privilege ISE:Y AV:L AC:M Au:M C:C I:C A:P CDP:MH Aw:U 3.8 ISE:Y AV:A AC:H Au:M C:N I:P A:P CDP:MH Aw:U 2.9

Controller CGM May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:M C:C I:C A:P CDP:L Aw:U 3.1 ISE:Y AV:L AC:H Au:M C:P I:P A:P CDP:MH Aw:U 3.1

Elevation by Changing the Execution Flow in Controller Connected Device Elevation Of Privilege ISE:Y AV:L AC:L Au:M C:C I:N A:P CDP:MH Aw:U 3.4 ISE:Y AV:A AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.2

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:C I:N A:P CDP:MH Aw:U 3.5 ISE:Y AV:L AC:H Au:M C:C I:N A:N CDP:MH Aw:U 3.0

Potential Process Crash or Stop for Controller CGM Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8

Potential Data Repudiation by Controller CGM Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:H Au:M C:N I:P A:P CDP:L Aw:A 1.2

Potential Lack of Input Validation for Controller CGM Tampering ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:N 5.9 ISE:Y AV:A AC:H Au:M C:P I:P A:P CDP:LM Aw:U 2.9

Potential Process Crash or Stop for Controller Connected Device Denial Of Service ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.3 ISE:N AV:L AC:H Au:N C:N I:N A:C CDP:L Aw:U 2.3

Spoofing the Controller CGM Process Spoofing ISE:Y AV:L AC:L Au:N C:C I:N A:C CDP:L Aw:N 6.9 ISE:Y AV:L AC:H Au:M C:P I:N A:N CDP:L Aw:N 1.8

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7

Potential Data Repudiation by Controller Connected Device Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:A 0.9

Potential Lack of Input Validation for Controller Connected Device Tampering ISE:Y AV:L AC:M Au:N C:P I:P A:N CDP:LM Aw:N 5.9 ISE:Y AV:L AC:H Au:M C:P I:P A:P CDP:MH Aw:U 3.1

Spoofing the Controller Connected Device Process Spoofing ISE:Y AV:L AC:M Au:N C:P I:N A:N CDP:L Aw:N 2.7 ISE:Y AV:L AC:M Au:M C:P I:N A:N CDP:L Aw:N 2.2

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:M Au:N C:C I:C A:C CDP:MH Aw:U 4.0 ISE:Y AV:L AC:H Au:M C:P I:N A:N CDP:MH Aw:U 2.2

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7

Spoofing the Controller Connected Device Process Spoofing ISE:Y AV:L AC:M Au:N C:N I:C A:C CDP:MH Aw:N 8.1 ISE:Y AV:A AC:H Au:M C:N I:C A:C CDP:MH Aw:U 3.8

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:A AC:M Au:N C:C I:N A:C CDP:L Aw:U 3.7 ISE:Y AV:A AC:H Au:M C:C I:N A:C CDP:L Aw:U 3.0

External Entity Health Care Professional Potentially Denies Receiving Data Repudiation ISE:N AV:A AC:L Au:N C:N I:P A:N CDP:L Aw:N 4.9 ISE:N AV:A AC:M Au:M C:N I:P A:N CDP:L Aw:N 3.6

Spoofing of the Health Care Professional External Destination Entity Spoofing ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:L AC:M Au:M C:P I:P A:N CDP:L Aw:N 4.2

External Entity Manufacturer Potentially Denies Receiving Data Repudiation ISE:N AV:L AC:L Au:N C:N I:P A:N CDP:L Aw:U 1.9 ISE:N AV:L AC:H Au:M C:N I:P A:N CDP:L Aw:U 1.3

Spoofing of the Manufacturer External Destination Entity Spoofing ISE:N AV:L AC:M Au:N C:C I:N A:N CDP:L Aw:N 5.2 ISE:N AV:L AC:H Au:M C:P I:P A:N CDP:L Aw:N 3.8

Elevation by Changing the Execution Flow in Controller Connected Device Elevation Of Privilege ISE:N AV:A AC:H Au:M C:C I:N A:P CDP:L Aw:N 5.2 ISE:Y AV:A AC:H Au:M C:N I:P A:N CDP:LM Aw:U 2.2

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:A AC:H Au:M C:C I:N A:C CDP:L Aw:U 3.0 ISE:N AV:A AC:H Au:M C:C I:N A:N CDP:L Aw:N 4.7

Data Flow CD Read Wireless CGM Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:N AV:A AC:L Au:N C:N I:N A:P CDP:L Aw:C 0.6 ISE:N AV:A AC:L Au:N C:N I:N A:P CDP:L Aw:C 0.6

Potential Process Crash or Stop for Controller Connected Device Denial Of Service ISE:N AV:A AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.3 ISE:N AV:A AC:H Au:M C:N I:N A:C CDP:L Aw:U 2.3

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:N C:C I:N A:N CDP:L Aw:N 6.5 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N 2.1

Potential Data Repudiation by Controller Connected Device Repudiation ISE:N AV:A AC:M Au:N C:N I:P A:N CDP:L Aw:N 4.5 ISE:N AV:A AC:H Au:M C:N I:P A:P CDP:L Aw:A 1.3

Potential Lack of Input Validation for Controller Connected Device Tampering ISE:Y AV:A AC:M Au:N C:P I:P A:N CDP:L Aw:N 5.6 ISE:Y AV:A AC:H Au:M C:P I:P A:P CDP:MH Aw:U 3.2

Spoofing the Controller Connected Device Process Spoofing ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N 2.1 ISE:N AV:A AC:M Au:M C:P I:N A:N CDP:L Aw:N 2.8

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:U 3.8 ISE:Y AV:L AC:H Au:M C:C I:C A:C CDP:MH Aw:U 3.7

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:A AC:H Au:M C:C I:N A:N CDP:L Aw:N 4.7 ISE:Y AV:A AC:H Au:M C:P I:P A:N CDP:LM Aw:N 5.4

Data Flow CD Create/Update/Delete Wireless CGM Device Configuration/Therapy Setting/Observation Is Potentially Interrupted Denial Of Service ISE:Y AV:A AC:L Au:M C:N I:N A:P CDP:LM Aw:C 0.7 ISE:Y AV:A AC:L Au:N C:N I:N A:P CDP:LM Aw:C 0.8

Data Flow Sniffing Information Disclosure ISE:N AV:A AC:L Au:M C:C I:N A:N CDP:L Aw:N 5.5 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N 2.1

Spoofing the Controller Connected Device Process Spoofing ISE:N AV:L AC:H Au:M C:N I:C A:N CDP:N Aw:N 5.9 ISE:N AV:A AC:M Au:M C:C I:P A:N CDP:L Aw:U 3.0

Spoofing the Controller CGM Process Spoofing ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N 2.1 ISE:N AV:A AC:H Au:M C:P I:N A:N CDP:L Aw:N 2.1

Spoofing the Health Care Professional External Entity Spoofing ISE:Y AV:L AC:M Au:M C:P I:P A:P CDP:L Aw:U 2.4 ISE:N AV:A AC:M Au:M C:P I:P A:P CDP:MH Aw:U 3.4

Potential Data Repudiation by Controller CGM Repudiation ISE:N AV:L AC:M Au:N C:N I:P A:N CDP:L Aw:N 3.6 ISE:N AV:L AC:H Au:M C:N I:P A:P CDP:L Aw:A 1.2

Data Flow Sniffing Information Disclosure ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7 ISE:N AV:L AC:L Au:N C:C I:N A:N CDP:L Aw:U 2.7

Potential Process Crash or Stop for Controller CGM Denial Of Service ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8 ISE:Y AV:L AC:H Au:N C:N I:N A:C CDP:LM Aw:U 2.8

Elevation Using Impersonation Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:C CDP:MH Aw:U 4.1 ISE:Y AV:L AC:M Au:M C:C I:P A:N CDP:MH Aw:U 3.5

Controller CGM May be Subject to Elevation of Privilege Using Remote Code Execution Elevation Of Privilege ISE:Y AV:L AC:H Au:N C:C I:C A:C CDP:MH Aw:U 3.8 ISE:Y AV:L AC:H Au:M C:P I:P A:P CDP:MH Aw:U 3.1

Elevation by Changing the Execution Flow in Controller CGM Elevation Of Privilege ISE:Y AV:L AC:L Au:N C:C I:C A:P CDP:MH Aw:U 4.1 ISE:Y AV:A AC:H Au:M C:N I:P A:P CDP:MH Aw:U 2.9
