
Project Number: 318786

Model and Inference Driven Automated testing of Services architectures

Deliverable D9.3

Assessment report for adopting and for contributing to testing standards in the MIDAS Platform as a Service


Revision Chart

Version | Date       | Author (Partner) | Description
0.1     | -          | SIN              | Version under 1st review
0.2     | 26.06.2013 | FF               | Revision and initial content for FF sections
0.3     | 28.06.2013 | FF, CNR          | Added Security Testing standardization activities; Review and Test Management and Execution
0.4     | 09.07.2013 | SIN              | Contribution for chapter 1
0.5     | 23.07.2013 | ITA              | Contribution to Supply Chain Management sections
0.6     | 01.08.2013 | SIN              | 2nd review of the document; review from SEF considered in the revision
0.7     | 03.08.2013 | DEDA             | Contribution to eHealth sections
0.8     | 07.08.2013 | SIN              | Executive Summary, Conclusions and Summary added
0.9     | 09.08.2013 | SIN              | 1st draft version for internal review
1.0     | 25.08.2013 | SIN              | UGOE and UPMC reviewer comments accepted; published version


EXECUTIVE SUMMARY

The aim of this document is to review the key standardization practices, standards and recommendations that are related to SOA testing and may have an impact on the design of the MIDAS test framework. In addition, we identify standardization activities to which MIDAS could potentially contribute. First, we identify the standards that shall be considered when specifying a SOA SUT that is compliant with the MIDAS testing model. Next, we compile a comprehensive list of the testing standards that the MIDAS framework relies on and/or may have an impact on, in terms of advancing the standards' specifications, test standards making practices, and best practices in using the existing standards.

A comprehensive analysis of the standardization activities related to SOA testing has been carried out. We applied a top-down approach, starting from the very generic recommendations of the OASIS Group on SOA testing, which consider SOA services as largely software artefacts that can leverage the body of experience built up around software testing. The evolving and comprehensive ISO/IEC 29119 Software Testing standard has been taken into account as a generic guideline for tailoring the MIDAS test framework.

More specifically, in assessing the impact on test standardization activities, three major MIDAS use cases have been considered, namely Test Execution, Manual Test Design and Automated Test Design. In particular, we have considered the standardization activity within bodies such as ETSI TC-MTS (the ETSI Methods for Testing and Specification Technical Committee), OMG, OASIS, ISO/IEC and IEEE. The objective is to assess the mutual impact between the aforementioned standardization activities and the MIDAS project. From the gaps identified between the MIDAS requirements and current standardization practices, potential contributions to standardization activities have been defined.

In identifying the compatibility requirements of SOA systems/services with the MIDAS test model, we rely on a set of stable standards defined by the OASIS Group, OMG, W3C and IETF. The document also defines the standards for the e-Health and Supply Chain Management SOA services that will serve as SOA systems under test during the pilot activities, where the compatibility with the MIDAS UML Model will be examined.

Finally, this document presents a list of clearly identified impacts on standardization activities and proposes concrete actions for aligning the MIDAS project with those activities.


Abstract

Project Number: 318786
Project: MIDAS
Project Title: Model and Inference Driven Automated testing of Services architectures
Project website: http://www.midas-project.eu
Document URL:
Deliverable Number: D9.3
Deliverable Title: Assessment report for adopting and for contributing to testing standards in the MIDAS Platform as a Service
Work Package Number: 9
Work Package Title: Standardization and Dissemination
Task Number: 9.1
Task Title: Standardization
Date of Delivery: Contractual M12, Actual M12
Version: draft / under review / final
Dissemination Level: Public / Confidential

Summary for dissemination: The deliverable assesses the alignment issues of the MIDAS testing framework with the requirements for test standards development of the standardization bodies and interoperability initiatives.

Keywords: MIDAS, Test automation, Test standardization, Model-driven testing

Contact Person: Roman Kužnar (SINTESIO)

Authors (Partner):
Roman Kužnar and Boštjan Pintar from SINTESIO
Marc-Florian Wendland and Christian John from FF
Libero Maesano and Fabio de Rosa from SEF
Nicola Tonellotto from CNR
Sergio Di Bona and Davide Guerri from DEDA
Miguel Angel Barcelona from ITA

Reviewers: UGOE and UPMC


TERMS AND DEFINITIONS

The terms used in this document are defined in the MIDAS Glossary [51].

Acronyms used in this document

API Application Programming Interface
ASN.1 Abstract Syntax Notation One
ATS Abstract Test Suite
BPM Business Process Management
CDA Clinical Document Architecture
CIM Computation Independent Model
CTS2 Common Terminology Services 2
ETSI European Telecommunications Standards Institute
ETSI TC-MTS ETSI Methods for Testing and Specification Technical Committee
GS1 Global system of supply chain standards
GUI Graphical User Interface
HOST High Stakes Online Secure Testing
HSSP Healthcare Service Specification Program
IaaS Infrastructure as a Service
ICS Implementation Conformance Statement
ICS/IFS Implementation Conformance or Interoperable Functions Statement
IDL Interface Definition Language
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
IETF Internet Engineering Task Force
ISO International Organization for Standardization
ISTQB International Software Testing Qualifications Board
IT Information Technology
IUT Implementation Under Test
IXIT Implementation eXtra Information for Testing
IXS Identity Cross-Reference Service
JEE Java Platform, Enterprise Edition
JMS Java Message Service
JSON JavaScript Object Notation
LIM Logistics Interoperability Model
MBT Model Based Testing
MDA Model Driven Architecture
MIDAS TAM MIDAS Test Analysis Model
MIDAS TC MIDAS TTCN-3 Code
MIDAS TDM MIDAS Test Design Model
MIDAS TMM MIDAS Test Management Model
NIST National Institute of Standards and Technology
OASIS Organization for the Advancement of Structured Information Standards
OMG Object Management Group
OSI Open Systems Interconnection
PaaS Platform as a Service
PIM Platform Independent Model
PSM Platform Specific Model
REST Representational State Transfer
RFC IETF Request for Comments
RLUS Retrieve, Locate, and Update Service
SaaS Software as a Service
SAML Security Assertion Markup Language
SAUT Services Architecture Under Test
SCC Supply Chain Council
SCM Supply Chain Management
SCOR Supply Chain Operations Reference
SCSMS Supply Chain Security Management System
SOA Service-oriented Architecture
SOAP Simple Object Access Protocol
SOA-RAF SOA Reference Architecture Foundation
SSL Secure Sockets Layer
SUT System Under Test
TaaS Testing as a Service
TPaaS Testing Platform as a Service
TTCN-3 Testing and Test Control Notation Version 3
TTCN-3 TCI-TL TTCN-3 Test Control Interface - Test Logging
UTP UML Testing Profile
W3C World Wide Web Consortium
WADL Web Application Description Language
WSDL Web Services Description Language
XML Extensible Markup Language
XSD XML Schema Definition


TABLE OF CONTENTS

EXECUTIVE SUMMARY
ABSTRACT
TERMS AND DEFINITIONS
ACRONYMS USED IN THIS DOCUMENT
TABLE OF CONTENTS
1 INTRODUCTION
  1.1 PROJECT DESCRIPTION
  1.2 PURPOSE OF THE DOCUMENT
  1.3 DOCUMENT STRUCTURE
2 OVERVIEW OF THE STANDARDIZATION RELATED TO SOA TESTING
  2.1 GENERIC RECOMMENDATIONS ON SOA TESTING
    2.1.1 What is to be tested
    2.1.2 How Testing is to be Done
    2.1.3 How Testing Results are reported
  2.2 ARCHITECTURAL IMPLICATIONS FOR SOA TESTING
  2.3 STANDARDIZATION IN THE FIELD OF SOFTWARE TESTING METHODOLOGIES
  2.4 TTCN-3 BASED STANDARDIZATION FRAMEWORK FOR SOA TESTING
  2.5 MODEL-BASED APPROACH TO SOA AUTOMATED TESTING
3 STANDARDS CONSIDERED BY MIDAS PROJECT
  3.1 TEST EXECUTION USE CASE
    3.1.1 TTCN-3 SOA Tools Comparison
    3.1.2 TTworkbench v.15 platform limitations and bugs
  3.2 MANUAL TEST DESIGN USE CASE
    3.2.1 Service/System Model Compatibility Standards for MIDAS UML Model
  3.3 AUTOMATED TEST DESIGN USE CASE
  3.4 TEST MANAGEMENT, EXECUTION AND DOCUMENTATION
    3.4.1 Test Management and Execution
    3.4.2 Documentation
  3.5 COMPATIBILITY STANDARDS FOR E-HEALTH SERVICES DESCRIPTIONS
    3.5.1 The HSSP standards
    3.5.2 Other eHealth standards used in the MIDAS Pilot
  3.6 COMPATIBILITY STANDARDS FOR SCM SERVICES DESCRIPTIONS
4 CONTRIBUTING TO TEST STANDARDS MAKING PRACTICES
  4.1 SOA TEST STANDARDS MAKING PRACTICES
  4.2 MODEL-BASED TESTING (MBT) PRACTICES AND EXPERIENCE IN ETSI
  4.3 E-HEALTH TEST STANDARDS MAKING PRACTICES
    4.3.1 IHE Testing Process
    4.3.2 HL7 initiatives
    4.3.3 The European project HITCH
  4.4 SCM TEST STANDARDS MAKING PRACTICES
5 CONCLUSION AND NEXT STEPS
  5.1 SUMMARY OF THE STANDARDIZATION ASSESSMENT
  5.2 NEXT STEPS
6 REFERENCES


1 INTRODUCTION

1.1 Project Description

The MIDAS project aims at designing and building an integrated framework for SOA testing automation that will be available as Software as a Service (SaaS) on a cloud infrastructure and that spans all the testing activities: test generation, execution, evaluation and scheduling, covering the functional, interaction, fault tolerance, security and usage-based testing aspects. MIDAS is focused on SOA testing, i.e. on black-box testing of single services and on grey-box testing of services architectures. The testing methods and technologies investigated and prototyped in the project are beyond the state of the art, particularly in model-based testing, model checking of choreographies for sound interaction test scenarios, fuzzing for security testing, usage-based testing, and probabilistic inference reasoning about test evaluation and scheduling.

1.2 Purpose of the document

This deliverable assesses the standardization framework related to the MIDAS project, i.e. the standards related to the testing of Service Oriented Architectures. In particular, we have considered the standardization activity within bodies such as ETSI TC-MTS (the ETSI Methods for Testing and Specification Technical Committee), OMG, OASIS, ISO/IEC and IEEE. The objective is to assess the mutual impact between the aforementioned standardization activities and the MIDAS project.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [30].

The main objective of this document is to review the key standardization practices and recommendations that are related to SOA testing and may have an impact on the MIDAS test framework design and/or vice versa. The impact is assessed in two directions. First, we have identified the standards that shall be considered when specifying a SOA system under test compliant with the MIDAS testing approach. Next, we compile a comprehensive list of the testing standards that the MIDAS framework relies on and/or may have an impact on, in terms of advancing the standards' specifications. A further aspect considered is the use of those standards. During the assessment of the standardization framework we observed that test standards, recommendations and standardization guidelines only loosely define the use of model-based testing methods, as the methodology has gained wider standardization attention only recently.

In the first part, an overview of standardization related to Service-Oriented Architecture (SOA) testing is given, using the recommendations of the OASIS Group as a starting point. Service-Oriented Architecture is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations [1]. Testing for SOA combines the typical challenges of software testing and certification with the additional need to accommodate the distributed nature and independence of the resources, the greater access of a more unbounded consumer population, and the desired flexibility to create new solutions from existing components over which the solution developer has little if any control [2]. As SOA services are considered largely software artefacts, the OASIS Group recommendation on SOA testing [2] is to rely on the IEEE 829 standard [3], which specifies the basic set of software test documents while allowing flexibility for tailored use. As such, IEEE 829 provides very generic guidance for SOA testing and a point of reference for the additional test concerns introduced by a SOA approach.

The MIDAS test framework relies on the model-based testing approach and on a TTCN-3 (Testing and Test Control Notation version 3) [10] based automated test execution engine as the reference technology for the proof of concept. TTCN-3 is a strongly typed test scripting language used in conformance and interoperability testing of communicating systems, together with a specification of test infrastructure interfaces that glue abstract test scripts to concrete communication environments. We will analyse the potential advantages, disadvantages, limitations, drawbacks and overall suitability of the TTCN-3 language for automated SOA testing. Another important point is to analyse the impact of the model-based approach to SOA testing on technical standardization and dissemination activities (in terms of best practices, modelling patterns, modelling guidelines, and future revisions).

The document also identifies and describes the test standards and test standards making practices that are relevant for the MIDAS pilot projects in the area of e-Health and Supply Chain Management SOA services. Through the experimentation performed during the pilot projects, we will gather important feedback on the use of the model-based SOA testing approach, and the experiences will be forwarded to the standardization bodies in the form of technical reports and/or technical guidelines.

1.3 Document structure

The document is structured into five sections.

Section 1: Introduction. This section provides an overview of the MIDAS project, the purpose of the present document, and its structure.

Section 2: Overview of the standardization related to SOA testing. This section gives generic recommendations on SOA testing and reviews key standardization practices (such as model-based testing and testing with a TTCN-3 test system) and recommendations (such as those of IEEE 829 and the ISO/IEC 29119 Software Testing standard) that are related to SOA testing and may have an impact on the MIDAS test framework design and/or vice versa.

Section 3: Standards considered by the MIDAS project. The compliance requirements, limitations and restrictions for a SOA system acting as SUT are presented. The mutual impact between the standardization framework and the MIDAS project is discussed from the perspective of the three major use cases: the Test Execution Use Case, the Manual Test Design Use Case, and the Automated Test Design Use Case.

The section describes which standards are considered for test planning, management, execution and reporting; to what extent the MIDAS architecture and framework comply with those standards; which compatibility standards will be considered when describing the e-Health pilot services; which standards are already available for e-Health testing; and, finally, which compatibility standards will be considered when describing the SCM pilot services.

Section 4: Contributing to test standards making practices. This section provides an overview of SOA test standards making practices and assesses potential MIDAS impacts and contributions to them. The focus is on Model-Based Testing (MBT) standardization practices in ETSI and OMG, e-Health test standardization practices, and Supply Chain Management test standards making practices.

Section 5: Conclusions and next steps. A brief summary of the potential contributions to SOA test standards, test standards making practices and methodology is provided.


2 OVERVIEW OF THE STANDARDIZATION RELATED TO SOA TESTING

Testing for SOA combines the typical challenges of software testing and certification with the additional need to accommodate the distributed nature and independence of the offered services, the greater access of a more unbounded service user population, and the desired flexibility to create new solutions from available services whose implementation cannot be controlled by the developer. The purpose of testing is to demonstrate a required level of reliability, correctness and effectiveness that enables prospective service users to have adequate confidence in using a service. Adequacy is defined by the service user based on his or her needs and context of use. Testing cannot prove absolute correctness and completeness. For SOA, however, it is critical for the prospective service user to perform service testing directly, to know what testing has been performed, how it has been performed, and what the results were.

Given the main objective of the MIDAS project as defined in MIDAS Deliverable D2.1, namely to design and build a test automation facility for service components and service component architectures that is available on demand (i.e. on a self-provisioning, pay-per-use, elastic basis), this section reviews the key standardization practices and recommendations that are related to SOA testing and may have an impact on the MIDAS test framework design and/or vice versa.

As a starting point, we follow the generic statement on the SOA testing model recently published by the OASIS Group in the Reference Architecture Foundation for SOA Version 1.0 (SOA-RAF for short) [1]. Testing of SOA services should be consistent with the SOA paradigm. In particular, testing resources and artefacts should be visible in support of service interaction between providers and consumers; in the case of SOA testing, the interaction is between the testing resource (e.g. the resource performing the test on behalf of the tester) and the tester. In addition, the idea of opacity of the implementation should limit the details that need to be available for effective use of the test resources. The key recommendations arising from SOA-RAF that should be followed by the MIDAS test automation platform are compiled in Section 2.1.

In SOA-RAF, SOA services are considered largely software artefacts and can therefore leverage the body of experience that has evolved around software testing. IEEE 829 [3] specifies the basic set of software test documents while allowing flexibility for tailored use.


While there are many testing frameworks available, SOA-RAF does not prescribe the use of any particular one. As such, IEEE 829 provides guidance for SOA testing and a point of reference for the additional test concerns introduced by a SOA approach.

ISO/IEC is currently developing a new standard, ISO/IEC 29119 Software Testing [7], whose aim is to provide a single definitive standard for software testing that defines the vocabulary, processes, documentation, techniques and a process assessment model for software testing, usable within any software development life cycle. The standard will replace a number of existing IEEE and BSI standards for software testing, e.g. IEEE 829 Test Documentation, IEEE 1008 Unit Testing, BS 7925-1 Vocabulary of Terms in Software Testing, and BS 7925-2 Software Component Testing.

The MIDAS Glossary is currently aligned with the IEEE 610.12, ETSI MTS Terminology, ISO 9000, ISO 9126, the ISTQB Glossary and the NIST standard.

When the ISO/IEC 29119 vocabulary is officially published, the definitions and concepts in the MIDAS Glossary shall be checked against it in order to harmonize the common understanding of the terms used.

A brief summary of the IEEE 829 guidelines and the ISO/IEC 29119 Software Testing standard, and of their relevance and alignment to the MIDAS requirements, is provided in Section 2.3.

For decades, the OSI seven-layer reference model has helped align the network industry. Vendors were able to explain clearly at which layer they offered products, and network engineers were able to determine the responsibilities of each layer and avoid architecting networks with functionality duplicated among multiple components. However, the objective of OSI cannot be completely achieved until systems can be tested to determine whether they conform to the relevant protocol and profile specifications. Therefore, the ISO/IEC 9646 series of standards [6] was developed to define a methodology and framework for specifying conformance test suites, and to define the procedures to be followed during testing. The methodology has been heavily utilized by the telecom industry. ETSI took the application of the ISO/IEC 9646 methodology a step further by developing the TTCN-3 [10] based approach to testing, which enables the development of test suites in a well-defined formal test language and facilitates test automation. The aim of TTCN-3 is to provide a well-defined syntax for the definition of tests, independent of any application domain. The abstract nature of a TTCN-3 test system makes it possible to adapt a test system to any test environment. This separation significantly reduces the maintenance effort and allows experts to concentrate on what to test rather than how.

Nowadays, SOA standards are shaping service delivery and the Internet industry. From an OSI model standpoint, Web services technology is a distributed computing system that runs on top of networking technology; all Web services therefore run in OSI Layer 7 (the Application Layer). Since the abstract nature of a TTCN-3 test system makes it possible to adapt a test system to any test environment, and since TTCN-3 is the only known well-standardized testing language, we will, in the course of this project, examine the use of the TTCN-3 test automation framework as a basis for cloud-based SOA testing automation. Best practices in the utilization of the TTCN-3 based approach are discussed in Section 2.4.

The model-based testing approach represents a step further. In model-based test development, an engineer starts from a set of requirements of a system to be tested, usually given in a specification written in natural language. The engineer authors a model, using a modelling notation, that fulfils the stated requirements. The model encodes these requirements and describes the aspects of the functional behaviour as well as the interfaces via which these are to be tested. The model is then instrumented for the purpose of test generation by adding or selecting test selection criteria, i.e. coverage goals or test purposes specifying what is to be covered, and heuristics specifying how these goals are to be covered. Test selection is necessary since an infinite or at least huge number of tests can be derived from every non-trivial model. A model-based testing tool then automatically generates an abstract test suite that complies with these criteria. The resulting abstract test suite may need to be adapted to enable test execution against the SUT. First practical experiences in adopting the MBT approach have already been gathered within ETSI, and the standardization body has published a set of very generic recommendations for the MBT approach. A compilation of the MBT testing requirements and their relevance to MIDAS is discussed in Section 2.5.


2.1 Generic recommendations on SOA testing

The generic recommendations on the SOA testing approach extracted from [1], which are elaborated below, shall be considered and may have a crucial impact on the design of the MIDAS platform as a Testing Platform as a Service (TPaaS). If properly designed, the MIDAS TPaaS may facilitate and enable testing of the operational SOA environment, which is one of the key generic recommendations of SOA-RAF on the SOA testing approach.

SOA-RAF emphasises that testing of SOA artefacts for use in the SOA ecosystem differs from traditional software testing for several reasons. These include a difference in what constitutes the service user community and what constitutes the evolving environment that comprises the SOA ecosystem. In response, testing must include considerations for making a service testable throughout its lifetime.

The distributed, unbounded nature of the SOA ecosystem makes it unlikely to have an isolated test environment that duplicates the operational environment. A traditional testing approach often makes use of a test system that is identical to the eventual operational system but isolated for testing. After testing is successfully completed, the tested entity would be migrated to the operational environment, or the test environment may be delivered as part of the system to become operational. This is not feasible for the SOA ecosystem as a whole.

SOA services must be testable in the environment and under conditions similar to those that can be encountered in the operational SOA ecosystem. As the ecosystem is in constant change, some level of testing must be continuous throughout the lifetime of the service, leveraging utility services used by the ecosystem infrastructure to monitor its own health and respond to situations that could lead to degraded performance. This implies that the test resources must incorporate aspects of the SOA paradigm, and a category of services may be created specifically to support and enable effective monitoring and continuous testing for resources participating in the SOA ecosystem.

2.1.1 What is to be tested

The Service Description model (Figure 1) elaborates the described aspects of a service:


• the service functionality and the technical assumptions that underlie the functionality;

• the policies that describe conditions of use;

• the service interface that defines information exchange with the service;

• service reachability, which identifies how and where message exchange is to occur; and

• metrics access, so that any participant can obtain information on how a service is performing.

The aspects represent joint concerns of all the stakeholders, and service testing must provide adequate assurance that each of these aspects is operational as defined. In particular:

• Service functionality is an early and ongoing focus of testing, to ensure that the service accurately reflects the described functionality and that the described functionality accurately addresses the consumer needs.

• Policies constraining service development, such as coding standards and best practices, require appropriate testing and auditing during development to ensure compliance. Policies that define conditions of use are initially tested during service development and are continuously monitored during the operational lifetime of the service.

• At any point where the interface is modified or exposes a new resource, the message exchange should be monitored, both to ensure that the message reaches its intended destination and that it is parsed correctly once received.

• The service interface is also tested when the service endpoint changes. Functioning of a service endpoint at one time does not guarantee that it is functioning at another time, e.g. the server with the endpoint address may be down; this makes testing of service reachability a continual monitoring function throughout the life of the service's use of the endpoint.

• Metrics are a key indicator for consumers to decide whether a service is adequate for their needs. For instance, the average response time or the recent availability can be determining factors, even if there are no rules or regulations promulgated through the governance process against which these metrics are assessed. Testing will ensure that the metrics access indicated in the service description is accurate.

Figure 1: Service description model as defined by SOA-RAF.


2.1.2 How Testing is to be Done

Testing should follow well-defined methodologies and, where possible, should reuse test artefacts that have proven generally useful in past testing activities. For example, IEEE 829 notes that test cases are separated from test designs to allow for reuse in more than one test design specification and in other situations. The test design specification is the first stage in developing the tests for a software testing project. It records which features of a test item are to be tested, and is derived from documents such as requirements and designs. The test design does not record the values to be entered for a test, but describes the requirements for defining those values. Test cases, or test artefacts, specify for each testing requirement the exact input values, the values of any standing data that is required, the exact expected output values and changes to the internal system state, and any special steps for setting up the tests. A feature from the test design may be tested in more than one test case, and a test case may test more than one feature. The aim of a set of test cases is to test each feature from the test design at least once.

As with description of a service in the SOA ecosystem, description of testing artefacts enables awareness of the artefact and describes how the artefact may be accessed or used.

Automated testing and regression testing may be more important in the SOA ecosystem, in order to re-verify that a service is still acceptable when it is incorporated in a new use.

2.1.3 How Testing Results are reported

For any SOA service, accurate reporting of the testing a service has undergone, and of the results of that testing, is vital to service users deciding whether a service is appropriate for an intended use. Appropriateness may be defined by a service user organization and require specific test regimens culminating in a certification; alternatively, appropriateness could be established by accepting testing and certifications conferred by others. An example of SOA testing reporting practice is the GS1 certification scheme for Supply Chain Management services, which is briefly described in Section 3.6.

Service user testing and the reporting of results raise additional issues. While stating who did the testing is mandatory, there may be formal requirements for authentication of the tester to ensure traceability of the testing claims. In some circumstances, persons or organizations would not be allowed to state testing claims unless the tester was an approved entity; in other cases, ensuring that the tester had a valid email address may be sufficient. In either case, it is at the discretion of the potential consumer to decide what level of authentication is acceptable and which testers are considered authoritative in the context of the anticipated use.

Testing information, as with other elements of description, may require special access controls to ensure appropriate access and use.

2.2 Architectural Implications for SOA Testing

The discussion of SOA Testing indicates numerous architectural implications that have to be considered for testing of resources and interactions within the SOA ecosystem:


[D9.3#1] SOA services MUST be testable in the environment and under the conditions that can be encountered in the operational SOA ecosystem.

[D9.3#2] The distributed, boundary-less nature of the SOA ecosystem makes it infeasible to create and maintain a single testing substitute of the entire ecosystem to support testing activities. Test protocols MUST recognize and accommodate changes to and activities within the ecosystem.

[D9.3#3] A standard suite of monitoring services SHOULD be defined, developed, and maintained. This SHOULD be done in a manner consistent with the evolving nature of the ecosystem.

[D9.3#4] Services SHOULD provide interfaces that support access in a test mode.

[D9.3#5] Testing resources MUST be described and their descriptions MUST be catalogued in a manner that enables their discovery and access.

[D9.3#6] Guidelines for testing and ecosystem access MUST be established and the ecosystem MUST be able to enforce those guidelines asserted as policies.

[D9.3#7] Services SHOULD be available to support automated testing and regression testing.

[D9.3#8] Services SHOULD be available to facilitate updating the service description by authorized participants who have performed testing of a service.

2.3 Standardization in the field of Software Testing Methodologies

IEEE 829-2008, also known as the 829 Standard for Software and System Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, with each stage potentially producing its own type of document. The standard specifies the format of these documents but does not stipulate whether they must all be produced, nor does it include any criteria regarding adequate content for these documents; these are matters of judgment outside the purview of the standard. The documents are:


• Test Plan: a management planning document that shows:

  o how the testing will be done (including System Under Test (SUT) configurations);
  o who will do the testing;
  o what will be tested;
  o how long it will take (although this may vary depending upon resource availability); and
  o what the test coverage will be, i.e. what quality level is required.

• Test Design Specification: detailing test conditions and the expected results, as well as test pass criteria.

• Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification.

• Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed.

• Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next.

• Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed.

• Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to shed light on why the test failed. This document is deliberately named an incident report, not a fault report, because a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system: the expected results may be wrong, the test may have been run incorrectly, or the requirements may be inconsistent, allowing more than one interpretation. The report consists of all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. Where possible, the report also includes an assessment of the impact of the incident upon testing.

• Test Summary Report: a management report providing any important information uncovered by the tests, including assessments of the quality of the testing effort, the quality of the SUT, and statistics derived from the Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the SUT is fit for purpose, according to whether or not it has met the acceptance criteria defined by the project stakeholders.

ISO/IEC is currently developing the new ISO/IEC 29119 Software Testing standard (Figure 2), whose aim is to provide a single definitive standard for software testing that defines the vocabulary, processes, documentation, techniques and a process assessment model for software testing, usable within any software development life cycle.

The standard is centred around a three-tier, risk-based process model for software testing. It provides guidance on the development of organisational test strategies and policies; on the management of testing projects, including the design of project-level test strategies and plans and the monitoring and control of testing; and on a dynamic test process covering test analysis and design, test environment set-up and maintenance, test execution, and reporting. It is currently being developed, trialled and reviewed by practitioners and academics around the world, with 27 nations represented in the working group responsible for its development.

Figure 2: Structure of the ISO/IEC 29119 standard.

ISO/IEC 29119 comprises five parts:

• Part 1: Definitions & Vocabulary

• Part 2: Test Process

• Part 3: Test Documentation

• Part 4: Test Techniques

• Process Assessment Model for software testing processes, to be published as ISO/IEC 33063 (dual standard number pending)

The standard will replace a number of existing IEEE and BSI standards for software testing:

• IEEE 829 Test Documentation

• IEEE 1008 Unit Testing

• BS 7925-1 Vocabulary of Terms in Software Testing

• BS 7925-2 Software Component Testing

2.4 TTCN-3 based standardization framework for SOA testing

The official TTCN-3 web site states that the Testing and Test Control Notation Version 3 (TTCN-3) is a standardized testing technology [10], developed and maintained by the European Telecommunications Standards Institute (ETSI) and specifically designed for testing and certification. The International Telecommunication Union (ITU-T) has also adopted the ETSI TTCN-3 standards, in the Z.160 series.

TTCN-3 is a test specification language that applies to a variety of application domains and types of testing. It has been used since 2000 in standardization as well as in industry, research, international projects and academia. In response to the demands of the user community TTCN-3 is being continuously improved and extended.

TTCN-3 provides all the constructs and features necessary for black-box testing. It embodies a rich typing system and powerful matching mechanisms, support for both message-based and procedure-based communication, timer handling, dynamic test configuration including concurrent test behaviour, the concept of verdicts and verdict resolution, and much more.
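To make these constructs concrete, the following minimal TTCN-3 sketch shows how they typically appear together in a test module: record types, a message-based port, a test component with a timer, a template using the "?" matching symbol, and an alt statement that sets a verdict. The module, the order service, and all names and values are hypothetical illustrations, not part of the MIDAS test framework or of any standardized test suite.

```
module SoaServiceTest {

  // Structured types for a hypothetical request/response service
  type record OrderRequest  { charstring orderId, integer quantity }
  type record OrderResponse { charstring orderId, charstring status }

  // Message-based port through which the test component talks to the SUT
  type port SoaPort message {
    out OrderRequest;
    in  OrderResponse;
  }

  // Test component owning the port and a reply guard timer
  type component TesterComp {
    port SoaPort pt;
    timer tReply := 5.0; // seconds
  }

  // Template with a matching mechanism: any orderId, status must be "OK"
  template OrderResponse a_okResponse := { orderId := ?, status := "OK" }

  testcase TC_Order_001() runs on TesterComp {
    pt.send(OrderRequest: { orderId := "42", quantity := 1 });
    tReply.start;
    alt {
      [] pt.receive(a_okResponse) { setverdict(pass); }   // expected reply
      [] pt.receive               { setverdict(fail); }   // any other message
      [] tReply.timeout           { setverdict(inconc); } // no reply in time
    }
  }
}
```

In a complete test system, the standardized interfaces mentioned below would supply the codec and the SUT adapter that map such abstract messages onto a concrete transport.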

As a result of its intrinsic extensibility, TTCN-3 can import external data and type specifications directly, and external implementations can be integrated in order to extend the functionality specified in the TTCN-3 standards. Several mappings of external data and type specifications, such as ASN.1, IDL and XML, are already standardized; others can easily be added.


The abstract definition of test cases is an important feature of TTCN-3. It makes it possible to specify a non-proprietary test system, i.e. a test system that uses a well-defined, standardized test scripting language and is independent of both platform and operating system. The abstract definitions can be either compiled or interpreted for execution.

The TTCN-3 reference architecture defines standardized interfaces for test control, for encoding and decoding of data, and for test execution.

TTCN-3 can be used for many types of testing, for example:

• valid, invalid and inopportune testing;

• software module, unit, layer, protocol, integration and laboratory testing;

• functional, load and distributed testing; and

• regression, certification and approval testing.

The TTCN-3 based testing methodology follows the testing methodology defined in the ISO/IEC 9646 test standards for OSI distributed systems. The motivation for developing those standards was the international definition and acceptance of a common testing methodology, together with appropriate testing methods and procedures, i.e. to provide a framework for specifying conformance test suites and to define the procedures to be followed during testing.

To a great extent, one can draw parallels between the ISO/IEC 9646 methodology and the IEEE 829 standard. A brief summary of the alignment is provided in Table 1.

ISO/IEC 9646 documentation | IEEE 829 documentation
Abstract test architecture; PICS (Protocol Implementation Conformance Statement); Interoperability Feature Statement | Test Plan
Test Suite Structure (TSS); PIXIT (Protocol Implementation eXtra Information for Testing) | Test Design Specification
TP (Test Purposes) | Test Case Specification
Abstract Test Suite (ATS) | Test Procedure Specification
Test Report with verdict | Test Item Transmittal Report; Test Log; Test Incident Report; Test Summary Report

Table 1: Comparison between the ISO/IEC 9646 and IEEE 829 test methodologies.

As a black-box testing methodology, TTCN-3 is used mainly in the telecom, networking and automotive industries, as well as in others (e.g. medicine, power transmission and distribution, finance, avionics) [11], for two types of testing: conformance testing and interoperability testing.

Conformance testing involves testing both the capabilities and the behaviour of an implementation; it determines whether a product or system meets a specified standard that has been developed for efficiency or interoperability. Conformance testing includes no assessment of the performance, robustness or reliability of an implementation.

The purpose of conformance testing is to increase the probability that different implementations are able to interwork. However, it should be borne in mind that the complexity of communication systems makes exhaustive testing impractical on both technical and economic grounds. Moreover, testing can guarantee neither conformance to a specification nor interoperability. What it does is give confidence that an implementation has the required capabilities and that its behaviour conforms consistently in representative instances of communication.

Interoperability testing, on the other hand, can demonstrate that a product will work with other, similar products, and therefore assesses the ability of the implementation to support the required trans-network functionality between itself and another, similar implementation to which it is connected. The purpose of interoperability testing is not only to show that products from different manufacturers can work together, but also to show that these products can interoperate using a specific protocol. Without this additional aspect, interoperability testing could be considered almost meaningless: within the context of standardization, it is of little interest to know that two products can interoperate unless there is a guarantee that they are connected by means of a standardized protocol. It is therefore advisable to test the conformance of an implementation before testing its interoperability with other (similarly tested) implementations.

From an OSI model standpoint, SOA services technology is a distributed computing system that runs on top of networking technology; all SOA services therefore run in OSI Layer 7 (the Application Layer). At the simplest level, SOA services are applications that communicate by exchanging XML messages by means of the SOAP protocol [8], which is used for exchanging XML-based messages in a distributed environment. WSDL [12], [13], [14] is an XML-based format for the description of Web services that specifies the exposed interface and location of a Web service as well as how to access it. As demonstrated in [4], a Web service can be tested using a test specification language like TTCN-3, such that the skeleton of the TTCN-3 Abstract Test Suite (ATS) is derived from the interface definition given in WSDL. For test execution, a SUT adapter and a codec have to be provided to instantiate the ATS for the target system.
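To illustrate this idea, the sketch below shows what a WSDL-derived ATS fragment could look like when a WSDL operation is treated as a TTCN-3 procedure-based signature, in the spirit of [4]. The "checkAvailability" operation, its types and all names are invented for illustration; in practice such a skeleton would be produced by a WSDL-to-TTCN-3 generator, and the codec and SUT adapter (not shown) would realize the SOAP/XML encoding and transport.

```
module WsdlDerivedSkeleton {

  // Types assumed to be derived from the XSD definitions of a hypothetical
  // "checkAvailability" WSDL operation (all names are illustrative only)
  type record CheckAvailabilityRequest  { charstring itemCode }
  type record CheckAvailabilityResponse { boolean available }

  // The WSDL operation viewed as a procedure-based signature
  signature checkAvailability(in CheckAvailabilityRequest req)
    return CheckAvailabilityResponse;

  type port WsPort procedure { out checkAvailability; }

  type component WsTester { port WsPort pt; }

  testcase TC_CheckAvailability() runs on WsTester {
    var CheckAvailabilityResponse v_rsp;
    // Invoke the operation with a 5-second call timeout
    pt.call(checkAvailability: { { itemCode := "A-1" } }, 5.0) {
      [] pt.getreply(checkAvailability: { - } value ?) -> value v_rsp {
           if (v_rsp.available) { setverdict(pass); }
           else                 { setverdict(fail); }
         }
      [] pt.catch(timeout) { setverdict(inconc); }
    }
  }
}
```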

From the perspective of ISO/IEC 9646 and ETSI test standards making practice, and with respect to SOA testing, unit testing of a single SOA component, functional testing of a software module or SOA component, and testing the conformity of a SOA interface and/or SOA protocol can be considered conformance testing. Integration testing, where the message exchange behaviour of several SOA components is observed from different intra-component monitoring points, and usage-based testing of part or all of a SOA system, where the test stimulus is generated from the users' end points, can be considered interoperability testing.

2.5 Model-Based Approach to SOA Automated Testing

In model-based test development, an engineer starts from a set of requirements of a SUT, usually given in a specification written in natural language. The engineer authors a model using a modelling notation that fulfils the requirements stated in [27]. The model encodes these requirements and describes the aspects of the functional and extra-functional behaviour, as well as the interfaces via which these have to be tested. The model is then instrumented for the purpose of test generation by adding or selecting test selection criteria, i.e. instructions on what is to be covered, and heuristics specifying how these goals are to be covered. Test selection is typically necessary since an infinite or at least huge number of test cases can be derived from every non-trivial model. A test generator then automatically generates a set of logical test case specifications that may or may not be abstract with regard to test data. The set of test case specifications is often referred to as the ATS. The format in which the ATS is generated is not prescribed. In a fully model-based testing scenario, the resulting ATS is expressed within the model in a platform-independent representation. Often, however, the ATS is transferred directly into some sort of script language such as TTCN-3, or even into a more informal format such as documents or spreadsheets. Eventually, the resulting ATS may need to be adapted to ultimately enable test execution against the SUT.

One concrete notation for doing model-based testing as described above is the UML Testing Profile (UTP) [24]. It defines a language for designing, visualizing, specifying, analysing, constructing, and documenting the artefacts of test systems. It is a test modelling language that can be used with all major object and component technologies and applied to testing systems in various application domains. The UTP extends UML with test-specific concepts like test components, verdicts, defaults, etc. These concepts are grouped into concepts for test architecture, test data, test behaviour, and time. The UTP is defined such that mappings onto TTCN-3 are possible in order to enable the reuse of existing test infrastructures. As defined in [25], the MIDAS test framework will rely heavily on the utilization of UTP for expressing abstract and platform-independent test case specifications, which are later translated into TTCN-3 scripts. UTP fulfils the requirements on modelling notations for model-based testing as stated by ETSI [27].
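As a small illustration of how UTP concepts carry over to TTCN-3, the fragment below sketches how a UTP-style default (catch-all behaviour for unexpected events) is typically realized with a TTCN-3 altstep; all names are invented.

    // Sketch: a UTP-like "default" realized as a TTCN-3 altstep
    // (hypothetical names).
    module DefaultExample {
      type port MsgPort message { inout charstring }
      type component TestCompCT { port MsgPort p; timer t_guard }

      // Catch-all behaviour for events not handled by the test body
      altstep a_unexpected() runs on TestCompCT {
        [] p.receive {          // any unexpected message: fail
             setverdict(fail);
             stop
           }
        [] t_guard.timeout {    // no answer at all: inconclusive
             setverdict(inconc);
             stop
           }
      }

      // Inside a test case the default would be activated once:
      //   var default d := activate(a_unexpected());
    }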

The Model-Driven Architecture (MDA) approach defines business services by using a Computation Independent Model (CIM) that expresses the system functionality. The CIM is translated into a platform-independent model (PIM), which is in turn translated to one or more platform-specific models (PSMs) that are executable by a computer. A standardized way to describe MDA models is by using UML. In the case of SOA services, a UML profile for SOA, e.g. the SOA Modeling Language (SoaML), which has been standardized by the Object Management Group (OMG) [5], is typically used. In general, UML modelling techniques can be used for developing CIM, PIM and PSM models. PSM implementations may rely on different technologies such as JMS, JEE, WSDL [12], [13], [14], BPEL [22], [23], and XML Schema [15], [16], [17], [18], [19], [20], [21].

In cases where PSM models are written in Web Service Description Language (WSDL) and the abstract test suites are written in TTCN-3, it is possible to directly make use of the WSDL description. The basic concept has been presented in [4], and is shown in Figure 3.


Figure 3: Web service test framework using WSDL models for ATS generation.

The abstract specification of the ATS simplifies reusability and portability, but also implies that mapping is involved at two different levels in a TTCN-3 test system. First, the information provided by WSDL documents must be translated into TTCN-3, and the TTCN-3 type system has to enable the development of Web service test suites in the TTCN-3 core language. Second, the TTCN-3 adaptation layer must implement the reverse mapping to realize compatibility with the original interface definition.
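The following sketch gives the flavour of the first mapping level for a simple XSD type, loosely in the spirit of the standardized XSD-to-TTCN-3 mapping (ETSI ES 201 873-9); the type and field names are invented. The adaptation layer (codec) then implements the reverse direction, re-encoding such TTCN-3 values into schema-valid XML on the wire.

    // XSD input (excerpt):
    //   <xs:complexType name="Address">
    //     <xs:sequence>
    //       <xs:element name="street" type="xs:string"/>
    //       <xs:element name="zip"    type="xs:string" minOccurs="0"/>
    //     </xs:sequence>
    //   </xs:complexType>
    //
    // Possible TTCN-3 counterpart:
    type record Address {
      charstring street,
      charstring zip optional   // minOccurs="0" becomes an optional field
    }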


3 STANDARDS CONSIDERED BY MIDAS PROJECT

In order to identify key standards, recommendations and/or open specifications considered by MIDAS project, as well as potential impact on those standards and feedback provided to standardization bodies as a result of MIDAS project activities, we shall understand the ways the MIDAS test framework will be designed and utilized.

As shown in Figure 4, the MIDAS platform has three major use cases [52] namely:

1. Test Execution Use Case

2. Manual Test Design Use Case

3. Automated Test Design Use Case

Figure 4: MIDAS platform Use Cases

Each user scenario implementation relies on a set of standards, recommendations and/or open specifications. Their application, potential extensions and other impact of the MIDAS project on standardization activities are further discussed below from the use case perspective.

3.1 Test Execution Use Case

Figure 5 depicts the most fundamental use case of the MIDAS platform. The user employs the MIDAS platform as an execution engine for already existing (legacy, standardized) TTCN-3 test suites.


Figure 5: Test execution use case

In this use case, the only prerequisite for the execution of test cases is that the test suites are written in a standardized formal test description language, namely TTCN-3, as specified in [10]. These test suites might be slightly amended to comply with the MIDAS platform.

The Test Execution use case matches current ETSI standards-making practice for the development of conformance and interoperability test specifications. In the development of both conformance and interoperability test specifications, ETSI has traditionally followed a stepwise approach based on the methodology defined in ISO/IEC 9646-1 [6], resulting in a number of different test specification deliverables. Figure 6 illustrates this approach. These steps can be understood as different levels of abstraction that bridge the large intellectual gap between a base specification and the final executable test suite. They not only form an essential framework for test specification but also enable a common understanding of the complete test specification for different target audiences, e.g., standardization experts, technology experts, managers, and test engineers.

In the first step, requirements are identified from one or more base specifications. These may be catalogued and published in a Requirements Catalogue. Then the Implementation Conformance Statement or Interoperable Functions Statement (ICS/IFS) is constructed. Both are essentially high-level checklists of the features and capabilities in a standard. The checklists are filled in by system vendors according to which features they implement in their products. This information can help to determine whether two implementations of the same standard have the potential to interoperate.


Figure 6: ETSI (Test) Specification Development

In the next step, one or more test purposes are specified for each identified testable requirement, either in English prose or in the TPLan notation [26]. A test purpose formulates (an aspect of) a requirement as a set of IUT pre-conditions, stimuli, and responses, and specifies test verdict criteria. The testability of a requirement is affected by the type of testing to be carried out; e.g., requirements related to error management cannot be assessed using interoperability tests because many error conditions cannot be triggered by conforming implementations.

After the definition of test purposes, an informal test description can be specified in English prose, as tables, or as message sequences, usually covering one but sometimes multiple test purposes. Test descriptions extend test purposes by providing more detailed information about preambles and postambles. Test descriptions are by definition not executable.

The preambles and postambles in test descriptions are not conceptually the focus of testing, even though in practice they depend on identified requirements in the same way as test bodies do. Therefore, preambles are conventionally specified to invoke behaviour that is most likely to be correctly implemented. In addition, preambles tend to be reused across test descriptions as much as possible.
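The sketch below illustrates how reusable preambles and postambles commonly surface once test descriptions are turned into TTCN-3: as ordinary functions shared by many test cases. All names and messages are invented for this example.

    // Sketch: preamble/postamble reuse across test cases (hypothetical).
    module PreambleExample {
      type port CtrlPort message { inout charstring }
      type component ClientCT { port CtrlPort p }
      type component SystemIf { port CtrlPort p }

      // Preamble: drive the IUT into the required pre-condition state,
      // using behaviour that is most likely to be correctly implemented.
      function f_preamble() runs on ClientCT {
        p.send("RESET");
        p.receive("READY")
      }

      // Postamble: return the IUT to a well-defined idle state.
      function f_postamble() runs on ClientCT {
        p.send("RELEASE")
      }

      testcase tc_example() runs on ClientCT system SystemIf {
        map(self:p, system:p);
        f_preamble();
        // test body: the stimulus/response pair the verdict is based on
        p.send("REQUEST");
        p.receive("RESPONSE");
        setverdict(pass);
        f_postamble();
        unmap(self:p, system:p)
      }
    }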

In the next step, executable test cases are potentially specified, usually in TTCN-3. Test cases tend to be specified using multiple concurrently executing test components for stimulating and observing different logical interfaces of the IUT, i.e. one test component per abstract test interface. Another key concept in the specification of test cases is the use of IXIT (Implementation eXtra Information for Testing), i.e. testing parameters, such as parameterized message fields, that can be specified at test execution time. IXIT is frequently used, for example, to control TTCN-3 test execution based on the ICS or IFS.
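In TTCN-3, IXIT entries typically surface as module parameters that are bound at test execution time. The fragment below is a small illustration with invented parameter names, including ICS/IFS-driven test selection in the control part.

    // Sketch: IXIT values as TTCN-3 module parameters (hypothetical names).
    module IxitExample {
      modulepar charstring PX_SUT_URL      := "http://localhost:8080/svc";
      modulepar float      PX_RESP_TIMEOUT := 5.0;
      modulepar boolean    PX_FEATURE_X    := false;  // from the ICS/IFS

      type port DummyPort message { inout charstring }
      type component CT { port DummyPort p }

      testcase tc_featureX() runs on CT {
        // ... would exercise feature X against PX_SUT_URL ...
        setverdict(pass)
      }

      control {
        // ICS/IFS-driven selection: run the test only if the vendor
        // declared support for the feature.
        if (PX_FEATURE_X) {
          execute(tc_featureX(), PX_RESP_TIMEOUT)
        }
      }
    }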

Finally, each test description or test case has to be validated (usually outside of ETSI) to ensure that it is correctly specified, e.g., by executing a test description at an interoperability event or by running a test case from a conformance test tool against a number of different implementations of a given standard. So far transitions between the different steps, e.g., the specification of test descriptions from or for test purposes, have always been performed manually at ETSI.

For this particular use case, MIDAS may have no impact on standards, or standardization-related activities, beyond that it will enable the execution of already developed standardized abstract test suites produced by standardization bodies (e.g., ETSI).

At the current stage of the project, it is not possible to clearly identify potential shortcomings of the TTCN-3 language for the MIDAS testing profile, especially for SOA services. A preliminary usability analysis of commercial TTCN-3 test execution engines for SOA services testing has been performed, and its findings are discussed in the section below.

3.1.1 TTCN-3 SOA Tools Comparison

The table below compares the most common commercial TTCN-3 tools against the interoperability platform requirements for automatically testable services and services architectures reported in deliverable D2.1. The tool list has been taken from the ETSI TTCN-3 official site, http://www.ttcn-3.org/index.php/tools; non-commercial tools have not been taken into consideration because, up to now, none of them supports any web service technology. In particular, the following technological criteria, grouped by the two different web services technologies, i.e. WS-* and REST, have been used in the discrimination and then in the choice of the platform:

- WS-*: WSDL 1.1/XSD for SOAP 1.1 and SOAP 1.2 over HTTP/1.1, WS-Addressing 1.0;
- REST: WADL/XSD for REST/XML and REST/JSON over HTTP/1.1.

TTCN-3 Tool (CD: Codec / SA: System Adapter) | WS-*: WSDL | SOAP | XSD | WS-Addressing | REST: WADL | XML | JSON

TestCast - MBT edition (Elvior) v.6.7.1, http://www.elvior.com/ | NO | NO | YES (v.1.0) | NO | NO | NO | NO

TTworkbench (Testing Technologies) v.15, http://www.testingtech.com/ | YES (v.1.1) | YES (v.1.1 & v.1.2) | YES (v.1.0) | NO | NO | YES | NO

TTCN-3 ToolBox (Devoteam), http://www.devoteam.de/ | NO | NO | YES (v.1.0) | NO | NO | NO | NO

OpenTTCN Tester 2012 (OpenTTCN) v.4.2.5, http://www.openttcn.com/ | NO | NO | YES (v.1.0) | NO | NO | YES | NO

The choice fell on the TTworkbench v.15 platform since it has more "YES" values than the other platforms. Nevertheless, TTworkbench v.15 has several technology limitations that reduce the set of services architectures that can actually be tested. The subsequent section reports on these limitations.

3.1.2 TTworkbench v.15 platform limitations and bugs

1. There are some bugs in generating TTCN-3 types and in configuring System Adapters starting from the WSDL/XSD files of a web service. For example, up to now it is not possible to generate TTCN-3 types from the standard OMG RLUS example (http://www.omg.org/spec/RLUS/1.0.1/), since this generation fails. This bug is very critical, as RLUS is one of the Pilot services being tested within the context of the proof of concept. Testing Technologies will fix these bugs with release v.16.

2. TTworkbench does not fully support the XSD "anyType". In particular, the type "anyType" can be used for call parameters but not for reply parameters. In other words, the type "anyType" can actually be encoded while invoking a web service but not decoded while receiving a reply from it. That means there are some restrictions for implementing service emulators or interceptors within the TTworkbench platform, as such test components are not able to manage incoming messages containing elements of type "anyType". This is a critical point, as generic services (e.g., RLUS, CTS2 (http://www.omg.org/spec/CTS2/1.0/), IXS (http://www.omg.org/spec/IXS/1.0.1/), etc., which are some of the services involved in the e-Health Pilot) heavily use the type "anyType". Currently, Testing Technologies has not yet planned to address this limitation.

3. TTworkbench does not support WS-Addressing 1.0 (http://www.w3.org/TR/ws-addr-core/). WS-Addressing is a W3C specification of transport-neutral mechanisms that allow Web services to communicate addressing information. It includes message routing metadata within SOAP headers, so that the network-level transport is only responsible for delivering messages to a dispatcher capable of reading such metadata. The WS-Addressing specification is the most used technology for putting in place asynchronous interactions among Web services. This limitation therefore significantly reduces the set of services architectures that might be tested by means of the TTworkbench platform. Currently, Testing Technologies has not yet planned to address this limitation.

4. TTworkbench does not fully support the (HTTP) REST/XML web services implementation. There are some limitations in dealing with the headers of the HTTP protocol. In particular, HTTP headers are supported only when the TTCN-3 system works as a client, thus excluding test scenarios with emulations or interceptions. Currently, Testing Technologies has not yet planned to address this limitation.

5. TTworkbench does not support the (HTTP) REST/JSON web services implementation, i.e. there are no System Adapter and Codec that are able to enact communication with the SUT through the HTTP/JSON protocol. That inhibits testing Web services implemented in this technology. Currently, Testing Technologies has not yet planned to address this limitation.

6. TTworkbench does not support the WS-Security (WSS) 1.0 and 1.1 protocols (https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss), i.e. there are no System Adapter and Codec allowing one to put in place message-based security testing, except through the SSL (Secure Sockets Layer) protocol (https://tools.ietf.org/html/rfc6101). WSS is the most used mechanism to apply security to SOAP messages. Indeed, it is an extension to the SOAP protocol; it specifies how integrity and confidentiality can be enforced on messages and allows the communication of various security token formats, such as SAML (https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security), Kerberos (http://www.kerberos.org/), and X.509 (http://www.ietf.org/rfc/rfc2459.txt). Its main focus is the use of XML Signature (http://www.w3.org/TR/xmldsig-core/) and XML Encryption (http://www.w3.org/TR/xmlenc-core/) to provide end-to-end security. In addition, the TTworkbench compiler is not able to generate TTCN-3 types and adequate System Adapter configurations starting from security assertions expressed in WS-SecurityPolicy 1.2 (http://docs.oasis-open.org/ws-sx/ws-securitypolicy/v1.2/ws-securitypolicy.html) and attached to the WSDL file of the Web service. Currently, Testing Technologies has not yet planned to address this limitation.

7. TTworkbench does not support the WS-MetadataExchange protocol (http://www.w3.org/TR/ws-metadata-exchange/), i.e. there are no System Adapter and Codec that are able to enact such a communication with the SUT. WS-MetadataExchange (which is part of the WS-Federation roadmap, http://docs.oasis-open.org/wsfed/federation/v1.2/ws-federation.html) is a Web services protocol specification designed to work in conjunction with WS-Addressing, WSDL and WS-Policy (http://www.w3.org/TR/ws-policy/) to allow the retrieval of metadata about a web service endpoint. More specifically, it uses a SOAP message to request metadata, and so goes beyond the basic technique of appending "?wsdl" to a service name's URL. Currently, Testing Technologies has not yet planned to address this limitation.

8. TTworkbench does not support TTCN-3 type generation from WSDL 2.0 (http://www.w3.org/TR/wsdl20/) and WADL (http://www.w3.org/Submission/wadl/). That means, up to now, there are no System Adapter and Codec allowing one to generate TTCN-3 types, and the relative adapter and codec configurations, starting from a WSDL 2.0 or WADL file related to a web service. Currently, Testing Technologies has not yet planned to address this limitation.

3.2 Manual Test Design Use Case

Figure 7 shows an extended scenario of the use case Test Execution, i.e., the user does not make use of the test generation capabilities of the MIDAS platform, but rather manually specifies the test cases in a MIDAS test model. The MIDAS test model is then passed to the MIDAS platform from which executable test code will be generated and executed by the MIDAS platform.

Figure 7: Use Case: Manual test case specification


Compared to the Test Execution use case, in the Manual Test Design use case the manually developed test specifications of the step-wise approach, usually written in English prose, are replaced with a formal description of the test model. The key difference between the Test Execution and Manual Test Design use cases, from the perspective of test specification development, is illustrated in Figure 8.

Figure 8: Key difference between Test Execution and Manual/Automated Test Design use cases in terms of test standards making practices.


The MIDAS Test Model is a MIDAS UML Model that uses the MIDAS Test Profile, which is based on the UML Testing Profile (UTP) 1.2 [24] standardized by OMG, to describe structural and behavioural aspects of the test systems as well as the test cases. A MIDAS Test Model may be defined standalone or refer to other models, in particular to System/Service Models that use further profiles (e.g., SoaML).

Figure 9: MIDAS model information items.

MIDAS Test Analysis Model (MIDAS TAM) [52]: The MIDAS TAM is a MIDAS Test Model that contains the behavioural and structural definitions required for test generation. In addition, it contains the MIDAS Test Design Profile, which consists of the test design directives to control the test generation. It may already contain the topology of the test environment, encoded as a test configuration. The behaviour that will be used for test generation is not prescribed by MIDAS. The MIDAS TAM may consist of further artefacts for test generation, which are not yet specified in detail (such as (log) files relevant for test generation).

MIDAS Test Design Model (MIDAS TDM) [52]: The MIDAS TDM is a MIDAS Test Model that contains test case specifications in a platform-independent manner. It may refer to the MIDAS TAM that was used to generate the test cases and test data. Manual test case specification does not require a MIDAS TAM.


Figure 10: Conceptual view on the MIDAS TDM model.

MIDAS Test Management Model (MIDAS TMM) [52]: The MIDAS TMM is a MIDAS Test Model that contains a MIDAS TDM and specifies the required information for eventually executing the test cases organized in test suites (see Figure 11). An informative specification of the arbitration algorithm may be attached to test suites. Such an arbitration specification might or might not be considered during runtime.

Figure 11: Conceptual view on the MIDAS TMM model

A TestSuite groups a number (potentially one) of test cases for subsequent execution. A TestSuite may specify an informative ArbitrationSpecification that determines the rules for the arbitration of the TestCases contained in the TestSuite. TestSuites can be nested in each other (e.g., a TestSuite for acceptance testing may be divided into a conformance, an interoperability and a UI TestSuite). If a nested TestSuite specifies an ArbitrationSpecification, it overrides the ArbitrationSpecification of the nesting TestSuite (if one is given for the nesting TestSuite at all). Each TestSuite may define a TestControl. A TestControl specifies in which order and under which conditions the TestCases contained in the TestSuite are executed. A TestControl is commonly utilized for static scheduling of TestCases, where the execution order of the TestCases is determined prior to execution.
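When such a TestControl is later rendered into TTCN-3, it commonly becomes a control part with statically ordered execute statements. The sketch below illustrates this with invented test case names, including a simple condition on an intermediate verdict.

    // Sketch: a static TestControl rendered as a TTCN-3 control part
    // (hypothetical test case names).
    module ControlExample {
      type component CT {}
      testcase tc_smoke()            runs on CT { setverdict(pass) }
      testcase tc_conformance_001()  runs on CT { setverdict(pass) }
      testcase tc_conformance_002()  runs on CT { setverdict(pass) }

      control {
        // fixed execution order, determined prior to execution
        var verdicttype v := execute(tc_smoke(), 10.0);
        if (v == pass) {
          execute(tc_conformance_001());
          execute(tc_conformance_002())
        }
      }
    }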


MIDAS TTCN-3 Code (MIDAS TC) [52]: MIDAS TC is MIDAS-compatible TTCN-3 code that is going to be executed by the MIDAS TAAS.

TTCN-3 Test Control Interface-Test Logging (TTCN3 TCI-TL) [52]: TTCN3 TCI-TL is an instance of the standardized TTCN-3 test log format created during test execution by a TTCN-3 execution environment.

MIDAS experiences on designing test case specifications in a model-based manner for subsequent execution may have an impact on the technical standardization and dissemination activities (in terms of best practices, modelling patterns, modelling guidelines, and future revisions) in the realm of UML and UML Testing Profile. Experiences are supposed to influence the requirements specification for the next version of UTP, i.e., UTP 2.

With respect to security testing, experiences made in MIDAS are supposed to be contributed to the efforts that are currently being undertaken at ETSI ([29], [31]).

3.2.1 Service/System Model Compatibility Standards for MIDAS UML Model

The system model is a pure model of the MIDAS target system, i.e. it represents an abstraction of the target system that is independent of any test scheme. It includes two distinct sub-models:

- the service model – the model of the services and of the services architecture that are put into operation within the SUT, at the logical level (service PIM) and at the implementable level (service PSM);

- the SUT model – the description of the deployed and accessible SUT topology and locations.

The current standards are fully sufficient for describing both models. Indeed, the formulation of the essential service PIM that is compatible with the MIDAS Test Model must be based upon UML extended by the SoaML [5] UML Profile and Meta-model.

On the other hand, the formulations of the service PSMs that are compatible with the MIDAS Test Model must be based upon:

- WSDL 1.1/XSD [9] for the SOAP 1.1 interoperability platform, conformant to the WS-I Basic Profile (BP) Version 1.2;

- WSDL 1.1/XSD for the SOAP 1.2 interoperability platform, conformant to the WS-I Basic Profile (BP) Version 2.0;

- WADL/XSD for the REST/XML interoperability platform.

In order to be compatible with the MIDAS Test Model, the system under test must be built over one or more of the technical interoperability platforms listed below:

- SOAP 1.1 interoperability platform – transmission protocol SOAP 1.1 over HTTP/1.1, conformant to the WS-I Basic Profile (BP) Version 1.2.

- SOAP 1.2 interoperability platform – transmission protocol SOAP 1.2 over HTTP/1.1, conformant to the WS-I Basic Profile (BP) Version 2.0.

- REST/XML interoperability platform – transmission protocol HTTP/1.1 with Content-Type value either 'text/xml' or 'application/xml'.

- REST/JSON interoperability platform – transmission protocol HTTP/1.1 with Content-Type value 'application/json'.

For the REST platforms, services must be designed following the constraint of the REST application architecture named Hypertext As The Engine Of Application State (HATEOAS).
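For illustration, a test model targeting these REST platforms might abstract the HTTP exchange roughly as sketched below; these are hypothetical TTCN-3 types, and the actual HTTP transport and JSON/XML encoding would live in the system adapter and codec.

    // Illustrative abstraction of a REST exchange over HTTP/1.1
    // (hypothetical types, not a standardized mapping).
    type record HttpRequest {
      charstring method,        // "GET", "POST", "PUT", "DELETE"
      charstring uri,
      charstring contentType,   // "application/json" on the REST/JSON
                                // platform; "text/xml" or "application/xml"
                                // on the REST/XML platform
      charstring body optional
    }
    type record HttpResponse {
      integer    statusCode,
      charstring contentType,
      charstring body optional
    }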

The MIDAS Test Models consider the standards for services/system model description as they are. It is beyond the scope of the MIDAS project to make any thorough analysis of the suitability and usability of those standards, and it is therefore not expected that the MIDAS project will have an impact on those standards and standardization efforts. Although it is not highly expected, if modelling issues related to test modelling are identified, they will be reported to the relevant standardization bodies.

3.3 Automated Test Design Use Case

Figure 12 shows the complete and main use case of the MIDAS platform, i.e., the use of all capabilities of the MIDAS platform. This use case is also the main use case within the MIDAS project. The MIDAS end-user (simply user hereinafter) utilizes the entire capabilities of the MIDAS platform, including test generation, test execution and reporting.


The user provides a test model with test design directives to a test generator. The generated test cases are stored within the test model. The user specifies test suites, a scheduler and an arbiter for the test execution within the test model. The test model is then passed to the test script generator that generates executable TTCN-3 code according to the arbitration specification in the test model and submits the artefacts (i.e., test model and TTCN-3 scripts) to the test runner, which is responsible for executing the test scripts with the specified scheduler. Afterwards, the test log information will be returned.

Figure 12: Use Case: Automated Test Design

For this particular use case, the MIDAS approach may have an impact on the ETSI MBT recommendations regarding commonly accepted coverage criteria for automated test design. ETSI ES 202 951 is currently under revision to be extended with coverage criteria that are commonly used and well accepted in the model-based testing area. MIDAS may also initiate or drive a discussion at ETSI MTS about a concrete MBT methodology and strategy, which is currently lacking.

Furthermore, experiences with modelling and executing test generation based on formalized coverage criteria will have an impact on the requirements specification of the future UTP version, i.e., UTP 2. A Request for Proposals is currently sketched by an unofficial UTP working group at OMG.

Finally, the way SOA models need to be modelled in order to be suitable for automated test generation might influence common modelling guidelines of SoaML.


With respect to security testing, experiences made in MIDAS are supposed to be contributed to the efforts that are currently being undertaken at ETSI ([29], [31]).

3.4 Test Management, Execution and Documentation

3.4.1 Test Management and Execution

The ISO/IEC/IEEE standard for Software and systems engineering – Software testing – Part 2: Test process [ISO/IEC/IEEE 29119-2.2] [7] aims at defining a generic process model for software testing that can be used by any organization when performing any form of software testing. The document is intended for testers, test managers, developers and project managers. As such, it is a broad document that encompasses all steps in a testing methodology, and it mainly targets test engineers. It defines the testing activities that may be performed during the life cycle of a software system, grouped into three processes:

1. Organizational Test Process, for the creation and maintenance of documents providing information about testing for an organization, e.g., providing details on how testing is to be performed on all projects run within an organization.

2. Test Management Process, covering the management of testing for a whole project or any test phases or test type within a test process.

3. Dynamic Test Process, defining generic processes for performing dynamic testing, i.e., testing that requires the execution of program code.

The first process lies outside the scope of the MIDAS project, as it specifically targets the structure and description of testing procedures at the organization level. As the MIDAS platform does not impose any specific testing methodology on its users, the second process cannot be considered mandatory for the MIDAS platform either.

The dynamic test process is, to some extent, a core process in the MIDAS platform. Its main processes are detailed in the following figure.


Figure 13: Dynamic test processes as defined in [7]

The dynamic test process interacts with the users by receiving a test plan together with control directives and by communicating the test measures at the end of the testing activities. The Test Design and Implementation process requires testers (also named scheme developers) to apply one or more test techniques to derive test cases and test procedures, with the ultimate aim of achieving the test completion criteria. This process will be part of the MIDAS platform, but no further activities are specified. The Test Environment Set-Up and Maintenance process, used to establish and maintain the environment in which tests are executed, and the Test Execution process (composed of the Execute Test Procedure(s), Compare Test Results and Record Test Execution activities) are planned to be implemented within the MIDAS platform. The last process, the Test Incident Reporting process, is not mandatory in the current MIDAS architecture.

The MIDAS platform is thus expected to implement some of the Dynamic Test Process as defined in ISO/IEC/IEEE 29119-2.2, but it will not completely comply with it, in particular concerning the testing activity as a whole and the interactions of the users with the dynamic test processes.

3.4.2 Documentation

The IEEE Standard for Software Test Documentation [3] provides a starting point for identifying deliverables for any testing effort. As reported in [CMU/SEI-2010-TR-011], a SOA-specific implementation shall be guided by the following set of documentation:

Documents related to test specifications:


- Test design specifications identifying the approach, test procedures and cases, and pass/fail criteria. An SOA test design specification needs to clearly identify the unique features of SOA testing that affect the testing approach, tasks, and environment. The testing approach should identify appropriate strategies for testing the SOA infrastructure, services with and without source code availability, SOA services deployed on "local" and "remote" infrastructures, and end-to-end testing of predefined and dynamically composed mission threads. The desired characteristics of the testing environment and the responsibilities of all participants in that environment (e.g., loading of test data, capturing of metrics) should be clearly defined.

- Test case specifications identifying the input, output, contextual data, and constraints on procedures. SOA test case specifications should include flexibility to incorporate automated testing techniques, such as fuzzing, that produce multiple executions based on variations of input parameters and attachments (see the sketch after this list). In addition, test case specifications should address desired quality of service attributes; pay particular attention to the hardware and software configurations; and identify appropriate network, SOA environment, and service loads at the time of testing.

- Test procedure specifications identifying steps to operate the environment and run the tests. SOA test procedures are likely to be complex due to the distributed and dynamic nature of SOA environments. There may be extensive procedures for establishing the environment context prior to testing (e.g., to load appropriate data in remote databases), for executing tests, and for capturing the results of the test. For service and end-to-end testing, verification of a test may require access to information on remote machines and back-end data stores.
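As an illustration of such input variation, parameterized TTCN-3 templates allow one logical test case to be executed with many data variations, including fuzz-like extremes; the type and values below are invented.

    // Sketch: input variation via a parameterized TTCN-3 template
    // (hypothetical type and values).
    type record Order {
      charstring id,
      integer    quantity
    }
    template Order m_order(charstring p_id, integer p_qty) := {
      id       := p_id,
      quantity := p_qty
    }
    // One logical test case can then be driven with several variations:
    //   p.send(m_order("A-1", 1));           // nominal value
    //   p.send(m_order("A-1", -1));          // boundary / invalid value
    //   p.send(m_order("", 2147483647));     // fuzz-like extreme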

Documents related to test reporting:

- Test item transmittal reports identifying the communication mechanism between implementation and testing groups. The transmittal reports should reflect the range of involved parties (i.e., for SOA infrastructure, individual services, composites, end-to-end threads).

- Test log records of what occurred. Test logs should capture, to the extent possible, the exact environment context at the point of the test. This includes loads on networks; SOA infrastructure and services identification and version records; and specific hardware and software configurations and versions. This information will be essential for analysing test results and anomalous events that occur. Logs may also have to address issues such as the lack of universal time across the distributed environment (i.e., inconsistent timestamps).

- Test incident reports that record incidents requiring further investigation. Incident reports are often given inadequate attention even in traditional development, which leads to difficulty in isolating errors and reproducing problems. For SOA implementations, failure to capture complete descriptions of problems and the context in which they occur may be even more damaging due to the distributed nature of SOA applications.

- Test summary reports summarizing the testing activity. These reports may contain findings and recommendations about all elements of the SOA implementation.

3.5 Compatibility Standards for e-Health Services Descriptions

In the framework of the MIDAS project, eHealth has been selected as an interesting and challenging domain for evaluating the effectiveness and usability of the MIDAS platform facilities. In particular, in the framework of the work package related to the pilots (WP7), the MIDAS framework will be configured and applied for testing a healthcare services architecture. The WP7 leader is Dedalus, which has been operating in the eHealth domain since 1990.

An innovative challenge in SOA research and industrial development is represented by the Healthcare Service Specification Program (HSSP) [37], whose aim is to specify, by means of a model-driven approach, a number of generic services for the healthcare sector. Dedalus is involved in a research project (HealthSOAF [38]), partially funded by the Italian Ministry of University and Research, whose objective is to implement an HSSP compliant service framework based on its X1.V1 solution [40]. X1.V1 is an interoperability platform able to support the sharing of information and processes among cross-enterprise healthcare actors, facilities and settings, according to the guidelines of regional and national Electronic Health Records (EHRs).

HSSP will therefore be the reference standard that the MIDAS project will take into account for the pilot in the health sector. In particular, by exploiting the results of the HealthSOAF project, the MIDAS pilot activity will aim at:

- defining a real healthcare scenario that will involve different care environments and at least two different healthcare organizations. The scenario will implement a care process where the management of the patient will be shared among the different healthcare organizations and in different care environments;

- developing a "Generic HSSP Services Testing" layer on the basis of the MIDAS general testing framework; in particular, this layer will provide standard test cases according to the MIDAS compatibility requirements defined in deliverable D2.1 [25];

- using this layer to test instantiated healthcare services architectures based on HSSP generic services and built upon the HealthSOAF service implementations, and possibly other HSSP service implementations.

Regarding possible contributions to the standard, in this context the MIDAS project could promote the "Generic HSSP Services Testing" layer as a means to assess the interoperability of HSSP service implementations; in particular, this layer could become a candidate for interoperability testing sessions, i.e. sessions testing the interoperability of different implementations of the HSSP standards.

Taking into account this scenario, the objective of this section is to:

- describe the state of the art of the HSSP standards,

- explain which of these standards Dedalus will include in the MIDAS pilots, and

- underline the connection with other ICT standards in the eHealth domain.

3.5.1 The HSSP standards

The standardization cycle of HSSP is based upon a division of work between the HL7 [42] and OMG [43] organizations. On each project, HL7 is in charge of the functional specifications, which are produced through its own standardization process. When the functional specifications are sufficiently consolidated, the OMG starts its own cycle in order to produce the technical specifications, which are then taken into account by HL7 in the final process of functional specification approval. This cycle guarantees that the produced standards are, on the one hand, congruent with healthcare needs and constraints and, on the other hand, technically sustainable.

HSSP produces generic services specifications. Roughly speaking, a generic service is a service that (i) exposes a reduced number of generic operations, i.e. operations whose definitions are independent from the types of the operation arguments, result, and handled objects, and (ii) specifies a procedure that allows detailing specific types of the operation arguments, result, and handled objects (Semantic Signifiers). This collection of Semantic Signifiers is conventionally referred to as a Semantic Profile. A generic service endowed with a Semantic Profile is an executable service. A HSSP service provider is able to enact as a provider of at least one Semantic Profile.

The HSSP approach takes into account basic healthcare services that are involved in each healthcare process. Interoperability, which is the top-level goal in a distributed world such as healthcare, is achieved by the compliance of the implementations with standard services. A HSSP executable service is easily usable by any application that is aware of its Semantic Profile.
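Purely as an illustration of the generic-operation idea, and not of the normative HSSP interfaces, a generic retrieval operation whose payload type is selected by a semantic signifier could be abstracted in TTCN-3 as follows (all names invented):

    // Illustrative sketch of "generic operation + semantic signifier"
    // (hypothetical names; not the normative HSSP/RLUS interface).
    type union ClinicalPayload {
      charstring cdaDocument,   // e.g. a CDA R2 clinical document
      charstring hl7v2Message   // e.g. an HL7 V2 message
    }
    type record GenericGetRequest {
      charstring semanticSignifier,  // names the concrete payload type
      charstring searchCriteria
    }
    type record GenericGetResponse {
      charstring      semanticSignifier,
      ClinicalPayload payload
    }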

The table below reports the HSSP projects on basic services:

Spec. Type | Acronym | Specification Name | Short Description | Issuing Body | Link

Tech. | RLUS | Retrieve, Locate, Update Service Technical Specification | Technical specification for the Retrieve, Locate, Update Service (RLUS), including platform bindings for SOAP/WSDL/HTTP and REST | OMG | http://www.omg.org/spec/RLUS/ and http://www.omg.org/cgi-bin/doc?health/11-09-04/

Funct. | RLUS | Retrieve, Locate, Update Service Functional Model | Adopted Draft Standard for Trial Use enumerating functionality for RLUS | HL7 | http://www.hl7.org/documentcenter/ballots/2006SEP/support/SUPPORT_POOL_V3_RLUS_R1_D1_2006SEP_20061220112743.pdf

Tech. | IXS | Entity Identification Service: Technical Specification | Technical specification for the Entity Identification Service, including platform bindings for SOAP/WSDL/HTTP | OMG | http://www.omg.org/spec/IXS/

Funct. | IXS | Entity Cross-Reference Service (IXS) Functional Model (Normative Edition) | HL7 Normative Standard enumerating functionality for identity management in a SOA service | HL7 | Available for purchase as part of the HL7 V3 Normative Specification, http://www.hl7.org/implement/standards/v3messages.cfm

Tech. | CTS2 | Common Terminology Services Release 2 Request for Proposal | RFP soliciting technical specification submissions for Common Terminology Services | OMG | http://www.omg.org/rfp-rfi/cts2.htm

Funct. | CTS2 | Common Terminology Services Release 2 Draft Standard for Trial Use | Defines the functional requirements of a terminology management and maintenance service | HL7 | http://www.hl7.org/dstucomments/showdetail.cfm?dstuid=50

Funct. | DSS | Decision Support Service Functional Model | Adopted Draft Standard for Trial Use enumerating functionality for DSS | HL7 | http://www.hl7.org/documentcenter/ballots/2006SEP/support/AUDIT_SDO_CDS_DSS_R1_D1_2006SEP_20070129061919.pdf

Tech. | CDSS (formerly DSS) | Decision Support Service Technical Specification | Defines a standard interface for interacting with decision-support SOA services | OMG | Available for purchase as part of the HL7 Version 3 Normative Edition, http://www.hl7.org/implement/standards/v3messages.cfm

Funct. | HCSPDIR | Healthcare and Community Services Provider Directory Service Functional Model | A key support capability underpinning a Referral Service; defines the requirements needed to discover healthcare providers and other service providers based upon skill and geography | HL7 | http://www.omg.org/cgi-bin/doc?health/2011-3-6

Tech. | HCSPDIR | Healthcare and Community Services Provider Directory Request for Proposal | RFP soliciting technical specifications for HCSPDIR | OMG | -

Funct. | PASS | PASS Access Control and PASS Audit | Providing access control services to protected resources in a distributed healthcare environment, and handling the recording and maintenance of service events from other services | HL7 | -

There are interesting implementations of each HSSP service in Europe, the USA, and Australia. Dedalus is developing the RLUS, hData, IXS and CTS2 services in the context of the Italian national research project HealthSOAF [38] (first prototypal implementations are available).

Regarding the implementation used in the MIDAS Pilot, Dedalus will provide the following service bricks for each HSSP service:

- a service consumer proxy library, allowing applications, systems and intelligent devices to access service providers;

- a service provider skeleton library, allowing systems endowed with enabling capabilities to provide the service;

- a service provider system, which is a full implementation of the service provider.

To conclude this section: in the framework of the eHealth Pilot, the project will instantiate a HSSP-based services architecture for evaluating the functioning, reliability and effectiveness of the MIDAS testing framework. The test campaigns will be defined by exploiting the "Generic HSSP Services Testing" layer, realized in task T7.1. With the implementation and application of this layer, the MIDAS project also aims at contributing to the HSSP initiative, by proposing the layer as a tool for testing the interoperability of different implementations of the HSSP standards.

The implementation of the healthcare services architecture used in the pilot will adopt other international standards, in particular IHE and HL7. The MIDAS project will use these standards without contributing to them.

In the next section we underline how the implementation of the HSSP services will include the adoption of other eHealth standards.

3.5.2 Other eHealth standards used in the MIDAS Pilot


The research strategy of Dedalus in the HealthSOAF project is to implement the HSSP services as new functionalities and interfaces of its interoperability platform X1.V1, which is based on international eHealth standards such as IHE [39] and HL7 [42].

Moreover, for instantiating in the MIDAS project a healthcare services architecture based on HSSP generic services, Dedalus will adopt a semantic profile that includes the IHE and HL7 standards.

Below we briefly introduce these standardization initiatives and their use in the healthcare Pilot of the MIDAS project.

Integrating the Healthcare Enterprise (IHE)

The IHE [39] initiative is both a process and a forum for encouraging integration efforts in the eHealth domain. It defines a set of technical frameworks (TF) for the implementation of established messaging standards to achieve specific clinical goals. For each TF, IHE defines specific implementations of established standards to achieve integration goals that promote appropriate sharing of medical information to support optimal patient care. Each IHE TF identifies a subset of the functional components of the healthcare enterprise, called IHE Actors, and specifies their interactions in terms of a set of coordinated standards-based transactions.

The approach employed in the IHE initiative is not to define new integration standards, but rather to support the use of existing standards, e.g., HL7, DICOM, and IETF, as appropriate in their respective domains in an integrated manner, defining configuration choices when necessary.

Health Level Seven International (HL7)

Health Level Seven International (HL7) [42] provides standards for interoperability that improve care delivery, optimize workflow, reduce ambiguity and enhance knowledge transfer among all of the stakeholders, including healthcare providers, government agencies, the vendor community, fellow SDOs and patients.

Use of standards in the MIDAS healthcare services architecture

Concerning the HSSP implementation, we have to take into account that in the HealthSOAF project Dedalus is developing service provider skeletons encapsulating the following IHE integration profiles (see Figure 14):

- an IHE compliant MPI (Master Patient Index), based on the integration profiles PIX and PDQ, in order to provide IXS service functions; and

- an IHE XDS registry/repository, in order to provide RLUS service functions.

Figure 14: Schema of the HSSP implementation

Concerning the healthcare services architecture that will be instantiated in the MIDAS pilot, Dedalus will include in its implementation the following semantic signifiers:

RLUS services:

- IHE XDW (Cross-Enterprise Document Workflow), an integration profile included in the IHE IT Infrastructure Technical Framework [41]. XDW enables participants in a multi-organizational environment to manage and track the tasks related to patient-centric workflows, as the systems hosting workflow management applications coordinate their activities for the health professionals and patients they support.

- Specific CDA2 reports for hospital discharge and several clinical reports. CDA2 is an XML-based markup standard intended to specify the encoding, structure and semantics of clinical documents for exchange. CDA is part of the HL7 v3 standard; it is based on the HL7 Reference Information Model (see below) [46].

IXS services:

- IHE Patient Identifier Cross-referencing (PIX): supports the cross-referencing of patient identifiers from multiple Patient Identifier Domains.

- IHE Patient Demographics Query (PDQ): provides ways for multiple distributed applications to query a patient information server for a list of patients, based on user-defined search criteria.

- HL7 V2 messages: a series of electronic messages to support administrative, logistical, financial as well as clinical processes [44].

- Data structures related to the HL7 V3 Reference Information Model (RIM) [45]: the RIM is a critical component of the HL7 V3 family of standards; it is the root of all information models and structures developed as part of the V3 development process.

3.6 Compatibility Standards for SCM Services Descriptions

Since Oliver and Webber [33] coined the term Supply Chain Management (SCM) in 1982, there has been much interest in it at both the industrial and the scientific level. According to the Supply Chain Council (SCC) [32], SCM is the process of planning, managing and efficiently controlling the flow of goods and associated information from the point of origin to the point of consumption in order to meet customer requirements.

SCM comprises a great number of activities, such as procurement, storage, transportation, distribution, and sale. Each activity is generally supported by a set of software solutions for planning, monitoring, tracing and verification, with the ultimate aim of providing updated and comprehensive information to facilitate the decision-making process.

In order to analyse technological trends, we have been in contact with international experts in the field of Information Technologies (IT) applied to SCM. They have reported (see Figure 15) that Supply Chain Collaboration is among the mature SCM technologies that the market will adopt within the two- to five-year window. In this context, Supply Chain Collaboration is defined as solutions that model business processes, either defined by a business process management (BPM) engine or represented by business application logic. These forms of collaboration are structured and can use industry-specific data and/or process standards.

Considering these interviews and trends, we propose to define the SCM scope in the MIDAS project as the integration of IT-based systems in order to achieve collaborative decision making, with a holistic vision of the SC.

Figure 15: Hype Cycle for Supply Chain Management, Gartner 2012 [53]

Within the field of standardization, we have investigated the standards and recommendations applicable in the field of SCM. (We have considered internationally adopted supply chain standards; some further initiatives, e.g. the British Standards on supply chain management and risk [34], have been created but not generally adopted.)

- The Supply Chain Operations Reference (SCOR) Model provides a unique framework that links business processes, metrics, best practices, and technology features into a unified structure, in order to support communication among supply chain partners and to improve the effectiveness of supply chain management and related supply chain improvement activities.


- ISO 28000 is the formal international security standard against which organizations may seek independent certification of their supply chain security management system. It specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving a documented Supply Chain Security Management System (SCSMS), using a continual improvement approach.

- GS1 (http://www.gs1.org) is an international not-for-profit association with Member Organisations in over 100 countries. GS1 is dedicated to the design and implementation of global standards and solutions to improve the efficiency and visibility of supply and demand chains globally and across sectors. The GS1 system of standards is the most widely used supply chain standards system in the world. GS1 standards bring together companies representing all parts of the supply chain: manufacturers, distributors, retailers, hospitals, transporters, customs organizations, software developers, local and international regulatory authorities, and more.

The SCOR and ISO 28000 standards are not applicable within the domain of software systems testing, as they focus on non-computer-based systems. Although both SCOR and ISO 28000 consider the application of computer-based systems, their scopes (performance evaluation processes and procedures to ensure security, respectively) are too far removed from the concept of a service architecture that can be validated within the MIDAS project. For this reason we only consider GS1 standards for the SCM pilot.

In particular, to create a holistic view of the supply chain we will follow the GS1 Logistics Interoperability Model (GS1 LIM), whose mission is to lead the development and drive the implementation of the GS1 Logistics Solutions in order to achieve business benefits for global supply chains, by fostering interoperability between the participants and overcoming the scalability barriers caused by differing business processes and data interchange formats. The aim of the GS1 LIM is to establish business interoperability in the transport and warehousing business processes. Business interoperability is the capability to run business processes seamlessly across organisational boundaries. The LIM describes the high-level business processes and a comprehensive set of transactions that occur in these processes. The LIM covers the following business functions: procurement, planning, warehousing, transport and financial settlement. The GS1 LIM supply chain vision is shown in Figure 16.

Figure 16: Supply Chain Vision, GS1 Logistics Interoperability Model

GS1 provides a standardised way to identify items and locations, to capture details about supply chain movements, and to share that information with authorised business partners. Furthermore, the GS1 system of standards is also a neutral global framework that ensures interoperability among all stakeholders.

Figure 17: GS1 Standards for Data Exchange

The model is composed of business processes, or business process blocks, divided into business transactions. These business transactions are mapped to electronic messages (GS1 eCom). The relationship between transactions and messages is not one-to-one by default: several transactions can be mapped to one message, as illustrated by the sketch below. However, every transaction has a specific message implementation guide explaining the use of the message in the context of that business transaction. The model consists of seven distinct business processes: Interoperation Agreement, Master Data Alignment, Logistic Services Conditions, Planning, Warehousing, Transport, and Financial Settlement. A summary of the GS1 LIM is shown in Figure 18.
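
As an illustration of this many-to-one mapping, the following minimal Python sketch resolves business transactions to the eCom message types that implement them. The EANCOM message types used (ORDERS, ORDRSP, DESADV, IFTMIN) are real GS1 eCom messages, but the transaction names and the concrete mapping are purely illustrative and do not reproduce the normative LIM message implementation guides.

    # Illustrative only: GS1 LIM-style business transactions resolved to
    # GS1 eCom (EANCOM) message types; several transactions may share one type.
    TRANSACTION_TO_MESSAGE = {
        "OrderGoods":        "ORDERS",   # procurement
        "ConfirmOrder":      "ORDRSP",
        "AnnounceDespatch":  "DESADV",   # warehousing / transport
        "InstructTransport": "IFTMIN",
        "ReviseTransport":   "IFTMIN",   # two transactions, one message type
    }

    def message_for(transaction: str) -> str:
        """Return the eCom message type implementing a business transaction."""
        try:
            return TRANSACTION_TO_MESSAGE[transaction]
        except KeyError:
            raise ValueError(f"no implementation guide for {transaction!r}")

    assert message_for("InstructTransport") == message_for("ReviseTransport")

In the real model, each entry of such a mapping would additionally point to the message implementation guide that constrains how the message is used in the context of that transaction.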


One of the fundamental arguments for adopting GS1 in the MIDAS project is that a certification model based on these standard solutions already exists.

The GS1 Logistics Interoperability Model (GS1 LIM) will be taken as the main standard to be adopted in the Supply Chain Management pilot of the MIDAS project.

Figure 18: GS1 Logistics Interoperability Model in a nutshell


GS1 has the responsibility to ensure that their standards are being used in a systematic and consistent way across the globe. A large number of GS1 Member Organisations have already created and implemented various forms of certification and accreditation programmes in their local markets. These excellent programmes demonstrate both the need for quality and reliability, and the measurable value and numerous benefits they bring to companies that take advantage of them. GS1 includes two testing models:

Conformance: the minimum requirements that must be met to declare an eCom message in conformance with the GS1 specifications

Certification: a formal process of testing that eCom messages conform to GS1 specifications.

This certification programme started in 2007 with the aim of covering GS1 barcodes and GS1 eCom message interchange, the latter being the basis of the GS1 LIM. Unfortunately, it seems that only the barcode certification has been fully developed, including:

Conformance Criteria: The programme defines a succinct list of uniform minimum requirements that must be met to declare an item in conformance with GS1 standards. These criteria have been made very clear and easy to understand, to eliminate any risk of confusion or misinterpretation.

A methodology has been developed to explain what must be done, by whom and with what equipment.

Implementation guidelines have been elaborated to explain how the process should be enacted, step-by-step.

The GS1 Barcode Testing Program, as implemented for instance by GS1 Australia, tests barcodes for compliance with the GS1 standards through a testing process based on the ISO 15416 verification (testing) method. This method assesses size, colour, print quality, quiet zones (light margins), bar height, and the location/placement of the barcode. The barcode is also checked to ensure that the right GS1 number and a correctly calculated check digit have been applied; the check digit calculation is sketched below.
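
For illustration, the GS1 check digit mentioned above is computed with the well-known modulo-10 algorithm: the data digits are weighted alternately by 3 and 1, starting with weight 3 at the rightmost data digit, and the check digit is the value that rounds the weighted sum up to the next multiple of ten. A minimal Python sketch:

    def gs1_check_digit(data_digits: str) -> int:
        """Mod-10 check digit for the data part of a GS1 key
        (e.g. the first 12 digits of a GTIN-13)."""
        # Weights alternate 3, 1, 3, ... starting at the RIGHTMOST data digit.
        total = sum(int(d) * (3 if i % 2 == 0 else 1)
                    for i, d in enumerate(reversed(data_digits)))
        return (10 - total % 10) % 10

    # Example: GTIN-13 4006381333931 has data digits 400638133393, check digit 1
    assert gs1_check_digit("400638133393") == 1

A verifier such as the one used in the barcode testing programme recomputes this digit from the decoded bars and compares it with the printed one.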

GS1 Global does not define any methodology for certifying GS1 eCom solutions, but some national GS1 member organisations offer this service. For this reason, ITA has contacted GS1 Global in order to retrieve more information and has been referred to the Spanish GS1 member, AECOC.


GS1 Spain offers a certification process ensuring that a company implementing data exchange software can communicate without any trouble with any party already sharing documents using the standards promoted by AECOC (the GS1-compliant certification logo is shown in Figure 19). The AECOC certification solution focuses on:

EANCOM and XML Standards Compliance.

Proper connectivity to communications networks.

Interoperability of solutions for EANCOM and XML exchanges over the Internet.

Figure 19: GS1 Certified Logo (by the Spanish GS1 Member)

GS1 Australia has defined the GS1net™ Certification Program in order to ensure product integration based on GS1 standards. While the covered areas are rigorously tested as part of the GS1net™ Certification Program, it is important to clarify that the program does not cover the following:

The ability to support attributes other than those currently used by Australian sectors.

The ability to automate publication and other confirmation messages.

The alliance partner company itself.

Solutions available from the alliance partner, other than those specified.

The delivery of on-going application support by the alliance partner.

Certification of specific staff or individual consultants.

Overall "usability or look and feel" of the system.

Any other additional functionality (other than specific items noted above).

Back office integration capabilities.

That the product is fit for any other purpose whatsoever.


That the product meets the requirements of any other Certified Data Pool around the world (please read below for further detail).

GS1 does not define a global procedure for testing compliance with the GS1 eCom standards.

GS1 LIM has no certification program.

Although the GS1 LIM was created in 2007, the first successful implementations were only reported in mid-2013. In our opinion, corroborated by that of several companies, the lack of an automated testing methodology to ensure standard compliance makes its adoption more difficult. GS1 standards have been successfully adopted in the following European projects:

The CASSANDRA project (www.cassandra-project.eu) addresses the visibility needs of both business and government in the international flow of containerised cargo. It does so by developing a data-sharing concept that allows an extended assessment of risks, both by business and government, and thereby enabling improved supply chain visibility and cost-efficient and effective security enhancement. The three-year project started on June 1st 2011, involving 27 innovative industry leaders and receiving funding from the European Commission’s Seventh Framework programme for Security. The consortium is composed of leading companies in the fields of logistics and IT, amongst them worldwide players such as DHL, GS1, IBM, and Kühne+Nagel, Customs and other border inspection agencies, European research institutes as well as the port communities and trading partners of the European ports of Rotterdam, Bremerhaven, Barcelona, and Setúbal. All members utilise their specific expertise in this 15 million euro project, which builds on previous EU funded projects INTEGRITY, SMART-CM, and ITAIDE.

The e-Freight Integrated Project (European e-Freight capabilities for Co-modal transport, http://www.efreightproject.eu) started on 1st January 2010, bringing together 30 partners from 14 Member States and Norway for a four-year programme of work addressing the development, validation and demonstration of innovative e-Freight capabilities. The e-Freight Framework has been designed for the paperless exchange of freight information and is capable of converting message formats to make them suitable for any stakeholder.

We have contacted CASSANDRA and e-Freight project members in order to learn how SCM services are currently tested.

Current GS1 LIM service implementations have been tested with hand-written test cases, and this testing has covered only GS1-compliant message interchange. Back-office integration, functional aspects, security, and service architectures have not been tested in practice.

In summary:

For the SCM pilots to be developed in the MIDAS project, we propose the adoption of the GS1 LIM standard.

As there is no GS1 LIM testing specification, we propose to adopt the general software testing models used in the MIDAS project and to apply them to the SCM-specific domain. In particular, we propose (see the sketch after this list):

- The description of SCM services by using WSDL

- The use of SoaML for describing SCM services architectures

- The use of UTP and TTCN-3 to generate and execute test scripts
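
As a first, hedged illustration of what the execution side of such a WSDL-driven test could look like, the following Python sketch uses the zeep SOAP client to invoke an operation of a WSDL-described SCM service and to arbitrate the response. The endpoint URL, the SubmitDespatchAdvice operation and its fields are hypothetical placeholders, not part of GS1 LIM or of the MIDAS platform; in MIDAS the corresponding behaviour would be expressed in TTCN-3 rather than in plain Python.

    # A minimal sketch, assuming a hypothetical WSDL-described SCM service.
    # Requires the third-party SOAP client zeep (pip install zeep).
    from zeep import Client

    WSDL_URL = "https://scm.example.org/despatch?wsdl"  # hypothetical endpoint

    def test_submit_despatch_advice() -> None:
        client = Client(WSDL_URL)
        # Hypothetical operation and message fields, for illustration only.
        response = client.service.SubmitDespatchAdvice(
            despatchId="DES-0001",
            gtin="04006381333931",
        )
        # Arbitration step: derive a verdict from the service response.
        assert response.status == "ACCEPTED", f"unexpected status: {response.status}"

    if __name__ == "__main__":
        test_submit_despatch_advice()
        print("verdict: pass")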


4 CONTRIBUTING TO TEST STANDARDS MAKING PRACTICES

4.1 SOA Test Standards Making Practices

To the best of our knowledge, there are no standards bodies or other standards making organisations that produce SOA test standards. There are, on the other hand, many open-source and commercial test tools that SOA stakeholders (developers, providers, customers, etc.) can use to test SOA infrastructure, functionality, performance, conformance, security, interoperability, and end-to-end threads (usage-based testing). None of these tools is based on MBT approaches; most of the time, test cases are generated manually by test engineers, who write XML/JSON request/response messages against the SUT (a sketch of such a hand-written test is given below). Moreover, automated test scheduling, execution and arbitration are not supported. From that perspective, MIDAS is unlikely to contribute to SOA test standards making practices as such; it will rather provide another approach and tool, based on more generic and largely standardized methods (UTP, TTCN-3). The demonstration of this generic approach on specific SOA services, e.g., for eHealth and SCM, will yield key methods, relations to other (modelling) tools, and experiences that may be reused in other (test) standardization domains.
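
To make concrete what such a manually written test case amounts to, the following minimal Python sketch posts a hand-crafted JSON request to a SUT endpoint and checks the response field by field; the URL and payload are hypothetical. The point is precisely that every message and every assertion has to be written and maintained by the test engineer, which is what the MIDAS model-based approach aims to automate.

    # A minimal sketch of a hand-written SOA test case (hypothetical SUT endpoint).
    import requests

    SUT_URL = "https://sut.example.org/api/orders"  # hypothetical

    def test_create_order() -> None:
        # The request message is written by hand by the test engineer ...
        payload = {"orderId": "ORD-42",
                   "items": [{"gtin": "04006381333931", "qty": 3}]}
        response = requests.post(SUT_URL, json=payload, timeout=10)
        # ... and so is every expectation on the response.
        assert response.status_code == 201
        assert response.json()["status"] == "CREATED"

    if __name__ == "__main__":
        test_create_order()
        print("test_create_order passed")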

MIDAS is not another SOA testing tool; it is a generic testing architecture and platform specialized for SOA testing and able to run in the cloud. It will provide new approaches to (SOA) testing, e.g., automated scheduling, automated and interleaved scheduling and generation, automated scheduling and execution, automated arbitration, and so on.

Through the pilot projects, we will test the hypothesis that MIDAS can contribute to providing a standardized reference framework and architecture for SOA testing and certification.

4.2 Model-Based Testing (MBT) Practices and Experience in ETSI

MBT at ETSI is currently understood solely as a means for automated test design. The latest efforts at ETSI in the area of model-based testing aimed at specifying general requirements on notations suitable for model-based testing approaches. These efforts resulted in the ETSI standard ES 202 951 “Requirements on Modelling Notation” [27]. This standard prescribes neither which notation or language ought to be used in an MBT approach, nor how the notation shall be used in a certain methodology. It does not even describe which test selection (or test generation) criteria are commonly available or shall be used. A revision of the standard is currently underway, targeting at least a rough specification of well-known and commonly accepted test selection (generation) criteria that characterize a certain test design technique, such as requirements coverage, structural coverage, and so forth; a sketch of one such criterion is given after this paragraph. This leads to an, at least partial, paradigm shift, where manually written specification documents are going to be replaced by (semi-)formal models.
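
To illustrate operationally what such a test selection criterion means, the following minimal Python sketch measures transition coverage, one commonly used structural criterion, achieved by a test suite over a toy state-machine model; both the model and the test suite are invented for the example and bear no relation to ES 202 951 itself.

    # A minimal sketch: transition coverage of a toy state-machine model.
    TRANSITIONS = {                          # (state, input) -> next state
        ("Idle", "order"):      "Ordered",
        ("Ordered", "ship"):    "Shipped",
        ("Ordered", "cancel"):  "Idle",
        ("Shipped", "deliver"): "Idle",
    }

    def covered_transitions(test_suite):
        """Replay each test (a sequence of inputs); collect the transitions hit."""
        covered = set()
        for inputs in test_suite:
            state = "Idle"
            for inp in inputs:
                nxt = TRANSITIONS.get((state, inp))
                if nxt is None:
                    break                    # test diverges from the model
                covered.add((state, inp))
                state = nxt
        return covered

    suite = [["order", "ship", "deliver"], ["order", "cancel"]]
    coverage = len(covered_transitions(suite)) / len(TRANSITIONS)
    print(f"transition coverage: {coverage:.0%}")  # -> 100%

A test generator driven by this criterion would keep adding tests until the covered set equals the model's transition relation.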

Within the ETSI methodology, MBT is supposed to cover the process phases “Specification of Test Purposes”, “Specification of Test Descriptions”, and “Specification of Test Cases” by introducing (semi-)formal models into these phases and their activities. These (semi-)formal models are then exploited for automated test design. Any modelling notation that fulfils the requirements of ES 202 951 is deemed appropriate for use within the ETSI methodology.

The UTP is a standardized concrete modelling notation that allows the creation of test models based on UML. UTP fulfils the requirements stated by ES 202 951 and is thus a candidate for adoption in order to realize an ETSI-compliant MBT approach. However, apart from the notation and the concepts, UTP prescribes neither the methodology nor the rigour of an MBT approach.

The MIDAS platform and the MIDAS approach for automated test design can be seen as an implementation of the ETSI test design activities in a model-based way, since MIDAS is using UTP which in turn is deemed appropriate to be used for the ETSI MBT recommendation on modelling notations. The experiences made in MIDAS may influence the creation of a standardized ETSI MBT methodology.

4.3 E-Health Test Standards Making Practices

First of all, it is important to recall what we introduced in Section 3.5: the new service-based healthcare information systems will allow the interaction among different, heterogeneous and possibly independent stakeholders, supporting the continuity of care, the interoperability among healthcare settings and the collaboration among all healthcare operators. This trend will complicate the delivery, the integration and the testing of those solutions. In particular, tools for black-box or grey-box testing of service-based healthcare solutions do not yet exist.

The results of the MIDAS project can therefore represent a very important innovation for the testing of service-based healthcare solutions.

To better disseminate the results of the MIDAS project, it is important to take into account the current eHealth test processes promoted and supported by the standardisation bodies. In particular, we report the experiences of IHE and HL7.

The MIDAS project will not contribute to the IHE and HL7 test tools. In the framework of WP7 we will study whether it is possible to integrate the functionality of these testing tools (white-box testing) with the MIDAS black-box testing facilities.

Finally we describe a roadmap for sustainable and effective deployment of testing and certification of eHealth systems, defined in the framework of the European Project HITCH.

4.3.1 IHE Testing Process

The IHE initiative includes a rigorous testing process to promote the adoption of standards-based interoperability by vendors and users of healthcare information systems. The process culminates in the Connect-a-thon, a week-long interoperability-testing event where each vendor can demonstrate compliance with, and obtain conformance statements for, a set of IHE integration profiles through the execution of point-to-point tests between its system and the systems of the other vendors.

Because testing before IHE Connect-a-thons is a requirement in order to improve the effectiveness of these events, IHE provides a suite of tools for testing the interoperability of medical systems as well as the compliance with standards of the messages and documents produced by those systems. These tools also offer a test management solution for testing the interoperability of eHealth software components.

In particular, each system participating in an IHE Connect-a-thon is required to run a set of pre-Connect-a-thon tests based on the actors/profiles it is registered for. For the management of the pre-Connect-a-thon testing phase, IHE Europe provides the Gazelle environment. A guide to the pre-test process is reported in [48].


In general, IHE proposes a set of software tools for supporting the implementation and testing of IHE Integration Profiles in healthcare information technology systems. A list of these tools is reported in [49].

4.3.2 HL7 initiatives

HL7 has recently announced that individuals may opt for computer-based testing from their home or workplace, or at an exam testing center. As a result, test-takers can schedule exams according to their personal availability and at a location of their choice. In particular, HL7 has chosen Kryterion as a partner to administer its certification exams at over 400 High Stakes Online Secure Testing Centers (HOST) worldwide.

Individuals may also choose online proctored testing from the comfort of their own computers anywhere in the world, as long as they have Internet access and an external webcam that meets Kryterion's specifications [50]. Software is downloaded from Kryterion's website to enable secure exam delivery, and each individual's identity is authenticated during the registration process. Testing is supervised via webcam by a remote online proctor.

Results from either testing method are displayed immediately upon submission of the completed test, and certificates are emailed directly to successful individuals; thus individuals no longer have to wait weeks for their results and/or certificates.

HL7 offers certification exams in the following areas: HL7 Version 2.7 Control, HL7 RIM and Clinical Document Architecture (CDA).

4.3.3 The European project HITCH

The European Project HITCH [47] defined a Roadmap for sustainable and effective deployment of testing and certification of eHealth systems. HITCH is a supportive action and its consortium includes important standardization bodies: ETSI, EuroRec, IHE Europe, INRIA, MedCom, and OFFIS.

The HITCH project has just concluded its 18-month study with important recommendations to the EU Commission on how to proceed with eHealth interoperability testing and certification in Europe. HITCH proposes a way forward that balances the urgency to defragment the current approach taken by eHealth projects, reduce the cost of testing and raise the level of quality, while remaining sufficiently flexible to account for local needs.


HITCH considers testing and certification related but distinct processes. While testing is the way one assesses/measures the level of interoperability, certification is a process where a certifying entity uses the positive test results. If profile specifications and testing tools are robust, certification becomes a formality and a high level of eHealth interoperability is realized.

In the area of testing, HITCH stressed the need to continue and amplify the progress accomplished in the last few years, with initiatives such as IHE Connect-a-thons. HITCH identified two areas that need further improvement and more formalism. First, there is a need for a widely accepted quality guide for interoperability testing based on existing quality standards such as ISO 9001 and ISO 17025. Second, an organized collaboration is needed to reduce the fragmentation and lack of maturity in interoperability test tools and test plans in specific areas.

In the area of certification, HITCH surveyed various certification strategies and concluded that different variants would need to coexist and evolve with profile specification stability and market maturity.

HITCH proposes a five-year roadmap for testing and certification:

For 2012-2013, HITCH recommends developing and approving common guidance on testing quality and organizing a testing-tool development infrastructure at the European level to fill the most critical gaps in tooling and test plans.

By the end of 2012, a European interoperability framework should be agreed for testing to proceed. This framework should contain an objective set of interoperability standards-based profiles and become the reference against which testing may be piloted in 2013-2014.

Based on feedback from the above testing and certification pilots, the first products could be tested and receive a certification in 2015 at the profile level in Europe.

HITCH proposes that national or regional eHealth projects across Europe organise their own testing and certification by building upon this common European testing and certification foundation. This foundation would leverage a common set of test tools, allowing products to be tested once at the European level and further tested only for regional/national extensions to the European interoperability framework. Such a European foundation would increase consistency, reduce costs, and significantly accelerate deployment across multiple national and regional eHealth projects, in contrast with the current inconsistent and fragmented approach.

HITCH is partially funded by the European Commission under the 7th Framework Programme as a supportive action performed by ETSI, EuroRec, IHE Europe, INRIA, MedCom, OFFIS.

4.4 SCM Test Standards Making Practices

As explained in Section 3.6, the GS1 LIM has no testing specification, and there is no organization or body in charge of certifying the compliance of services with the standard.

During the pilot project we will evaluate the option of defining procedures, a methodology and a support infrastructure based on the MIDAS approach, in a way that is compatible with a GS1 LIM certification programme.

We will collaborate with the GS1 Association and the European projects using the standard (namely CASSANDRA and e-Freight) in order to reach a consensus on the topics this certification programme should cover.

In particular, we propose to create a test-bed service infrastructure where organizations and SMEs can deploy their solutions as software-in-the-loop, and to use the Spanish Public ICT Demonstration Centre for Logistics, managed by the Aragon Institute of Technology, as the testing laboratory [35].


5 CONCLUSION AND NEXT STEPS

5.1 Summary of the Standardization Assessment

Taking into consideration the three major MIDAS use cases, namely Test Execution, Manual Test Design, and Automated Test Design, the assessment of the project's impact on standardization, and vice versa, has been carried out mainly from two perspectives:

a) What are the standardization requirements for the SOA system model that match the MIDAS Test Model, and is there a mutual impact between the MIDAS project and the SOA standardization framework?

b) What is the standardization framework that defines the MIDAS Testing Framework, and what are the areas of (mutual) impact?

In answering the first question, we examined a wide standardization landscape, covering the standardization activities of the key SOA recommendation/standards developing organizations, such as the OASIS Group, OMG, W3C, and IETF. These standards making organizations develop well-defined and stable SOA recommendations that serve as base standards and adequately define the MIDAS Service/System Model requirements. In addition, the MIDAS test framework will be extensively exercised in the scope of the pilot activities, where sets of eHealth and Supply Chain Management SOA services will be defined and used as the MIDAS SUT. For both SOA service domains the key base standards have been identified; during the pilot activities, their compatibility with the MIDAS UML model (and sub-models) will be examined and the findings of the experimentation reported to the relevant standardization bodies and/or industry forums/associations.

The standardization impact related to the first question is summarized in the following assessment statements:

s.1: The MIDAS Test Models take the standards for service/system model description as they are. It is beyond the scope of the MIDAS project to perform any thorough analysis of the suitability and usability of those standards, and it is therefore not expected that the MIDAS project will have an impact on those standards and standardization efforts. Although unlikely, should modelling issues related to test modelling be identified, they will be reported to the relevant standardization bodies.


s.2: The way SOA models need to be modelled in order to be suitable for automated test generation might influence common modelling guidelines of SoaML.

s.3: HSSP will be the reference standard that the MIDAS project takes into account for the pilot in the health sector. The MIDAS pilot activity will aim at developing a “Generic HSSP Services Testing” layer on the basis of the MIDAS general testing framework, providing standard test cases for HSSP services testing implemented according to the MIDAS compatibility requirements defined in deliverable D2.1 [25]. The “Generic HSSP Services Testing” layer could become a candidate for interoperability testing sessions.

s.4: GS1 LIM will be taken as the main standard to be adopted in the Supply Chain Management pilot of the MIDAS project. As there is no GS1 LIM testing specification, we propose to adopt the general software testing models used in the MIDAS project and to apply them to the SCM-specific domain. In particular, we propose:

a. The description of SCM services by using WSDL

b. The use of SoaML for describing SCM services architectures

c. The use of UTP and TTCN-3 to generate and execute test scripts

During the pilot project we will evaluate the option of defining procedures, a methodology and a support infrastructure based on the MIDAS approach, in a way that is compatible with a GS1 LIM certification programme.

A comprehensive analysis of the standardization activities related to SOA testing has been made. We applied a top-down approach, starting from the very generic recommendations of the OASIS Group on SOA testing, which consider SOA services as largely software artefacts that can leverage the body of experience around software testing, as summarized in the evolving ISO/IEC 29119 Software Testing standard [7], which is going to replace the IEEE 829 [3] recommendations. Within the scope of the MIDAS test framework, these standards/recommendations serve as guidelines for particular topics covered in the MIDAS project:

s.5: The MIDAS Glossary shall be well aligned with the ISO/IEC 29119 vocabulary in order to harmonize the common understanding of the terms used.


s.6: The MIDAS platform is expected to implement some of the standards for the Dynamic Test Process as specified in ISO/IEC/IEEE 29119-2.2, but it will not completely comply with it, in particular as concerns the testing activity as a whole and the interactions of the users with the dynamic test processes.

s.7: The IEEE 829 Standard for Software Test Documentation [3] will be used as a guideline as it provides a starting point for identifying deliverables for any testing effort.

s.8: Through the pilot projects, we will test the hypothesis that MIDAS can contribute to providing a standardized reference framework and architecture for SOA testing and certification.

More specifically, the impact between the MIDAS test framework and the standardization activities related to testing, such as the development of new testing methodologies, testing standards, recommendations, usability reports and guidelines, has been examined from the perspective of the MIDAS use cases. The impact is summarized in the findings outlined below:

s.9: For the Test Execution use case, MIDAS may have no impact on standards or standardization-related activities, beyond enabling the execution of already developed standardized abstract test suites produced by standardization bodies (e.g., ETSI). At the current stage of the project, it is not possible to clearly identify potential shortcomings of TTCN-3 with respect to the MIDAS testing profile, especially for SOA services. A preliminary usability analysis of the commercial TTCN-3 test execution engines for SOA services testing has been performed, and its findings will be reported to the ETSI MTS technical committee.

s.10: MIDAS experiences on designing test case specifications in a model-based manner for subsequent execution may have an impact on the technical standardization and dissemination activities (in terms of best practices, modelling patterns, modelling guidelines, and future revisions) in the realm of UML and UTP. Experiences are supposed to influence the requirements specification for the next version of UTP, i.e. UTP 2.

s.11: The MIDAS platform and the MIDAS approach for automated test design can be seen as an implementation of the ETSI test design activities in a model-based way, since MIDAS uses UTP, which in turn is deemed appropriate with respect to the ETSI MBT recommendation on modelling notations. The experiences made in MIDAS may influence the creation of a standardized ETSI MBT methodology.

s.12: For the Automated Test Design use case, the MIDAS approach may have an impact on the ETSI MBT recommendations regarding commonly accepted coverage criteria for automated test design. ETSI ES 202 951 is currently under revision to be extended with coverage criteria that are commonly used and well accepted in the model-based testing area.

s.13: Furthermore, experiences with modelling and executing test generation based on formalized coverage criteria will have an impact on the requirements specification of the future UTP version, i.e. UTP 2. A Request for Proposals is currently being drafted by an unofficial UTP working group at OMG.

s.14: With respect to security testing, experiences made in MIDAS are supposed to be contributed to the efforts that are currently being undertaken at ETSI ([26], [27]).

5.2 Next Steps

In the assessment of the potential impact between the MIDAS project and the standardization activities, two standards developing organizations, namely ETSI and OMG, have been identified as the ones whose current standardization activities are most closely related to the MIDAS project.

These two organizations shall be kept well informed of the progress of the MIDAS project, and the standardization activities connected with the development of model-based testing methods in particular shall be well aligned with the MIDAS testing framework. The partners in the MIDAS project shall be involved in the work of the standards technical bodies, in particular:

a) In ETSI MTS activities:

regarding commonly accepted coverage criteria for automated test design, e.g., work on the revision of the ETSI ES 202 951 standard;

ETSI MBT recommendations on modelling notations, using UTP;

on the potential extensions of the TTCN-3 language, such that it will be suitable for automated SOA testing; and

providing feedback from the pilot activities on usability experiences with the MBT approach for SOA services.


b) In OMG activities:

Contributing to requirements specification of the future UTP version, i.e. UTP 2.

c) Other:

Providing feedback from the pilot activities on usability experiences and compatibility of the MIDAS test framework with the GS1 certification programme for the SCM service domain.

Providing feedback from the pilot activities on usability experiences and compatibility of the MIDAS test framework with the current eHealth test processes promoted and supported by the standardization bodies. In particular, we will provide feedback on the experiences with IHE- and HL7-based eHealth services.


6 REFERENCES

[1] OASIS GROUP: Reference Architecture Foundation for Service Oriented Architecture Version 1.0. Committee Specification 01; December 2012

[2] OASIS GROUP: Reference Model for Service Oriented Architecture 1.0, OASIS Standard, 12 October 2006

[3] IEEE Standard for Software Test Documentation, Institute for Electrical and Electronics Engineers, 16 September 1998

[4] Edith Werner et al., A TTCN-3-based Web Service Test Framework, Institute for Computer Science, University of Gottingen, 2007

[5] SoaML (2012). Service oriented architecture Modeling Language (SoaML) Specification Version 1.0.1. formal/2012-05-10. Object Management Group. URL http://www.omg.org/spec/SoaML/1.0.1/

[6] ISO: ISO/IEC 9646-1—Information Technology—Open Systems Interconnection — Conformance Testing Methodology and Framework, Part 1: General Concepts. International Standards Organisation. ISO/IEC, first edition, 1994.

[7] ISO/IEC: ISO/IEC 29119 Software Testing, 2012

[8] SOAP Version 1.2 Part 0: Primer. World Wide Web Consortium (W3C) Recommendation, http://www.w3.org/TR/2003/REC-soap12-part0-20030624/, 2003.

[9] Web Services Description Language (WSDL) 1.1.WorldWideWeb Consortium (W3C) Note, http://www.w3.org/TR/2001/NOTE-wsdl-20010315, 2001.

[10] ETSI Standard (ES) 201 873 V3.2.1: The Testing and Test Control Notation version 3; Parts 1-9. European Telecommunications Standards Institute (ETSI), Sophia-Antipolis, France, also published as ITU-T Recommendation series Z.140, 2007.

[11] Axel Rennoch: Recent Developments on TTCN-3, TAROT Summer School, Schloß Laubegg, 25th June 2010

[12] Web Service Definition Language (WSDL) 1.1. W3C Note 15 March 2001. World Wide Web Consortium. URL http://www.w3.org/TR/wsdl

[13] Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language. W3C Recommendation 26 June 2007. World Wide Web Consortium. URL http://www.w3.org/TR/wsdl20/

[14] Web Services Description Language (WSDL) Version 2.0 Part 2: Adjuncts. W3C Recommendation 26 June 2007. World Wide Web Consortium. URL http://www.w3.org/TR/wsdl20-adjuncts/

[15] Extensible Markup Language (XML) 1.0 (Fifth Edition). World Wide Web Consortium. W3C Recommendation 26 November 2008. URL http://www.w3.org/TR/xml/

[16] Extensible Markup Language (XML) 1.1 (Second Edition). World Wide Web Consortium. W3C Recommendation 16 August 2006, edited in place 29 September 2006. URL http://www.w3.org/TR/xml11/

[17] XML Media Types. Request for Comments: 3023. The Internet Society. URL http://www.ietf.org/rfc/rfc3023.txt

[18] XML Schema Part 1: Structures Second Edition. W3C Recommendation 28 October 2004. World Wide Web Consortium. URL http://www.w3.org/TR/xmlschema-1/

[19] XML Schema Part 2: Datatypes Second Edition. W3C Recommendation 28 October 2004. World Wide Web Consortium. URL http://www.w3.org/TR/xmlschema-2/


[20] W3C XML Schema Definition Language (XSD) 1.1 Part 1: Structures. W3C Recommendation 5 April 2012. World Wide Web Consortium. URL http://www.w3.org/TR/xmlschema11-1/

[21] XSD_11_2 (2012). W3C XML Schema Definition Language (XSD) 1.1 Part 2: Datatypes. W3C Recommendation 5 April 2012. World Wide Web Consortium. URL http://www.w3.org/TR/xmlschema11-2/

[22] Business Process Execution Language for Web Services Version 1.1. URL http://download.boulder.ibm.com/ibmdl/pub/software/dw/specs/ws-bpel/ws-bpel.pdf

[23] Web Services Business Process Execution Language Version 2.0. Organization for the Advancement of Structured Information Standards. OASIS Standard 11 April 2007. URL http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html

[24] UTP_1_2 (2012). UML Testing Profile (UTP) Version 1.2. ptc/2012-09-13. Object Management Group. URL http://www.omg.org/spec/UTP/1.2/

[25] MIDAS Deliverable D2.1. MIDAS compatibility requirements and recommendations for services and service architectures, MIDAS FP7 project, 2013.

[26] ETSI ES 202 553: Methods for Testing and Specification (MTS); TPLan: A notation for expressing Test Purposes.

[27] ETSI ES 202 951: Methods for Testing and Specification (MTS); Model-Based Testing (MBT); Requirements for Modeling Notations. ETSI, 2011

[28] ETSI TR 102 840: Methods for Testing and Specifications (MTS); Model-based testing in standardization. ETSI, 2011

[29] ETSI TS 101 583: Security Testing Terminology. ETSI, 2013

[30] IETF RFC 2119: Key Words for Use in RFCs to Indicate Requirement Levels, IETF, 1997

[31] ETSI Work Item DES/MTS-999: Data Fuzzing with TTCN-3. ETSI, 2013

[32] Supply Chain Council, http://supply-chain.org/ , Last Accessed 07-2013.

[33] R Keith Oliver and Michael D Webber. Supply-chain management: logistics catches up with strategy. 1982.

[34] Supply chain management & risk British Standards, http://shop.bsigroup.com/en/Browse-By-Subject/Risk/Supply-chain-management--risk/, Last Accessed 07-2013.

[35] Spanish National Demonstration Centre for Logistics, http://www.cdlogistica.es, Last Accessed 07-2013.

[36] Wikipedia: Service-oriented architecture, http://en.wikipedia.org/wiki/Service-oriented_architecture

[37] Healthcare Services Specification Program http://hssp.wikispaces.com/

[38] HEALTHSOAF Project, http://www.healthsoaf.it

[39] Integrating the Healthcare Enterprise, http://www.ihe.net/

[40] Dedalus solution for supporting EHR, http://www.dedalus.eu/x1v1.cfm?chg_lang=eng

[41] Information Technology Infrastructure (ITI) Technical Framework of IHE, http://www.ihe.net/IT_Infrastructure/

[42] Health Level Seven International, http://www.hl7.org/

[43] Object Management Group, http://www.omg.org/

[44] http://www.hl7.org/implement/standards/product_brief.cfm?product_id=185.


[45] http://www.hl7.org/implement/standards/rim.cfm

[46] http://www.hl7.org/implement/standards/product_brief.cfm?product_id=7

[47] HITCH Project, http://www.hitch-project.eu

[48] Guide to pretest process, http://ihewiki.wustl.edu/wiki/index.php/Index_to_Preconnectathon_Test_Software

[49] IHE proposes a set of software tools, http://www.ihe.net/Testing_Tools

[50] Kryterion specifications, http://shopping.netsuite.com/kryterion

[51] MIDAS Glossary, https://extsvnsrv.fokus.fraunhofer.de/svn/cc/motion/MIDAS/Glossary/MIDAS_Glossary_v1.0_SIN.docx

[52] MIDAS Deliverable D2.2: Architecture and specifications of the MIDAS framework and platform, August 2013

[53] Gartner, Hype Cycle for Supply Chain Management, 27 July 2012, Gartner Research Report 2012