D61 - Initial integration and validation plan
Angel Martin
Document Number WD6.1
Status Final
Work Package WP 6
Deliverable Type Report
Date of Delivery 28/02/2016
Responsible Unit VIC
Contributors Angel Martin (VIC), Joe Tynan (TSSG), Martin Tolan (TSSG), Diego Lopez (TID), Antonio Agustin Pastor (TID), Alaa Alloush (TUB), Udi Margolin (NOK), Gorka Velez (VIC), Haytham Assem (IBM), Teodora Sandra Buda (IBM), Lei Xu (IBM), Antonio Pastor (TID), Domenico Gallico (IRT), Matteo Biancani (IRT), Philippe Dooze (ORA), Alassane Samba (ORA), Imen Grida Ben Yahia (ORA), Elie El Hayek (ORA)
Reviewers Domenico Gallico (IRT), Imen Grida Ben Yahia (ORA), Elie El Hayek (ORA)
Keywords Methodology, integration, validation, infrastructure, demonstration
Dissemination level PU
WD61 - Initial integration and validation plan
CogNet Version 1.0 Page 2 of 87
Change History
Version Date Status Author (Unit) Description
0.1 06/11/2015 Working Angel Martin (VIC) TOC
0.2 25/11/2015 Working Angel Martin (VIC) Updated TOC & section responsibles
0.3 26/11/2015 Working Alaa Alloush (TUB) Section 5.1
0.4 30/11/2015 Working Joe Tynan (TSSG) TOC review
0.5 15/12/2015 Working Udi Margolin (NOK) OpenStack section
0.6 15/12/2015 Working Gorka Velez (VIC) Introduction sections
0.7 18/12/2015 Working Angel Martin (VIC) Methodology sections
0.8 23/12/2015 Working Angel Martin (VIC) Pending sections
0.9 23/12/2015 Working Haytham Assem, Teodora Sandra Buda, Lei Xu (IBM) Section Complementary technologies
0.10 11/01/2016 Working Antonio Pastor (TID), Domenico Gallico (IRT) Unit tests, evaluation frameworks & OpenMANO
0.11 20/01/2016 Working Domenico Gallico (IRT) Test card templates & test-bed maintenance
0.12 21/01/2016 Working Philippe Dooze (ORA), Antonio Pastor (TID) Section evaluation metrics
0.13 21/01/2016 Working Angel Martin (VIC) Demonstrator description
0.14 26/01/2016 Working Angel Martin (VIC) Ready for sections review
0.15 28/01/2016 Working Matteo Biancani (IRT) Review Validation section
0.16 08/02/2016 Working Angel Martin (VIC) Review sections
0.17 09/02/2016 Working Norisy Orea (TID), Antonio Pastor (TID) Review Licensing section & Development Conventions
0.18 12/02/2016 Working Teodora Sandra Buda (IBM) Review sections
0.19 19/02/2016 Working Domenico Gallico (IRT), Imen Grida Ben Yahia (ORA), Elie El Hayek (ORA) Overall review
0.20 23/02/2016 Working Angel Martin (VIC) Review fixes
1.0 26/02/2016 Final Robert Mullins, Joe Tynan, Martin Tolan (TSSG) Final review
Abstract
This deliverable outlines the common structures, methodologies and policies that follow the strategy to efficiently and continuously integrate and test the different components developed in WP3, WP4 and WP5 that fill the CogNet Architecture. The goal is to track cooperation among participants to create and deploy a CogNet solution that provides a demonstrator in which the performance and efficiency of the project outcomes will be validated. To this end, this document also describes the features of the context in which the CogNet demonstrator will come into play, from the development context to the target infrastructures and the candidate evaluation tools.
Keywords
5G, Integration, Testing, Validation, Demonstrators, Infrastructures, Methodology.
Executive Summary
This document is the public deliverable D6.1 of the H2020 project entitled
“Building an Intelligent System of Insights and Action for 5G Network
Management”, denoted by CogNet. This deliverable presents the
integration and validation plans, including plans for development/test
processes and methodologies, development environment, and integration
plans for complementary technologies and demonstrators of the project.
It has been constructed from the first four tasks of WP6, "Validation & Integration", namely Task 6.1 "Integration and Validation Methodology", Task 6.2 "CogNet Platform Integration and Testing", Task 6.3 "Integration with Complementary Technologies" and Task 6.4 "Development and Testing of Demonstrators", taking inputs from WP2 "Requirements and Architecture" of CogNet.
The goal of this deliverable is to define a common and uniform set of
integration and validation guidelines, to identify the infrastructures for an
efficient integration and testing, and to outline the candidate
demonstrators, their scope and requirements, related to the 5G network,
specifically from a network management perspective.
First, this document introduces policies for code development aimed at the integration and validation activities. Regarding integration, a continuous integration solution, Jenkins, will be used. Regarding validation, due to the differences between the involved modules, different metrics will be used in each case.
Furthermore, a set of infrastructures has been sized considering the expected computation needs of the Machine Learning tools, the dimensions of a representative forwarding/delivery infrastructure acting as the telco infrastructure to be managed and optimised, and the demonstrator assets necessary to inject realistic traffic into the infrastructures.
Finally, this deliverable introduces a set of candidate demonstrators pivoting around the scenarios defined in WP2, including their definition, technical enablers and evaluation metrics. The final set of developed demonstrators will incorporate CogNet's solutions, proving the benefits of CogNet in achieving new levels of performance for next generation networks, and will be delivered in D6.2 "First release of the integrated platform and performance reports" and D6.3 "Final release of the integrated platform and performance reports".
Table of Contents
1. Introduction..................................................................................................................... 10
1.1. Background .................................................................................................................................................... 11
1.2. Motivation & Scope .................................................................................................................................... 11
1.3. Structure of the Document ...................................................................................................................... 12
2. Basic Principles ................................................................................................................ 13
3. Stakeholders in the Development Process ................................................................... 15
3.1. Software Architect ....................................................................................................................................... 15
3.2. Software Developer ..................................................................................................................................... 16
3.3. Unit Tester ....................................................................................................................................................... 16
3.4. Continuous Integrator ................................................................................................................................ 16
3.5. End device / User ......................................................................................................................................... 16
3.6. System tester ................................................................................................................................................. 17
4. Development Conventions ............................................................................................ 18
4.1. Programming Languages .......................................................................................................................... 18
4.2. Source Control .............................................................................................................................................. 18
4.2.1. System .................................................................................................................................................... 19
4.2.2. Naming Conventions / Taxonomy / Versioning ..................................................................... 20
4.3. Schematics ...................................................................................................................................................... 20
4.3.1. Template of the components ........................................................................................................ 21
4.3.2. Configuration / Setup ....................................................................................................................... 21
4.3.3. Developer APIs .................................................................................................................................... 22
4.3.4. Logs ......................................................................................................................................................... 22
4.4. Communication of Components ............................................................................................................ 23
4.4.1. Messaging formats ............................................................................................................................ 23
4.4.2. Invocation formats ............................................................................................................................. 23
4.4.3. Result formats ...................................................................................................................................... 23
4.5. Documentation ............................................................................................................................................. 24
4.5.1. README.md .......................................................................................................................................... 24
4.5.2. Wiki .......................................................................................................................................................... 25
4.6. Issue Management ...................................................................................................................................... 26
4.6.1. System .................................................................................................................................................... 26
4.6.2. Responsibilities .................................................................................................................................... 26
5. General Strategy ............................................................................................................. 27
5.1. Use Cases and Scenarios analysis .......................................................................................................... 27
5.2. Iterations .......................................................................................................................................................... 28
5.3. Continuous Integration companion ..................................................................................................... 28
5.4. Teams communication ............................................................................................................................... 30
6. Integration of Components ........................................................................................... 31
6.1. Continuous Integration .............................................................................................................................. 31
6.1.1. Platforms ................................................................................................................................................ 31
6.1.2. Structure ................................................................................................................................................ 32
6.1.3. Procedures ............................................................................................................................................ 34
6.2. Standards ........................................................................................................................................................ 35
6.3. Licensing .......................................................................................................................................................... 37
7. Prototype evaluation ...................................................................................................... 38
7.1. Overall Methodology.................................................................................................................................. 38
7.2. Unit tests ......................................................................................................................................................... 39
7.2.1. Functional / Error ................................................................................................................................ 40
7.2.2. Connectivity / Timeout ..................................................................................................................... 40
7.2.3. Quality / Ground Truth ..................................................................................................................... 40
7.2.4. Toolsets .................................................................................................................................................. 41
7.3. Test-card template ...................................................................................................................................... 41
7.4. Software Quality evaluation ..................................................................................................................... 43
7.4.1. Coding Standards ............................................................................................................................... 43
7.5. Implementation testing strategy............................................................................................................ 44
7.5.1. Functional experiments .................................................................................................................... 44
7.5.2. Performance experiments ............................................................................................................... 45
8. Planning and Milestones ................................................................................................ 48
8.1. Activities and relation to DOW ............................................................................................................... 48
8.2. Iteration deadlines ....................................................................................................................................... 48
8.3. Development Calendar .............................................................................................................................. 49
9. Infrastructures ................................................................................................................. 50
9.1. Virtualization Stacks .................................................................................................................................... 50
9.1.1. OpenStack ............................................................................................................................................. 51
9.1.2. OpenMano ............................................................................................................................................ 52
9.2. Methodology ................................................................................................................................................. 53
9.3. Available assets ............................................................................................................................................. 54
9.4. Requirements assessment ........................................................................................................................ 59
9.5. Test-bed - Infrastructure maintenance ................................................................................................ 61
10. Complementary technologies .................................................................................... 63
10.1. Specification of candidate complementary technologies ............................................................ 63
10.1.1. IBM BigInsights for Apache Hadoop .......................................................................................... 63
10.1.2. IBM Infosphere Streams................................................................................................................... 64
10.1.3. Apache SystemML .............................................................................................................................. 65
10.2. Integration plan ............................................................................................................................................ 65
10.3. References....................................................................................................................................................... 67
11. Demonstrator applications......................................................................................... 68
11.1. Methodology to find candidate demonstrators .............................................................................. 68
11.2. Demonstrator Massive Multimedia and Connected Cars ............................................................ 71
11.2.1. Description ............................................................................................................................................ 71
11.2.2. Architecture .......................................................................................................................................... 72
11.2.3. Scope ....................................................................................................................................................... 73
11.2.4. Metrics .................................................................................................................................................... 76
12. Conclusions .................................................................................................................. 77
Appendix A. Evaluation Frameworks and tools ............................................................... 78
A.1. Simulation tools ............................................................................................................................................ 78
A.1.1. Network Emulation ............................................................................................................................ 78
A.1.2. Event Network Emulator .................................................................................................................. 79
A.1.3. Network emulators (Riverbed Modeller, NS3) ........................................................................ 79
A.2. Emulation framework ................................................................................................................................. 79
A.3. Tools for traffic generation and probing ............................................................................................ 79
A.3.1. Traffic Generation ............................................................................................................................... 79
A.3.2. Performance Measurement ............................................................................................................ 79
A.3.3. Packet Manipulation, Reconciliation and Auditing ............................................................... 80
A.3.4. Application KPI level Measurement ............................................................................................ 80
A.3.5. Automatic Traffic deployment ....................................................................................................... 81
A.3.6. Wire speed Ethernet packet generator and playback .......................................................... 81
A.3.7. Layer 2 Forwarding in Virtualized environments.................................................................... 82
A.3.8. Network throughput ......................................................................................................................... 82
A.4. Network Management tools .................................................................................................................... 82
A.5. Network QoS probes over OpenStack ................................................................................................. 82
A.5.1. Ryu ........................................................................................................................................................... 83
A.5.2. OpenDaylight ....................................................................................................................................... 83
A.5.3. Neutron QoS extension ................................................................................................................... 83
A.6. Monitoring tools .......................................................................................................................................... 84
A.6.1. Prometheus ........................................................................................................................................... 84
Appendix B. Definitions ..................................................................................................... 85
Appendix C. Abbreviations ................................................................................................ 87
1. Introduction
CogNet is a challenging project from a software engineering point of view, and it has one feature that makes it even more challenging in terms of integration: CogNet brings together two worlds that have traditionally been separate, network management with autonomic principles and machine learning. Each has its own terminology and practices. All these factors require an immense integration effort. Different software pieces must be put together, from data collection, metering or probing, through data storage and data processing, to the generation of the corresponding policies; these pieces are defined in WP2, designed and developed in WP3, WP4 and WP5, and will interact with each other. Furthermore, they are being built using different technologies depending on the most suitable available tools.
The software modules developed in CogNet aim to achieve the following objectives:
- Be easy to integrate: integration needs to be taken into account from the very beginning, in the design stage.
- Be easy to adapt: the components are self-contained, acting as individual modules. This should allow combining them in different ways to create a custom solution adapted to the end user's needs.
- Be computationally efficient: the processing resources the management software spends on computing optimal network performance must be balanced and minimised.
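The "easy to integrate, easy to adapt" objectives above can be sketched in code. The following Python fragment is purely illustrative (the component names, record format and pipeline helper are hypothetical, not part of any CogNet module): each component is a self-contained callable with a well-defined input and output, so components can be recombined into different custom solutions.

```python
from typing import Callable, List

# Hypothetical illustration: each component is self-contained and
# exposes only a well-defined input and output type, so components
# can be recombined freely per deployment.

def collector(raw: List[dict]) -> List[dict]:
    """Drop malformed metering records (missing metric or value)."""
    return [r for r in raw if "metric" in r and "value" in r]

def aggregator(records: List[dict]) -> dict:
    """Aggregate metric values into per-metric averages."""
    totals: dict = {}
    for r in records:
        totals.setdefault(r["metric"], []).append(r["value"])
    return {name: sum(v) / len(v) for name, v in totals.items()}

def pipeline(*stages: Callable):
    """Compose stages left to right into a single callable."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

solution = pipeline(collector, aggregator)
print(solution([{"metric": "cpu", "value": 0.4},
                {"metric": "cpu", "value": 0.6},
                {"malformed": True}]))  # → {'cpu': 0.5}
```

Because every stage shares the same calling convention, swapping or reordering components requires no change to the other stages, which is the adaptation property the objectives describe.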
To cope with these requirements, a structured methodology for development and integration is necessary, targeting an objective validation of the resulting platform. This methodology should employ software tools adequate to its purposes. Modern software development methodologies usually have short iterative development cycles in which early prototypes are functionally and technically tested. Integration of the composing pieces of software, and validation, i.e. verifying that the system meets its specifications and fulfils its intended purpose, are therefore intrinsic parts of the whole development process. In other words, to define an efficient integration plan, testing needs to be taken into account, and vice versa.
The next diagram depicts the steps of the methodology defined in this document: from the development policies to be taken into account, through the integration into a CogNet system, to the evaluation of the individual components and of the resulting system. The testing activities on the integrated CogNet system will generate data that leads to the validation conclusions; in this way, the validation feedback will assess the effectiveness score from the data produced by the testing setup.
Figure 1 Steps of the Integration and Validation methodology
CogNet will not produce a single monolithic solution as its output, but a set of tools that can be used in different use cases and scenarios. To prove the usefulness of these tools, demonstrators will be developed and tested.
This document’s main objective is to define CogNet’s integration and validation plan, including
the strategy to develop and test the demonstrators. It also identifies the infrastructures for an
efficient integration and testing, defines the policies for code development to enable a
continuous integration methodology, and introduces a set of candidate demonstrators.
1.1. Background
This document establishes the pillars and guidelines for the activity in WP6. To this end, it takes results from WP2, in particular the scenarios, challenges and architecture design, in order to collect all the requirements needed to shape the outcomes of WP3, WP4 and WP5 into consistent and reliable solutions that support the demonstrators developed in WP6.
1.2. Motivation & Scope
The eleven partners participating in the project and their respective teams, spanning a wide range of cutting-edge technologies and individual experience, will develop different components that, when integrated, will provide end users with a reliable solution. To aid the development of the individual components, as well as to increase the success of integration and to ease validation activities, this section describes the overall agreed-upon research and development methodology of the CogNet project. An agreed-upon development methodology will most likely improve the quality of the project and of the resulting software.
[Figure 1 depicts four phases: DEFINE (design components), DEVELOP (implement, deploy & set up components; unit-test components functionally and technically), INTEGRATE (consolidate components into a CogNet system), EVALUATE (test the system functionally and technically; validate effectiveness as specified in WP2).]
It must be noted that the plan will evolve over time, adopting the adjustments needed over the lifecycle of a software product and its team. This document should therefore be seen as a starting point, not as an immutable master plan.
1.3. Structure of the Document
This deliverable is organised with the following structure:
- Section 2: a definition of basic development, validation, and integration principles.
- Section 3: an overview of stakeholders involved in the development process.
- Section 4: a definition of conventions.
- Section 5: an overview of the general strategy.
- Section 6: a description of how the components are integrated, including schematics, unit tests, continuous integration, standards and licensing.
- Section 7: a description of the validation methodology, including software quality, evaluation frameworks, implementation testing and performance validation.
- Section 8: a description of the overall planning and the milestones, including development tasks and their relation to the DOW, cycle deadlines and the development calendar.
- Section 9: a description of the available infrastructures, including virtualization stacks, available assets, and related requirements.
- Section 10: a review of complementary technologies.
- Section 11: a list of project demonstrator applications.
- Section 12: general conclusions.
- Appendix A: a list of relevant tools for the CogNet system.
2. Basic Principles
This section defines a set of general guidelines governing the overall development, integration, testing and validation activities in CogNet.
The basic principles on which development methodology decisions can be based are:
- Manage the highly distributed character of the project.
o Tools will boost, track and conduct asynchronous collaboration among teams.
- Consider that the skills of participants may vary widely.
o Experiment with new tools and receive help from partners with expertise in them.
- Avoid central authority or control.
o Don't force teams to use tools and methodologies when not absolutely necessary.
- Respect partners' own methodologies, (programming) languages, toolsets and workflows.
o Decide what is "common" ground, controlled by agreed-upon conventions; on the "not-common" ground outside it, partners keep the freedom to design and implement according to their background, and anything goes.
- Maximize the simplicity to configure.
o Create and configure applications based on a single, human-readable text file.
- Procure plainness to deploy.
o Single-command installation of a CogNet setup (checking that pre-requisites are installed).
- Design to be integrated.
o Outline components with well-defined inputs and outputs.
o Design a clear workflow and pipeline.
o Define and document interfaces with consistent function signatures and formats.
- Grant the ability to adapt and extend.
o Define mechanisms to expand the solution and reach new features.
o Prepare the solution for further challenges or needs.
o Enable tuning controls to drive components towards high performance.
- Assure reliability.
o Run the last working version of components, making single-component adjustments as opposed to replacing full applications.
- Keep high security.
o Collaborate on a distributed security scheme in which all the cooperative modules take part.
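The "single human-readable text file" and "single-command deployment with pre-requisite checking" principles can be sketched as follows. This is a minimal Python illustration only; the file contents, section names and prerequisite list are assumptions for the example, not CogNet conventions.

```python
import configparser
import shutil
import sys

# Hypothetical single human-readable configuration file (INI format).
# Section and key names are illustrative only.
CONFIG = """
[component]
name = demo-collector
port = 9090

[deploy]
prerequisites = python3
"""

def load_config(text: str) -> configparser.ConfigParser:
    """Parse the whole setup from one human-readable text file."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return cfg

def check_prerequisites(cfg: configparser.ConfigParser) -> bool:
    """Verify every listed prerequisite binary is on PATH."""
    missing = [tool for tool in cfg["deploy"]["prerequisites"].split()
               if shutil.which(tool) is None]
    for tool in missing:
        print(f"missing prerequisite: {tool}", file=sys.stderr)
    return not missing

cfg = load_config(CONFIG)
if check_prerequisites(cfg):
    print(f"deploying {cfg['component']['name']} on port "
          f"{cfg['component']['port']}")
```

Wrapped in a single entry-point script, such a check keeps the installation a one-command affair while failing early, and readably, when the environment is incomplete.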
3. Stakeholders in the Development
Process
CogNet's work package 6 is tasked with integrating and validating the use cases and their effectiveness. With separate outcomes emerging from work packages 3, 4 and 5, each with its respective partners' contributions, project roles have to be recognised before any integration process can commence.
From a validation and integration standpoint, stakeholders in CogNet's software development cycle are categorised into the following roles:
- Software Architect
- Software Developer
- Unit Tester
- Continuous Integrator
- End Device / User
- System Tester
From CogNet's description of work, the stakeholder roles are associated with the following work packages: the Software Architect role is primarily associated with the Requirements and Architecture work package (WP2); the Software Developer and Unit Tester roles are concerned with the work packages on Advanced Machine Learning for Data Filtering, Classification and Prediction, Network Resource Management, and Network Security & Resilience (WP3, WP4 and WP5 respectively); while the Continuous Integrator, End Device / User and System Tester roles are concentrated within the Validation & Integration work package (WP6).
3.1. Software Architect
The software architect role is responsible for defining the overall platform and its corresponding
inter-working interfaces. The CogNet platform is divided into a number of software platforms in
the form of learning algorithms, big data, NFV, SDN and cloud platforms; the architect will aid
the smooth transition of the developed subcomponents into a continuous integration and
execution environment. The role will investigate and identify possible data flow structures
between the aforementioned platforms by matching the functional requirements outlined in
deliverable D2.1 to the proposed CogNet continuous integration platform.
The software architect will provide a system architecture outline in the form of working
deliverables D2.2 and D3.1, which will then be used by software developers and platform
integrators, providing them with an understanding of how the software components and
substructures are to be deployed. The architect will resolve technical disputes, make trade-offs and
resolve technical problems, while ensuring that the proposed solution meets the project’s
technical and quality attributes.
3.2. Software Developer
The software developer is responsible for the specification, the design and the coding of software
subcomponents. This work is performed within Work Packages 3, 4, and 5. The software
developer uses the Software Design Architectural documents D2.2 and D3.1 to identify the
functionalities allocated to the components, and to identify the interaction of the Network
Function Virtualization infrastructure (NFVi) platforms with the Machine Learning Platform.
Software developers are also responsible for the creation and implementation of the testbed
unit test suite.
3.3. Unit Tester
The unit tester role has the responsibility for the unit testing of the software components from
both the CogNet Machine Learning and Network Function Virtualization infrastructure platforms.
To perform testing, the software developer/unit tester writes the unit test scripts, executes the
tests and reports the results to (1) the software developer and/or (2) the Continuous Integration
(CI) system for validation feedback. In our case, the unit tester and the software developer are
the same person. The unit tester’s scripts are committed to the project versioning management
system and are used to generate unit test reports stored in the CI system for both archiving and
reporting purposes.
3.4. Continuous Integrator
The continuous integrator role is responsible for the integration of software component
executables, machine learning platforms and the network testbeds into the CogNet continuous
integration validation platform. The integrator will also be responsible for creating,
implementing and executing the integration tests, and will analyse how to integrate the software
platform and components following the details in the integration test plan.
Through deliverable D6.2 the role will report on the initial release of the integrated platform and
performance report. This role resides primarily inside WP6 within the CogNet project.
3.5. End device / User
The end device or user is the party that shall use the resources exposed by the CogNet testbed,
which allows end users and devices to perform test operations on the network, ultimately
allowing machine learning algorithms to be trained to enable network corrections and/or
predictions. The user or device shall produce and retain logs that can subsequently be offered as
real-time or historical scenario playbooks to the learning platform.
The end user will report on faults and bugs observed on the platform by using a bug
management tool set up in the integration and validation phase. This role also resides inside WP6
within the CogNet project.
3.6. System tester
The system tester is a work package 6 role with responsibility for performing tests on the
overall integrated testbed. The role identifies system application implementation strategies and
selects suitable component test technologies and tools; system testers select and configure all
necessary hardware and operating environments external to the core testbed, and shall supply
the appropriate level of automation and virtualisation needed to complete all testing scenarios.
They also establish procedures for test result analysis and reporting in accordance with
CogNet’s technical quality requirements, whilst resolving any underlying issues discovered
during validation.
The System Tester defines and handles defect tracking and workaround procedures, along with
the monitoring and maintenance of defect reports. They shall complete the final release of the
integrated platform and performance report, and the final evaluation and impact assessment
report (deliverables D6.3 and D6.4 respectively).
4. Development Conventions
The integration and validation plan in this document also promotes some development
directives and conducts development with common methodologies. This section describes
project-wide conventions to be used during the software development and integration phases. It
defines a set of guidelines for software structural quality, giving the integrator and software
developer a recommended path to abide by, helping them improve the readability of their
source code, identify software versions logically, fix bugs, and make software maintenance and
integration tasks easier.
The topics covered include:
Programming Languages
Source code control
Communication
Documentation
Issue Management
4.1. Programming Languages
High-level languages used in the project include, but are not limited to, Java, Python and R.
Ansible1 shall be used as a configuration and provisioning management tool, allowing the
integrator, in the continuous integration and validation phase, to bootstrap the testbeds to a
predefined desired state.
4.2. Source Control
Source code control is a requirement in all modern software development projects and CogNet is
no exception. It will provide the mechanisms for checking source code in and out of a central
repository, along with enabling continuous delivery of functionality to the platform.
Source code control also brings the ability to version software releases. It can be used in
conjunction with software development lifecycles, keeping track of which changes were made,
who made each change, a timestamp of when the change was made and a description of why
the change happened. The control of code shall provide the ability to group common versioned
files as a single release in the form of a master branch, while also supporting the possibility of
maintaining active concurrent releases via a method known as branching.
1 http://www.ansible.com/
4.2.1. System
Currently GitHub2 is being explored as the source code control application for the project; it is
similar to other version control systems such as Subversion, CVS, etc. GitHub brings other
features into CogNet that we can take advantage of: it is publicly accessible, contains an issue
tracker, offers user access controls and wiki posting, and provides a distributed source code
control architecture.
Figure 2 CogNet software developer interacting with GitHub
Figure 2 shows how a typical CogNet software developer will interact with GitHub when
developing code. This process allows each user to maintain their own copy of the source code
locally for editing in their IDE. It takes into account that other software developers, both local
and remote, can work in parallel. Git will also aid the development process, as the developer can
divide builds into branches that include features, work package specifics, continuous integration,
unit testing, system validation and final production/trial branches.
The general policy for folding branches back into the trunk is driven by the continuous
integration system for the CogNet platform described in Section 6. In summary, once the scripts
for building, deployment, unit tests and integration tests have been passed successfully, the
branch is merged into the trunk. If the continuous integration system detects a problem in any
step, the branch is not accepted and is rejected/reverted. This way, the continuous integration
system assures the availability of an updated and functional release in the trunk at all times.
Branch function roles are as follows:
Feature / Work Packages branch: it will contain code on new features produced by each
developer. This branch would be considered unstable until the feature matures, at that
stage the feature will be pushed to the development branch.
Development branch/trunk: this branch will be used for continuous integration of the
software developers’ released features; the branch would be considered more stable and
less prone to build breaks. All development is on this branch until the feature-complete
cut-off date is reached; then a new testing branch is created, after which development for
the next release can continue on the development branch.
2 https://github.com/CogNet-5GPPP
Testing branch: it will contain the code that will be used in the testing on a final system
and is used in the final release of a feature set. The code is tagged at this stage, creating a
snapshot of features implemented.
Production / Trial branch: The code base here is deployed for reviews and project
demonstrators.
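As a rough sketch, the fold-back policy above can be expressed as a simple gate: a branch is folded into the trunk only when every continuous-integration step has passed. The step names below are illustrative, not a CogNet-defined list.

```python
# Hypothetical sketch of the trunk fold-back gate: a branch is merged into
# the trunk only if every continuous-integration step (build, deployment,
# unit tests, integration tests) has passed; otherwise it is rejected.
CI_STEPS = ("build", "deploy", "unit_tests", "integration_tests")

def may_merge_to_trunk(step_results):
    """step_results maps each CI step name to True (passed) or False."""
    return all(step_results.get(step, False) for step in CI_STEPS)

print(may_merge_to_trunk({step: True for step in CI_STEPS}))  # True
print(may_merge_to_trunk({"build": True, "deploy": False}))   # False
```

Any missing or failed step keeps the branch out of the trunk, mirroring the reject/revert behaviour described above.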
4.2.2. Naming Conventions / Taxonomy / Versioning
Code versioning is divided into a number of areas in development:
A Build number: this is generated for every successful build.
GitHub Repository: this version number is generated on every commit.
Release version number: this is generated when the feature set of an interim release is
ready for final system testing and deployment. The build is then tagged with this version
number.
Tagging/Versioning
When the features are initially developed and ready for system test the build is tagged
with the version 0.1 and deployed to the development branch for a continuous
integration cycle.
With the completion of system testing and the software ready for deployment to a trial
testbed, the build is to be tagged as a major version release (e.g. 1.0) and deployed. At
the same time the bug fixes that were implemented during the previous testing cycle on
the branch/integration code are merged back into the trunk development branch.
CogNet uses the format “x.y.z-qualifier”, with the qualifier denoted by the alias
“SNAPSHOT”. The “x.y.z” digits denote major, minor and micro releases.
o x is the major release number. A major release number is used for project
milestones, such as adding a new area of functionality or new concepts.
o y is the minor release number, used when a change to functionality has been
incorporated; this may include API changes, etc.
o z is the micro release number. A micro number increment would involve such
changes as bug fixes, workarounds, etc.
In addition, the execution logs of the components and the source code files should include the
same version number.
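The “x.y.z-qualifier” format can be checked mechanically; the sketch below is a hypothetical helper, not part of the CogNet codebase, that splits a version string into its parts.

```python
import re

# Parse a version string of the form "x.y.z" or "x.y.z-QUALIFIER"
# (e.g. "1.0.2-SNAPSHOT") into its major, minor and micro numbers
# plus the optional qualifier.
VERSION_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)(?:-([A-Za-z]+))?$")

def parse_version(version):
    """Return (major, minor, micro, qualifier); raise ValueError if invalid."""
    match = VERSION_RE.match(version)
    if not match:
        raise ValueError("not a valid x.y.z-qualifier version: %r" % version)
    major, minor, micro, qualifier = match.groups()
    return int(major), int(minor), int(micro), qualifier

print(parse_version("1.0.2-SNAPSHOT"))  # (1, 0, 2, 'SNAPSHOT')
print(parse_version("0.1.0"))           # (0, 1, 0, None)
```

Such a check could, for instance, be run by the continuous integration system before tagging a build.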
4.3. Schematics
In order to boost the integration and ease the maintenance of a set of components along the
lifecycle of a solution, this section defines a homogeneous development setup shared among all
the components inside CogNet.
4.3.1. Template of the components
A common and consistent file structure can ease the transition from exploratory research
towards validation on demonstrators. These policies reduce the effort of turning industrial
research into a mature solution package for industrialisation and deployment.
The following list of generic inner blocks states a high abstraction level that does not establish
architecture constraints:
Core – the kernel of the component. It performs the actual processing of the module.
Work – invocation wrapper that is the interface for other components or entities in the
architecture. All the operations should be concentrated here, from setup to execution.
Connect – in charge not only of communication between modules, but also of data
ingest, probing for data capture and the application of decision-making conclusions to
actuators. This inner block can therefore have the following sub-blocks:
o Ingest
o Marshalling
o Actuation adaptation
Output – manages result throughput, and performs parsing and marshalling.
Not all inner blocks apply to every component, so some can be empty.
4.3.2. Configuration / Setup
In order to adapt different components to new contexts, it is very important that they are easy to
configure. This way, the user can tune a component’s behaviour by just modifying its setup.
The following conventions will be adopted by the components for the configuration:
File name. To clearly match module and configuration file, the name will have the
following structure: “<ComponentName>.cfg”. Thanks to the continuous integration
system, it is not necessary to include the version of the release to which the
configuration file applies.
Path. Located in the same place as the module (the root of the deployed module in the
system path).
Format. The setup of each module must be based on just one file with a human-readable
format.
Syntax. One parameter per line, accompanied by a set of extra commented lines with:
o a description of the parameter;
o value limits (if any);
o a reference value.
Security. The protection of parameters concerning critical security data, such as
credentials to connect to other systems, is out of scope. This decision does not
compromise the future ability to protect the involved connections for an industrial package.
In other words, this issue can be addressed in the future without major impact on the
CogNet stack.
Updates. The component becomes aware of file updates through manual triggers, not
automatic event detection, so it is not forced to poll for configuration file updates.
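As an illustrative sketch only (the convention above does not mandate a specific syntax), a `<ComponentName>.cfg` file with one parameter per line and commented description lines could be read with Python’s standard configparser; the INI-style layout and parameter name are assumptions.

```python
import configparser

# Hypothetical example of a "MyComponent.cfg" file following the convention:
# one parameter per line, with commented lines giving the description,
# value limits and reference value.
SAMPLE_CFG = """
[DEFAULT]
# description: polling interval of the data ingest block, in seconds
# limits: 1..3600
# reference value: 60
ingest_interval = 60
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE_CFG)
interval = config.getint("DEFAULT", "ingest_interval")
print(interval)  # 60
```

The commented lines are ignored by the parser but give the human reader the description, limits and reference value required by the syntax rule.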
4.3.3. Developer APIs
Each module, depending on its requirements and role in the architecture, can fit better with a
different design. In each case, the interfaces and the invocation syntax are completely different.
Daemon. A component performing a background process in an autonomous way, without
direct control by an interactive user.
Service. A component attending requests and generating responses.
Library. A component providing a collection of functions.
In any case, the modules should provide a minimal set of functions for baseline operational
actions:
Init. This operation loads the setup and allocates resources.
Start. This operation runs processing.
Stop. This operation ceases processing.
Free. This operation tears the component down and frees its resources.
The different components will be autonomous and will automatically apply network
management rules according to the dataflow. However, components that need to communicate
with other components will send signals employing synchronous or asynchronous patterns,
depending on the speed of the processing, the availability of data, the volume of the response,
and so on. These signals can be based on a wide spectrum of mechanisms, from a shared file to
sockets, memory or a database register, etc.
The format of the interface, the number of parameters, limits, reference values and the signal set
have to be documented, providing a minimal sample.
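A minimal sketch of the baseline operational interface (Init, Start, Stop, Free) might look as follows; the class and attribute names are illustrative, not prescribed by CogNet.

```python
# Hypothetical skeleton of a CogNet-style module exposing the four baseline
# operations: Init (load setup, allocate resources), Start (run processing),
# Stop (cease processing) and Free (tear down, release resources).
class Component:
    def __init__(self):
        self.resources = None
        self.running = False

    def init(self):
        """Load the setup file and allocate resources."""
        self.resources = {"config_file": "MyComponent.cfg"}

    def start(self):
        """Run processing."""
        self.running = True

    def stop(self):
        """Cease processing."""
        self.running = False

    def free(self):
        """Tear the component down and free its resources."""
        self.resources = None

component = Component()
component.init()
component.start()
print(component.running)    # True
component.stop()
component.free()
print(component.resources)  # None
```

Whether the component then runs as a daemon, service or library, these four entry points give a common lifecycle for the integrator to drive.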
4.3.4. Logs
In order to avoid locking issues due to multiple accesses to a file, each module has to have its
own log file. The following policies will be adopted by the components for the logs:
File name. To clearly match module and log file, the name will have the following
structure: “<ComponentName>.log”.
Path. Located in the same place as the module (the root of the deployed module in the
system path).
Format. The log of each module must be based on just one file with a human-readable
format.
Syntax. One log entry per line with:
o UTC date and time
o criticality, where:
1 - ERROR
2 - WARNING
3 - LOG
4 - DEBUG
o the log generator, with the syntax: file (<line>)
o a short description
Garbage collection. A system to remove old logs over time is not needed.
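The log-line convention above can be mapped onto Python’s standard logging module; the exact separators and the mapping of Python’s level names onto the four criticality codes are assumptions for illustration.

```python
import io
import logging
import time

# Hypothetical formatter for the per-component log convention: one entry per
# line with UTC date/time, a criticality code (1=ERROR, 2=WARNING, 3=LOG,
# 4=DEBUG), the generator as file(line), and a short description.
CRITICALITY = {"ERROR": 1, "WARNING": 2, "INFO": 3, "DEBUG": 4}

class ComponentLogFormatter(logging.Formatter):
    converter = time.gmtime  # timestamps in UTC

    def format(self, record):
        timestamp = self.formatTime(record, "%Y-%m-%d %H:%M:%S")
        criticality = CRITICALITY.get(record.levelname, 3)
        return "%s %d %s(%d) %s" % (timestamp, criticality,
                                    record.filename, record.lineno,
                                    record.getMessage())

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(ComponentLogFormatter())
logger = logging.getLogger("MyComponent")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.warning("ingest queue nearly full")
print(stream.getvalue().strip())
```

In a deployed module the handler would write to “<ComponentName>.log” instead of an in-memory stream.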
4.4. Communication of Components
In order to conduct developments towards common interfaces, this section defines the process
that all developers will have to adhere to when designing and implementing additional
components to support the CogNet project. As each component is designed, the software
architect and developer must be aware of the requirements they have to comply with in order to
reduce the effort that other consumers of their components spend.
4.4.1. Messaging formats
When designing a component, the architect must define the messaging format used to
interact with it. This clearly informs software developers how to communicate
with the component if they wish to use/consume its services. These message formats will define
the structure of the messages used for requests and the responses expected from the
component.
4.4.2. Invocation formats
The component designer is required to define the Application Programming Interface (API)
exposed by the component being implemented. This will also include any dependencies that
the component may have on external libraries/components, such as call-back functions or shared
memory locations. This API must be documented so that, once again, users/consumers of the
service are able to utilise the component.
4.4.3. Result formats
The format of the messages is also to be defined. These formats can include but are not limited
to JSON, XML, etc. The users must know how to format the message blocks in order to
communicate with the component.
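As an illustration, a JSON request/response pair might look like the sketch below; the field names are hypothetical examples, not a CogNet-defined message format.

```python
import json

# Hypothetical request: ask a component for a prediction. The structure
# (component, operation, params) is illustrative only.
request = json.dumps({
    "component": "TrafficPredictor",
    "operation": "predict",
    "params": {"horizon_minutes": 15},
})

# Hypothetical response, as it might arrive over the wire.
response_text = '{"status": "ok", "prediction": {"load": 0.73}}'
response = json.loads(response_text)
print(response["status"])              # ok
print(response["prediction"]["load"])  # 0.73
```

Documenting one such sample per component, as required above, is usually enough for a consumer to start exchanging messages.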
4.5. Documentation
This section defines what pieces of information are required to properly document each technical
element.
4.5.1. README.md
It must include:
Component name
Brief description of component
Navigation
o Including links to all sections of the readme.md
Structure
o Brief description of the structure of the repository
Architecture
o Including inner blocks and interactions with the rest of the CogNet stack
Interface
o Including the APIs or the functions to operate
o Examples
Build
o Languages
Dependencies
o Including URLs
Setup
o Parameters that govern the configuration
Execution / Deployment
o Command line interface / Webservices / Daemon
o Quick Use Example
[Dataset] (optional)
o Dataset name employed
o Brief description of the dataset
o Where can dataset be found
o How was the dataset obtained
Authors
o Names, entities and email
License
o Legal info
4.5.2. Wiki
As CogNet has a large number of contributing partners, we have considered MediaWiki as a
collaborative tool during the integration and validation phase3. This allows partners to capture
and document project integration procedures and create how-tos on validation processes. The
wiki allows the WP6 integrator to search through previously tagged keywords for relevant
integration topic information. The software developers in work packages 3, 4 and 5 are tasked
with the creation of this content.
The integration topic structure will comprise:
Integration Topic
This shall contain a clear descriptive title that should ease an efficient wiki search. For example:
OpenStack Neutron plugins for an SDN controller.
Concept description
This section shall describe the integration topic’s particular functionality and define what
constitutes a successful integration. The aim is to produce a concept diagram, and to note
whether other approaches to the task could be considered in order to meet the integration goal.
Task
An integration task topic explains in detail how to accomplish a particular integration task. This
section shall provide a sequence of steps on how to achieve the goal of the integration topic.
To aid the description, this section also includes multiple sub-sections. Integration topics are the
most important topics in any technical documentation and will become valuable during WP6’s
continuous integration phase.
Task subsections include:
Task Short description (solution, intro, diagram, prerequisites)
Procedure (a sequence of steps)
Procedure Sub section (optional)
Result
Reference
The topic reference section shall provide any other reference material about the integration
functionality. It is to be presented in a bullet list format.
3 http://wiki.cognet.5g-ppp.eu/mediawiki/index.php/Main_Page
Software components
The same structure as the README.md applies, but the wiki entry will also contain a link to the
Git repository.
4.6. Issue Management
The methodology for integration and validation covered in this document must also define
the mechanisms to detect misalignments early and assign responsibilities. Detection will be
addressed by the continuous integration system and the testing activities described in the
following sections, while an issue-tracking system shall provide CogNet with a clear, centralised
overview of the features under development and their functional state. It shall provide a
valuable consolidated overview and record the operational state of the software components
under development.
4.6.1. System
CogNet’s Redmine issue tracker shall be used for software development bug management.
Redmine allows issues to be tracked at a CogNet work package level; it records issue properties
such as status, priority, creation date, description, owner and time worked on the issue. For
example, the issue management system for WP3 is at the following URL:
http://redmine.cognet.5g-ppp.eu/projects/cognet-wp3/issues/new
4.6.2. Responsibilities
Defined in Table 4-1 is the list of responsibilities; it shows the test validation roles, their phase
and the corresponding work package.
Test Phase           Role                             Work Package
Unit Testing         Software Developer, Unit Tester  WP3, WP4, WP5
Integration Testing  Integrator                       WP6
System Validation    Integrator                       WP6
Table 4-1 Responsibilities table
5. General Strategy
In order to reach the CogNet goals in an affordable way, considering timing and assets, a
strategy must manage the different activities pivoting around the development, integration and
evaluation (testing and validation) steps.
Moreover, the main driver will be the scheduled iterations of the integrated platform (M20, M28)
and their paired releases of the demonstrators, which validate CogNet’s effectiveness in the
project. This way, the steps have a common goal and a timeframe closely related to the
demonstrators, which gather activities with an active interest in overcoming the challenges
present in a representative set of scenarios from D2.1 - Initial scenarios, use cases and requirements.
Therefore, it makes sense to connect the overall strategy to the demonstrators and their planned
milestones and, in addition, to adopt a continuous integration regime with an iterative
deployment and testing loop, whose tools can support conflict mitigation and guide the
cooperation of the different teams developing components.
5.1. Use Cases and Scenarios analysis
The initially identified use cases and scenarios were described in D2.1 deliverable. These are
presented in Figure 3, together with the foreseen challenges.
Figure 3 CogNet challenges, use cases and scenarios
In this initial approach, 6 use cases, 11 scenarios and 6 challenges were identified. It is not
realistic to test all combinations of use cases, scenarios and challenges; it is better to focus on
some key scenarios where these use cases can be demonstrated. As explained in Section 1,
CogNet will not produce a monolithic/vertical solution, but a set of tools to improve network
management using machine learning. Here, each tool will tackle at least one use case in a certain
scenario.
The different use cases and scenarios described in D2.1 represent the challenges met in CogNet
and ground the proposed solution in a specific problem, bringing a set of metrics to assess the
resulting score. In other words, these scenarios establish the final features that must be achieved
to obtain a standout outcome.
5.2. Iterations
Development, integration and evaluation are driven by different iterations into which results are
re-injected. Each iteration will consist of a specification and design, development, integration
and evaluation phase. The purpose of the methodology is to build up a solution on an
incremental basis. This strategy eases the participation of the different partners and fosters the
mutual understanding needed to co-create research and innovation outcomes.
The project will push to have a first early prototype for quick testing and concept validation.
Functionalities and services will gradually be built into this alpha prototype. As the demonstrators
are the main catalyst for development consolidation, assessment and support to decision making
there are two major releases planned in the Work Plan:
First official release of the integrated platform and performance reports – first suite of
software and tools for evaluation of WP3, 4 and 5 results, plus demonstrations of key
applications of the core technology (Deliverable D6.2, due month 20). First release will
sustain the architecture, designed in WP2, through the development, integration and
deployment of the different core modules bringing a whole solution for network
management optimization.
Final release of the integrated platform and performance reports – final suite of software
and tools for evaluation of WP4 and 5 results, plus demonstrations of key applications of
the core technology (Deliverable D6.3, due month 28). It provides space to adopt
conclusions from the first iteration’s evaluation, aiming at further maturity. This means
tuning and expanding the solution to achieve the updated features and target
performance established by the metrics (KPIs) of the demonstrators.
5.3. Continuous Integration companion
Traditionally developers compile and test their software modules locally and when they decide
that they have a significant contribution they upload it to the software versioning system. This
procedure is usually followed without interruptions for several months, incrementally generating
new code. The problem is that each module has grown independently, without checking the
compatibility with other modules. When the integration time comes, incompatibilities and other
integration problems arise. The main aim of Continuous Integration is to prevent critical
integration problems, promoting the integration as part of the entire development cycle.
Continuous Integration, in its simplest form, involves a tool that monitors your version control
system. If a change is detected, this tool automatically compiles and tests your application. If
something goes wrong, the tool immediately notifies the corresponding developers so that they
can fix the issue. This typical Continuous Integration workflow is depicted in Figure 4.
Figure 4 Continuous Integration cycle (stages shown: development, commit, source control,
build, testing, test report)
However, Continuous Integration is much more than a simple modification in the development
cycle. Continuous Integration reduces integration risk by providing faster feedback. It is designed
to help identify and fix integration and regression issues faster, resulting in a reduction of the
delivery time, reducing the number of bugs and achieving superior overall quality. By providing
better visibility on the state of the project it can open and facilitate communication channels
between team members and encourage collaborative problem solving and process improvement.
Furthermore it automates the deployment process reducing time to market in a reliable way.
To this end, it is mandatory to design an efficient coordination of integration activities, including
validation steps aimed at ensuring correctness and performance, on a continuous, iterative
basis. This means marking the work performed on the CogNet solution with defined milestones
through development and validation checkpoints. This strategy enables a release roadmap to
structure and perform work at project level and progressively increase solution maturity, with
the final objective of producing solution packages for industrialisation and deployment.
There are development frameworks that couple ongoing development and validation activities to
verify the individual functionality and ensure an overall solution release. This way, integration and
validation become complementary thanks to automatic build, unit test and deployment steps.
CogNet plans to deploy a continuous integration framework, based on Jenkins and described in
the Section 6, to homogenize the development, evaluation, integration, testing and validation
steps.
5.4. Teams communication
For communication among the partners’ teams, there is a two-speed mechanism:
The periodic one, established along with the iterations of the system components and
the demonstrator development and evaluation plan (specific to each demonstrator).
The spontaneous one, based on the Continuous Integration system, which triggers emails
among partners along with the development, integration and evaluation activities.
6. Integration of Components
The eleven partners participating in the project, and their respective teams, will develop and
integrate different components to provide a reliable solution for optimising network
management. This section establishes guidelines, closely related to the integration of the
different components that build the CogNet solution, in order to make the cooperation of the
different teams involved in the project more efficient.
6.1. Continuous Integration
As explained in previous sections, in cooperative projects the efficient coordination of system
integration and validation becomes crucial. A continuous iterative regime speeds up the delivery
of software by decreasing integration times.
6.1.1. Platforms
6.1.1.1 Jenkins
Jenkins4 is an open source continuous integration tool. Jenkins provides a system for developers
to integrate changes to the project and obtain a fresh build.
Jenkins offers the following features:
Easy installation: Just java -jar jenkins.war, or deploy it in a servlet container. No
additional install, no database.
Easy configuration: Jenkins can be configured entirely from its web GUI with extensive
on-the-fly error checks and inline help. There's no need to tweak XML manually.
Change set support: Jenkins can generate a list of changes made into the build from
Subversion/CVS.
Permanent links: Jenkins gives you clean readable URLs for most of its pages, including
some permalinks like "latest build"/"latest successful build", so that they can be easily
linked from elsewhere.
RSS/E-mail/IM Integration: Monitor build results by RSS or e-mail to get real-time
notifications on failures.
After-the-fact tagging: Builds can be tagged long after builds are completed.
JUnit/TestNG test reporting: JUnit test reports can be tabulated, summarized, and
displayed.
Distributed builds: Jenkins can distribute build/test loads to multiple computers.
4 http://jenkins-ci.org/
File fingerprinting: Jenkins can keep track of which build produced which jars, which
build is using which version of the jars, and so on, which is ideal for tracking dependencies
across projects.
Plugin support: Jenkins can be extended via 3rd party plugins.
OpenStack support5: Jenkins OpenStack Cloud Plugin provides deployment capabilities
compatible with OpenStack.
Jenkins is distributed under an MIT license. Moreover, an instance can be deployed on one's own
infrastructure (on-premise) or contracted as a service (in the cloud), in the same way Git is offered.
The CloudBees6 Jenkins Platform enables continuous delivery (CD) and continuous integration (CI)
powered by the Jenkins open source IT automation tool. Its business model7 has two rates: one
fixed per month based on the number of users, and an extra variable rate based on the hours used.
Based on the previous experience of different partners with this technology, the CogNet project
will use Jenkins as a central part of development, testing, deployment and validation.
CogNet will use a dedicated continuous integration build machine instead of contracting it as a service.
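As an illustration of how a build machine like this can be driven, Jenkins exposes a remote-access API where a POST to /job/&lt;name&gt;/build queues a build. The sketch below only constructs the authenticated request; the host, job name, user and API token are hypothetical placeholders, not actual CogNet infrastructure.

```python
from base64 import b64encode
from urllib.parse import quote
from urllib.request import Request, urlopen


def build_trigger_request(base_url, job_name, user, api_token):
    """Build an authenticated POST request that asks Jenkins to queue a build.

    Jenkins queues a build when it receives a POST on /job/<name>/build.
    """
    url = "%s/job/%s/build" % (base_url.rstrip("/"), quote(job_name))
    credentials = b64encode(("%s:%s" % (user, api_token)).encode()).decode()
    # An empty POST body is enough for a parameterless job.
    return Request(url, data=b"", headers={"Authorization": "Basic " + credentials})


if __name__ == "__main__":
    # Hypothetical CI machine and job name, for illustration only.
    req = build_trigger_request("http://ci.example.org:8080", "cognet-build",
                                "integrator", "s3cr3t-token")
    print(req.full_url)   # http://ci.example.org:8080/job/cognet-build/build
    # urlopen(req)        # would actually queue the build on a real instance
```

The same pattern extends to `/job/<name>/buildWithParameters` for parameterized jobs.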
6.1.1.2 Travis-CI
Travis-CI8 is another open source platform for testing and deploying developments synced with
GitHub projects. It can also be contracted as an online service, with a business model based on
the number of concurrent jobs9.
6.1.1.3 Drone.io
Drone.io10 is an online platform that provides continuous integration services in the cloud. It
integrates directly with repositories hosted on GitHub, so the latest source code can be built
and tested automatically every day. This produces a report indicating which modules are
passing their tests and which are not. Its pricing plans are based on the number of projects,
billed monthly.
6.1.2. Structure
This section defines a set of components, coming from the current architecture design from WP2,
listing the different entities to detect commits, build, test, stage and deploy.
5 https://wiki.jenkins-ci.org/display/JENKINS/Openstack+Cloud+Plugin
6 https://www.cloudbees.com/products
7 https://www.cloudbees.com/products/jenkins-cloud#pricing
8 https://travis-ci.org/
9 https://travis-ci.com/plans
10 https://drone.io/
Figure 5 CogNet Architecture from WP2
The complete control loop of the CogNet solution performs data capture, data processing
and decision recommendation in sequence. This way, the deployment and integration of an
existing virtualized infrastructure with the CogNet solution is limited to data ingest and the
policy recommendation, embodied in two APIs that decouple the CogNet solution from the
analysed and optimized system.
Figure 6 CogNet overall dataflow
According to the previous design the list of components to be continuously integrated would be:
CogNet Measurement
o Network by means of SDN controllers, switches, VNFs, running applications, etc
o Infrastructure of system X, Y or Z
CogNet Smart Engine
o Feature Selection and Extraction
o Data Aggregation, Cleaning, Normalisation, Transformation, Filtering
CogNet Optimizer
o Function Generation
Optimisation
o Policies Generation
Semi-automated
CogNet Policy Repository
CogNet Policy Distribution
o Policy Adaptor
o Policy Publisher
CogNet Policies Execution
OSS/BSS/VTN
WP2 aims to define and set up the CogNet framework and architecture. In this regard, WP3,
WP4 and WP5 aim to study and develop proofs of concept for the different components identified
within WP2. WP6 will incrementally and continuously plug the different components into play
on top of a common CogNet system.
The aim is to achieve a complete dataflow including the interfaces and the connections between
the different components. Integration and testing will thus be conducted alongside a Jenkins
deployment that builds, deploys and tests the different components involved.
6.1.3. Procedures
Development and testing organizations need agile practices that allow development teams to
detect problems earlier. Continuous Integration platforms aim to verify code shared in
repositories through automated build and test tasks. To this end, a widely adopted set of
steps creates jobs that run automated scripts either nightly or after every commit:
1. Download code or binaries (development infrastructure). This should include:
a. Source Code / Binaries
b. Build Scripts (when code is available)
c. Unit Tests (step 3 - no interdependencies)
d. Automated Functional Tests (step 5 - inter-dependencies)
e. Environment Configuration scripts.
2. Build (if no building/preparation for testing). It involves creating automated build scripts
using tools like Maven, Ant or Gradle.
3. Unit Tests (no interdependencies). Develop automated unit tests, add them to the
Continuous Integration server and run them as part of every build, aiming for full
coverage.
4. Deploy (deployment infrastructure). It manages automated environment configuration and
provisioning. It also gives engineers an opportunity to easily replicate instances.
5. Unit Tests (inter-dependencies). It checks for dependencies availability, web and database
server availability, validity of test usernames and passwords (credentials), etc.
6. Feedback mechanism. It automatically sends emails notifying the concerned parties
about build or test failures. Developers thus learn as soon as possible whether their code
broke the build or anything else during integration or regression testing.
7. Revert deployment (last working “binary”/release). It runs automated regression testing.
This allows engineers to quickly discover whether a check-in broke something that previously
worked and enables them to fix it almost in real time.
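The steps above could be sketched as a small driver script that a nightly job executes. The stage names and commands below are illustrative placeholders, and the command runner is injected so the control flow (stop at the first failure, then notify and revert) can be exercised without the real tools.

```python
import subprocess

# Ordered CI stages; the script names are hypothetical placeholders.
PIPELINE = [
    ("checkout",          ["git", "pull"]),
    ("build",             ["mvn", "package"]),
    ("unit-tests",        ["mvn", "test"]),
    ("deploy",            ["./deploy_staging.sh"]),
    ("integration-tests", ["./run_integration_tests.sh"]),
]


def run_pipeline(stages, runner=subprocess.call):
    """Run stages in order; stop at the first failure so that feedback can be
    sent and the last working release restored (steps 6 and 7 above)."""
    completed = []
    for name, cmd in stages:
        if runner(cmd) != 0:
            return completed, name   # failed stage triggers notification/rollback
        completed.append(name)
    return completed, None


# A fake runner makes the control flow testable without the real tools:
fake = lambda cmd: 1 if cmd == ["mvn", "test"] else 0
done, failed = run_pipeline(PIPELINE, runner=fake)
print(done, failed)   # ['checkout', 'build'] unit-tests
```

In practice each stage would be a separate Jenkins job step rather than one script, but the stop-on-failure ordering is the same.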
Concerning other technologies involved in CogNet such as OpenStack and Spark, there are two
alternatives.
1. Setup and Install Jenkins OpenStack and Spark plugins.
2. Use Puppet11 to trigger scripts in Python, Perl and Bash to handle various aspects of the
installation.
a. OpenStack modules are installed from a Jenkins job invoking a Puppet process.
b. Spark is installed from within a Python script called from Jenkins. It calls other
scripts in Bash and Python to install the Spark Master and Spark Workers.
Depending on the effort required to work with the baseline Jenkins plugins, CogNet will choose
one of these alternatives.
6.2. Standards
Standards play an important role in integration, making the connection of different components
more widely supported, documented and verifiable. Standards are widely adopted and proven to
significantly improve agility in IT solutions; hence, standards adoption reduces time-to-market.
Moreover, best practices and standards provide the blueprint for effective and efficient
integration operations, guiding the integration management of components across complex,
multi-partner environments.
This way, through standards, the different components:
11 https://puppetlabs.com/
gain independence of concrete setups or environments by means of clear interfaces with
defined entities and attributes;
establish a common design, architecture and language that ensure interoperability on
implementation thanks to the included processing perspective;
increase flexibility to wider environments and vendors.
Going deeper, the standards bring universal formats and workflows that catalyze development
integration due to the normalization of data structures and the harmonization of protocols.
First, IT standards such as W3C12 and OASIS TOSCA13 improve agility and portability in IT
solutions across infrastructures regardless of the underlying platform or development framework.
Second, architectures, descriptors and protocols coming from the European Telecommunications
Standards Institute (ETSI) and the Internet Engineering Task Force (IETF) provide a common
operational and communication model.
5G will impose changes not only in the radio access network but also in the core network. For the
core, a new approach to network design based on decoupling hardware from software and
moving network functions towards software aims to provide connectivity to a growing number of
users and devices. Software-defined networking (SDN), which is being standardized by the Open
Networking Foundation (ONF), assumes the separation of the control and data planes.
Consequently, thanks to centralization and programmability, the configuration of forwarding
can be greatly automated.
Standardization efforts aiming at defining network function virtualization are being conducted by
multiple industrial partners, including network operators and equipment vendors within the ETSI.
Introducing a new software-based solution is much faster than installing an additional specialized
device with a particular functionality, while it improves network adaptability and makes the
network easily scalable. So, with simpler operation, new network features are likely to be
deployed more quickly.
A logically centralized network intelligence can program the network control directly without
needing to take care of the underlying infrastructure, which is completely abstracted from
applications and network services. YANG14, a data modeling language defined by the IETF, is
used for the Network Configuration Protocol (NETCONF)15, which provides XML-based mechanisms
to install, manipulate and delete the configuration of network devices. Thus, networks are
transformed into flexible, programmable platforms with the intelligence to dynamically meet
performance targets and react to degradation symptoms.
12 https://www.w3.org/
13 https://www.oasis-open.org/committees/tosca/
14 https://tools.ietf.org/html/rfc6020
15 https://tools.ietf.org/html/rfc6241
In the field of innovation in the areas of SDN, Network Function Virtualization (NFV) and
hybrid network infrastructure management, different standards play a major role. On the one
hand, the specifications for network management systems, like ETSI NFV MANO and others
compiled in D7.6 – “Preliminary Report into Proposed Standardization Activities”, establish
architectures, stacks and workflows. On the other hand, some open source approaches such as
OpenStack16, OpenMano17, OpenBaton18 or OPNFV19 clearly define interfaces and dataflows that
must be considered by the integration activities.
Moreover, solutions like the OpenFlow20 open standard deploy innovative protocols in production
networks by means of a communications interface defined between the control and forwarding
layers of an SDN architecture. Its key function is to enable direct access to and manipulation of
the forwarding plane of network devices (e.g. routers, switches) by moving the network control
out of the networking switches and into logically centralized control software.
6.3. Licensing
Terms and conditions outlined in the Consortium Agreement provide guidelines on how
intellectual property rights, access rights and dissemination of results are to be exercised. During
the integration phase, two noteworthy areas become relevant: the first being the
dissemination of results, and the second software licence and sub-licensing rights.
Dissemination of results
Every effort is to be made to protect both integration and system test results where those results
are to be safeguarded for industrial or commercial exploitation. Secondly, the results that CogNet
generates are to be exploited for further commercial purposes, or through licensing
deals/partnerships that allow exploitation by other entities. Thirdly, partners are to disseminate
the results they own as soon as possible and by appropriate means, for example white papers,
journal entries, etc.
Software licence and sub-licensing rights
Obtaining access rights to use IPR, project results and information generated by the project in the
testing and validation phase is subject to software licensing and sub-licensing agreements. Also
defined is the right of consortium members, in particular software developers, to sublicense
source code solely for the purpose of error correction, maintenance and/or support of the
software within the project.
This licensing model should consider the compatibility between different types of licences, such
as different open source licenses and the proprietary licenses of the partners' code.
16 http://www.openstack.org/
17 https://github.com/nfvlabs/openmano
18 http://openbaton.github.io/
19 https://www.sdxcentral.com/listings/opnfv/
20 https://www.opennetworking.org/sdn-resources/openflow
7. Prototype evaluation
7.1. Overall Methodology
To ensure project success, it is crucial to define and execute appropriate testing and validation
activities proving that the project solution is feasible, applicable to different operational contexts
and will bring the expected performance benefits.
Deliverable D2.1 – “Initial scenarios, use cases and requirements” defines a set of requirements
and the metrics to be considered: its section 6.1.1 “Evaluation Metrics Definitions” lists a set of
metrics grouped into the following categories, and its section 6.1.2 “Scenarios Evaluation” relates
these metrics to the scenario evaluation. The levels and thresholds listed there establish the
target features to be achieved and validated.
Figure 7 CogNet Evaluation Metrics Categories
This way, it establishes different categories pivoting around:
Machine Learning, used to evaluate the accuracy of algorithms developed.
Network, from node and link to end-to-end features. For instance packet loss, end-to-
end delay, jitter, overall network load, and others.
System Performance, scoring business features, response time, scalability, availability,
reliability and operational cost.
Mobile Telecom (Quality), end user quality provided by the Telecom Operator.
The evaluation of the metrics aims to compare the measured values against the expected ones.
This way, quantitative requirements, covering both the operational and business models, are
assessed in the same equation together with features coming from the network and metrics
coming from the service.
To this end, a set of steps are being defined:
1. Development of the measurement capability for the related metrics. This means that some
agents must be added to the network to collect data from the nodes and links, while the
service clients must generate logs related to end-to-end QoS.
2. Specification of test cases using the scenario descriptions as a basis. This means defining
concrete experiments to assess the metrics in the demonstrators according to the
requirements defined in D2.1.
3. Realization of the test cases. This means preparing, setting up and executing the experiments
that assess the metrics in the demonstrators according to the requirements defined in D2.1.
4. Collection of the evaluation results and comparison against the thresholds specified in the
requirements. This means reporting the results so that conclusions can be re-injected into
the next iteration.
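The comparison in step 4 can be illustrated with a small routine; the metric names, operators and threshold values below are hypothetical stand-ins for the levels actually defined in D2.1.

```python
def evaluate(measured, thresholds):
    """Compare measured metrics against (operator, limit) thresholds and report
    which requirements pass, so conclusions can feed the next iteration."""
    ops = {"<=": lambda v, t: v <= t, ">=": lambda v, t: v >= t}
    report = {}
    for metric, (op, limit) in thresholds.items():
        value = measured.get(metric)
        # A metric that was never measured cannot pass its requirement.
        report[metric] = value is not None and ops[op](value, limit)
    return report


# Hypothetical thresholds, standing in for the levels listed in D2.1.
thresholds = {"packet_loss_pct":  ("<=", 1.0),
              "e2e_delay_ms":     ("<=", 150),
              "availability_pct": (">=", 99.9)}
measured = {"packet_loss_pct": 0.4, "e2e_delay_ms": 180, "availability_pct": 99.95}
print(evaluate(measured, thresholds))
# {'packet_loss_pct': True, 'e2e_delay_ms': False, 'availability_pct': True}
```

A failing entry in the report identifies exactly which requirement the next iteration must address.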
The testing activity ranges from the functional and technical verification of the CogNet
components individually to that of the overall system. To this end, unit tests and test-card
templates will be used respectively. Test-card templates provide a coherent and organized way to
describe tests, gather results and ensure that experiments are replicable. This means that, as long
as the same stimulus can be injected into the managed network, the results must be coherent and
consistent. When only similar stimulus patterns can be created, the accuracy of replicability
comparisons over time will be reduced.
7.2. Unit tests
Research activities from WP3, WP4 and WP5 in generating CogNet components will need to be
individually tested to maximize the success of integrating software components on the same
system. This section focuses on unit tests before any kind of integration of the prototypes.
Moreover, it proposes tools that might be used during the testing process.
Unit testing refers to the verification of the functionality of the lowest-level units of software
that can be tested independently. In most cases, unit tests refer to class-level tests (a whole
class or separate class methods).
The main goals of unit testing are:
Identify whether units conform to their specified functionalities
Find and correct bugs as soon as possible
Ensure that the code is as bug-free as possible
Unit testing is an important part of the system validation and integration process. Taking as an
example two modules that are tested together without the prior execution of unit tests, a test
failure can be caused by a failure in module 1, a failure in module 2, failures in both modules,
a failure in the interface between the components, or a failure in the test code. If both modules
have been previously unit tested against their expected functionalities, the possible causes of
test failure are narrowed down to a failure in the interface between the components or a failure
in the testing code.
Unit testing is applied to the stand-alone module, not yet integrated with the other modules
of the system. It is commonly accepted to consider a unit as the smallest testable part of the
software. Unit tests are executed by the developers of the CogNet components, and single local
instances of the components may be used to run the defined tests in an isolated environment.
Moreover, automated testing tools will be used for unit testing. Note that during the unit
testing phase more than one test iteration is needed in order to eliminate possible bugs in
the developed software.
The activities related to unit testing include the definition and preparation of unit tests and the
needed input data, the set-up of the unit-testing environment, the execution of tests and the
reporting (collection and processing) of the test results. Note that, for the execution of unit tests,
the interactions with other components should be simulated by dummy components (mocks).
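In Python, such dummy components can be simulated with unittest.mock. The sketch below is illustrative: the PolicyDistributor class and its publisher interface are hypothetical examples, not actual CogNet components.

```python
import unittest
from unittest.mock import Mock


class PolicyDistributor:
    """Hypothetical unit under test: forwards non-empty policies to a publisher."""

    def __init__(self, publisher):
        self.publisher = publisher

    def distribute(self, policy):
        if not policy:
            return False
        self.publisher.publish(policy)
        return True


class PolicyDistributorTest(unittest.TestCase):
    def test_policy_is_published(self):
        publisher = Mock()                     # mock replaces the real component
        dist = PolicyDistributor(publisher)
        self.assertTrue(dist.distribute({"scale": "out"}))
        publisher.publish.assert_called_once_with({"scale": "out"})

    def test_empty_policy_is_dropped(self):
        publisher = Mock()
        self.assertFalse(PolicyDistributor(publisher).distribute(None))
        publisher.publish.assert_not_called()


suite = unittest.defaultTestLoader.loadTestsFromTestCase(PolicyDistributorTest)
print(unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful())   # True
```

The mock lets the unit be verified in isolation: the test asserts on the calls made to the dependency rather than on a deployed component.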
7.2.1. Functional / Error
The different aspects considered here are:
Prior specification of each functionality of a piece of software is required for this type of
test.
Functional tests check a particular feature for correctness by comparing the results for a
given input against the specification.
Side effects, like intermediate results and data or performance impacts, are not part of this
test; they will be considered in other test sets.
7.2.2. Connectivity / Timeout
These tests are mainly oriented to interface testing. This subcategory of tests focuses on
interconnectivity with different types of elements, such as endpoints (databases, web services) or
intermediate elements (network functions). It includes validation of different features: protocol
stack implementations, protocol handshakes involving timers, and the correct handling of error
and timeout events to avoid wrong answers, race conditions and, in general, instabilities in the code.
7.2.3. Quality / Ground Truth
Quality tests cover a set of aspects of the developed software that are considered non-functional.
From the point of view of optimal performance, a target score must be established so the obtained
results can be compared with the ideal situation.
Security. Bug-free verification with code auditing tools, or input and output validation
with fuzzing tests; invalid or unexpected data are some examples.
Style Guidelines. Complexity can make code more bug-prone and harder to read and
maintain. In order to produce quality code, style guides are needed. There are examples
and tools in different languages, like the Google style guide for C++21.
Performance. This set of tests tries to evaluate the performance under different levels of
input loads to unit components.
Regression. In general, regression testing refers to the process of reworking, reviewing and
retesting any element that needed modification. This process must include sufficient
retesting to verify that the performed modifications have not impacted other functions
already tested. For minor code changes, the entire set of tests need not be executed,
only the tests affected by the changed functionality.
21 https://github.com/google/styleguide/tree/gh-pages/cpplint
Usability. The aim of usability testing is to increase accuracy and user satisfaction when
performing typical (common) tasks. Users can be end users or other components,
depending on the type of module tested.
7.2.4. Toolsets
There are some reference test frameworks in different languages that are used to automate unit
testing. Shown below are some example frameworks and their corresponding languages.
7.2.4.1 Junit/TestNG
JUnit22 is a Java version of Kent Beck’s Smalltalk testing framework. JUnit is a simple, open source
framework to write and run repeatable tests, and an instance of the xUnit architecture for unit
testing frameworks. JUnit offers several functionalities, like assertions for testing expected
results, test fixtures for sharing common test data, and test runners for running tests.
TestNG23 is a testing framework inspired by JUnit that introduces new functionalities, such as
multithreaded validation and improved test flexibility.
7.2.4.2 Unittest and nose
Unittest24 is the Python unit testing framework, also known as PyUnit. unittest supports test
automation, sharing of setup and shutdown code for tests, aggregation of tests into collections,
and independence of the tests from the reporting framework. The unittest module provides
classes that make it easy to support these functionalities for a set of tests.
Nose25 expands and complements unittest. Nose collects tests automatically from Python source
files, directories and packages found in its working directory, and also supplies a number of
helpful functions for writing timed tests, testing for exceptions, and other common use cases.
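A minimal sketch of the unittest features mentioned above (a setUp fixture shared by the test methods, assertions, and a runner); the MetricBuffer class under test is a made-up example.

```python
import unittest


class MetricBuffer:
    """Hypothetical unit: accumulates samples and reports their mean."""

    def __init__(self):
        self.samples = []

    def add(self, value):
        self.samples.append(float(value))

    def mean(self):
        return sum(self.samples) / len(self.samples)


class MetricBufferTest(unittest.TestCase):
    def setUp(self):
        # Test fixture: a fresh buffer is created for every test method.
        self.buf = MetricBuffer()

    def test_mean(self):
        for v in (10, 20, 30):
            self.buf.add(v)
        self.assertAlmostEqual(self.buf.mean(), 20.0)

    def test_empty_buffer_raises(self):
        with self.assertRaises(ZeroDivisionError):
            self.buf.mean()


suite = unittest.defaultTestLoader.loadTestsFromTestCase(MetricBufferTest)
print(unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful())   # True
```

The same test cases run unchanged under nose, which would also discover them automatically from the source tree.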
7.3. Test-card template
In this section a template test-card is proposed, to be used as an operational tool across CogNet
testing activities. Each project experiment is mapped onto a test-card based on the proposed
template, which provides the following main benefits:
coherent description of experiments across WP6 activities
coherent collection of results, by providing indication of how the results are gathered
(probes, logs, different outputs), evaluated (metrics) and stored (local/shared repositories)
22 http://junit.org/
23 http://testng.org
24 https://docs.python.org/2/library/unittest.html
25 http://nose.readthedocs.org
by providing a formal indication of how the experiment is executed, it provides for
replicability of the experiments themselves. This will be highly useful when testing
different releases of the CogNet prototype, or when the same experiment must be
executed over different platforms/test-beds
it provides clear indications about the topology, infrastructure and configuration
parameters relevant for each experiment
it provides the mapping of CogNet features to validation activities, thus maximizing the
coverage of project features by WP6 activities
Field Description
Use case Use Case from CogNet reflected in this experiment
Scenario Scenario from CogNet reflected in this experiment
Challenge Specific challenge to be demonstrated in the current test-card
Test-card Id Identification of Test card X.Y.Z
Related test-card Related test-card
Goal Motivation for this experiment (short description)
Prototype version This is the version of COGNET prototype under test.
ML algorithm Which ML algorithm is evaluated
Experiment type
Three types of experiments are foreseen at this stage:
Integration test
Validation test (unit test)
Validation test (end-to-end)
Experiment setup Details on topology, configurable parameters, architecture,
assumptions, setup, etc.
Experiment pre-conditions Description of pre-conditions before experiment execution
Experiment description
This section should be organized in steps, to provide clear
indication to the experimenter of the different steps to
conduct the experiment.
Step 1: ….
Step 2: ….
Step N: …
Experiment post-conditions Description of post conditions after experiment execution
Recorded data and raw
data format
Direct measurements made, measurement points, frequency
of measurements, raw data format
Measured metrics and
evaluation methodology
How metrics are estimated (direct observation, aggregation,
averaging, calculation, formulas)
Results repository Where the results are stored (COGNET redmine, local partner
repository, others …)
Table 7-1 common template for experiment description
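For machine-readable test-cards, the template could also be captured as a small data structure. The sketch below mirrors the fields of Table 7-1 but is not a project-mandated format; the field names are chosen for illustration.

```python
from dataclasses import asdict, dataclass, field


@dataclass
class TestCard:
    """In-code mirror of the test-card template of Table 7-1 (a sketch only)."""
    test_card_id: str
    use_case: str
    scenario: str
    challenge: str
    goal: str
    prototype_version: str
    ml_algorithm: str
    experiment_type: str            # integration / unit validation / end-to-end
    setup: str = ""
    pre_conditions: str = ""
    steps: list = field(default_factory=list)
    post_conditions: str = ""
    recorded_data: str = ""
    metrics: str = ""
    results_repository: str = ""


card = TestCard(test_card_id="X.Y.Z", use_case="...", scenario="...",
                challenge="...", goal="...", prototype_version="0.1",
                ml_algorithm="...", experiment_type="Integration test",
                steps=["Step 1: ...", "Step 2: ..."])
print(asdict(card)["test_card_id"])   # X.Y.Z
```

Serializing such cards (e.g. with asdict and json) would let test-card instances be stored alongside their results in the chosen repository.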
7.4. Software Quality evaluation
CogNet’s software quality shall meet the needs of the project goals by being reliable, well
supported, maintainable, portable, and easily integrated with other tools and software
components.
The integration and validation process shall achieve software quality in four common areas:
bug count, code quality, meeting project goals, and multiple validation phases.
Bug count. A low number of recorded bugs is a valuable metric of how well the project
requirements are satisfied in the code. The number and severity of bugs are measurable
indicators of the quality of the software being released.
Code quality. The quality of the code is its ability to be unit tested, modified and maintained
within the CogNet code base. Unit test coverage tools and static code analysis tools shall be
executed during the integration phase and shall produce metrics that can be used in the
validation process.
Meeting project goals. A key attribute of quality software is that the outcome of the
developed software and integration phases adheres to the overall goals and feature sets
of CogNet.
Multiple validation phases. Another aspect of producing quality project code is to
expose it to different validation cycles, as in performance, load, benchmarking and soak
testing in the test framework.
7.4.1. Coding Standards
In CogNet, software coding quality plays an important part in creating a successful
integration phase and ensures that all characteristics of the proposed architecture fulfil each
work package's needs.
Coding styles and code reviews have been the most common methods of improving code
and software quality. Automated analysis tools introduced at compile and integration time can
aid the code review process. These tools highlight coding issues in the form of warning and
error messages within the continuous integration platform, where the integrator then sets an
acceptable threshold level to allow a successful build release.
Code style analytic plugin tools deployed during continuous integration include PMD26,
CheckStyle27 and pylint28.
7.5. Implementation testing strategy
The testing activities will be conducted not just to verify the correctness of the solution
results and the comprising components, but also their efficiency and other aspects related to
the implementation.
As already stated, the evaluation activities will be conducted in the form of experiments
described using a common test-card template. The following strategy will be used:
Identification of project scenarios to be deployed and mapping over test-bed instances
Mapping of the features of each scenario to a suitable set of experiments. The evaluation
experiments will be divided into two main areas:
o Functional experiments
o Performance/stress/load experiments
Experiment execution and raw results gathering
Data results aggregation and analysis
7.5.1. Functional experiments
Functional experiments will be executed to ensure that the project prototype deployed over the
test-beds is able to perform its overall functions and that the integration of its main components
has been done properly. Functional experiments will evaluate the prototype in the following
main areas:
Modularity
Security
Interoperability
Robustness
7.5.1.1 Modularity
The unit tests foster the creation of a demonstrable interface specification, working with specific
parameters and an observable response.
The unit tests should also aid in the detection of missing or unreachable dependencies on other
components of the CogNet system.
26 https://pmd.github.io/
27 http://checkstyle.sourceforge.net/
28 http://www.pylint.org/
7.5.1.2 Security
Concerning security, the main aspects to be considered are the policies established in D1.2 –
“Data Management Plan and Data Protection Agencies notifications” guiding data management
and protection.
No extra aspects will be considered in this domain apart from the baseline and pre-existing
on-premises security policies.
7.5.1.3 Interoperability
Re-running the unit test checks is an efficient strategy for testing the re-execution of a
component or system with a different setup, covering:
Versatility. Processing different datasets.
Flexibility. Facing different situations or contexts and observing the result in a wider
spectrum of environments.
7.5.1.4 Robustness
These tests must cover two aspects:
Replicability. In this case, unit tests must also be designed and implemented to check for a
consistent response given the same stimulus and environmental status.
Stability. Additionally, tests checking that there is no system degradation after long
execution times allow concluding that the solution is reliable.
7.5.2. Performance experiments
The main outcome of these tests will be to create the records/measures that enable later
comparison of the metrics against the expected values. Measures/records should comply with
the following requirements:
Network Stimulus. Timestamped log relative to the lifecycle of service events (i.e. events
of creation and deletion of sessions)
Network Topology. Timestamped log relative to the topology of the service and network
assets (i.e. media pipeline topology)
Service Performance. Timestamped log relative to the QoE and QoS of the service features
(i.e. jitter, latency, packet loss, etc.)
Errors. Timestamped information log to errors of the service and network assets (i.e.
software failures, hardware failures, etc.)
Reconfiguration. Timestamped log relative to the specific CogNet workflow (i.e. policy
apply, etc.)
Measures shall be generated by the platform enabling:
Summarization. Capture of relevant metric records for a platform application.
Persistence. Storage of relevant metric records for a platform application.
Parsing. Metrics shall be marshalled and stored as string values that can be filtered.
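A possible shape for such records, meeting the summarization, persistence and parsing requirements at once, is a timestamped entry marshalled to a JSON string. The category names and payload fields below are illustrative assumptions, not a prescribed CogNet format.

```python
import json
import time

# Illustrative record categories mirroring the requirement list above.
CATEGORIES = ("stimulus", "topology", "performance", "error", "reconfiguration")


def make_record(category, payload, timestamp=None):
    """Build a timestamped record and marshal it to a string (Parsing
    requirement), so records can be persisted and later filtered."""
    if category not in CATEGORIES:
        raise ValueError("unknown category: %s" % category)
    record = {"ts": timestamp if timestamp is not None else time.time(),
              "category": category, "payload": payload}
    return json.dumps(record, sort_keys=True)


line = make_record("performance", {"jitter_ms": 2.1, "packet_loss_pct": 0.3},
                   timestamp=1456617600.0)
print(line)
# Filtering persisted lines by category is then simple JSON matching:
print(json.loads(line)["category"] == "performance")   # True
```

Appending one such line per event yields a log that is both human-readable and trivially filterable by timestamp or category.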
7.5.2.1 Quality of service scores
Quality of Service (QoS) is widely used; its score is determined by the transport network
design and the provisioning of network access, terminations and connections. The measured
QoS values are compared with the expected values. The most relevant parameters used in the
context of the QoS intrinsic to the network are:
Bitrate: the packet throughput that the service can achieve.
Jitter: the variation in the time packets arrive, due to network congestion and route
changes.
Packet loss rate: the proportion between packets that are lost and packets that are sent.
Latency: the time between an action and its follow-up action, either end-to-end or with
regard to a particular network element.
Other features relevant from the network are packet loss, network load, percentage of
unavailability and number of subscribers.
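These parameters can be derived from timestamped packet logs. The sketch below is a minimal illustration (the log layout and the mean-based jitter estimate are assumptions of this example, not a prescribed measurement method):

```python
def qos_metrics(sent, received):
    """sent/received: lists of (packet_id, timestamp_in_seconds)."""
    recv_times = dict(received)
    # One-way delay of each delivered packet, in send order
    delays = [recv_times[pid] - t for pid, t in sent if pid in recv_times]
    latency = sum(delays) / len(delays)
    # Jitter as mean variation between consecutive packet delays
    jitter = (sum(abs(a - b) for a, b in zip(delays, delays[1:]))
              / max(len(delays) - 1, 1))
    # Packet loss rate: proportion of sent packets never received
    loss_rate = 1 - len(delays) / len(sent)
    return latency, jitter, loss_rate

sent = [(1, 0.000), (2, 0.020), (3, 0.040), (4, 0.060)]
received = [(1, 0.010), (2, 0.032), (4, 0.071)]   # packet 3 was lost
latency, jitter, loss = qos_metrics(sent, received)
```

With the invented log above, the packet loss rate is 1 out of 4 sent packets, and latency and jitter are averaged over the three delivered packets.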
As an example, ITU standards link quality of service and performance at network level in a set of
documents in the Y.1500 series. Specifically, ITU-T Y.1540 defines a set of performance parameters,
from the point of view of IP traffic, that are relevant for quality measurement.
Some of them are highlighted below:
IP packet transfer delay: the time between the ingress event and the egress event. When
several nodes are involved, it can be provided as a mean, a maximum or a variation over the
end-to-end connection.
IP packet error ratio: the ratio of total erroneous IP packet outcomes to the total of
successful IP packet transfer outcomes plus erroneous IP packet outcomes in a
population of interest.
Spurious IP packet rate: the total number of spurious IP packets observed at the egress
node during a specified time interval divided by the interval duration.
IP packet duplicate ratio: the ratio of total duplicate IP packet outcomes to the total of
successful IP packet transfer outcomes minus the duplicate IP packet outcomes in a
population of interest.
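As a worked illustration of these definitions (the counts below are invented, and Y.1540 additionally specifies how the population of interest is delimited), the ratios can be computed from packet outcome counts:

```python
def y1540_style_ratios(successful, erroneous, duplicates, spurious, interval_s):
    """Compute Y.1540-style ratios from packet outcome counts observed
    over one measurement interval (illustrative figures only)."""
    # IP packet error ratio: erroneous / (successful + erroneous)
    iper = erroneous / (successful + erroneous)
    # IP packet duplicate ratio: duplicates / (successful - duplicates)
    ipdr = duplicates / (successful - duplicates)
    # Spurious IP packet rate: spurious packets per second at the egress node
    spr = spurious / interval_s
    return iper, ipdr, spr

iper, ipdr, spr = y1540_style_ratios(successful=9900, erroneous=100,
                                     duplicates=99, spurious=30, interval_s=60)
# iper = 100/10000 = 0.01; spr = 30/60 = 0.5 packets per second
```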
7.5.2.2 Quantitative Efficiency
As defined in the ISO 9001 standards, efficiency is the “relationship between the result achieved and
the resources used”29. So, in our typical case, efficiency can be calculated as the rate between
QoS and cost: in other words, how well the network matches and maintains a good QoS for the
given demand, with adequate resources, at the least cost. Cost is usually
identified as the cost of the network resources allocated (CAPEX) plus operational costs such as
maintenance, procedures and management (OPEX).
In terms of maintenance OPEX, energy efficiency is one of the key measurements to be obtained
and one of the parameters identified in the requirements from D2.1. This aspect is highly important
because it has a direct relation with the business model. In other words, by adding this aspect to
the equation to be optimized, we will establish an operational range that achieves an affordable
infrastructure.
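A minimal numeric sketch of this rate (all figures and the 0-to-1 QoS score are invented for illustration):

```python
def efficiency(qos_score, capex_per_month, opex_per_month):
    """Efficiency in the ISO 9001 sense: result achieved (QoS score)
    over resources used (CAPEX plus OPEX, including energy)."""
    return qos_score / (capex_per_month + opex_per_month)

# Two candidate configurations: the second lowers energy OPEX
# at the price of a slightly lower QoS score.
baseline  = efficiency(qos_score=0.90, capex_per_month=800, opex_per_month=400)
optimized = efficiency(qos_score=0.88, capex_per_month=800, opex_per_month=250)
assert optimized > baseline   # the trade-off pays off in efficiency terms
```

Adding energy cost to the OPEX term is what lets the optimization establish the operational range mentioned above.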
8. Planning and Milestones
A set of development, integration and evaluation activities is carried out for the platform
deployment. The goal is to undertake these activities and provide early and frequent versions of
the CogNet system as it is developed on an iterative basis in WP3, WP4 and WP5, where components
are brought together while managing common data and structures.
Because of the high level of dependencies between the development of the components, their
integration into a system, and the required evaluation of their effectiveness through demonstrators
deployed on top of it, it is necessary to align their timing with the Gantt chart of the project.
8.1. Activities and relation to DOW
This document defines methodologies and mechanisms that apply to activities performed in
WP3, WP4, WP5 and WP6. It is therefore important to clearly map those activities to tasks from the WPs:
The development of the research components is done in WP3, WP4 and WP5.
The development of the demonstrator takes place in T6.4 “Development and Testing of
Demonstrators”.
The unit testing of individual components is carried out in the corresponding WP (3, 4
and 5).
The final set of unit testing in a system deployment is performed in T6.2 “CogNet
platform Integration and Testing”.
The components integration in a system deployment is performed in T6.2 “CogNet
platform Integration and Testing” and T6.3 “Integration with Complementary
Technologies”.
The demonstrator deployment and setup is done in T6.4 “Development and Testing of
Demonstrators”.
The system testing for a demonstrator capturing WP2 metrics takes place in T6.4
“Development and Testing of Demonstrators”.
The demonstrator validation according to WP2 metrics takes place in T6.4 “Development
and Testing of Demonstrators”.
8.2. Iteration deadlines
In order to minimize deviations from the working plan, some internal milestones will lead to early
issue detection so that corrective actions can be taken. They will be:
Definitions: once 25% of the iteration has elapsed, the design should be
completed.
Initial versions: in order to trace the progress, once 50% of the iteration is over, the initial
versions should be ready and released.
Final versions: in order to avoid final sprints, the final version must be released 2 weeks
before the iteration deadline.
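These three checkpoints can be derived mechanically from an iteration window; the sketch below uses invented dates purely for illustration:

```python
from datetime import date, timedelta

def internal_milestones(start, end):
    """Internal checkpoints of one iteration: design complete at 25%,
    initial versions at 50%, final versions two weeks before the deadline."""
    days = (end - start).days
    return {
        "design_complete": start + timedelta(days=round(days * 0.25)),
        "initial_versions": start + timedelta(days=round(days * 0.50)),
        "final_versions": end - timedelta(weeks=2),
    }

milestones = internal_milestones(date(2016, 3, 1), date(2016, 10, 31))
```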
8.3. Development Calendar
The plan consists of two iterations: the first release of the integrated platform and performance
reports at M20, and the final one at M28. Each iteration includes the following phases:
1. Development of components and demonstrator
2. Unit tests of components
3. Integration of components and demonstrator
4. Demonstrator deployment and setup
5. System testing
6. Demonstrator validation reporting
The table below shows the timing of the various phases in the development and integration
schedule.
Phase First Iteration Timing Second Iteration Timing
Development M3-M17 M21-M25
Unit Tests M15-M17 M24-M25
Integration M17-M18 M25-M26
Demonstrator setup M8-M19 M21-M27
System Testing M18-M20 M27-M28
Validation report M19-M20 M28
Table 8-1 Timeline for development, integration and evaluation phases over the iterations
This way, each phase overlaps with the previous and next phases, allowing close
cooperation and keeping margins for feedback.
9. Infrastructures
One major aspect of the activities in WP6 is the infrastructure that will implement and execute
the CogNet solution for the demonstrator. It will employ integrated components and outcomes
from WP3, WP4 and WP5, while following the blueprint architecture provided by WP2, in order to
test and validate the project results.
Here, four logical infrastructures come into play:
1. Forwarding infrastructure. This is the managed infrastructure, the telco operator testbed
including the VNFs to be analysed and optimized. This will behave as the telco operator
infrastructure that needs to be managed (monitored and optimized).
2. MANO infrastructure. This is responsible for managing the virtualization aspects and
forwarding infrastructure.
3. Machine Learning infrastructure. This one analyses data and makes decisions. In order to
cope with a big volume of data, coming from the monitored forwarding infrastructure, a
set of computing resources is needed.
4. Demonstrator Service. This module requests/injects traffic into the forwarding infrastructure
in order to push it to its limits, creating challenging networking contexts with stimuli
coming from a realistic service. It comprises the service servers and clients.
In order to provide a suitable infrastructure, different aspects must be analysed. First, a common
SW stack eases the later integration of all components over the same infrastructure. Second, a
methodology is needed to select, from the ones available, an infrastructure that can deal with the
requirements coming from all the WPs and demonstrators.
Concerning the infrastructure instances, an important decision is that the implemented CogNet
solution must run on a common infrastructure. The experiments coming from the activities in WP3,
WP4 and WP5 and the demonstrators developed in WP6 will therefore be executed on top of the
same WP6 infrastructure.
9.1. Virtualization Stacks
5G needs to program and configure interfaces for the involved assets and instances at massive scale.
This ability makes the network setup adaptable, lets the network scale more easily and allows
quicker deployment. Here, virtualization technologies respond to the need to build
infrastructures where hardware is decoupled from software.
SDN proposes a transition from network configurability to network programmability through
network abstractions, open interfaces and the separation of control and data planes. Meanwhile,
NFV proposes to virtualize network functions (VNFs). However, network complexity still calls for
network management: organizing all the VNF instances under common goals and policies requires
a manager and orchestrator for their life cycle. MANO stands for Management and Orchestration,
setting up, maintaining and tearing down VNFs. Moreover, the
MANO entity communicates with the OSS/BSS (Operation Support System/Business Support System)
of the telco operator. The OSS deals with network management, fault management,
configuration management and service management; the BSS deals with customer management,
product management, order management, etc.
Open-source approaches such as OpenStack, OpenMANO, OpenBaton or OPNFV implement NFV
and MANO stacks. OpenFlow, in contrast, is a communications protocol that gives access
to the forwarding plane by means of a communications interface defined between the control
and forwarding layers of an SDN architecture.
9.1.1. OpenStack
OpenStack30 is an open source solution for cloud management. It has been released under the
Apache 2.0 license. OpenStack is divided into projects in 3 layers:
Core Projects: the projects that give core functionality such as Compute, Networking &
Storage. The OpenStack core projects include:
o Nova31: Compute resources management. It focuses on the compute and storage
aspects.
o Neutron32: Virtual networking. Neutron manages the networking associated with
OpenStack clouds. It is an API-driven system that allows administrators or users to
customize network settings, then spin up and down a variety of different network
types.
o Keystone33: identity and access management. OpenStack has a variety of
components that are OpenStack shared services, meaning they work across
various parts of the software, such as Keystone. This project is the primary tool for
user authentication and role-based access controls in OpenStack clouds.
o Swift34: Object storage. Swift, which was one of the original components
contributed by Rackspace, is a fully-distributed, scale-out API-accessible platform
that can be integrated into applications or used for backup and archiving.
o Cinder35: Unlike Swift, Cinder allows for blocks of storage to be managed. They’re
meant to be assigned to compute instances to allow for expanded storage. The
30 https://www.openstack.org/
31 https://wiki.openstack.org/wiki/Nova
32 https://wiki.openstack.org/wiki/Neutron
33 https://wiki.openstack.org/wiki/Keystone
34 https://wiki.openstack.org/wiki/Swift
35 https://wiki.openstack.org/wiki/Cinder
Cinder software manages the creation of these blocks, plus the acts of attaching
and detaching the blocks to compute servers.
Big Tent Projects36: many more projects that are related to the cloud functionality,
are aligned with the OpenStack development methodology and are approved by
OpenStack.
Candidate/Incubation projects37: new projects that are being proposed by the
community. These are usually projects that would like to be included in the “big tent” and
are in the process.
One of the growing areas in OpenStack is analytics, and there is major interest in monitoring,
root cause analysis and event correlation, as well as in the Big Data technologies required for
such analytics.
For CogNet, the most interesting OpenStack projects are Nova and Neutron. Nova manages
the lifecycle of virtual machines, interoperating via a driver mechanism with an extensible number
of hypervisors. Meanwhile, the ML2 plugin38 of Neutron provides the interface between Neutron and
backend technologies such as SDN controllers, Cisco, VMware NSX and so on. It provides a
framework to utilize a variety of L2 networking technologies simultaneously. The implemented
mechanisms are modular drivers such as Open vSwitch, linuxbridge or vendor-specific
implementations.
OPNFV39 is another industry initiative to create a reference implementation for NFV infrastructure
based on the ETSI NFV definitions. Most of the actual implementation of OPNFV is done by
contributing blueprints and code to OpenStack (and a few other open source projects such as
OpenDaylight40).
9.1.2. OpenMANO
OpenMANO41 is an open source project initiated by Telefonica. It aims to provide a practical
implementation of the reference architecture for NFV management and orchestration proposed
by the ETSI NFV ISG, and is being enhanced to address wider service orchestration functions. The
project is available under the Apache 2.0 license42. The OpenMANO framework is essentially
36 http://governance.openstack.org/reference/projects/
37 https://git.openstack.org/cgit
38 https://wiki.openstack.org/wiki/Neutron/ML2
39 https://www.opnfv.org/
40 https://www.opendaylight.org/
41http://www.tid.es/long-term-innovation/network-innovation/telefonica-nfv-reference-
lab/openmano
42 https://github.com/nfvlabs/openmano
focused on resource orchestration for NFV and consists of three major components: openvim,
openmano, and openmano-gui.
The first component, openvim, is essentially focused on resource infrastructure orchestration,
implementing express EPA (Enhanced Platform Awareness) requirements to provide the
functionality of a Virtual Infrastructure Manager (VIM) optimized for virtual network functions with
high and predictable performance. Although openvim is comparable to other VIMs, like
OpenStack, it provides:
Direct control over SDN controllers by means of specific plugins (currently available for
Floodlight and OpenDaylight), aiming at high performance dataplane connectivity.
A northbound API available to the functional resource orchestration component
openmano to allocate resources from the underlying infrastructure, by direct requests for
the creation, deletion and management of images, flavours, instances and networks.
A lightweight design that does not require additional agents to be installed on the
managed infrastructural nodes.
The functional resource orchestration component itself is controlled by a northbound API, which
is currently suitable to be used directly by network administrators via a web-based interface
(openmano-gui) or via a command line interface (CLI) that eases integration in heterogeneous
infrastructures and with legacy network management systems. The functional resource
orchestrator is able to manage entire function chains that are called network scenarios and that
correspond to what ETSI NFV calls network services. These network scenarios consist of several
interconnected VNFs and are specified by the function/service developer by means of
easy-to-manage YAML/JSON descriptors. It currently supports a basic life-cycle for VNFs or scenarios
(supporting the following events: define/start/stop/undefine). The OpenMANO framework
includes catalogues for both predefined VNFs and entire network scenarios, and infrastructure
descriptions carrying EPA information.
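As a hedged sketch of what such a descriptor conveys (the field names below are invented for illustration and do not reproduce the actual OpenMANO schema), a two-VNF network scenario could be expressed as:

```python
import json

# Hypothetical network-scenario descriptor: two interconnected VNFs.
scenario = {
    "scenario": {
        "name": "firewall-plus-router",    # illustrative name
        "vnfs": {
            "fw1": {"vnf_model": "firewall"},
            "rt1": {"vnf_model": "router"},
        },
        "networks": {                      # how the VNFs are chained
            "fw-to-rt": {"interfaces": [["fw1", "eth1"], ["rt1", "eth0"]]},
        },
    }
}
descriptor = json.dumps(scenario, indent=2)   # JSON form; YAML is equivalent
```

The orchestrator would then drive the define/start/stop/undefine life-cycle of the scenario described by such a document.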
9.2. Methodology
The plan is to match infrastructures, available from the different partners, against requirements
coming from the WPs and demonstrators. The goal is to decide whether we should take one
infrastructure from the partners or hire infrastructure from Bluebox43, Rackspace44 or Amazon45,
all of which are compatible with OpenStack.
One important aspect is that, in order to size the required infrastructure accurately, some
information is needed specifically for each case:
43 https://www.blueboxcloud.com/products/pricing
44 http://www.rackspace.co.uk/cloud/servers/pricing
45 http://aws.amazon.com/ec2/pricing/
1. Demonstrator service. This infrastructure needs the scope of the demonstrator in order to
assess the service assets and the density of clients.
2. Forwarding infrastructure. This infrastructure needs the scope of a specific scenario in
order to create a map of the different topologies that would reflect a telco operator
network behaviour and performance.
3. MANO infrastructure. This infrastructure is attached to the forwarding infrastructure.
Hence, its size and setup has a strong dependency on the previous one.
4. Machine learning. This infrastructure requires the volume and speed of the data to be
processed and also the fields present in the data records and their meaning in order to
gauge the required computing capacity.
In order to avoid blocking the infrastructure decision due to a lack of figures and concrete
specifications, we have decided on an iterative approach, aligned with the iterations of the project.
First, we will limit the dimension of the demonstrators and experiments to the available
infrastructures. Then, once the real scope and dataset complexity are defined, a more accurate
dimensioning will push the project to find a more suitable infrastructure. Finally, the gap between
the employed infrastructure and the specific dataset and demonstrator scope will be minimized.
9.3. Available assets
First of all, it is very important to get an overview of the available infrastructures within the
consortium. To this end, a table has been shared among the partners to compile all the
infrastructures that could be used, together with the different parameters to consider.
Each partner has provided a description of its testbed, comprising the different features relevant
to assessing eligibility:
Main feature. This aspect includes a brief description of some relevant/profitable aspects
related to the infrastructure.
Underneath Technology. It points to the virtualization stack employed in each
infrastructure.
Scale. It defines the HW performance, including CPU chipset, RAM memory and persistent
storage capacity (HDD).
Connectivity. It defines the technology and capacity to access the private infrastructure
from the outside.
Bandwidth. It establishes limits for the IP communication, upload and download rates.
NFV / NS catalogue. This aspect refers to the virtualized network functions and the
network services already deployed and ready to be used.
All these parameters summarize the dimension of each of the infrastructures with objective
parameters suitable to be compared.
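Since the compiled parameters are objective, an eligibility check can be automated. The sketch below is illustrative only (partner names and figures are invented):

```python
# Hypothetical testbed records mirroring the shared table's parameters.
testbeds = [
    {"partner": "A", "stack": "OpenStack", "ram_gb": 96,  "bandwidth_mbps": 10000},
    {"partner": "B", "stack": "OpenStack", "ram_gb": 256, "bandwidth_mbps": 2},
    {"partner": "C", "stack": "Other",     "ram_gb": 512, "bandwidth_mbps": 1000},
]

def eligible(tb, stack, min_ram_gb, min_bw_mbps):
    """Compare one testbed against the requirements of a WP or demonstrator."""
    return (tb["stack"] == stack
            and tb["ram_gb"] >= min_ram_gb
            and tb["bandwidth_mbps"] >= min_bw_mbps)

candidates = [tb["partner"] for tb in testbeds
              if eligible(tb, "OpenStack", min_ram_gb=64, min_bw_mbps=100)]
# Only partner A satisfies stack, RAM and bandwidth together.
```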
The next table summarizes all the testbeds available from the different partners.
Partner Main
feature
Underneath
Technology Scale Connectivity Bandwidth NFV / NS catalogue
NOK Monitoring
System
OpenStack
with CBMS
HP SL with 2+4 blades
each SL360S/Gen7 w/ 2x1.8TB HDDs, 4x10GbE, 96Gb RAM
12cores
TBD - will
require a VPN
to access
TSSG
Bare Metal +
Openflow
enabled
Devices
OpenStack
(Fuel 6.1)
+SVN
Openflow Devices ( 2x pronto 3290 / 4x Arista 7050-T )
12 * Dell C6220 servers
- 64 gigs of ram each
- 6 TBytes of local storage
- 12 TBytes of SAN storage
- Intel E5 240 CPUs.
2 * Dell 720DXs
16 public IPv4
access to /62
IPv6
Shared 10 G
link to NREN
(one hop
away from
Géant).
No
download
limit.
Ethernet Switch (OVS ) SBI
= OVS DB
TURN server ( coTurn) SBI
= confD
OpenStack enabled LB
(Neutron API)
OpenStack enabled
Firewall (Neutron API)
IRT Hardware
Servers OpenStack
2 servers each one will have the following specifications:
(2x) 8 or 10 Core CPU
Hyper-Threading support up to 4x physical core
256 GB of Memory
1TB storage HDD
(4) 1GbE Ports
iLO Chassis Lights Out Management Card
IP/MPLS
2Mbps IRT has provided here
details of HW
infrastructure which will
be used to expose to
project partners IaaS
services (in terms of
virtual machines and
virtual networks). Projects
partners will be granted
(on the basis of current
availability) virtual
resources and access to
them.
No access to physical
resources will be
provided. Any deviation
should be discussed
further with IRT.
Partner Main
feature
Underneath
Technology Scale Connectivity Bandwidth NFV / NS catalogue
FOKUS
+ TUB
Full
virtualization
environment +
OpenBaton
orchestration.
Relevant VNFs
&
benchmarking
tools
OpenStack
1x Controller Dell PE 2900, 2x Quadcore @3.2GHz, 32GB,
2x300GB 15k SAS, 3x 10Gbps NICs
5x Compute Dell PE M620, 2x 2GHz 8-core Xeon, 128GB DDR3,
136GB, 2x 10Gbps NICs
Swift Dell PE 1950, 2x 1.6GHz Quadcore Xeon 12GB 1x 146GB
10k SAS
Monitor Dell PE 1950 1x 1.6GHz Quadcore Xeon 8GB 2x400GB
SAS (HW-RAID)
ITBox Dell PE R300 2.83GHz Quadcore Xeon 4GB 2*147GB SAS
(no RAID)
Storage NetApp-Cluster 5TB scalable
16 public IPv4
addresses, IPv6
access
possible on
demand
50Gbps
OpenBaton orchestrator
5GCore Network, IMS,
M2M, etc.
TID
NFV
experimental
lab
OpenMANO
/OpenVIM
OpenStack
OpenDayLight
NetIDE
Up to two HP9206 switches, OpenFlow 1.1, 10GBase-T
Up to four Dell/HP servers, 2x12 cores, 64 GB memory, 8 10GE
ports
Up to three mini-PCs Jetway JC-125-B, Celeron i5-M350
IPv4 and IPv6
addressing on
demand
VPN tunneling
10 Gbps link
More
capacity
available on
demand
OpenMANO orchestrator
NetIDE core on
OpenDayLight, Ryu,
FloodLight
VNFs for IP/MPLS routing,
firewalling, traffic
monitoring, IDS,
honeypots...
ORA OpNFV OpenStack,
Opendaylight
4 Dell R730 machines with 128 GB of RAM each.
Intel Xeon E5-2600 v3 processors at 1.6 GHz, 6 cores, for 3
machines;
the fourth machine has the same kind of processor with 18
cores working at 2.3 GHz.
Storage capacity is 1 SATA hard disk with 250 GB and two
SSD hard disks with 480 GB.
Another platform with more capacity is planned to be launched at
the end of 2015.
http://paris.op
nfv.fr/horizon
vIMS implemented
Table 9-1 Available infrastructures for testbeds
From this table it becomes evident that all of the testbeds support OpenStack, so a decision was taken to use it as the common software stack for virtualization.
The same exercise has been done for the Machine Learning testbeds, where some partners have different resources available for such specialized tasks.
The next table depicts the new assets that come into play for the Machine Learning activities.
Partner Main
feature
Underneath
Technology Scale COMMENTS
FOKUS HW
3-4x60 cores
IBM HW OpenStack
2 octo-core CPUs - (total 16 cores) running at
at least 2.5GHz
96GB RAM
4TB (can be 8-16 disk drives)
Bonded Gigabit Ethernet
Pending approvals to be used externally.
This could be used as part of Training scalable Machine
Learning Algorithms.
IBM HW
Big Data Applications based
on Cloudera, MongoDB, Inc.,
and Bash
IBM Softlayer (IBM IaaS)
http://www.softlayer.com/big-data
Resources need to be rented, but support could be
provided, which is important.
IBM SW Machine Learning Tools for
Batch and stream processing.
IBM BigInsights
(http://www-03.ibm.com/software/products/en/ibm-
biginsights-for-apache-hadoop)
IBM Infosphere Streams
http://www-03.ibm.com/software/products/en/infosphere-
streams
IBM BigInsights could be used for training models for
offline model building using Big R (middleware
component for distributing R code). Then IBM
InfoSphere Streams will be used for real time scoring of
real live telco data (in CogNet, we suggest in some use
cases to read the stored telco data from disk rather than
live from the telco network due to time constraints).
TSSG HW OpenStack Sahara (Spark,
Hadoop and HDFS)
12 * Dell C6220 servers
64 gigs of ram each
6 TBytes of local storage
12 TBytes of SAN storage
Intel E5 240 CPUs.
2 * Dell 720DXs
UPM HW Spark, Hadoop and HDFS 10 PCs (i5 4-cores, 4GB RAM, 500GB disk) Pending approvals to be used externally
IRT HW OpenStack
2 servers each one will have the following specifications:
- (2x) 8 or 10 Core CPU
- Hyper-Threading support up to 4x physical core
- 256 GB of Memory
- 1TB storage HDD
- (4) 1GbE Ports
- iLO Chassis Lights Out Management Card
IRT has provided here details of HW infrastructure which
will be used to expose to project partners IaaS services
(in terms of virtual machines and virtual networks).
Projects partners will be granted (on the basis of current
availability) virtual resources and access to them.
No access to physical resources will be provided. Any
deviation should be discussed further with IRT.
UNITN HW
2 servers: (a) 4x8CPU, 1TB storage, 128GB RAM (b) smaller-
scale machine to support NVIDIA® Tesla™ K40M GPUs (a) can be used only partially for the project
Partner Main
feature
Underneath
Technology Scale COMMENTS
UNITN SW
Machine Learning Tools and
Libraries for (a) advanced
structural models, (b) DNNs
svlight-tk; libraries and tools based on svmlight, svmstruct,
theano, keras
Table 9-2 Available infrastructures for Machine Learning
These infrastructures add a software stack for the Machine Learning activities. Here, no consensus about Spark46 or Bluemix47 has been reached. The most likely
scenario is development on top of Spark, used by most of the partners, and the adoption of Bluemix as a third-party technology for specific
demonstrators integrating market solutions.
46 http://spark.apache.org/
47 http://www.ibm.com/cloud-computing/bluemix/
9.4. Requirements assessment
This section describes two different requirements aspects related to the development.
First, the requirements for the infrastructure are listed. Second, the current development
framework of each partner is presented, in order to provide a general picture of all the technologies,
languages and frameworks involved.
The next table shows the requirements coming from the different WPs according to coarse
estimations based on past experiments and samples of datasets.
WP | INFRASTRUCTURE | CONNECTIVITY | BANDWIDTH | STORAGE | SW
WP3 | 20 CPUs, 125GB RAM, 400GB SSD + 10TB disk | Internet (ssh) | >= 100Mbps | 10TB | Apache Spark MLlib
WP4 | 30 PCs (i7, 8-cores, 128GB RAM, 10TB disk, 1Gbps), 2 Ethernet switches with 32 ports of 1Gbps | Internet (ssh) | >= 100Mbps | 10TB | Python, Scala, R
WP5 | 24 CPUs, each with 8GB RAM | Internet (ssh) | >= 100Mbps | 10TB | Apache Spark / Hadoop / Storm
WP6 | 13 CPUs (3 Service + 3 ML + 5 Switching + 2 Clients), 200GB RAM, 384GB SSD + 10TB disk | Internet (ssh), Server HTTP 80 | >= 100Mbps | 10TB | Apache Spark
Table 9-3 Infrastructure estimation from each R&D WP
The previous table shows ambitious requirements, so we have decided on an iterative approach,
aligned with the iterations of the project. First, the scope of the demonstrators and experiments
will be limited to the available infrastructures. Then, according to the conclusions, the requirements
will be reassessed in further iterations. In case a bigger infrastructure is required, the consortium
could decide to hire infrastructure on demand for specific iteration periods.
Concerning the development environment, the next table wraps up the different languages and
technologies that each partner plans to use to develop the CogNet solution.
Partner Languages Repository CogNet Repo Schema Infrastructures Additional Comments
TSSG java, C (netconf) Public (among partners) b) WP (more independent
& WP cooperation)
Same Devel &
Deployment
TID
Mostly Python, with
some pieces in Java
and C
Public (among partners)
a) Components from WP2
architecture (more efficient
& functional cooperation)
Same Devel &
Deployment
Repositories so far: based on github
SDKs: OpenStack, OpenMANO, Juju Charms, Eclipse
(with NetIDE plugins)
IBM Python, R & Scala
Mixed (some
components public
some private)
a) Components from WP2
architecture (more efficient
& functional cooperation)
Same Devel &
Deployment
Repo: github
Solutions: Apache, Apache Spark, YARN, OpenStack.
Technologies: Theano, SciKit, numpy, scipy
VIC
ML: python, R
Service Server: C, html
Service Client: C, html
Building: Makefile
Public (among partners)
a) Components from WP2
architecture (more efficient
& functional cooperation)
Same Devel &
Deployment
Repo: github
SDKs: gstreamer
Solutions: Apache, Apache Spark, OpenStack, mongo.?
Technologies: MPEG-DASH, HLS
IRT N/A N/A N/A N/A IRT is not participating in development activities, but
provides support to run unit testing/integration
FHG
C for telco network
Java for orchestrator
Python for monitoring
Mixed (some
components public
some private)
b) WP (more independent
& WP cooperation)
Same Devel &
Deployment
Repository: github/FOKUS private
Solutions: Apache Spark, OpenStack, OpenBaton,
FOKUS Telco components, FOKUS benchmarking tool
TUB
C for telco network
Java for orchestrator
Python for monitoring
Mixed (some
components public
some private)
b) WP (more independent
& WP cooperation)
Same Devel &
Deployment
Repository: github/FOKUS private
Solutions: Apache Spark, OpenStack, OpenBaton,
FOKUS Telco components, FOKUS benchmarking tool
UPM ML: Python, Scala, R,
Java Building: Maven Public (among partners)
b) WP (more independent
& WP cooperation)
Different Devel
& Deployment
Repo: git Solutions: Apache Spark, OpenStack, Yarn,
Hadoop HDFS Libraries: numpy, scipy, sklearn,
NOK
Java for management
tools, Python for
OpenStack code
Mixed (some
components public
some private)
b) WP (more independent
& WP cooperation)
Different Devel
& Deployment Repo: Git, Apache Cassandra
UNITN ML: java, C
Mixed (some
components public
some private)
b) WP (more independent
& WP cooperation)
Same Devel &
Deployment
repo: bitbucket/github; solutions: UNITN ML
components/libraries, external ML libraries (svmlight,
svmstruct, keras, theano)
ORA
ML : R, RStudio;
Deep learning: Python
(pex package Theano)
Public (among partners)
a) Components from WP2
architecture (more efficient
& functional cooperation)
Same Devel &
Deployment
storage: Big Data environment Hadoop?
Big Data language: PIG
Table 9-4 Development environment per partner
One important aspect concerns the CogNet repository and the organization basis for the different
repositories. Here we propose 4 alternatives:
a) Components from WP2 architecture (more efficient & functional cooperation)
b) WP (more independent & WP cooperation)
c) Partner (autonomous & limited cooperation)
d) Demo (autonomous & prototype cooperation)
The position of the partners is divided between: repositories per components from WP2
architecture, with WP6 integration activities pivoting around components; and repositories per
WP, isolating developments with autonomous WP activities, with WP6 integration merging
contexts and setups.
Because part of the testing and validation responsibility is the verification of the functionality of
each component and the assessment of the global efficiency, it makes sense to organize the
development environments around components.
9.5. Test-bed - Infrastructure maintenance
As described in the previous sections, CogNet partners have agreed to supply infrastructure and
facilities to provide support for integration and validation activities. On the basis of CogNet
needs partner infrastructure will be arranged in the form of test-beds where the project artefacts
will be deployed and where the integration of the different CogNet elements and the validation
of overall features will happen.
Even given the early stage of the project, the following limitations are observed:
Infrastructure demands coming from the WPs are ambitious.
Partner resources declared in Table 9-1 “Available infrastructures for testbeds” and Table
9-2 “Available infrastructures for Machine Learning” should undergo stress periods for
testing and validation that are as short as possible.
The following approach is proposed:
Multiple test-beds will run on different partner premises.
A reference platform should be identified to ensure that WP6 activities are replicated
consistently at the different partner premises.
A mapping between WP6 activities and test-beds will be provided at a later stage of the
project.
Identification of experiments which need to run on a short time scale and whose resources
can be freed at a high pace.
Identification of experiments which need to run on a long time scale and which consequently
lock resources for a long time.
Each partner will be responsible for the maintenance of the test-bed installed over its
own infrastructure.
Each partner will ensure that resources (available at a given time) are provided to the
project partners and released when no longer needed by WP6 activities.
At a later stage of the project, the creation of a clear WP6 plan (Gantt chart, spreadsheet) is
recommended to maximize the usage of the test-beds and the number of experiments the
project is able to execute; such a plan should clearly indicate the resources available at partner
test-beds and the time-plan for experiment execution.
10. Complementary technologies
CogNet itself does not span all the possible platforms and technologies. To assure optional
extensibility, some solutions, technologies or infrastructures beyond the CogNet system will be
integrated. This section introduces potential complementary technologies to be utilized in the
project. CogNet aims to apply machine learning techniques for: (i) service demand prediction and
provisioning, which allows the network to resize and resource itself using virtualization, to serve
predicted demand according to parameters such as location, time and specific service demand
from specific users or user groups (Objective 3), (ii) addressing network resilience issues to
identify network errors, faults or conditions such as congestion (Objective 4), and (iii) identifying
serious security issues such as unauthorised intrusion or fraud and liaise with autonomic network
management and policies to formulate and take the appropriate action (Objective 5). The
following initial set of technologies and products presented can aid in complementing the
implementation for achieving the above objectives. However, these technologies might
change/evolve as needed with the progress of the core work packages.
10.1. Specification of candidate complementary technologies
10.1.1. IBM BigInsights for Apache Hadoop
In order to achieve the above-mentioned objectives, the consortium will rely on deploying
Apache Spark to implement the batch layer for model training, which is part of the CogNet
architecture. Apache Spark is an open source big data processing framework built around speed,
ease of use, and sophisticated analytics. Based on the initial results achieved from testing, there
might be a need to complement or replace Spark with IBM BigInsights as the batch
processing middleware layer, moving towards more commercial and business value for
CogNet. IBM® BigInsights™ for Apache™ Hadoop48 aims to help organizations cost-effectively
manage and analyse big data – the volume and variety of data that customers and businesses
create and collect every day – by combining open-source software with enterprise solutions. IBM
BigInsights is also available on Bluemix as Hadoop-as-a-service on the IBM SoftLayer® global
cloud infrastructure49.
BigInsights is based 100 percent on open source Apache Hadoop; however, it extends Hadoop
with enterprise-grade technology including administration and integration capabilities,
visualization and discovery tools, as well as security, audit history and performance management.
Moreover, BigInsights reports on average approximately a 4-times performance gain over Hadoop
48 http://www-03.ibm.com/software/products/en/ibm-biginsights-for-apache-hadoop
49 http://www-03.ibm.com/software/products/en/ibm-biginsights-on-cloud
– the testing involved the SWIM benchmark50. It is designed for a wide range of users, such as
integration developers, administrators, data scientists, analysts and line-of-business contacts. In
addition, it is integrated with the IBM Watson™ Foundations big data platform and comes
bundled with search and streaming analytics capabilities. Finally, it provides built-in Hadoop
analytics capabilities for machine data, social data, text and Big R for gathering insights from the
data in the Hadoop cluster (Reference: ebook: Hadoop in the cloud51).
10.1.2. IBM Infosphere Streams
The consortium identified Spark Streaming as a suitable technology for implementing the speed
processing middleware layer providing (near) real-time processing as part of the CogNet
architecture, because it is open source and compatible with Spark. We might introduce
IBM InfoSphere Streams for implementing the speed layer and compare its latency with Spark
Streaming. Our initial assessment is that Spark Streaming cannot deliver low latency – 0.5 seconds
is the absolute lowest latency. Since Spark Streaming is really micro-batching, we are not sure if it
will qualify for CogNet and 5G requirements. IBM InfoSphere Streams does provide the ability to
handle each record as it arrives, delivering very low latency and very high throughput for
streaming applications.
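The latency gap described above can be sketched numerically. The following is an illustrative calculation only; the batch interval and processing time used are assumed figures, not measurements of either product:

```python
# Illustrative sketch (assumed numbers): why micro-batching sets a latency
# floor. A record that arrives just after a batch opens must wait for the
# full batch interval to elapse, then for the batch to be processed.

def worst_case_latency_s(batch_interval_s, processing_time_s):
    """Worst-case end-to-end latency for a micro-batch streaming system."""
    return batch_interval_s + processing_time_s

# With a 0.5 s batch interval and 0.2 s processing time, latency can reach
# 0.7 s, whereas a per-record engine is bounded only by per-record processing.
```

This is the structural reason a record-at-a-time engine can undercut a micro-batch engine regardless of tuning: shrinking the batch interval reduces the floor but increases scheduling overhead per batch.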
IBM InfoSphere Streams is part of the IBM big data solution that is intended to address the need
for scalable platforms and parallel architectures to process huge amounts of streaming data in
near real time. It is designed to discover patterns in data streams over time intervals of minutes to
hours. IBM InfoSphere Streams is suitable for low-latency and time-sensitive applications, such as
network management and healthcare52. It mainly includes the following components53:
Runtime Environment – an agile development environment consisting of the Eclipse
IDE, the Streaming Live Graph view, a streams debugger, and toolkits to simplify and
facilitate the development of solutions.
Programming Model – this model expresses the target behaviour of streaming
applications in the Streams Processing Language (SPL) of InfoSphere Streams.
These applications are represented as graphs that consist of operators and the streams
between them. The performance of the applications is optimised by InfoSphere Streams
and no effort is required from the user on this issue.
Monitoring Tools and Administrative Interfaces – an efficient and low-latency
monitoring system to track and handle the streaming applications hosted in IBM
InfoSphere Streams.
50 https://github.com/SWIMProjectUCB/SWIM
51 http://www-03.ibm.com/software/products/en/ibm-biginsights-on-cloud
52https://www-01.ibm.com/marketing/iwm/iwm/web/signup.do?source=sw-
infomgt&S_PKG=500005177&S_CMP=is_wp67_opp
53 http://www.ibm.com/developerworks/library/bd-streamsintro/
Both IBM BigInsights and InfoSphere Streams are part of IBM's analytics solution to tackle the
large volumes of data continuously generated by the systems of companies. They are
equipped with the same analytics capabilities, common data formats, and data-exchange
adapters, but InfoSphere Streams focuses on storing and analysing data in motion (streaming
data) whereas BigInsights mainly works on data at rest (inactive data).
10.1.3. Apache SystemML
The consortium is planning to build novel scalable ML algorithms that can fulfil the 5G
requirements on Spark using either Scala, Python or Java with a preference towards Scala due to
the preference from the performance point of view. Besides, the consortium might leverage some
already existing ML models in Spark via the MLlib, Apache Spark’s built-in scalable machine
learning library. Based on the developed ML models and the requirements of the core WPs. The
consortium might complement the developed ML models with SystemML as a complementary
technology for providing distinguishing characteristics including Multiple Execution modes and
Automatic optimization based on data and clusters characteristics to ensure both efficient and
scalability.
SystemML54 is now an incubating solution in Apache. SystemML aims at flexible specification of
machine learning algorithms and automatic generation of efficient hybrid runtime plans on
MapReduce or Spark. ML algorithms55 are expressed in an R-like syntax, which includes linear
algebra primitives, statistical functions, and ML-specific constructs. Moreover, SystemML
introduces a high-level language called Declarative Machine learning Language (DML) for writing
machine learning algorithms. DML exposes mathematical and linear algebra primitives on
matrices that are natural for expressing a large class of ML algorithms, including linear models, PCA,
PageRank, NMF, etc. In addition, DML supports control constructs such as while and for to write
complex iterative algorithms [1].
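As an illustration of the class of iterative, linear-algebra-centric algorithms that DML is designed to express, the following plain-Python sketch (toy data and learning rate are assumptions for illustration; this is not DML or project code) fits a one-dimensional linear regression by batch gradient descent:

```python
# Hedged sketch: a simple iterative ML algorithm of the kind DML targets,
# written in plain Python. Fits y = w*x + b by gradient descent on squared
# error; data and hyper-parameters below are toy values.

def fit_line(xs, ys, lr=0.05, steps=500):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

In a DML script the same loop would be written over matrices with the language's built-in linear algebra primitives, and SystemML would choose single-node or distributed execution automatically.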
10.2. Integration plan
The technologies presented in Section 10.1 can be integrated using the IBM Bluemix platform56.
Bluemix aims to combine any combination of public, dedicated and local Bluemix instances
into a single development and management experience. It provides the ability to integrate
with apps and systems running elsewhere through a secure connection in order to connect to
your environment, transform and synchronize data, and create and expose enterprise APIs to the
Bluemix catalog.
IBM® Bluemix® is the IBM open cloud platform that provides mobile and web developers
access to IBM software for integration, security, transaction, and other key functions, as well as
54 http://systemml.apache.org/
55 http://researcher.watson.ibm.com/researcher/view_group.php?id=3174
56 https://console.ng.bluemix.net/
software from business partners57. It is built on Cloud Foundry58, an open-source
Platform as a Service (PaaS) that abstracts the underlying infrastructure needed to run a cloud,
letting the focus be on the business of building cloud applications. IBM Bluemix is app-centric
and provides PaaS and pre-built Mobile Backend as a Service (MBaaS) capabilities. The goal is to
simplify the delivery of an app by providing services that are ready for immediate use and
hosting capabilities to enable internet-scale development59.
The main advantage of Bluemix is the wide selection of boilerplates, runtimes and services
available for the user to choose from. A boilerplate is a container for an application and its
associated runtime environment and predefined services for a particular domain. A runtime
environment is the set of resources used to run an application, such as Liberty for Java
and SDK for Node.js. Moreover, Bluemix offers a set of services which can be used as
components of the application. For instance, the set of Watson services is presented in Figure 8.
The CogNet consortium, and mainly the WP6 leader, is investigating several alternative options
for showing demonstrators, and Bluemix is definitely one of the options. The exact choice of
Bluemix or another platform to facilitate integration will be decided based on the progress
of the core work packages.
Figure 8 IBM Bluemix Watson Services.
57 https://www.ng.bluemix.net/docs/overview/index.html
58 https://www.cloudfoundry.org/
59 https://www.ng.bluemix.net/docs/overview/index.html
10.3. References
[1] A. Ghoting, R. Krishnamurthy, E. Pednault, B. Reinwald, V. Sindhwani, S. Tatikonda, Y. Tian, and
S. Vaithyanathan. SystemML: Declarative machine learning on MapReduce. In 27th IEEE
International Conference on Data Engineering (ICDE), pages 231-242, 2011.
11. Demonstrator applications
The real outcomes of WP6 are the demonstrators; they conduct the activities of WP6 by
establishing the metrics to be validated. The following sections establish a plan on how to
formulate a candidate demonstrator, whilst describing an initial demonstrator and its component
elements.
11.1. Methodology to find candidate demonstrators
D2.1 – “Initial scenarios, use cases and requirements” establishes a methodology that links the
analysis of datasets, acquired from the implementation of scenarios, to overcoming 5G challenges.
This methodology is illustrated in the next diagram.
Figure 9 D2.1 Methodology concept map
In all of the above, CogNet will strive to reuse data and ML techniques for multiple scenarios. This
will help focus our efforts on quality rather than quantity of testbeds and tools, and yield tools
that are general enough to be used to solve a wide range of problems in the network, hopefully
promoting lean ML techniques and models for the 5G network.
The ML algorithms will be validated against specific 5G challenges. These ML processes will be
tailored to different datasets extracted from specific realistic scenario implementations. So, the
demonstrators should try to provide representative datasets to which different processes targeting
5G challenges can be applied.
Moreover, a relevant aspect of the demonstrators is that they will be subject to real business plans
from the partners. The next table gathers the explicit interest in scenarios and challenges expressed
by the partners in order to define the scope of candidate demonstrators.
According to this, to capture all the expected features from CogNet, the demonstrators need to
meet the requirements coming from a minimum set of scenarios representing all the challenges.
To this end, CogNet has ranked the scenarios according to the interest of the partners to create a
demonstrator around a specific scenario and challenge.
The result of the first poll to record explicit interest in scenarios and challenges (from D2.1) is
depicted in the following table. Its columns cover the D2.1 scenarios: Large Scale Events,
Industry 4.0, Dense Urban Area, Interactive Street Walk, Emergency Comms, Personal Security
Applications, Connected Car, Urban Mobility Awareness, Large Scale Multimedia, Crowd
Detection & Reparation of Network Threats, and Follow the Sun. Each partner marked the
challenges it is interested in for its selected scenarios:
TSSG: Network Security & Resilience; Network Traffic Management; Network Security & Resilience
TID: Network Resource Allocation; Network Traffic Management; Network Security & Resilience
IBM: Network Resource Allocation; Network Traffic Management; Network Performance Degradation
VIC: Network Resource Allocation; Network Performance Degradation; Network Performance Degradation
IRT: Network Resource Allocation; Network Resource Allocation; Network Security & Resilience
FHG: Network Security & Resilience; Network Security & Resilience; Network Security & Resilience
TUB: Network Traffic Management; Network Security & Resilience; Network Security & Resilience
UPM: Network Resource Allocation; Network Resource Allocation; Network Traffic Management
NOK: Network Resource Allocation; Network Resource Allocation; Network Resource Allocation
UNITN: Network Resource Allocation; Network Resource Allocation
ORA: Network Performance Degradation; Network Resource Allocation; Network Resource Allocation
Table 11-1 Scenario and Challenge interest map
The previous table reflects not only the interest around some candidate demonstrators but also
the intended scope according to the tackled challenge.
The CogNet implemented solution will be a common infrastructure to all the different virtualized
infrastructures that will be analysed and optimized. So the experiments coming from the activities
in WP3, WP4 and WP5 and the demonstrators developed in WP6 will be executed on top of the
same WP6 infrastructure. Thus, the aim is to create a unique demonstrator infrastructure that can
be applied to different virtualized infrastructures where different use cases and challenges are
addressed.
As described previously, the main drivers for validation are the scenarios, which bring metrics that
can be evaluated. This validation is performed on top of a demonstrator that conducts the
integration and validation activities. Moreover, a concrete scenario establishes concrete service
demand patterns to inject real traffic into the virtualized infrastructures in order to optimize their
performance under stress conditions.
The next section describes a potential final demonstrator driven by the validation phase. In terms
of the traffic patterns to be injected to stress the virtualized systems to be optimized, it represents
concrete scenarios while spanning different use cases motivated by WP4 and WP5 research.
Furthermore, other candidates will come in the future from the different WPs and partners. They
will be used to conduct the integration of the CogNet platform and they will be designed to
produce performance reports.
11.2. Demonstrator Massive Multimedia and Connected Cars
11.2.1. Description
Vicomtech-IK4 foresees a relevant benefit from the outcomes of CogNet, which will aid the
media service provision sector and the connected car domain by leveraging the new mobility
paradigms underpinned by an efficient network manager. This will provide Vicomtech-IK4 with
new technological and scientific expertise to satisfy future demands.
Real-time and live applications target highly heterogeneous SLA requirements in terms of QoS,
restrictive network conditions, changeable bandwidth, latency, jitter and thresholds for error
resilience.
To meet future performance demands, it is necessary to capture the behaviour of different services
in next-generation delivery networks (5G) and analyse future network problems in order to design
solutions accordingly.
In terms of data streams, estimating the network capacity even for the near future is challenging.
Inaccurate estimates can lead to degraded QoS. If the network capacity is underestimated, the end
point will receive the data at a lower quality than the current network conditions
would allow. On the contrary, if it is overestimated, the end point
requests a bit rate greater than the network capacity, blocking the client processing with waits.
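The trade-off above can be sketched as a simple bitrate-selection rule, as used by adaptive streaming clients. The bitrate ladder and the 0.8 safety margin below are illustrative assumptions, not values from the project:

```python
# Illustrative sketch (not project code): selecting a stream bitrate from an
# estimated network capacity. Underestimating the capacity picks a needlessly
# low rendition (degraded QoS); overestimating picks one above the real
# capacity, causing buffering waits.

BITRATE_LADDER_KBPS = [500, 1200, 2500, 5000]  # hypothetical renditions

def select_bitrate(estimated_capacity_kbps, safety_margin=0.8):
    """Pick the highest rendition that fits under the estimated capacity."""
    budget = estimated_capacity_kbps * safety_margin
    feasible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]
```

For example, an estimate of 3000 kbps leaves a budget of 2400 kbps and selects the 1200 kbps rendition; the safety margin absorbs small estimation errors at the cost of some headroom.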
Three kinds of services will be deployed in order to cover a wider spectrum of cases
corresponding to different traffic patterns:
1. Downstream. In this case, a farm of clients downloads in real time a stream with media
(OTT services like Netflix and the broadcast of live events) or data (traffic status and road
parameters).
2. Upstream. In this case, distributed users upload in real time a stream with media
(UStream, Meerkat or Periscope apps let users live-broadcast their cameras) or data
(signals from car sensors and in-car video cameras).
3. Balanced. In this case, different parties get coupled and produce a similar volume of
upstream and downstream traffic (SIP calls like Skype and Car2Car apps).
This way, two asymmetric contexts (main dataflow between servers and clients) and one symmetric
context (dataflow between participants) span all the different dataflow patterns.
So, the three different services will be deployed, executed and analysed on top of the same
server infrastructure, although not executed at the same time.
11.2.2. Architecture
In all the different services described before, four logical infrastructures come into play:
Figure 10 CogNet Architecture for Massive Multimedia and Connected Car demonstrator
[Figure content: Servers; Clients; Forwarding (routers); MANO (managers and orchestrator); Machine Learning (computation nodes)]
The MANO system will monitor, meter and set up network function services on a virtualized
server-based infrastructure, while the clients and the MANO system report to the ML system the
data to be classified in order to monitor performance and create forecasts. According to these
inputs, the machine learning system decides to trigger management events of scaling (up to
achieve/maintain performance, down to maximize business models) and configuration (to track
dynamic contexts). Finally, these events are matched to network management policies and
applied to the MANO system.
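A minimal sketch of such a scaling trigger follows; the thresholds and event names are hypothetical illustrations, not the project's implementation:

```python
# Hedged sketch (assumed thresholds and event names): mapping an ML load
# forecast to a management event for the MANO system.

def scaling_event(forecast_utilization, scale_up_at=0.8, scale_down_at=0.3):
    """Return a management event from a utilization forecast in [0, 1].

    Scaling up achieves/maintains performance; scaling down maximizes the
    business model by releasing resources that are no longer needed.
    """
    if forecast_utilization >= scale_up_at:
        return "SCALE_UP"
    if forecast_utilization <= scale_down_at:
        return "SCALE_DOWN"
    return "KEEP"
```

The gap between the two thresholds provides hysteresis, so small oscillations in the forecast do not trigger a scaling event on every evaluation.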
This way, the ML infrastructure not only shapes the forwarding infrastructure, but also the servers,
achieving quicker buffer fills and reducing skips, stalls, freezes and stutters while minimizing the
required infrastructure. The aimed capabilities of 5G do not only lie in more capacity, lower
latency, more mobility, increased reliability and availability, and energy efficiency (consuming a
fraction of the energy that 4G networks consume today), but also in reduced service creation
time. Thus, network QoS management in the core and edge strategies in the servers can support
bitrate stabilization in the client.
11.2.3. Scope
The quality of the network experience is an important element in customer satisfaction and
retention. Some technologies are operated by adjusting the play-out rate to stay within the
actual network throughput and device capability. These technologies catalyze QoS solutions for
each connection; however, from the point of view of the infrastructure and the network, a global
optimization for massive media and connected car services must be done. Thus, the network
manager or telco operator needs tools to improve QoS in a 5G environment, such as:
Selection of the most appropriate setup to deliver the best QoS at the best cost (e.g.
policy-based or fully dynamic network selection; support for different layers of QoS with
different levels of cost and service level agreement).
Optimization of traffic when passing across the network (e.g. RAN optimization, specific
encoding optimization tools, management of applications traffic, congestion control).
These tools meet specific network challenges in 5G:
Scalability. Heterogeneous SLA levels collocated with denser device services. CogNet will
compensate extra needs of premium traffic with more flexible ones. Identified groups of
data streams should be prioritized according to SLA or criticality of data.
QoS. Heavier volumes of data bring bigger bandwidth needs, while it is required to keep latency
low and to minimize the skips, freezes and stutters brought by multi-stream switching. CogNet will
identify network errors, faults or congestion conditions that might cause severe
performance degradations, and trigger mitigating actions to minimise the overall impact
on network resilience.
Dynamicity. More spontaneous data consumption patterns that need a more flexible and
rapid answer. CogNet will guide a self-configuration, self-optimization and self-healing
system, shifting from reactive to proactive and from monitoring to forecasting: efficient and
accurate prediction of service demand, provisioning the network accordingly such
that it can resize and resource itself based on certain parameters such as location, time,
and historical data.
Efficient resource management. Accommodate peak delivery needs and even competing
services over the same resources while meeting business costs. Streaming clients
compete with other traffic for bitrate. Clients cannot figure out how much bandwidth to use
until they use too much. CogNet will drive the operating thresholds to keep network
operation inside business ranges.
This demonstrator will develop several technologies that can contribute in achieving the
objectives set in the Massive Multimedia and Connected Cars scenarios:
Service probing: a client-side embedded system of data collection from end devices that
involves capturing and sharing QoS metrics. This will be done from the GStreamer logs of
media players. It includes:
o Latency
o Jitter
o Number of handover/switching errors
o Buffering time
o Session time
Smart client: a client-side extended capacity to exploit the previous metrics, taking
autonomous decisions of switching to other streams with bitrates better adapted to the
network conditions. This will be designed and developed for dynamically adaptive
streaming services over HTTP.
Demand patterns: a client-side automatization of demand generation. The traffic profile
includes the following parameters:
o Average clients volume/population
o Geolocation distribution (physically or by means of subnet masks)
o Number of clients per service
o Percentage of Premium clients
o Session inter-arrival
o Session time
o Session inter-leaving
o Percentage of stream handover (service) / switching (bitrate)
o Frequency of handover (service) / switching (bitrate)
o Percentage of trending stream handover (service)
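A hypothetical sketch of how such demand patterns could be generated, shown here only for the session inter-arrival parameter using an exponential inter-arrival model (a common traffic assumption); the rate value and seed are illustrative:

```python
# Hedged sketch (not project code): generating session start times with
# exponential inter-arrival gaps, one of the demand-pattern parameters above.
import random

def session_arrivals(rate_per_s, n_sessions, seed=42):
    """Return n session start times (seconds) with exponential inter-arrivals.

    A fixed seed makes the generated demand pattern reproducible across
    test runs, which matters for comparing network configurations.
    """
    rng = random.Random(seed)
    t, starts = 0.0, []
    for _ in range(n_sessions):
        t += rng.expovariate(rate_per_s)  # gap with mean 1/rate_per_s
        starts.append(round(t, 3))
    return starts
```

The other listed parameters (population, geolocation, premium share, handover rates) would be drawn analogously from their own distributions to drive the client automatization.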
Streaming services: 3 standard compliant systems to inject contents streamed according
to different dynamic patterns. It includes:
o Downstream. Main dataflow is from the servers to the clients (similar to Netflix
and ADAS services).
o Upstream. Main dataflow is from the clients to the servers (similar to UStream and
Enhanced Navigation services)
o Balanced. Main dataflow is between clients through the servers (similar to Skype
and Car2Car services)
o Among all of them the common parameters will be:
Number of streams
Number of servers
Duplication level
Service infrastructure: a pool of server assets. This responsibility is delegated to
OpenStack.
Forwarding Network: a pool of delivery (routing/switching) nodes. This responsibility is
delegated to OpenStack and vSwitch. It must enable different network setups and
topologies that simulate the behaviour of real large-scale infrastructures.
Forwarding Network and Service infrastructure monitoring: a system of data collection
from nodes. This responsibility is held by OpenFlow and OpenDaylight. It includes:
o RAM utilization
o CPU utilization
o Number of managed connections
o Retransmission factor
o Average throughput
Machine Learning analysis: to develop a system of demand prediction and provisioning.
This will be developed on top of Spark technologies. It will process inputs such as:
o Client metrics (QoS logs for different clients and SLAs over the time)
o Forwarding network metrics (HW and Network interface performance)
o Service infrastructure metrics (HW and Network interface performance)
o Business thresholds (maximum and peak assets and SLAs)
Application of machine learning triggers: to allow the network to resize and resource
itself, using virtualization, to serve predicted demand. This will be integrated with
OpenMANO technology.
Smart network control and management: providing infrastructures with efficient and
flexible provisioning of end-to-end differentiated services by means of significantly
improving operations and network efficiencies. This responsibility is delegated to
OpenMANO setup.
Network Functions Virtualization: enabled by software-defined networking (SDN), it plays
an important role in automatically reallocating resources. This will be the responsibility of
the OpenFlow and vSwitch setup.
All these elements build the demonstrator that Vicomtech intends to develop and deploy,
establishing a clear picture of the data involved, the complexity of the solution and the target
features to be accomplished.
11.2.4. Metrics
From the point of view of demand patterns, these are the different client activities designed to
test the forwarding performance and the optimal service scale that will honour the SLAs.
Different requirements, such as low latency and flat jitter, come into play for live or on-demand
experiences. On top of these, the setup is driven by extra requirements when premium subscriptions
come into play. The considered metrics are:
Service scale: the average throughput and the ratio compared to the theoretical
maximum must be around 90%.
Forwarding efficiency: the average throughput and the ratio compared to the theoretical
maximum must be around 90%.
Initial Delay: the delay between the first client request and the start of the playback.
Stalling Time: the sum of all playback interruptions should be under 2 in 1 hour.
Number of quality switches: the total number of quality switches during the playback
should be under 2 in 1 minute.
Inter-switching times: the time between quality switches should be imperceptible,
under 20 ms.
Latency: minimizing delivery up to 20ms.
Packet jitter: maximum deviation under 20ms.
Service denial: out of time connections maximum 1 per 100 attempts.
Session drop: errors or interrupted connections maximum 1 per 200 playing hours.
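A hedged sketch of how some of these targets could be checked automatically; the helper, its input format and the selection of thresholds follow the list above but are otherwise illustrative assumptions:

```python
# Illustrative sketch (hypothetical helper, not project code): checking a
# measured playback session against a subset of the metric targets above.

def check_session(metrics):
    """Return the names of metrics that exceed their target thresholds.

    `metrics` maps metric names to measured values; the limits follow the
    text (latency and jitter under 20 ms, fewer than 2 quality switches
    per minute).
    """
    targets = {
        "latency_ms": 20.0,
        "jitter_ms": 20.0,
        "quality_switches_per_min": 2.0,
    }
    return [name for name, limit in targets.items()
            if metrics.get(name, 0.0) > limit]
```

Such a check could run over the client-side QoS logs collected by the service-probing component to flag sessions that violate the SLA targets.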
12. Conclusions
A common methodology and set of policies to efficiently develop, integrate and evaluate the
different components involved and the resulting system, promoting common goals, is mandatory
when distributed teams with heterogeneous backgrounds come into play. More specifically, this
document:
defines development directives to conduct developments to common interfaces and
workflows.
schedules integration activities to provide frequent versions with incremental progress
and to detect misalignments and assign responsibilities.
unifies evaluation processes to assess the resulting effectiveness, demonstrating
measurable performance improvements over conventional approaches to network
management systems.
The strategy to undertake demonstrator creation and validation efficiently is based on
frameworks for continuous integration and testing, short iterative development cycles,
and policies for code development. Here, the testing responsibility is the functional and technical
verification of each component, while validation carries out the assessment of the global
efficiency.
CogNet partners have agreed to supply infrastructure and facilities to provide support for
integration and validation activities. On the basis of CogNet needs, partner infrastructure will be
arranged in the form of test-beds where the project artefacts will be deployed and where the
integration of the different CogNet elements and the validation of overall features will happen.
A key WP6 goal is the realization of the WP2 components stack and dataflow following the
provided architecture. One major aspect of the activities in WP6 is the infrastructure that will
implement and execute the CogNet solution for the demonstrator. It will employ integrated
components and outcomes from WP3, WP4 and WP5 in order to test and validate project results.
The CogNet implemented solution will be a common infrastructure to all the different virtualized
infrastructures that will be analysed and optimized. So the experiments coming from the activities
in WP3, WP4 and WP5 and the demonstrators developed in WP6 will be executed on top of the
common WP6 infrastructure.
Last but not least, this document establishes the plan to generate the first candidate
demonstrator. The real outcome of WP6 is the demonstrator; it conducts the activities of WP6 by
establishing the metrics to be validated.
Appendix A. Evaluation Frameworks
and tools
This section provides an overview of the tools that will be used to build a proper framework
where the prototype evaluation activities will be executed. The following definitions will be used
across the description:
Emulation is the process of mimicking the outwardly observable behaviour to match an
existing target. The internal state of the emulation mechanism does not have to
accurately reflect the internal state of the target which it is emulating.
Simulation, on the other hand, involves modelling the underlying state of the target. The
end result of a good simulation is that the simulation model will emulate the target which
it is simulating
In the following subsections, a wide range of tools will be presented, grouped into the following
categories:
Simulation tools
Emulation tools
Traffic generation tools
Network management tools
Probing tools
Monitoring tools
A.1. Simulation tools
A.1.1. Network Emulation
Mininet60 is an OpenFlow network emulator that provides a platform for quick network test-bed
set-up. Nodes are created as processes in separate Linux network namespaces, a lightweight
virtualization feature that provides individual processes with separate network interfaces,
routing tables, and ARP tables. It implements the OpenFlow protocol on Open vSwitch (OVS),
and it can also swap out OVS for other software switch implementations. Using namespaces, it
runs a collection of end-hosts, switches and links on a single host. It shall be utilized in the
integration process to validate topologies and service function chaining.
60 http://mininet.org/
A.1.2. Event Network Emulator
Ns61 is a discrete event simulator targeted at networking research. Ns provides substantial
support for simulation of TCP, routing, and multicast protocols over wired and wireless (local and
satellite) networks.
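The discrete-event principle behind ns can be sketched in a few lines of Python (a pedagogical illustration, not ns code): pending events sit in a time-ordered queue, and the simulator's virtual clock jumps from one event timestamp to the next.

```python
import heapq

def run_simulation(events):
    """Minimal discrete-event loop: process (time, name) events in time order."""
    queue = list(events)
    heapq.heapify(queue)           # order events by timestamp
    log = []
    clock = 0.0
    while queue:
        event_time, name = heapq.heappop(queue)
        clock = event_time          # advance the virtual clock to the event
        log.append((clock, name))
    return log

# Packets scheduled out of order are still processed in timestamp order.
trace = run_simulation([(2.0, "pkt-B"), (0.5, "pkt-A"), (1.2, "pkt-C")])
```

A real simulator additionally lets each event schedule new future events, but the queue-driven clock shown here is the core mechanism.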
A.1.3. Network simulators (Riverbed Modeller, NS3)
Most network simulators are based on event or discrete simulation engines. Riverbed Modeller,
NS2 and NS3 are all network simulators providing sophisticated development environments,
allowing the integrator to compare the impact of different technology designs on end-to-end
behaviour patterns.
A.2. Emulation framework
The main emulation framework across WP6 activities will be represented by test-bed instances
deployed over partner premises. The management of test-beds and resources that are made
available by the partners will be discussed in later sections.
A.3. Tools for traffic generation and probing
This section describes the most representative categories and examples of tools that can be used
for preparing, managing, and obtain results during the integration process and validation phases.
A.3.1. Traffic Generation
D-ITG (Distributed Internet Traffic Generator)62 is a platform capable of producing traffic at the
packet level that can accurately replicate appropriate stochastic processes for both IDT (Inter
Departure Time) and PS (Packet Size) random variables. D-ITG supports both IPv4 and IPv6 traffic
generation and it is capable of generating traffic at network, transport, and application layers.
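The stochastic generation D-ITG performs can be illustrated with a small stdlib-only sketch (names such as `generate_flow` are ours, not D-ITG's): inter-departure times and packet sizes are drawn from configurable random distributions, here exponential ones.

```python
import random

def generate_flow(n_packets, rate_pps, mean_size, seed=42):
    """Draw exponential inter-departure times (IDT) and packet sizes (PS),
    mimicking the kind of stochastic processes D-ITG replicates."""
    rng = random.Random(seed)
    t = 0.0
    flow = []
    for _ in range(n_packets):
        t += rng.expovariate(rate_pps)   # exponential IDT -> Poisson arrivals
        # clamp sizes to the 64-byte minimum Ethernet frame
        size = max(64, int(rng.expovariate(1.0 / mean_size)))
        flow.append((t, size))
    return flow

# 1000 packets at a nominal 100 packets/s with a 512-byte mean size.
flow = generate_flow(n_packets=1000, rate_pps=100.0, mean_size=512)
```

A real generator would then transmit the packets at those timestamps; the sketch only produces the schedule.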
Tcpreplay63 is a suite of tools which allows you to use previously captured traffic in libpcap
format to test a variety of network devices. It allows you to classify traffic as client or server,
rewrite Layer 2, 3 and 4 headers and finally replay the traffic back onto the network.
A.3.2. Performance Measurement
iPerf64 is a tool for active measurements of the maximum achievable bandwidth on IP networks. It
supports tuning of various parameters related to timing, protocols, and buffers. For each test it
reports the bandwidth, loss, and other parameters.
61 https://www.nsnam.org/
62 http://traffic.comics.unina.it/software/ITG/
63 http://tcpreplay.synfin.net/
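A drastically simplified, stdlib-only sketch of what a bandwidth test of this kind does (not iPerf itself): push a known volume of data over a TCP connection, here on loopback, and derive the achieved rate from the elapsed time.

```python
import socket
import threading
import time

def measure_loopback_throughput(total_bytes=4 * 1024 * 1024):
    """Send a burst over a loopback TCP connection and report (bytes, Mbit/s)."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    received = []

    def sink():
        conn, _ = server.accept()
        count = 0
        while True:
            chunk = conn.recv(65536)
            if not chunk:           # peer closed the connection
                break
            count += len(chunk)
        conn.close()
        received.append(count)

    t = threading.Thread(target=sink)
    t.start()
    client = socket.create_connection(("127.0.0.1", port))
    payload = b"\x00" * 65536
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        client.sendall(payload)
        sent += len(payload)
    client.close()
    t.join()                        # wait until the sink has drained everything
    elapsed = time.perf_counter() - start
    server.close()
    return received[0], (received[0] * 8) / (elapsed * 1e6)

nbytes, mbps = measure_loopback_throughput()
```

Real tools like iPerf add protocol selection, parallel streams, window tuning and loss reporting on top of this basic send-and-time loop.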
SmokePing65 keeps track of your network latency and offers visualization through MRTG.
A.3.3. Packet Manipulation, Reconciliation and Auditing
Some tools allow you to manipulate or author packets and place them on the wire66. Other tools
allow you to capture and audit the data:
Wireshark67 is an open-source packet analyser. It is used for network troubleshooting,
analysis, and software and communications protocol development.
Ntopng68 is a traffic analysis and auditing tool with an intuitive visualization capability
that includes statistics, flow reports and active node discovery, among others.
Argus69 is a network audit record generation tool allowing large-scale network activity audit.
Argus generates detailed network flow status reports of all of the flows in the packet
stream.
Tstat70 is a passive sniffer capable of providing several insights on the traffic patterns at
both the network and the transport levels. It offers important information about classic
and novel performance indexes and statistical data about Internet traffic.
A.3.4. Application KPI level Measurement
Some domain-specific tools that shall aid the measurement of key quality indicators of an end-to-end
service include Two-Way Active Measurement Protocol (TWAMP), sFlow and Ceilometer.
Two-Way Active Measurement Protocol (TWAMP)71. Defined in RFC 5357, TWAMP is
an open protocol for measurement of two-way or round-trip metrics. The TWAMP-Test
protocol can be incorporated as a probe used to send and receive performance
measurements.
64 http://software.es.net/iperf/
65 http://oss.oetiker.ch/smokeping/index.en.html
66 http://www.secdev.org/projects/scapy/
67 https://www.wireshark.org/
68 http://ntop.org
69 http://www.qosient.com/argus/
70 http://tstat.polito.it/
71 https://www.packetizer.com/rfc/rfc5357/
sFlow72. sFlow is an industry-standard technology for monitoring high-speed switched
networks. It gives complete visibility into the use of networks, enabling performance
optimization and defence against security threats.
Ceilometer73. The Ceilometer application reliably collects data on the utilization of the
physical and virtual resources the test-bed contains. These performance records can be
stored for subsequent retrieval and test reporting.
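As a rough illustration of the round-trip measurement idea behind TWAMP (a plain UDP echo over loopback, not the actual TWAMP-Test packet format or protocol), a sender timestamps a probe, a reflector echoes it back, and the difference yields the RTT:

```python
import socket
import threading
import time

def rtt_probe(n_samples=5):
    """Measure round-trip times with a UDP echo: a simplified stand-in for a
    TWAMP-Test sender/reflector pair."""
    reflector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    reflector.bind(("127.0.0.1", 0))
    addr = reflector.getsockname()

    def reflect():
        for _ in range(n_samples):
            data, peer = reflector.recvfrom(1024)
            reflector.sendto(data, peer)   # echo the test packet back

    t = threading.Thread(target=reflect)
    t.start()
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.settimeout(5.0)
    rtts = []
    for seq in range(n_samples):
        msg = seq.to_bytes(4, "big")       # sequence number as payload
        start = time.perf_counter()
        sender.sendto(msg, addr)
        reply, _ = sender.recvfrom(1024)
        rtts.append(time.perf_counter() - start)
        assert reply == msg
    t.join()
    reflector.close()
    sender.close()
    return rtts

rtts = rtt_probe()
```

Real TWAMP adds session negotiation, standardized timestamp fields and one-way metrics, which this echo sketch deliberately omits.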
A.3.5. Automatic Traffic deployment
NetIDE74 has produced a network engine that executes SDN programs and an Eclipse-based IDE
to develop and debug them. The network engine is based on a two-tier controller
framework architecture with client controllers executing SDN applications that are used as
modules to compose more complex SDN applications. The engine provides the composition
logic for said SDN applications. The IDE assists the SDN developer when creating and deploying
the applications on the NetIDE engine. It provides a common interface for tools that execute on
the network engine. Tools that may be significant in the context of CogNet include traffic
generators and network profilers.
The IDE interacts with Mininet and provides a graphical method to create Mininet test scenarios.
It also interacts with physical network devices. On the controller front, the IDE may need
adaptation for scenarios that envisage the use of a simpler controller infrastructure (e.g. a
standalone controller that executes native SDN applications directly).
In addition to access to leading-edge code through the github repository maintained by the
project, stable releases of the IDE are available at the Eclipse Marketplace.
A.3.6. Wire speed Ethernet packet generator and playback
PFSend75 provides two main capabilities for line-rate packet generation: (1) generating synthetic
packets, forging packets with meaningless content to fill the communications link with data; and
(2) replaying packets at line rate or at their original speed from stored pcap files. It will play an
important part in staging and evaluating performance-related scenarios.
Pktgen76 (Packet Generator) is a software-based traffic generator powered by the DPDK fast
packet-processing framework. It is capable of generating 10 Gbit wire-rate traffic with 64-byte
frames and supports real-time metrics. It also supports command scripts to set up repeatable test
cases.
72 http://www.sflow.org/
73 http://docs.openstack.org/developer/ceilometer/
74 http://www.netide.eu
75 http://www.ntop.org/solutions/wire-speed-traffic-generation/
76 http://pktgen.readthedocs.org/en/latest/
These tools will play an important part in staging and the evaluation of performance-related
scenarios.
A.3.7. Layer 2 Forwarding in Virtualized environments
L2fwd77 is a network traffic generation and Layer 2 forwarding tool. In addition to Layer 2
forwarding, it brings a specialised feature set to the virtualization test environment, allowing
traffic to be orchestrated using network performance acceleration techniques such as single-root
I/O virtualization (SR-IOV).
A.3.8. Network throughput
The application Test TCP (TTCP)78 can be used to measure network throughput over the TCP
and UDP protocols on an IP path. It determines the actual bit rate of a particular unshared
end-to-end connection and shall be considered in testing for bandwidth degradation and
connection benchmarking.
A.4. Network Management tools
Cacti79 is a network graphing solution designed to harness the power of RRDTool's data storage
and graphing functionality. It includes advanced graph templating, multiple data acquisition
methods, and user management features.
Nagios80 is an open-source software application which monitors systems, networks and
infrastructure. Nagios offers monitoring and alerting services for servers, switches, applications
and services, including network services (SMTP, POP3, HTTP, NNTP, ICMP, SNMP, FTP, SSH).
A.5. Network QoS probes over OpenStack
Neutron is the OpenStack interface used to configure the network, which, with the help of the
ML2 driver, provides network attributes.
77 http://dpdk.org/doc/guides/sample_app_ug/l2_forward_real_virtual.html
78 https://en.wikipedia.org/wiki/Ttcp
79 http://www.cacti.net/
80 https://www.nagios.org/
A.5.1. Ryu
Ryu81 is a component-based software-defined networking framework. Ryu supports various
protocols for managing network devices, such as OpenFlow, Netconf and OF-config. Regarding
OpenFlow, Ryu fully supports versions 1.0, 1.2, 1.3, 1.4, 1.5 and the Nicira Extensions. All of the
code is freely available under the Apache 2.0 license and is fully written in Python.
Ryu is a full-featured OpenFlow controller that, embedded in the agent, provides the building
blocks of an SDN controller, such as an L2 switch, a REST interface, a topology viewer and tunnel
modules.
Ryu also allows setting QoS policies through a REST interface, which uses an Open vSwitch
database (OVSDB) interaction library to apply those policies on Open vSwitch instances, the
implementation of virtual switches. The QoS rules can be applied either to a specific queue
within a VLAN or to a switch port. It supports DSCP tagging and setting the min-rate and
max-rate of an interface.
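As an illustration of the kind of REST interaction involved, the following sketch posts a QoS rule steering matching traffic into a queue. The endpoint path and JSON body follow the general shape of Ryu's rest_qos application but should be checked against the Ryu documentation; a local HTTP stub stands in for a live controller so the example is self-contained.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for the controller's REST endpoint (offline sketch).
class StubQoSHandler(BaseHTTPRequestHandler):
    posted = []
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        StubQoSHandler.posted.append((self.path, json.loads(body)))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"[]")
    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubQoSHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

# Install a QoS rule: steer UDP traffic towards 10.0.0.2 into queue 1.
dpid = "0000000000000001"   # datapath id of the target Open vSwitch (example value)
rule = {"match": {"nw_dst": "10.0.0.2", "nw_proto": "UDP"},
        "actions": {"queue": "1"}}
req = urllib.request.Request("%s/qos/rules/%s" % (base, dpid),
                             data=json.dumps(rule).encode(),
                             headers={"Content-Type": "application/json"})
status = urllib.request.urlopen(req).status

server.shutdown()
```

Against a real deployment the base URL would point at the Ryu controller, and queues would first be provisioned on the switch via OVSDB.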
A.5.2. OpenDaylight
OpenDaylight82 is an SDN controller that provisions the network policies as specified and sends
that information to the hypervisor. As a controller it also maintains those policies in spite of
changes happening in the network, recomputing policies and pushing them to hypervisors.
The OVSDB Plugin component for OpenDaylight implements the OVSDB management protocol
that allows the configuration of Open vSwitches (DSCP marking, setting priority, and min-/max-rate
for switch ports and OpenFlow queues).
A.5.3. Neutron QoS extension
A Neutron extension83 84 has been implemented to apply QoS rules to Neutron networks and
specific ports. The patch consists of an extension to the Neutron API which allows setting QoS
rules through the Neutron Python client, the actual Neutron extension with the QoS rules, a QoS
driver in the Open vSwitch agent, and an addition to the Neutron database that includes QoS.
81 http://osrg.github.io/ryu/
82 https://wiki.opendaylight.org/view/OpenDaylight_OpenFlow_Plugin::Running_controller_with_the_new_OF_plugin
83 https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api
84 https://blueprints.launchpad.net/neutron/+spec/ml2-qos
A.6. Monitoring tools
Monitoring systems allow the creation of alerting rules that define conditions based on
expressions and send notifications about these alerts to an external service.
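The expression-based alerting idea can be sketched generically (a toy evaluator, not the configuration syntax of any particular monitoring system): a rule fires when an expression over recent samples of a metric crosses a threshold.

```python
from collections import deque

def make_alert_rule(threshold, window):
    """Return an alert rule that fires when the average of the last `window`
    samples exceeds `threshold` -- the kind of expression a monitoring
    system evaluates before notifying an external service."""
    samples = deque(maxlen=window)
    def evaluate(value):
        samples.append(value)
        # Fire only once the window is full and the mean breaches the threshold.
        return len(samples) == window and sum(samples) / window > threshold
    return evaluate

# CPU-usage samples: the rule fires once the 3-sample mean exceeds 80%.
high_cpu = make_alert_rule(threshold=80.0, window=3)
firing = [high_cpu(v) for v in [50, 85, 90, 95, 99]]
```

Averaging over a window rather than testing each sample avoids alerting on momentary spikes, which is why real systems express rules over time ranges.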
A.6.1. Prometheus
Prometheus85 is an open-source service monitoring system and time series database. It brings a
highly dimensional data model accompanied by a query language to slice data in order to
generate ad-hoc graphs, tables, and alerts. For storage, it uses both memory and local disk, and
scaling is achieved by functional sharding and federation. Client libraries for Go, Java, and Ruby
are supported to create alerts and statistics.
85 http://prometheus.io/
Appendix B. Definitions
System integration refers to the process of joining together different computing systems,
software applications or components so that they act as one coordinated large system. It also
ensures that each integrated subsystem acts as required. In other words, it means combining
software modules or full programs with other software components in order to develop an
application or enhance the functionality of an existing one. Thus, developed components must be
integrated with other components or into the environment where they are expected to be
deployed. The more integrated a software system is, the better it functions.
Integrated system is a system that combines different functions or software components to
work as one entity. The integrated system is then tested to verify its performance and the
fulfilment of its specified requirements.
Vertical Integration is combining components or subsystems according to functionality by
creating silos of functional entities. The integration process starts from the bottom basic function
upward. Vertical integration can be done quickly and cost-efficiently in the short term. However,
it becomes more expensive over time, because new silos must be created to implement new
functionalities.
Horizontal Integration is an integration method in which new capabilities are created across
individual systems. Different acquisition programs originally developed these systems for
different purposes. Furthermore, there is only one interface among subsystems, allowing a
subsystem to be replaced with another without affecting the others.
Continuous Integration aims to prevent integration problems and to ensure the functionality of
the whole system or application whenever code changes are submitted by developers to the
source code repository. Furthermore, continuous integration provides feedback to developers if
building the components or one of the integration tests fails, so the failure can be identified and
corrected as soon as possible. Adopting continuous integration provides various benefits, such as
improving software quality, reducing risk and providing feedback on the current status of the
software. These benefits help to minimize the complexity of finding and solving errors.
Integration Tests run automatically when changes submitted to the source code repository are
detected, testing software components to ensure interaction and interoperability between them.
After integration tests have been performed successfully on the components, the whole
application is ready for System Validation.
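As a minimal illustration (the two components are toys invented for the example), an integration test exercises the interaction between units rather than each unit in isolation:

```python
import io
import unittest

# Two toy components whose interaction the integration test exercises:
# a parser producing (key, value) records and a store aggregating them.
def parse_line(line):
    key, _, value = line.partition("=")
    return key.strip(), int(value)

class MetricStore:
    def __init__(self):
        self.totals = {}
    def add(self, key, value):
        self.totals[key] = self.totals.get(key, 0) + value

class ParserStoreIntegrationTest(unittest.TestCase):
    """Checks that the components interoperate, not each one separately."""
    def test_parsed_lines_accumulate_in_store(self):
        store = MetricStore()
        for line in ["cpu=10", "cpu=5", "mem=7"]:
            store.add(*parse_line(line))
        self.assertEqual(store.totals, {"cpu": 15, "mem": 7})

suite = unittest.TestLoader().loadTestsFromTestCase(ParserStoreIntegrationTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

In a continuous-integration setup, a suite like this runs on every commit, and a failure blocks the change until the interaction is fixed.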
System Validation means evaluating the total and integrated application or system to assess the
compliance of the whole system with respect to the specified requirements. It also helps in
checking and ensuring the fulfilment of functional and non-functional requirements of the
system concerning the architecture as a whole.
Integration Platform as a Service is a platform that allows connecting various applications and
software across multiple organizations and making them compatible, deploying these
integrations without having to install new hardware or software or write custom code.
System Integrator is an individual or an organization that implements IT solutions within an
organization. The System Integrator ensures that the issues the system is designed for are
addressed, that the implemented solution delivers the required efficiency, and that the
functional and non-functional requirements are fulfilled.
Appendix C. Abbreviations
ETSI European Telecommunications Standards Institute
IETF Internet Engineering Task Force
KPI Key Performance Indicator
MANO Management and Orchestration
NF Network Function
NFV Network Function Virtualization
NFVO Network Function Virtualization Orchestrator
OSS Operations Support System
PSA Personal Security Applications
REST Representational State Transfer
SDK Software Development Kit
SDN Software-Defined Networking or Software-Defined Network
SLA Service Level Agreement
SP Service Platform
SSM Service-Specific Manager
VIM Virtual Infrastructure Manager
VM Virtual Machine
VNF Virtual Network Function
VNFM Virtual Network Function Manager