
  • Grant Agreement No.: 687871

    ARCFIRE

    Large-scale RINA Experimentation on FIRE+

    Instrument: Research and Innovation Action
    Thematic Priority: H2020-ICT-2015

    D4.3 Design of experimental scenarios, selection of metrics and KPIs

    Due date of Deliverable: Month 12
    Submission date: January 2017

    Final version: May 5th 2017
    Start date of the project: January 1st, 2016. Duration: 24 months

    version: V1.0

    Project funded by the European Commission in the H2020 Programme (2014-2020)

    Dissemination level

    PU Public X

    PP Restricted to other programme participants (including the Commission Services)

    RE Restricted to a group specified by the consortium (including the Commission Services)

    CO Confidential, only for members of the consortium (including the Commission Services)

  • D4.3: Design of experimental scenarios, selection of metrics and KPIs

    Document: ARCFIRE D4.3

    Date: May 5th, 2017

    H2020 Grant Agreement No. 687871

    Project Name Large-scale RINA Experimentation on FIRE+

    Document Name Deliverable 4.3

    Document Title Design of experimental scenarios, selection of metrics and KPIs

    Workpackage WP4

    Authors Dimitri Staessens (imec)

    Sander Vrijders (imec)

    Eduard Grasa (i2CAT)

    Leonardo Bergesio (i2CAT)

    Miquel Tarzan (i2CAT)

    Bernat Gaston (i2CAT)

    Sven van der Meer (LMI)

    John Keeney (LMI)

    Liam Fallon (LMI)

    Vincenzo Maffione (NXW)

    Gino Carrozzo (NXW)

    Diego Lopez (TID)

    John Day (BU)

    Editor Dimitri Staessens

    Delivery Date May 5th 2017

    Version v1.1


    Abstract

    This deliverable details the preparatory work done in ARCFIRE WP4 before the actual experimentation can begin. It takes the converged network operator scenarios for a traditional network design and the RINA designs that were developed in WP2, together with the testbed overview that was compiled by T4.1, and extracts interesting reference scenarios for the 4 main experiment objectives. This document will thus serve as a reference for the experimenters during experimentation, providing all necessary information on the experiment objectives and tools in one location, together with pointers to the sections of the WP2 deliverables where more details can be found.

    First, this document briefly summarises the WP2 reference scenarios for the converged network operator, focusing on the architecture of the access network, since that is where the technology diversification is most apparent. The first experiment will investigate the differences between a RINA and an evolutionary 5G network in terms of management complexity, targeting configuration management when deploying a new service. The experiment has been divided into 4 sub-experiments with green and brown field starting points for the service. The second experiment will perform network-performance-oriented measurements, investigating the delivery of multimedia services over heterogeneous networks. Here, ARCFIRE will evaluate network parameters such as packet overhead with respect to data transport, resiliency to failures and maintenance, and scalability with respect to the number of users (flows), services and network elements. The third experiment turns to multi-provider networks: RINA will be evaluated as an alternative to MPLS with respect to its capability for maintaining end-to-end QoS guarantees on delay, jitter and bandwidth. A fourth experiment will evaluate how renumbering end-user addresses with respect to their location in the network improves overall scalability. The fifth and final experiment will delve into the OMEC scenario for RINA, keeping applications reachable while they move through the network.


    Table of Contents

    1 Reference scenario
      1.1 Baseline reference for traditional networks
      1.2 RINA network design

    2 Experiment 1: Management of multi-layer converged service provider networks
      2.1 Introduction
        2.1.1 The Current Problem
        2.1.2 Multi-layer Coordination
        2.1.3 The resulting Objectives
        2.1.4 Scope: Configuration Management
        2.1.5 Aspects, KPIs, Scenarios and Use Cases
      2.2 Specifications
        2.2.1 Testbeds
        2.2.2 Scenarios
        2.2.3 Planning
        2.2.4 Experiment 1-1: Deploy Network from Zero
        2.2.5 Experiment 1-2: Deploy new Service in Existing Network
        2.2.6 Experiment 1-3: Deploy Network and Service from Zero
        2.2.7 Experiment 1-4: Add new Network Node - Zero touch
      2.3 Summary

    3 Experiment 2: Scaling up the deployment of resilient multimedia services over heterogeneous physical media
      3.1 Introduction
      3.2 Objectives
      3.3 Experiment scenario
      3.4 Metrics and KPIs
      3.5 Testbeds
      3.6 Planning

    4 Experiment 3: RINA as an alternative to MPLS
      4.1 Objectives
      4.2 Metrics and KPIs
      4.3 Testbeds
      4.4 Experiment scenario
      4.5 Planning


    5 Experiment 4: Dynamic and seamless DIF renumbering
      5.1 Objectives
      5.2 Metrics and KPIs
      5.3 Testbeds
      5.4 Experiment scenario
      5.5 Planning

    6 Experiment 5: Application discovery, mobility and layer security in support of OMEC
      6.1 Objectives
      6.2 Metrics and KPIs
      6.3 Testbeds
      6.4 Experiment scenario
      6.5 Planning

    7 Software

    8 Conclusion


    List of Figures

    1  Converged service providers’ network
    2  Residential fixed Internet access: data plane (up) and control plane (down)
    3  4G Internet access: data plane (up) and control plane (down)
    4  Layer 3 VPN between customer sites: data plane (up) and control plane (down)
    5  Fixed access network (RINA)
    6  Cellular access network (RINA)
    7  Enterprise VPNs
    8  Management Points in a Protocol Stack Pattern
    9  Management Points in a Layered Domain Pattern
    10 FCAPS, Strategies, and RINA Network
    11 DMS: Stand-alone System for Benchmark Experiments
    12 DMS: Full System for Experiments with a RINA Network
    13 Experiment 1: Minimal RINA System for simple Experiments
    14 Extended experiment
    15 Data plane of 2-layer hierarchical LSP scenario for a core service provider network
    16 Control plane of 2-layer hierarchical LSP scenario for a core service provider network
    17 2-layer service provider core implemented with a RINA over Ethernet configuration
    18 Mid-scale ladder topology
    19 Large-scale ladder topology
    20 Connectivity graph of the backbone L1 DIF (left) and the backbone L2 DIF (right)
    21 Renumbering experiment: small single DIF scenario, layering structure (up), and DIF connectivity (down)
    22 Renumbering experiment: large single DIF scenario, DIF connectivity
    23 Renumbering experiment: DIF structure for the multi-DIF scenario
    24 Renumbering experiment: systems for small scale multi-DIF scenario
    25 Renumbering experiment: systems for large scale multi-DIF scenario
    26 Categorisation of scenarios and configurations for the renumbering experiments
    27 OMEC experiment: physical systems involved in the scenario
    28 OMEC experiment: DIF configurations: UE to server on the public Internet
    29 OMEC experiment: DIF configurations: UE to server on the provider’s cloud


    List of Tables

    1  Milestones for experiment 1
    2  KPIs for experiment 1-1
    3  KPIs for experiment 1-2
    4  KPIs for experiment 1-3
    5  KPIs for experiment 1-4
    6  KPIs for experiment 2
    7  Testbeds for experiment 2
    8  Planning of experiment 2
    9  KPIs for guaranteed QoS levels experiment
    10 KPIs for guaranteed QoS levels experiments
    11 Milestones for guaranteed QoS levels experiment
    12 KPIs for renumbering experiments
    13 Testbeds for renumbering experiments
    14 Milestones for renumbering experiments
    15 KPIs for OMEC experiments
    16 Testbeds for OMEC experiments
    17 Milestones for OMEC experiments
    18 Software to be used in ARCFIRE experiments


    1 Reference scenario

    All experiments in WP4 are grounded in the converged service provider network design researched in WP2. This section provides a quick summary of the core designs that will be used in the experiments. More details can be found in deliverables D2.1 [1] and D2.2 [2].

    1.1 Baseline reference for traditional networks


    Figure 1: Converged service providers’ network

    Figure 1 recaps the main parts of a converged service provider network. The network is partitioned in three: several types of access networks allow the provider to reach its customers via wired and wireless technologies. The traffic from these access networks is aggregated by metropolitan area networks (MANs), which forward it towards the network core.

    Customers and services are identified and authenticated in the access network or MAN. At the core, the traffic is forwarded either to a DC or to an interconnect edge router (e.g. the Internet edge). The service provider may have different datacentres attached to different parts of the network:

    • Micro data centres attached to the access networks, supporting the Mobile Edge Computing concept by running very latency-critical services very close to the customers. Micro data centres may also be used to support C-RAN (Cloud RAN) deployments.

    • Metro data centres attached to the metropolitan networks. These DCs could host the service provider’s network services such as DHCP, DNS or authentication servers, but also Content Delivery Networks (CDNs), or even provide cloud computing services to customers.

    • Regional or national data centres attached to the core networks. These could run the same services as metro data centres at a national scale, as well as mobile network gateways and/or the Network Operations Center (NOC).
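    The trade-off behind these data-centre tiers can be sketched in a few lines of Python: place a service at the most central tier that still meets its latency bound, falling back to a micro DC only for latency-critical workloads. All names and RTT figures below are invented for the example, not measurements from the ARCFIRE testbeds.

```python
from dataclasses import dataclass, field

@dataclass
class DataCentre:
    name: str
    tier: str               # "micro", "metro" or "regional/national"
    rtt_to_user_ms: float   # rough round-trip time from the customer (illustrative)

@dataclass
class ProviderNetwork:
    datacentres: list = field(default_factory=list)

    def place_service(self, max_rtt_ms: float) -> DataCentre:
        """Pick the most central DC that still meets the latency bound."""
        candidates = [dc for dc in self.datacentres
                      if dc.rtt_to_user_ms <= max_rtt_ms]
        # Higher RTT here is a crude proxy for "more central, cheaper" tiers
        return max(candidates, key=lambda dc: dc.rtt_to_user_ms)

net = ProviderNetwork([
    DataCentre("micro-1", "micro", 2.0),
    DataCentre("metro-1", "metro", 8.0),
    DataCentre("national-1", "regional/national", 25.0),
])

print(net.place_service(max_rtt_ms=10.0).name)  # a metro DC suffices
print(net.place_service(max_rtt_ms=5.0).name)   # needs the micro (MEC) DC
```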



    Figure 2: Residential fixed Internet access: data plane (up) and control plane (down)

    Figure 2 shows the data and control plane for fixed access. The design shows a Carrier-Ethernet-based MAN aggregating traffic and forwarding it to one or more BRAS routers located at a core PoP. In the control plane, a BRAS (connected over PPP) authenticates customers. Shortest Path Bridging (SPB) in the MAN enables traffic engineering. eBGP is used by the provider border router to exchange traffic with its peer or upstream routers. Routes are disseminated to the BRAS(es) via iBGP. The BGP-free core runs an MPLS control plane.
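    The division of labour in this control plane can be sketched as a toy Python model: the border router learns external routes over eBGP and redistributes them over iBGP with itself as next hop, while the BGP-free core carries no BGP state at all. Router names and the prefix are invented, and the model deliberately ignores everything else BGP does (path attributes, best-path selection, policy).

```python
class Router:
    def __init__(self, name):
        self.name = name
        self.bgp_rib = {}   # prefix -> next hop; stays empty on BGP-free routers

class BorderRouter(Router):
    def __init__(self, name):
        super().__init__(name)
        self.ibgp_peers = []

    def learn_ebgp(self, prefix, ext_next_hop):
        self.bgp_rib[prefix] = ext_next_hop
        # iBGP dissemination with next-hop-self: peers point at the border router
        for peer in self.ibgp_peers:
            peer.bgp_rib[prefix] = self.name

border = BorderRouter("border-1")
bras = Router("bras-1")
core = Router("p-1")                 # BGP-free core (P) router
border.ibgp_peers.append(bras)

border.learn_ebgp("203.0.113.0/24", "peer-as-65001")
print(bras.bgp_rib)   # {'203.0.113.0/24': 'border-1'}
print(core.bgp_rib)   # {} -- the core never sees BGP routes
```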



    Figure 3: 4G Internet access: data plane (up) and control plane (down)

    Figure 3 shows the data (user) plane and control plane for wireless (4G) access. The eNodeB (base station) is attached to the aggregation network, which aggregates the traffic from multiple base stations into a core PoP. The core PoP contains a Multi Service Edge and forwards the traffic to the mobile network gateways forming the EPC (Evolved Packet Core). In the example, the S-GW and the P-GW are located at the core PoP. Internet traffic reaching the P-GW is forwarded to one of the provider’s Internet border routers through the core network. In the control plane, the UE runs the NAS protocol against the Mobility Management Entity (MME), which allows the MME to authenticate the user and negotiate handovers. The P-GW runs iBGP with the Internet border routers, allowing it to route UE traffic to the Internet.
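    Since experiment 2 later measures packet overhead, it is worth noting what the GTP-U user plane of Figure 3 costs per packet. The sketch below sums the minimal header sizes on the S1-U segment (Ethernet + IPv4 + UDP + GTP-U); it assumes no VLAN tags, no IPv6, and no GTP-U extension headers.

```python
# Minimal per-packet encapsulation overhead on the S1-U segment
# (eNodeB to S-GW) of the 4G user plane in Figure 3.
S1U_HEADERS = {
    "Ethernet (802.3, no VLAN)": 18,  # 14 B header + 4 B FCS
    "IP (provider transport)": 20,    # minimal IPv4 header
    "UDP": 8,
    "GTP-U": 8,                       # mandatory GTP-U header, no extensions
}

overhead = sum(S1U_HEADERS.values())
for payload in (64, 512, 1400):
    print(f"{payload:5d} B payload -> {overhead} B headers "
          f"({100 * overhead / (payload + overhead):.1f}% overhead)")
```

    Small packets (e.g. VoIP) pay the encapsulation cost many times over, which is exactly the kind of parameter the ARCFIRE experiments compare against a RINA configuration.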



    Figure 4: Layer 3 VPN between customer sites: data plane (up) and control plane (down)

    Figure 4 illustrates the data and control plane for a Layer 3 VPN service between two business sites. The customer CPE router is directly attached to the MAN. The L3 VPN service is carried over a VLAN to the MS Edge router, which runs a Virtual Routing and Forwarding (VRF) instance to forward the VPN traffic through the MPLS core. When VPN traffic exits the MPLS core, another VRF instance forwards it through another VLAN over the MAN, delivering it to the CPE at the other business site. The control plane runs eBGP between the CPE and the MS Edge routers to exchange VPN route information. These VPN routes are disseminated over the core via iBGP, allowing all VPN locations to learn the routes required to forward the VPN traffic across all sites.
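    At its core, the VRF mechanism is a per-VPN routing table, as in the hypothetical sketch below: two VPNs carry the same customer prefix, and the ingress VLAN selects which table is consulted, so lookups never leak across VPNs. VRF names, prefixes and label values are invented for the example.

```python
# One routing table per VRF on the MS Edge router of Figure 4; overlapping
# customer prefixes can coexist because lookups are scoped to one VRF.
vrfs = {
    "vpn-green": {"10.0.1.0/24": "push MPLS label 100 -> PoP city B"},
    "vpn-blue":  {"10.0.1.0/24": "push MPLS label 200 -> PoP city A"},
}

def forward(vrf_name, prefix):
    # The ingress VLAN identifies the VRF; there is no global lookup.
    return vrfs[vrf_name].get(prefix, "drop: no VPN route")

print(forward("vpn-green", "10.0.1.0/24"))
print(forward("vpn-blue", "10.0.1.0/24"))
```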


    1.2 RINA network design

    This section summarises the RINA network design for the 3 scenarios above.


    Figure 5: Fixed access network (RINA)

    Figure 5 shows a RINA network structure for a fixed access network. The CPE is connected to the access router via a point-to-point DIF. This DIF provides IPC services to one or more service DIFs, which are used to authenticate the customer and support access to one or more utility DIFs, such as a public Internet DIF or VPN or application-specific DIFs. The traffic from multiple access routers is multiplexed over the aggregation network into an edge service router, which forwards it further towards its final destination.
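    The layering of Figure 5 can be sketched with the recursive IPC API: an application allocates a flow from the top-level DIF, which in turn uses the DIFs below it. The call names below (allocate_flow, write) loosely follow the RINA IPC API; this toy DIF only records which layers an SDU would traverse on one system and implements none of the real EFCP machinery.

```python
class DIF:
    """A layer providing IPC, possibly on top of a lower-ranked DIF."""
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower   # DIF providing IPC to this one (None = physical medium)

    def allocate_flow(self, dest_app, qos=None):
        return Flow(self, dest_app, qos)

class Flow:
    def __init__(self, dif, dest_app, qos):
        self.dif, self.dest_app, self.qos = dif, dest_app, qos

    def write(self, sdu: bytes):
        # In a real stack each layer would add its own EFCP header and recurse;
        # here we just return the DIFs the SDU traverses on this system.
        path, dif = [], self.dif
        while dif:
            path.append(dif.name)
            dif = dif.lower
        return path

ptp = DIF("PtP DIF (local loop)")
service = DIF("Service provider top level DIF", lower=ptp)
internet = DIF("Public Internet DIF", lower=service)

flow = internet.allocate_flow("video-server", qos={"loss": "low"})
print(flow.write(b"hello"))
```

    The same allocate/write interface is offered at every layer, which is the property experiment 1 exploits.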


    Figure 6: Cellular access network (RINA)

    Figure 6 shows a possible RINA network for mobile access. A radio multi-access DIF manages the radio resource allocation and provides IPC over the wireless medium. A Mobile Network Top-Level DIF provides flows spanning the scope of the mobile network, where the Metro and


    Backbone DIFs multiplex and transport the traffic of the Mobile Network Top-Level DIF. Finally, the public Internet DIF allows applications in the UE to connect to other applications available on the public Internet.


    Figure 7: Enterprise VPNs

    Figure 7 shows a RINA configuration for providing Enterprise VPN services. The various routers are connected over various point-to-point DIFs. The Metro DIF provides IPC over the metro aggregation networks, multiplexing the traffic of the different types of services the operator provides over the metropolitan segment. A Backbone DIF provides IPC over the core segment of the network, interconnecting the PoPs in different cities. A VPN service DIF on top of the Metro and Backbone DIFs allocates resources to the different VPN DIFs over the entire scope of the provider network.


    2 Experiment 1: Management of multi-layer converged service provider networks

    2.1 Introduction

    The main goal of experiment 1 is to demonstrate the difference between a (protocol) stack and a DIF. Traditionally, in the literature as well as in commercial environments, the network is seen as a stack or protocol stack, realised by all components in the network. Figure 8 shows an example of this view using the resource abstraction introduced in [2]. Each vertical in the figure shows a network component, for instance an end system, a border router, or an interior router. Each pair of communicating components must implement stacks that are compatible with each other; depending on the network functionality it provides, a component must implement either the whole stack (left and right) or only parts of it (middle). In the figure, the left and right components could be end systems and the middle component could be a switch or router.


    Figure 8: Management Points in a Protocol Stack Pattern

    2.1.1 The Current Problem

    The management of stacks is complex because it is necessary to coordinate vertical as well as horizontal aspects, each separated per component and per stack element. On the horizontal aspect, each part of a communication service (each stack element on the same layer in each component) needs to be configured individually. A reconfiguration often requires reconfiguring all stack elements of all components. This situation is even more complicated when we consider a real-world scenario, in which the ownership of the components varies. Any initial configuration and any further reconfiguration must now be coordinated amongst all components and their respective owners (assuming that each owner has administrative permissions to perform the configuration action).

    On the vertical aspect, each stack element must be configured in a way that allows the component overall to provide the advertised and required service. Here, ownership is not the problem. Instead, the multi-vendor, multi-protocol, and multi-technology nature of each individual stack layer introduces extensive complexity.

    In a 5G converged service provider network, we must also consider a separation of technology and ownership across the whole network, for components (here called nodes) as well as for the “protocol zoo” dealing with various aspects of the network (and service) operation. Radio nodes form part of the radio access network (RAN). Nodes aggregating radio access traffic and managing user sessions are found in the core network. Long-distance communication and links to the public Internet belong to the transport network. Each of those networks is often owned by a different part of the provider’s organisation.

    This mix makes coordinated management of protocol stacks virtually impossible, mainly because the state space that needs to be coordinated is too large. The current solution is twofold. First, a large number of standards is produced as the normative source for management activities (for the activities themselves as well as for the underlying stack). Those standards provide interoperability but limit flexibility. Second, management is often based on management models using managed objects (with standardised access in the form of a defined protocol and standardised information models, often in terms of a management information base). A large number of models exists and needs to be considered for management activities. Figure 8 also shows the management points (blue interfaces) and the internal management or configuration policies that need to be coordinated. While the figure suggests a largely normalised environment, reality is much more heterogeneous.

    2.1.2 Multi-layer Coordination

    In RINA, the situation is very different. Using a common API (the IPC API) and building immutable infrastructure used by each and every IPCP means that there are no longer any technological differences among the different layers in the network. There are also no differences between vertical elements in a component: everything is an IPC process (IPCP).

    Next, the concept of separation of mechanism and policy, and the definition of policies for all aspects of an IPCP (and thus of inter-process communication), makes all available policies explicit. There is no longer a need for separate management model interface definitions; everything management requires is already defined by the architecture.
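    The separation of mechanism and policy can be illustrated with a small sketch: a single IPCP implementation carries the mechanism, and management only swaps a declarative policy value. The policy names ("hop-count", "latency") are invented for the example and are not the official RINA policy set.

```python
class IPCP:
    """One mechanism; behaviour is selected purely by declarative policies."""
    def __init__(self, name, policies):
        self.name = name
        self.policies = policies      # the only thing management ever touches

    def route_cost(self, link):
        # The routing *mechanism* is fixed; the *policy* picks the cost metric.
        if self.policies["routing"] == "hop-count":
            return 1
        if self.policies["routing"] == "latency":
            return link["delay_ms"]
        raise ValueError("unknown routing policy")

link = {"delay_ms": 12}
backbone = IPCP("backbone-ipcp", {"routing": "latency"})
access = IPCP("access-ipcp", {"routing": "hop-count"})
print(backbone.route_cost(link), access.route_cost(link))  # 12 1
```

    The same class serves both layers; the manager never needs a per-technology management model, only a policy assignment per DIF.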

Next, the concept of a DIF as a layer, and not a stack, with autonomic functions for layer management (as part of every IPCP) and with an explicit definition of the scope of the layer (the scope of the DIF, realised by its policies), provides for a very new concept with regard to multi-layer management.

Last but not least, the RIB in the IPCPs (and thus in the DIF) provides for the required information base for all of the above. The RIB maintains all shared state of all IPCPs in a DIF. The defined processes also realise an automatic update of this shared state amongst all IPCPs in a DIF. This means that changes to a DIF's configuration will propagate from one IPCP to another until the entire DIF is reconfigured.

    2.1.3 The resulting Objectives

The objectives of experiment 1 are:

1. to demonstrate that the simplifications RINA provides (extensively discussed in [2] and summarised above) lead to a significant simplification of the required management definitions and implementations,

2. to demonstrate that, since the manager can use the same model with a few variations based on different RINA policies (DIF policies) to configure multiple layers, coordinated management can evolve from a complex management task (often provided by workflows with associated management policies) into a strategy-oriented management task in which the management activities are solely described by management strategies (as discussed in [2]),

3. to demonstrate that the manager can perform an adequate number of strategies at the same time (single manager deployment), and

4. to demonstrate that the manager can be scaled up and scaled down in case a single manager deployment is not able to handle the load of executed strategies.
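Objectives 3 and 4 can be pictured with a toy strategy-executor pool. This is a minimal sketch under our own assumptions: the class and method names (`StrategyEngine`, `scale`, `run_all`) are illustrative inventions, not the actual DMS API, and strategies are reduced to plain callables rather than CDAP-driven policy changes.

```python
from concurrent.futures import ThreadPoolExecutor

class StrategyEngine:
    """Hypothetical sketch of a DMS strategy-executor pool (not the real DMS API)."""

    def __init__(self, executors=1):
        # Objective 3: a single-manager deployment starts with one executor.
        self.pool = ThreadPoolExecutor(max_workers=executors)

    def scale(self, executors):
        # Objective 4: scale the engine up or down by replacing the worker pool
        # when a single-manager deployment cannot handle the strategy load.
        self.pool.shutdown(wait=True)
        self.pool = ThreadPoolExecutor(max_workers=executors)

    def run_all(self, strategies):
        # Execute a batch of strategies concurrently and collect their results.
        futures = [self.pool.submit(s) for s in strategies]
        return [f.result() for f in futures]

engine = StrategyEngine(executors=1)
results = engine.run_all([lambda i=i: f"strategy-{i} done" for i in range(4)])
engine.scale(4)  # scale out for a heavier strategy load
```

The design question the experiment asks is exactly the trade-off sketched here: how many executors are needed for a given strategy load, and how costly the `scale` step is while the engine is in operation.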

Multi-layer management can now evolve from stack management (vertical and horizontal) towards an actual coordination function in a multi-domain model (here the domains are DIFs). Figure 9 shows this new situation provided by RINA using the resource abstraction introduced in [2], including the management points (per domain element - IPCP, per domain - DIF, multi-domain, and for the management system on the right). Most of the control functions are realised in the IPCPs and the DIFs. DIF interaction is also provided by RINA. What is left for the coordination is to provide for a coordinated initial configuration, and later for possible re-configuration as part of the management activity (monitor and repair, see [2]).

    2.1.4 Scope: Configuration Management

Management activities cover a wide range of functional aspects. Since this experiment cannot cover all aspects, we will narrow the scope to a representative set of those aspects.

Figure 9: Management Points in a Layered Domain Pattern

Network management activities are often called management functions. The functional scope of those management functions is commonly described in terms of Fault, Configuration, Accounting, Performance, and Security (FCAPS), also known as management areas [3]. On top of network management (including layer management), system management functions are standardised to manage a single system, see for instance ITU-T recommendations X.730 to X.799.

A management system, including the DMS, follows the common manager-agent paradigm, where a manager executes the management functions and a managed agent controls resources. The communication between manager and agent uses the management protocol. Communication is restricted to operations from manager to agent and notifications from agent to manager. The agent uses a Management Information Base (MIB) as a standardised knowledge base of the resources it controls. The resources commonly provide a standardised interface to the agent.
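The operations/notifications split of the manager-agent paradigm can be sketched as follows. This is a deliberately minimal illustration: the `ManagementAgent` class, the `SET`/`GET` operation names, and the notification queue are our own simplifications, not CDAP or any standardised protocol (`ifAdminStatus` is used only as a familiar example of a managed-object name).

```python
class ManagementAgent:
    """Minimal agent sketch: holds a MIB and answers manager operations."""

    def __init__(self):
        self.mib = {}            # managed-object name -> value (the agent's MIB)
        self.notifications = []  # notifications queued for the manager

    def operation(self, op, name, value=None):
        # Manager -> agent: operations (only SET/GET shown for illustration).
        if op == "SET":
            self.mib[name] = value
            # Agent -> manager: a notification about the state change.
            self.notifications.append(("changed", name))
            return "ok"
        if op == "GET":
            return self.mib.get(name)

agent = ManagementAgent()
agent.operation("SET", "ifAdminStatus", "up")
status = agent.operation("GET", "ifAdminStatus")
```

The one-way direction of each message type (operations downwards, notifications upwards) is the property the sketch is meant to show; everything else about a real agent is abstracted away.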


    Figure 10: FCAPS, Strategies, and RINA Network

Figure 10 shows an example of the manager/agent paradigm in a RINA context. On the manager side, the FCAPS management areas are used to define the management functions. Strategies realise those functions, so they are the management activities. The management protocol is CDAP. The Management Agent (MA) then controls a RINA network, i.e. a deployment of DIFs with IPC processes. The MIB in RINA is a superset of all the RINA Information Bases (RIBs) of the controlled IPC processes. The common (standardised) management interface of the resources (DIFs, IPC processes) is the common IPC API.
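The "MIB as a superset of RIBs" relationship can be sketched as a simple merge. The function name, the keying scheme, and the example RIB contents are assumptions made for illustration only; a real RIB holds typed managed objects, not flat key/value pairs.

```python
def build_mib(ribs):
    """Merge the RIBs of all controlled IPC processes into one MIB view.

    Keys are (ipcp_name, object_name) tuples so that per-IPCP state stays
    distinguishable in the merged view; the naming is illustrative only.
    """
    mib = {}
    for ipcp, rib in ribs.items():
        for obj, value in rib.items():
            mib[(ipcp, obj)] = value
    return mib

# Hypothetical RIB contents for two IPC processes in the same DIF.
ribs = {
    "ipcp.a": {"address": 16, "qos": "reliable"},
    "ipcp.b": {"address": 17, "qos": "reliable"},
}
mib = build_mib(ribs)
```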

As stated above, this experiment cannot cover all functional areas. For the FCAPS areas, the following assumptions can be made to narrow the scope for the experiment:

    • Security - the management of security aspects is covered in the ARCFIRE experiment 4.

• Accounting - the collection of data on resource usage can be realised. Since a DIF has a well-defined scope, such a collection can be done per DIF (and then per IPC process in the DIF) with full understanding of the DIF's scope. Call Data Records (CDRs) or other base information for accounting can then be produced. However, to demonstrate full Accounting Management (AM), we would need to associate the usage data (e.g. CDRs) with user sessions, then individual users, and finally the contracts those users have with the service provider. Building such a system requires a significant amount of resources outside the original scope of the project, yet with (very likely) little added value for the experiment.

• Performance - collection of performance data is very similar to collection of accounting data. However, demonstrating full Performance Management (PM) requires evaluating the collected performance counters against the KPIs a network operator (or service provider) sets for its operation. Those goals differ from operator to operator and from provider to provider. They also depend on the optimisation strategy for a network (for instance, optimise for voice, or data, or coverage, or against connection loss / call drops, etc.). There is little discussion in the literature on common performance KPIs, besides some higher-level goals such as optimal performance. Since general performance tests of a RINA network have been done in other projects, performance management for this experiment would not add much value (besides the resource-heavy and largely arbitrary design and implementation of several optimisation strategies to test).

• Fault - faults are problems in the network that can be observed either directly (the resource and then the Management Agent send an error in the form of a notification) or indirectly (by observing the behaviour of the network and comparing it to the anticipated or required behaviour). An example of a directly observed fault is the loss of connectivity due to a hardware problem (port broken, physical node down). Examples of indirectly observed faults are congestion (monitor the network for indicators of congestion) or the classic sleeping-cell scenario (a radio node is alive, not down or broken, but does not accept any further traffic or connections from mobile equipment). While Fault Management (FM) is often seen as the most important management activity, it is very hard to define fault scenarios for repeatable experimentation with measurable KPIs. On the one hand, we can of course design an experiment in which an IPC process is deliberately broken, creating a fault and resulting in the DMS executing a fault mitigation strategy. On the other hand, RINA is an autonomic network that realises much control (and management functionality, in the IPC process layer management) locally without notifying the DMS. The actual possible fault scenarios in such an environment have not yet been studied in detail, thus making it very hard to design fault management scenarios.

• Configuration - every network operation starts with planning (design a network for given requirements) followed by deployment (of network nodes and other physical resources, and of software components and other virtual resources). Once in operation, a network might (and usually does) require re-configuration. This requirement can be the result of a change in requirements (up to business goals set by the operator), faults in the network (faults which can be mitigated by re-configuration), a required shift in operation due to changed traffic (traffic behaviour or traffic mix impeding, for instance, required performance), the introduction of new accounting measurements required for new accounting (and finally charging) strategies, or changes for the security of the network operation (or parts of it). From this viewpoint, configuration management is the facilitator of all other functional management areas. Thus the main target for this experiment is Configuration Management (CM). Another reason to focus on CM is the current state of support for deploying a RINA network. The RINA Demonstrator is the most efficient way to define a RINA network and deploy it. The Demonstrator automates a large amount of the underlying configuration specification and the deployment actions. In defining strategies for CM for this experiment, we can (i) benefit from the developed Demonstrator (specification requirements and workflow have been developed, tested, and in use for demonstrations for a long time) and (ii) provide a simpler, more dynamic, and more flexible solution than the Demonstrator is today. A DMS with a set of CM strategies can supplement the Demonstrator and help to increase the automated deployment of RINA networks. Adding reconfiguration (in the DMS) will further enhance demo capabilities.

    2.1.5 Aspects, KPIs, Scenarios and Use Cases

Measuring a network management system for performance, cost, and quality is a very difficult task. All three aspects depend on many variables, e.g. network topology, network size, operational optimisation strategies, available resources (compute, storage, network) for the network management system, different deployment options (e.g. clustering) of the network management system, and more. Besides measurable technical KPIs, a network management system has to support KPIs that depend on operator-specific goals and metrics, because a network management system will be used in the specific environment of a particular operator, often a Network Operation Centre (NOC). Some KPIs used are:

    • network server availability

• server (NMS) to administration (NOC personnel) ratio


    • network availability

    • capacity of network path (to and from the NMS and in the managed network)

• Critical outage time (how long does it take the NMS to mitigate critical outages (i) in the network and (ii) for itself)

• Frequency of monitoring checks (how often is the NMS required to pull monitoring information for its own operation)

• Number of affected users (NOC users) on an average basis, i.e. how many people are required to use the NMS to operate a network

    • Indices of smoothness of network changes, service introduction, etc.

• Complexity of network management operations, e.g. how many different actions need to be invoked to solve a problem

• Performance of network management operations, e.g. how many similar actions need to be invoked to solve a problem

    • Workflow complexity, e.g. how complex are workflows for problem solving strategies

• Touchpoints, e.g. how many touchpoints are required to realise a network management operation (zero-touch here means none, as the ultimate goal and key indicator of a high degree of automation in the network management system)

Detailed statistics and performance of Operation Support Systems (OSS) used by mobile operators cannot be obtained. This information is often considered privileged by operators as well as OSS vendors. The same situation applies to metrics on NMS ease of use, involved personnel and other costs, and quality.

The closest equivalent to the ARCFIRE DMS in the real world are SNMP-based management systems for mobile, core, and transport networks as well as for LAN/MAN deployments. However, as [4] states, the performance of SNMP (and related NMSs) has never been fully and properly studied. There is no commonly agreed measurement methodology to assess network management (and SNMP) performance. In addition, similar to OSS, little is known about SNMP (and SNMP-based NMS) usage patterns in operational networks, which impedes the design of realistic scenarios for an NMS analysis.

The search for relevant, measurable, and comparable KPIs for this experiment is further complicated by the radically new value propositions of a RINA network. For the first time we have a fully autonomic network available, including explicit specifications of the scope of layers (DIFs), automatic mechanisms for sharing state between layer elements (the IPC processes with their RIBs inside a DIF), fully policy-based local control (data transfer and data transfer control), and the availability of a complete set of layer management functions (the layer management functions inside an IPC process). Multi-layer (multi-DIF) coordination now becomes a very different and simpler task (compared to current OSS and NMS systems). Even comparing it to cross-layer optimisation is difficult, since this optimisation assumes a role of control over the layers while the ARCFIRE DMS operates in terms of monitoring and repair. The starting point of management activities is very different, making comparisons extremely difficult (or arbitrary).

To provide a consistent, measurable, and comparable set of performance indicators for the DMS we will focus on three aspects:

• Speed: the speed of management operations can be measured by the number of collected, required, and adjusted variables (policies and configurations of DIFs); the number of notifications for a management activity; and experienced delays (in the management activity).

• Cost: can be measured by CPU usage (or, more widely, usage of compute resources in a cloud environment), memory usage (or the usage of storage resources in a wider cloud environment), and bandwidth usage (how much overhead in required bandwidth does management create).

• Quality: quality can be measured in a spatial-temporal context. Spatial here refers to differences in particular variables (configuration and policies) between the DMS and the RINA networks. Temporal refers to errors due to temporal aspects, such as monitoring roundtrip times. Loss can be added to evaluate whether information is lost in transport.
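The three aspects above can be sketched as a simple aggregation over per-operation measurement samples. The field names (`delay_ms`, `variables_touched`, `mismatches`, etc.) are illustrative assumptions, chosen to mirror the quantities listed in the bullets; a real measurement harness would define its own schema.

```python
def management_metrics(samples):
    """Aggregate the three DMS aspects (speed, cost, quality) from samples.

    Each sample is a dict with hypothetical fields: 'delay_ms',
    'variables_touched', 'notifications' (speed); 'cpu_s', 'mem_mb',
    'kbytes' (cost); 'mismatches' (spatial errors), 'lost' (quality).
    """
    n = len(samples)
    return {
        "speed": {
            "avg_delay_ms": sum(s["delay_ms"] for s in samples) / n,
            "variables": sum(s["variables_touched"] for s in samples),
            "notifications": sum(s["notifications"] for s in samples),
        },
        "cost": {
            "cpu_s": sum(s["cpu_s"] for s in samples),
            "peak_mem_mb": max(s["mem_mb"] for s in samples),
            "bandwidth_kb": sum(s["kbytes"] for s in samples),
        },
        "quality": {
            "spatial_errors": sum(s["mismatches"] for s in samples),
            "lost_updates": sum(s["lost"] for s in samples),
        },
    }

# Two hypothetical samples from a management activity.
samples = [
    {"delay_ms": 10, "variables_touched": 3, "notifications": 1,
     "cpu_s": 0.2, "mem_mb": 50, "kbytes": 4, "mismatches": 0, "lost": 0},
    {"delay_ms": 30, "variables_touched": 5, "notifications": 2,
     "cpu_s": 0.3, "mem_mb": 60, "kbytes": 6, "mismatches": 1, "lost": 0},
]
metrics = management_metrics(samples)
```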

For these three aspects, we can now look into relevant KPIs. We have identified the following general KPIs for this experiment:

    • Speed: the speed of management strategies

    • Scale: the required scale of the DMS for speedy operation

• Time and cost for scale-in and scale-out: how timely can the DMS be scaled out (extended) in case of cascading event storms, and how costly is this change of scale

• Touch: how many touches are required for management strategies to succeed (this can be used as an indicator for cost since it determines the extent of human input required for an operation)

• Complexity: how complex are the used management strategies, i.e. how many different operations do they require (internally to make decisions and externally to realise them)

• Degree of automation: using the touch KPI we can estimate the degree of automation the DMS will provide in a network operation


Aspects and general KPIs need to be measured in experiments. Those experiments can be classified as a combination of a scenario with a use case. There are many possible scenarios and use cases to select from. The following scenarios are considered:

• No network: this scenario can be used to benchmark the DMS and all developed strategies. Strategy triggers (and responses) need to be simulated, but actions are not performed. The focus is on the DMS performance only. The DMS configuration is shown in Figure 11.

• Small (minimal) RINA network: a simple deployment of two hosts, each connected to a dedicated border router, both border routers connected to the same interior router. The DMS configuration is shown in Figure 12. The RINA network setup is shown in Figure 13.

• Medium RINA network: a medium-size RINA network as for instance used in the renumbering demonstration (30 nodes to show a network on European scale as used in experiment 3, cf. Figure 21) or the data center demonstration (38 nodes for a medium-size data center using a spine-leaf configuration). The DMS configuration is shown in Figure 12.

• Large RINA network: a large RINA network like the network used in the experiment resembling the AT&T transport network for managed services (as used in experiment 3, cf. Figure 22). The DMS configuration is shown in Figure 12.


    Figure 11: DMS: Stand-alone System for Benchmark Experiments

Each of these scenarios can be combined with each of the following use cases:

• Deploy network from zero: no RINA network exists; the DMS is triggered by an operator mechanism and deploys a complete RINA network.

• Deploy a new service (DIF) in an existing network: a RINA network exists and the DMS manages it; now add services and applications to the network by creating a new DIF (service) and deploying applications.



    Figure 12: DMS: Full System for Experiments with a RINA Network


    Figure 13: Experiment 1: Minimal RINA System for simple Experiments


• Deploy network and service from zero: deploy the network nodes, all DIFs, and an application.

• Add a new node to the network: how many touch points are required to add a new node, e.g. is it possible to create a zero-touch strategy for adding new nodes?

The resulting 4-tuple for this experiment then is aspects, general KPIs, scenarios, and use cases. For concrete experiments we should define concrete KPIs and design the required strategies. It is also clear that a separation into sub-experiments is required. This separation can be done using the scenarios or the use cases. We have decided to take the use cases as the key separator and run all scenarios in each use case.
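The separation described above can be enumerated mechanically: with the use cases as the key separator, each sub-experiment is one use case crossed with all four scenarios. A minimal sketch (label strings are shorthand for the names used in this section):

```python
# The four scenarios and four use cases defined in this section.
scenarios = [
    "no network",
    "small RINA network",
    "medium RINA network",
    "large RINA network",
]
use_cases = [
    "deploy network from zero",
    "deploy new service in existing network",
    "deploy network and service from zero",
    "add new node (zero touch)",
]

# Use cases are the key separator: one sub-experiment per use case,
# each running all scenarios.
sub_experiments = {uc: [(uc, sc) for sc in scenarios] for uc in use_cases}
runs = [run for per_uc in sub_experiments.values() for run in per_uc]
```

This yields four sub-experiments and sixteen (use case, scenario) runs in total, matching the four sub-experiment descriptions that follow.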

The following subsections detail the resulting four sub-experiments. Each sub-experiment addresses one use case for all scenarios, using all KPIs and covering all aspects. Therefore, the sub-experiment descriptions are in parts repetitive. We chose to leave the repetition in this documentation, because different teams may work on only a subset of the sub-experiments.

    2.2 Specifications

    2.2.1 Testbeds

The Virtual Wall will be the main testbed of reference, as reported in D4.2 [5]. The Virtual Wall can provide access to bare-metal hardware to run the RINA implementation and the applications required for the experiment, providing access to an adequate number of resources.

Initially, the jFed experimenter GUI will be used to reserve and set up the resources in the FED4FIRE+ testbeds. Once it becomes available, the experimentation and measurement framework under development in WP3 will provide a more automated front-end for jFed as well as for other IRATI deployment and configuration tools such as the Demonstrator.

    2.2.2 Scenarios

All experiments will be run in four different scenarios.

No RINA Network: This scenario uses triggers from the operator's OSS/NMS (here simulated by a skeleton application) to benchmark the DMS strategy execution. All developed strategies will be tested for speed and scale. The DMS will be a stand-alone DMS as shown in Figure 11.

Minimum RINA Network: This scenario uses the minimum RINA network (2 hosts, 2 border routers, 1 interior router, cf. Figure 13) with an associated strategy for the experiment. The DMS will be a full configuration as shown in Figure 12.

Medium Size RINA Network: This scenario uses a medium-size RINA network, for instance the European network used in experiment 3 as shown in Figure 21, and an associated strategy for the experiment. The DMS will be a full configuration as shown in Figure 12.


    Milestone Month Description

    MS1 M16 DMS Software with Strategy Executor, OSS/NMS trigger, MA/Demonstrator Integration

    MS2 M17 Strategy for experiment defined and tested

    MS3 M18 Strategy for experiment defined and tested

    MS4 M19 Strategy for experiment defined and tested

    MS5 M20 Strategy for experiment defined and tested

    MS6 M21 reference experiment with measurements on LMI reference server

    MS7 M22 continuous experiments for scenario 1 (benchmarking)

    MS8 M24 continuous experiments for scenario 2 (minimal RINA network)

    MS9 M26 continuous experiments for scenario 3 (medium size RINA network)

    MS10 M28 continuous experiments for scenario 4 (large size RINA network)

    Table 1: Milestones for experiment 1

Large Size RINA Network: This scenario uses a large-size RINA network, for instance the US network used in experiment 3 as shown in Figure 22, and an associated strategy for the experiment. The DMS will be a full configuration as shown in Figure 12.

    2.2.3 Planning

    The milestones for the design and specification of experiment 1 are shown in Table 1.

    2.2.4 Experiment 1-1: Deploy Network from Zero

Objectives This experiment assesses the DMS's capability to build a network from scratch using the specification of an operator's network planning, e.g. topology and required network services, as input. At the start, there is no network, i.e. no network node is up and running. The first step is to start the first network node. Each following step adds a new network node according to the given topology.

The configurations of all nodes are handled by the DMS. A single strategy should be specified, which takes a topology (and all required information for specific nodes, e.g. required services and DIFs) and, once triggered, builds the network step by step (i.e. node by node).
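The node-by-node build described above can be sketched as a traversal of the planned topology starting from the first booted node. This is an illustrative sketch only: the function name, the adjacency-list topology format, and the single "boot+configure" step stand in for the real per-node configuration actions the DMS strategy would perform.

```python
def deploy_network(topology, start_node):
    """Sketch of the single CM strategy: bring the network up node by node.

    'topology' maps node name -> list of neighbours; 'start_node' is the
    first node booted. Each step adds one node reachable from the already
    deployed part of the network, following the given topology.
    """
    deployed, log = set(), []
    pending = [start_node]
    while pending:
        node = pending.pop(0)
        if node in deployed:
            continue
        log.append(f"boot+configure {node}")  # per-node configuration by the DMS
        deployed.add(node)
        pending.extend(n for n in topology[node] if n not in deployed)
    return log

# The minimal RINA network of this section: 2 hosts, 2 border routers,
# 1 interior router (names are illustrative).
topo = {
    "host-a":   ["border-1"],
    "border-1": ["host-a", "interior"],
    "interior": ["border-1", "border-2"],
    "border-2": ["interior", "host-b"],
    "host-b":   ["border-2"],
}
log = deploy_network(topo, "host-a")
```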

The experiment is finalised by a second strategy, which deploys a number of tests in the newly built RINA network to evaluate the correctness of the configuration. Those tests can be active (deploy a test service and evaluate correctness) or passive (analyse the configuration of each node as represented in the RIB against the given topology).
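The passive variant of the second strategy can be sketched as a comparison of each node's RIB-reported state against the planned topology. The `neighbours` field and the flat dictionary view of the RIB are assumptions for illustration; a real check would walk typed RIB objects.

```python
def passive_check(topology, rib_view):
    """Passive correctness test: compare each node's RIB-reported
    neighbours against the planned topology; returns nodes that deviate."""
    errors = []
    for node, planned in topology.items():
        reported = set(rib_view.get(node, {}).get("neighbours", []))
        if reported != set(planned):
            errors.append(node)
    return errors

topo = {"border-1": ["interior"], "interior": ["border-1"]}
good_view = {"border-1": {"neighbours": ["interior"]},
             "interior": {"neighbours": ["border-1"]}}
bad_view = {"border-1": {"neighbours": []},           # missing adjacency
            "interior": {"neighbours": ["border-1"]}}
```

An active test would instead deploy a small test service across the DIF and observe its behaviour; the passive form is the only one available in the benchmark scenario, where no network exists.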

The strategy which builds the network can be configured for the three RINA network scenarios. The experiment can then be run for the benchmark scenario and all three RINA network scenarios. We do not anticipate different strategies for the different scenarios.


The initial trigger for building the RINA network comes from the operator's OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the built network in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in section 2.1.5 in terms of speed, scale, time and cost, touch, complexity (of the strategy), and degree of automation. They are further specialised for this experiment as shown in Table 2.

E1-1.1 Speedy node add
Metric: ms or s
Current state of the art: Node creation and configuration is often realised by rather complex workflows that can take several minutes up to an hour (for automated creation).
ARCFIRE objective: The strategy should run at sub-second speed for single node creation.

E1-1.2 Speedy network creation
Metric: ms or s or min
Current state of the art: Creation and configuration of a whole network can, depending on the network complexity, take several hours to several weeks.
ARCFIRE objective: Even the large RINA network should be created in minutes rather than hours.

E1-1.3 DMS scale
Metric: number of strategy executors
Current state of the art: Multiple workflows executed in parallel (or realised by teams working in parallel).
ARCFIRE objective: Ideally only 1 executor required (sufficient speed); more (with parallelised execution) if the speed KPI cannot be met.

E1-1.4 DMS scale-out
Metric: s and CPU usage
Current state of the art: Scaling out a management system that is in operation is often impossible or extremely costly (takes hours, requires multiple touches, is processing intensive).
ARCFIRE objective: Scaling out the DMS should be done in seconds, with one or zero touches, and minimal CPU cost (for the scaling itself).

E1-1.5 Strategy complexity
Metric: number of different operations required
Current state of the art: Currently, CM operations create large sets of CM profiles, one per node.
ARCFIRE objective: From early experiments, we estimate 4-5 different operations per node, replicated for each node to create the network.

E1-1.6 Touches / Degree of automation
Metric: number of touches
Current state of the art: Besides the (not counted) original touch (installation of a hardware node or trigger for software installation), adding a node often requires multiple additional touches for configuration. Early prototypes for VNF deployment can operate on a minimum (potentially zero) touch basis.
ARCFIRE objective: Ideally zero touch.

Table 2: KPIs for experiment 1-1


    2.2.5 Experiment 1-2: Deploy new Service in Existing Network

Objectives This experiment assesses the DMS's capability to add a new network service to an existing RINA network. The starting point is a RINA network, which can be built using the strategies of experiment 1-1. Once the network is built, a new network service (a new DIF) is injected into the existing network. The configuration and scope of this DIF is provided by the operator, e.g. as a planning task. This experiment has only one step: add a new DIF to an existing network.

The configuration and scope of the new DIF is handled by the DMS. A single strategy should be specified, which takes the operator's planning specification and, once triggered, creates the DIF.

The experiment is finalised by a second strategy, which deploys a number of tests in the newly created DIF to evaluate the correctness of its configuration and scope. Those tests can be active (deploy a test service and evaluate correctness) or passive (analyse the configuration of the DIF against the given planning specifications).

The strategy which adds a new DIF can be configured for the three RINA network scenarios. The experiment can then be run for the benchmark scenario and all three RINA network scenarios. We do not anticipate different strategies for the different scenarios.

The initial trigger for adding a new DIF comes from the operator's OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the added DIF in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in section 2.1.5 in terms of speed, touch, complexity (of the strategy), and degree of automation. They are further specialised for this experiment as shown in Table 3.

E1-2.1 Speedy DIF add
Metric: ms or s
Current state of the art: Service creation and configuration is often realised by rather complex workflows that can take several minutes or hours (for automated creation).
ARCFIRE objective: The strategy should run at sub-second speed.

E1-2.2 Strategy complexity
Metric: number of different operations required
Current state of the art: Currently, CM operations create large sets of CM profiles, one per node.
ARCFIRE objective: 1 operation, replicated per creation of an IPC process on participating nodes.

E1-2.3 Touches / Degree of automation
Metric: number of touches
Current state of the art: Creation of new services in the network is an extremely complex and complicated process, with multiple teams and touches.
ARCFIRE objective: Ideally zero touch.

Table 3: KPIs for experiment 1-2


    2.2.6 Experiment 1-3: Deploy Network and Service from Zero

Objectives This experiment assesses the DMS's capability to build a new RINA network and add all required network services, application services, and applications to it. The starting point for this experiment is a topology based on the operator's topology for the network, the requirements for network and application services based on the operator's specifications, and a set of applications that must be supported by the network as defined by the operator.

At the start, there is no network, i.e. no network node is up and running. The first step is to start the first network node. Each following step adds a new network node according to the given topology. Then, the required services are added (e.g. related DIFs created). Finally, all required infrastructure for the applications is deployed in the network. For the applications, we will use the standard IRATI applications rina-echo-time and rina-tgen.

All configurations are handled by the DMS. A single strategy should be specified that creates the network, adds all required services, and finally deploys the application infrastructure.

The experiment is finalised by a second strategy, which deploys a number of tests in the newly created network to evaluate the correctness of its configuration and scope. Those tests can be active (deploy a test application and evaluate correctness) or passive (analyse the network's configuration against the given operator's specifications).

The creating strategy can be configured for the three RINA network scenarios. The experiment can then be run for the benchmark scenario and all three RINA network scenarios. We do not anticipate different strategies for the different scenarios.

The initial trigger comes from the operator's OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the network in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in section 2.1.5 in terms of speed, scale, time and cost, touch, complexity (of the strategy), and degree of automation. They are further specialised for this experiment as shown in Table 4.

    2.2.7 Experiment 1-4: Add new Network Node - Zero touch

Objectives This experiment assesses how many touch points are required to add a new node to an existing RINA network. We consider any required manual action or necessary interaction of the DMS with an administrator as a touch point. Ideally, the DMS can add a new node with zero touches, i.e. fully automated. It is important for this experiment to establish how close the DMS can come to a zero-touch management system and what information is required to achieve that. Once described, it will also be of value to minimise the information required.

The starting point for this experiment is a network of at least one node, since the first node might require more configuration than any following node. The experiment can then be run for every node described in a given topology. Ideally, the DMS can add all new nodes automatically.


KPI | Metric | Current state of the art | ARCFIRE Objective
E1-3.1 Speedy node add | ms or s | Node creation and configuration is often realised by rather complex workflows that can take several minutes up to an hour (for automated creation) | The strategy should run at sub-second speed for single node creation
E1-3.2 Speedy network creation | ms, s or min | Creation and configuration of a whole network can, depending on the network complexity, take several hours to several weeks | Even the large RINA network should be created in minutes rather than hours
E1-3.3 Speedy DIF add | ms or s | Service creation and configuration is often realised by rather complex workflows that can take several minutes or hours (for automated creation) | The strategy should run at sub-second speed
E1-3.4 DMS scale | number of strategy executors | Multiple workflows executed in parallel (or realised by teams working in parallel) | Ideally only 1 executor required (sufficient speed), more (with parallelised execution) if the speed KPI cannot be met
E1-3.5 DMS scale-out | s and CPU usage | Scaling out a management system that is in operation is often impossible or extremely costly (takes hours, requires multiple touches, is processing intensive) | Scaling out the DMS should be done in seconds, with one or zero touches, and minimal CPU cost (for the scaling itself)
E1-3.6 Strategy complexity | number of different operations required | Currently, CM operations create large sets of CM profiles, one per node and service | From early experiments, we estimate 4-5 different operations per node, replicated for each node to create the network, plus deploying the required network services
E1-3.7 Touches / Degree of automation | number of touches | Adding nodes and services often requires multiple touches, realised by multiple teams. Early prototypes for VNF deployment can operate on a minimum (potentially zero) touch basis. | Ideally zero touch

Table 4: KPIs for experiment 1-3


All configurations are handled by the DMS. A single strategy should be specified that adds a new node to the network. After adding a new node, a second strategy should be triggered to evaluate the correct configuration of the newly added node against the given topology.

The initial trigger comes from the operator’s OSS/NMS. This will be simulated by a skeleton OSS/NMS trigger application in the DMS. Evaluation of the network in the benchmark scenario can only be performed passively.

Metrics and KPIs This experiment uses the KPIs introduced in section 2.1.5 in terms of touch and complexity (of the strategy). They are further specialised for this experiment as shown in Table 5. This experiment should quantify:

KPI | Metric | Current state of the art | ARCFIRE Objective
E1-4.1 Touches / Degree of automation | number of touches | Besides the (not counted) original touch (installation of the hardware node or trigger for software installation), adding a node often requires multiple touches for configuration. Early prototypes for VNF deployment can operate on a minimum (potentially 0) touch basis. | zero touch
E1-4.2 Strategy complexity | number of different operations required | Currently, CM operations create large sets of CM profiles per node | From early experiments, we estimate 4-5 different operations per node

Table 5: KPIs for experiment 1-4

    2.3 Summary

In this section we have detailed ARCFIRE experiment 1, focusing on multi-layer coordination in a converged service provider network. We have defined the main objective, the current problem, and the solution that a multi-layer approach should provide in the context of RINA. We have also detailed why we are focusing on configuration management. The experiment is then based on a 4-tuple of aspects, KPIs, scenarios, and use cases. This tuple led to the definition of four sub-experiments covering the four described scenarios, providing quantitative results for all four use cases, specifying individual KPIs and addressing all described aspects.


3 Experiment 2: Scaling up the deployment of resilient multimedia services over heterogeneous physical media

    3.1 Introduction

Deploying resilient multimedia services is one of the most challenging aspects that network operators are currently facing. The large number of nodes and users, together with the strict QoS requirements of today’s network services, makes the network design phase a complex one. The resiliency aspect adds further complexity, as the network operator must cope with different kinds of software and hardware failures, both in the network and in the service applications.

A first observation is that the generality of the RINA architecture leads to the overall expectation that the effectiveness of RINA will increase the further we move up the network hierarchy (i.e. away from physical layer constraints) and the larger the network we consider, in terms of end-user applications, deployed routers, management domains, and so on. From this general observation, we expect RINA to outperform 5G proposals built on SDN solutions due to a general reduction of complexity, reducing the number of elements necessary to operate the network at the same level of efficiency and performance.

Another area where we expect RINA to perform very well is reliability. Using a routed solution in small DIFs, we can tailor routing updates to react very fast to changes in network connectivity. This has already been investigated in the PRISTINE project at a smaller scale [6]. Whatevercast naming also has the potential to simplify multicast delivery and mitigate connection interruptions that cannot be handled within a DIF (when the graph becomes separated into different components). Combined with the RINA application naming scheme, we expect measurable decreases in connection downtime in the presence of adverse network conditions.

Following these directions, experiment 2 aims at exploring how to deploy resilient multimedia services over scaled-up RINA networks, when different types of access networks are used. The experiment will show how RINA can operate services over heterogeneous physical media, such as fixed access, LTE Advanced and Wi-Fi. These access technologies are described in depth in [1].

    3.2 Objectives

This experiment will focus on delivering multicast multimedia services at scale to users connecting over different access technologies. The most important aspect considered is scalability, in terms of network elements (nodes such as routers and end-user devices), user connections, and services. The objectives are minimising overhead between the end user and the base station for LTE users, optimising overall network resource usage (bandwidth, routing tables) and providing resilience to node and link outages due to failures and planned maintenance. Multicast and anycast are not part of the RINA prototypes yet, and have not been investigated experimentally before. With the improved software frameworks developed by ARCFIRE, we will be able to provide results in much more detailed scenarios and at larger scale.


Sticking points for a RINA deployment will be where hardware constraints are felt the most, close to the transmission equipment. LTE has a very efficient solution for reducing overhead between the eNodeB and the UE in the RObust Header Compression (ROHC) technology used by PDCP, replacing the L3 and L4 headers (IP/UDP/RTP) with a very small token, reducing 40 bytes (or even 60 bytes in the case of IPv6) to 2-3 bytes [7]. RINA’s layered structure, with encapsulation at each layer to maintain transparency, may introduce additional overhead at this point. As part of this experiment, we will investigate policies for header compression to reduce the Protocol Control Information in the PDUs on the eNodeB-UE link.
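The idea behind this kind of compression can be illustrated with a minimal sketch: the static part of a flow's header is transmitted once to establish a shared context, after which packets carry only a small context identifier. This is an illustration of the principle only, not the real ROHC profiles, states or bit formats; the 2-byte token and the framing are assumptions made for the example.

```python
# Minimal sketch of context-based header compression in the spirit of ROHC:
# send the full static header once to establish a context, then replace it
# with a 2-byte context ID. Not the actual ROHC wire format.
import struct

class Compressor:
    def __init__(self):
        self.contexts = {}      # static header bytes -> context id

    def compress(self, static_hdr: bytes, payload: bytes) -> bytes:
        cid = self.contexts.get(static_hdr)
        if cid is None:                      # first packet of the flow:
            cid = len(self.contexts)         # establish a new context and
            self.contexts[static_hdr] = cid  # send the full header once
            return struct.pack("!BH", 0xFF, cid) + static_hdr + payload
        return struct.pack("!H", cid) + payload  # 2 bytes instead of 40+

comp = Compressor()
hdr = bytes(40)                           # e.g. a 40-byte IP/UDP/RTP header
first = comp.compress(hdr, b"voice0")     # full header, context established
later = comp.compress(hdr, b"voice1")     # 2-byte token + payload only
```

After the first packet, every packet of the flow carries 2 bytes of overhead instead of 40, which is the order of saving that an equivalent RINA header-compression policy on the eNodeB-UE link would aim for.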

    3.3 Experiment scenario

Experiment 2 considers fixed and mobile end users accessing a variety of services. There will be four services available in the experiments (Web, Multimedia, Gaming and File Transfer). These users will reside across Europe, connected with each other over a single core network. The reference applications for the services are the nginx web server, the Midori web browser, VLC for the multimedia service, ioquake3 for the gaming service and Filezilla for the file transfer. Ports of these applications to the RINA POSIX-like API are already available for nginx (the port will be released as open source in the next month) and ioquake3 [8]. The other services will be validated by simulating traffic with RINA tools such as rina-tgen and rinaperf (described in [9], Section 3.1). If ports of the other applications become available (e.g. a browser or a video streaming server/client), they will be adopted in the experiments as well. All these services can be started and stopped by means of the Rumba framework.

We will look into the steps required to provide multicast in RINA networks, since it is commonly used in provider networks. If the solution can be implemented within the project timeframe, we will add it to one of the prototypes. We can then perform measurements relevant for multicast services, such as video streaming, which will be simulated if a ported application is not available. Relevant metrics will be the overhead to set up a multicast flow, the bandwidth consumption and the reliability of the multicast flow.

The physical network graph of the experiment is shown in Figure 14. The access networks are servicing users and are connected to metropolitan networks. The metropolitan networks are connected to each other via a core network. In the access networks, we will investigate three access technologies: fixed (Gb Ethernet), LTE and Wi-Fi (not shown in the figure). The LTE access network in the figure consists only of the User Equipment (UE) and the eNodeB, since this is the connection we are mostly interested in, to validate that RINA can offer a similar solution to RObust Header Compression (ROHC). Usage statistics will probably be monitored at the shim IPCPs on the eNodeBs. In the metro and core network, we will build on physical machines with 1G and 10G Ethernet links for bare-metal measurements. For scaled-up experiments, we will extend the access networks by using virtual machines in GENI and iLab.t (interconnected wherever possible) to perform experiments that are larger in terms of hosts, but less demanding in terms of bandwidth usage. The VMs will run applications that can also work with low bandwidth, like web browsing and measurement tools (ping, netperf, rina-tgen, rinaperf, ...). More demanding host applications will also run on bare metal.

Figure 14: Extended experiment

The initial scenario of the experiment will consist of different users arriving at the network to use their services, and leaving when they are done. The arrival and departure rates will be Poisson distributed. Of course, not all users share the same intensity of service usage, so we will also use a Poisson-based model for the usage of the different services. For RINA this means that the number of flows that will be set up will vary depending on the user.
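Poisson arrivals correspond to exponentially distributed inter-arrival times, so the user model above can be sketched in a few lines. The rates, session-length model and service mix below are hypothetical example values, not the parameters that will be used in the experiment.

```python
# Sketch of the user arrival model: Poisson arrivals (exponential
# inter-arrival times) and exponentially distributed session durations.
# All numeric parameters are made-up example values.
import random

random.seed(42)                       # reproducible example run

ARRIVAL_RATE = 2.0                    # users per minute joining the network
MEAN_SESSION = 10.0                   # mean session length in minutes
SERVICES = ["web", "multimedia", "gaming", "file-transfer"]

def generate_sessions(horizon_min=60.0):
    """Return (arrival_time, departure_time, service) tuples."""
    t, sessions = 0.0, []
    while True:
        t += random.expovariate(ARRIVAL_RATE)      # next Poisson arrival
        if t >= horizon_min:
            break
        duration = random.expovariate(1.0 / MEAN_SESSION)
        service = random.choice(SERVICES)          # per-user service mix
        sessions.append((t, t + duration, service))
    return sessions

sessions = generate_sessions()
# the number of concurrently active users drives the number of flows that
# the RINA network has to hold at any point in time
concurrent_at_30 = sum(1 for a, d, _ in sessions if a <= 30.0 < d)
```

Sweeping the arrival rate and session length then gives the varying flow-setup load that the experiment will impose on the network.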

We will investigate any possible problems while the network is operating (either architectural problems or implementation problems). If everything is operating as expected, then we will gather statistics while the experiment is running. On the one hand, we will get output from some of the applications that were launched. On the other hand, we will gather interesting data from the IPCPs themselves. As an example, the different routing table sizes in the network may be an interesting observation. We will also look into other events, such as the resiliency of the network. We will first investigate what the consequence of link failures is in a RINA network. Later on we will extend this to nodes, and of course to application failures, most notably IPCP failures.

Once we have obtained results from this original setup, we will focus on scaling up the RINA scenario to more connected access networks (6 or 7), serving a combined total of 1000 users. The 3 metropolitan networks will be scaled up as well, by first adding more nodes and later on by adding more metro networks. Obviously we will also scale up the number of core nodes to support all the extra traffic. This will of course also depend on the available resources, as the nodes in the core network are preferably interconnected by 10G links, a limited resource in most testbeds.

    3.4 Metrics and KPIs

The objectives of this experiment are to quantify how the overhead due to Protocol Control Information (PCI) - the network headers - can be reduced in the network when constrained link resources are encountered. Routing scalability will be investigated by dumping routing tables to storage at certain intervals and assessing the number of routing entries. These functionalities are already supported by the available RINA implementations. Applications such as netperf or rina-tgen can measure peak, instantaneous and average bit rates and the total time to complete a file transfer. By using tcpdump and RINA tools we can measure gaps in packet transfers to assess failure recovery times and the packet loss during recovery. By logging all traffic using tcpdump/wireshark, we can analyse the total bandwidth consumed on each network link to assess network resource usage efficiency.
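The gap-based measurement described above can be sketched as a small post-processing step over a packet trace: the failure interruption time is the largest gap between consecutively received packets, and the loss is the count of missing sequence numbers. The (timestamp, seqnum) trace format is an assumed simplification of what would be exported from tcpdump.

```python
# Sketch of post-processing a received-packet trace to obtain two KPIs:
# failure interruption time (largest inter-packet gap) and packet loss
# (missing sequence numbers). Trace format is an assumed simplification.

def failure_kpis(trace):
    """trace: list of (timestamp_s, seqnum) for received SDUs, in order."""
    gaps = [b[0] - a[0] for a, b in zip(trace, trace[1:])]
    interruption_ms = max(gaps) * 1000.0 if gaps else 0.0
    expected = trace[-1][1] - trace[0][1] + 1   # seqnums we should have seen
    lost = expected - len(trace)
    return interruption_ms, lost

# toy trace: a link failure around t=1.0 s, recovered ~60 ms later,
# with sequence numbers 3 and 4 lost during the outage
trace = [(0.96, 0), (0.98, 1), (1.00, 2), (1.06, 5), (1.08, 6)]
ms, lost = failure_kpis(trace)   # interruption of ~60 ms, 2 SDUs lost
```

On a real trace, the baseline inter-packet gap of the traffic source has to be subtracted from the largest gap to isolate the restoration time itself.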

    These KPIs are summarised in Table 6.

KPI | Metric | Current state of the art | ARCFIRE Objective
PCI overhead | bits | RObust Header Compression (ROHC) | comparable to ROHC
Routing scalability | entries in a routing table | logarithmic in theory, superlinear in practice | logarithmic in practice
Failure interruption time | ms | sub-50 ms restoration | sub-20 ms measured in the testbed
Application goodput | b/s | application-dependent | application-dependent
Link resource utilisation | % | scenario-dependent | scenario-dependent
Packet loss during failures | number of SDUs | 0 | 0

Table 6: KPIs for experiment 2

These KPIs will be measured under different circumstances and traffic loads. The traffic will be generated by gradually scaling up the number of end-user devices, the number of network nodes (routers), the number of deployed services and the total number of user connections. These results will show how each of these KPIs scales and, taken as a whole, provide insight into overall network scalability.

    3.5 Testbeds

The experiment will use the testbeds in Table 7 from the selection made in D4.2 [5]. These testbeds have been chosen taking into account the previous know-how of the T4.3 partners, and to support the number of nodes required during the various phases of the experiment (i.e. ranging from ∼30 to ∼1000 nodes). Moreover, running the experiment on both a VM-based testbed (GENI) and a bare-metal testbed (Virtual Wall) has been considered an important factor for the evaluation of RINA software stability, as the significant difference in I/O timings [1] can result in broader software test coverage.

Testbed | Purpose
iLab.t Virtual Wall | Host the first experiments up to large-scale testing. Measurements of bandwidth, delay, jitter and burst characteristics on dedicated links and bare-metal hardware
w-iLab.t | Validation of the LTE setup and measurements of eNodeB-UE bandwidth usage
GENI / PlanetLab Europe | Emulation of a continent-wide core network, evaluation of scalability (routing table sizes) on VMs

Table 7: Testbeds for experiment 2

    3.6 Planning

Table 8 details the expected milestones for experiment 2 throughout the execution of ARCFIRE. The experiment will start with small-scale setups (∼30 nodes), with the end goal of scaling up to ∼1000 nodes.

Milestone | Month | Description
Deploy RINA software in the initial scenario and detect any problems | M16 | This will provide feedback to WP3.
Obtain measurements from RINA networks in normal operating conditions | M20 | This will conclude the first small-scale RINA experiment.
Scale up RINA deployment to mid-scale experiment | M22 | 5 connected access networks, more nodes in metro networks, and an overall increase in the number of users and services
Document measurements from the mid-scale experiments | M24 | This will conclude the mid-scale experiments.
Scale up RINA deployment to large-scale experiment | M26 | 7 connected access networks, more metro networks, more nodes in the core network, and a large increase in the number of users and services used
Wrap up experimentation | M29 | Detailed analysis of the results and preparation for publication of the final results.

Table 8: Planning of experiment 2

The risk factors for experiment 2 are mostly related to the maturity of the experimentation software. The RINA prototypes used do not (and cannot) have the same degree of maturity as well-established IP-based software. This relative lack of maturity can reveal itself in two ways. Moving an implementation from its development environment to the testbeds may reveal corner cases (or, in general, code paths never taken before) that are related to the specific hardware it is running on. Machines with different numbers of processors, processor microarchitectures, memory speeds, cache sizes and NIC models can reveal timing issues and data races that were previously less visible. Scaling up the implementation will increase the rate at which any deficiency reveals itself. In addition, the likelihood of software failures increases (or at least does not decrease) as the duration of the experiment increases.

[1] I/O operations involving virtual machines can be an order of magnitude slower in terms of throughput and/or latency when compared to I/O operations on bare metal.

As a quantitative indication of the largest and longest RINA experiments conducted to date on the available implementations, IRATI has been scaled up on networks of virtual machines containing about 40 nodes; rlite has been deployed on networks of VMs with about 90 nodes. Moreover, these test networks have been operational only for some hours, while ARCFIRE targets at least a week of uninterrupted operation.

    These risks are mitigated in two ways:

• T3.4 (which runs until the end of the project) is dedicated to fixing bugs, memory leaks and other problems found during the various planned phases of the experiment.

• Three independent RINA implementations are available to be used for experiment 2, so that fallback options are available if major issues should arise with any one of these implementations.


    4 Experiment 3: RINA as an alternative to MPLS

    4.1 Objectives

This experiment will explore a couple of configurations in which RINA is used in an equivalent role to that of MPLS (Multi-Protocol Label Switching) and its associated control plane protocols. MPLS is widely used in the industry as a network substrate that can transport a diverse set of IP and Ethernet services for the residential, business and mobile markets over a consolidated infrastructure. RINA can play the same role as MPLS, providing a more flexible and scalable yet simpler framework that can be tailored to different deployment scenarios. Experiment 3 will explore a BGP-free service provider core network, supporting layer 3 VPNs and transport of Internet traffic. This scenario has been analysed in deliverable D2.1 [1], specifically in sections 2.3 and 2.6.1.

MPLS networks forward traffic based on fixed labels present in the header of MPLS packets. Each MPLS router looks up the incoming packet’s label in an internal database and obtains an output interface and an outgoing label (routers swap the labels in the packet header). Ingress MPLS routers push an initial label according to a classification of the higher-level protocols transported by the MPLS network (IPv4 or v6 Internet, VPLS instances, Layer 3 VPNs, etc.). This classification is known as the FEC (Forwarding Equivalence Class) in MPLS terms. This initial label determines how the flow of packets belonging to the higher-level protocols will be forwarded through the MPLS network. A specific path through the MPLS network is called a Label Switched Path (LSP), and is defined by the set of labels at the MPLS routers traversed by the LSP.
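The lookup-and-swap behaviour described above can be sketched in a few lines. The topology, label values and table layout below are made-up examples to illustrate the mechanism, not a model of any particular router.

```python
# Sketch of MPLS forwarding: the ingress pushes the label chosen by FEC
# classification, each LSR maps incoming label -> (out interface, out label)
# and swaps, and the egress pops. Labels and topology are example values.

# per-router label table: in_label -> (out_interface, out_label)
TABLES = {
    "PE1": {},                                   # ingress: pushes, no swap
    "P1":  {100: ("if2", 200)},
    "P2":  {200: ("if3", 300)},
    "PE2": {300: ("pop", None)},                 # egress: pops the label
}
FEC_TO_LABEL = {"internet-v4": 100}              # FEC classification at ingress

def forward(fec, path):
    """Follow an LSP, recording the label carried towards each hop."""
    label, labels_seen = FEC_TO_LABEL[fec], []
    for router in path[1:]:                      # label lookup at each LSR
        labels_seen.append(label)
        out_if, out_label = TABLES[router][label]
        if out_if == "pop":                      # egress pops and delivers
            return labels_seen
        label = out_label                        # swap for the next hop
    return labels_seen

hops = forward("internet-v4", ["PE1", "P1", "P2", "PE2"])  # [100, 200, 300]
```

The key property the sketch shows is that labels only have hop-local meaning: each router's table is independent, which is what the label distribution protocols discussed below have to coordinate.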

[Figure: PE and P routers carrying customer IP (Internet), IP (VPN) and Ethernet (VPN) services over an MPLS-based core network; packets carry an L2 header, the hierarchical LSP label stack (LSP1 and LSP2 labels) and the VPN/PseudoWire service label in front of the IP header.]

    Figure 15: Data plane of 2-layer hierarchical LSP scenario for a core service provider network

MPLS packets can carry stacks of labels that are used to multiplex various instances of higher-layer services over the same LSP, as is the case for IP VPNs: one label identifies the VPN instance, while the other label is used to forward the MPLS packet through a particular LSP. Label stacks are also used in the case of hierarchical LSPs that help scale up large MPLS networks: in this case the MPLS network is divided into two or more domains, e.g. metro and core. Core routers set up a mesh of LSPs between them, while LSPs between metro routers are multiplexed over a core LSP at the metro/core edge [10]. An example of such a configuration is depicted in Figure 15.

Multiple control plane protocols are used to choose, distribute and maintain the label sets required by the MPLS forwarding plane. LDP, the Label Distribution Protocol, is typically used to set up LSPs with no hard QoS guarantees in an automated fashion. LDP requires the use of a routing protocol within the MPLS network (an IGP, or Interior Gateway Protocol), since it only negotiates labels locally between adjacent nodes. RSVP-TE is used to signal a traffic-engineered LSP across a set of MPLS routers. RSVP-TE can provide hard QoS guarantees and fast restoration times, but requires manual setup and keeps more state than LDP in the MPLS routers. iBGP (interior Border Gateway Protocol) or T-LDP (Targeted LDP) are typically used to distribute the MPLS labels that differentiate service instances (e.g. VPNs).
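The hop-by-hop label negotiation that LDP performs can be sketched as follows: each router along the path allocates a local label for a FEC and advertises it to its upstream neighbour, which installs the corresponding swap entry. This is a heavily simplified illustration of the principle of downstream label advertisement, not the LDP message exchange itself; router names and label values are made up.

```python
# Sketch of LDP-style downstream label distribution: starting at the egress,
# each hop allocates a local label for the FEC and advertises it upstream,
# where a swap entry is installed. Not the real LDP sessions or messages.
import itertools

def distribute_labels(fec, path):
    """Build swap tables for one FEC along `path` (ingress ... egress)."""
    alloc = itertools.count(100)                 # toy per-network allocator
    tables = {r: {} for r in path}
    downstream_label = None                      # egress advertises first
    for router in reversed(path[1:]):            # every hop except ingress
        local = next(alloc)                      # label this router expects
        if downstream_label is None:
            tables[router][local] = ("pop", None)        # egress pops
        else:
            tables[router][local] = ("swap", downstream_label)
        downstream_label = local                 # advertised to the upstream
    return tables, downstream_label              # label the ingress pushes

tables, ingress_label = distribute_labels("vpn-42", ["PE1", "P1", "PE2"])
# the ingress PE1 pushes `ingress_label`; P1 swaps it and PE2 pops it
```

Because each allocation is purely local between adjacent nodes, the scheme follows whatever paths the IGP has computed, which is why LDP needs a routing protocol underneath it.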
