
Test Plan

UKERNA QoS Development Project

Version 1

Anthony Ryan, University of Manchester

Tim Chown, University of Southampton

Steve Williams, Swansea

Rina Samani, UKERNA

Victor Olifer, UKERNA

February 2004

ND/ATG/QOS/DOC/017


1. Introduction

This document outlines the testing environment and architecture for the JANET QoS prototype service within the JANET QoS Development Project. The goals of the Project are:

- to evaluate the effectiveness of QoS services for the applications used by the JANET community;

- to assess the effort required to deploy and support QoS services on the production network;

- to evaluate the capability of modern routers to support QoS services.

The focus of the test phase will be on two classes of service, IP Premium and Less than Best Efforts (LBE), and the testing will comprise a range of real applications with different timing and reliability requirements.

Included within this document are sections providing detailed descriptions of a number of the applications that will be used to test the QoS prototype service, the type of testing to be carried out with indicative timescales, and what we hope to achieve from the tests. Any implications associated with the testing will also be highlighted.

In order to verify the QoS model deployed on JANET, a separate monitoring infrastructure will also be implemented to assess the performance of the QoS prototype service during the testing phase. An evaluation of the results obtained from the various tools, together with an evaluation of the tools themselves, will also be carried out during the test phase.

2. QoS Testing

The aims and objectives of the QoS testing phase are to verify that the QoS service model, as described in the QoS policy (http://www.ja.net/development/qos/PolicyFrameworkV3-0.pdf) and technical framework (http://www.ja.net/development/qos/TechnicalFramework.pdf) documents, works, and that it provides certain applications with the type of service they require during congestion periods (either better than best efforts, or less than the best efforts service).

The policy framework defines DiffServ technology with a point-to-cloud (destination-unaware) model as the basis for the QoS services to be deployed as part of this project. The main feature of this model is that each router of a network processes packets in a differentiated manner, this differentiation being made on the basis of the DSCP value of a packet, disregarding the destination address (path) of the packet. This feature makes QoS services simple to deploy and manage, but it has a negative side effect, which arises when a large number of Premium flows converge on the same output interface of a router, so that the interface becomes congested by Premium traffic. The point-to-cloud model cannot prevent this situation completely, as routers and network administrators do not check the paths of flows and network users have the right to direct their flows wherever they like. To decrease the probability of these situations occurring, and therefore to increase the probability of providing a stable QoS service, the border routers of the DiffServ network should police Premium traffic to maintain it at relatively low levels. Another possible preventive method to keep the DiffServ model working properly is to limit the number of legitimate sources of Premium traffic and remark all other sources of Premium to BE.

During the tests described in this document an accidental overloading of an output interface by Premium traffic is rather unlikely, as: a) the Premium traffic flows of the tested applications are under the control of the testers; b) natural Premium flows across JANET are negligibly low; c) the chosen 5% policing limit on the border routers of the JANET backbone, Regional Networks and campus networks protects (to some extent) the networks from flooding by Premium traffic.

The overall plan is to configure queuing and policing on all the routers within the JANET core and on the backbone access routers (BARs), as well as to configure appropriate queuing and policing on all the routers within the Regional Networks and site networks participating in the QoS testing phase.

Once the configuration is complete, various tests will be conducted; these include:

• local testing within a site's network;

• scheduled tests between different sites across the JANET backbone during busy hours under natural load;

• scheduled tests between sites across the JANET backbone carried out during the JANET at risk period under controlled congestion.

2.1. Local Testing

Local testing (LAN) has two main aims:

1. To allow the QoS project partners to test and gain some experience in traffic generation, traffic measurement, router configuration and polling of router counters on a live network. This experience will be useful when evaluating a varied range of routers and their QoS capabilities, as well as serving as a basis for conducting WAN tests on the JANET production network.

2. To conduct the majority of the tests and to explore router behaviour under extreme loading conditions with different parameters. The local test environment allows more frequent tests to be conducted in comparison to the scheduled tests on the production network. Some of the partners of the project will be using their local lab networks to carry out local experiments so that there is no risk of affecting the production service.

2.2. Scheduled Testing under natural congestion (WAN-natural)

These kinds of tests aim to compare the behaviour of real applications in a production network where congestion exists at certain points. The behaviour of the applications' traffic will be examined under two conditions: when the traffic (e.g. VoIP traffic) is processed as BE (i.e. receives 'common' processing) and when it is processed as a special DiffServ class, i.e. Premium for VoIP and video, and LBE for AccessGRID. This comparison of the traffic behaviour will allow the impact of QoS deployment on real applications to be assessed.

Some naturally loaded links have been identified by the universities and regional networks that are participating in this project; these links are sufficiently loaded with traffic and can be used for the scheduled testing, so no artificial load needs to be produced. These types of tests will not need to be conducted during the JANET at-risk period, as they will not affect the production traffic; their duration therefore does not need to be defined (during the at-risk periods, tests will be conducted only three times, for no more than 10-15 minutes each time).

It is envisaged that these tests will also help the project group to prepare for the tests that are scheduled to take place during the JANET at risk period.

2.3. Scheduled Testing under controlled congestion (WAN-controlled)

The intention is to schedule three tests over a period of one month, each with a duration of 10-15 minutes, to take place during the JANET at-risk period. The aim is to run a range of applications marked as IP Premium, BE and LBE across JANET and assess the behaviour of the QoS services under extremely congested conditions. The monitoring infrastructure will be enabled both to generate traffic and to monitor/measure the effects and performance of the applications with and without QoS enabled on the send and receive paths.

Appropriate points on the production networks participating in the tests will be identified where artificial traffic load can be applied, and the state of the networks involved will be monitored by examining the MIB counters of the various routers. As the JANET backbone is lightly loaded, the most likely places to find congestion are the campus and Regional networks. NOTE: it was agreed that during the QoS test phase the JANET backbone routers will not be artificially loaded.

3. Subjective and Objective testing

During the local and scheduled testing phases, two types of measurement will be required; these are:

• Subjective assessment

• Objective measurement

3.1. Subjective assessment

For each application a set of subjective parameters (e.g. echo, crackling or word clipping for VoIP) will be observed. The subjective assessment parameters for each application are described within the sections below.

3.2. Objective testing

For objective testing, a testing infrastructure will be created. This infrastructure should include the following elements:

- a set of real-world applications;

- traffic-generating tools;

- monitoring infrastructure.

ND/ATG/QOS/DOC/017 4

Page 5: Qos Testplan Final

• The application set will include VoIP, IP videoconferencing and bulk GRID applications.

• Traffic-generating tools for creating artificial load will include IPERF, RUDE/CRUDE.

• The monitoring infrastructure will consist mainly of two types of tools; other tools will be used to allow the results to be compared and an evaluation of the monitoring tools to be carried out:

− Tools to be used for end-to-end measurement of basic QoS metric parameters (such as One Way Delay, Jitter, Packet Loss and Packet Misordering) include Cisco's SAA and IPERF. Both tools generate synthetic packets which are added to the application traffic. The QoS parameters of the synthetic packets will be measured and then used to evaluate the corresponding traffic parameters of the applications under test. Both tools are manually controlled, in the sense that they do not support programmed scenarios.

− Tools for monitoring the state of the routers' queues (e.g. Cisco QoS MIB counters or NetFlow parameters) during the experiments. This type of information is necessary to make sure that observed effects (delays, jitter, loss, misordering) are the direct result of a change in the controlled parameters (load, router configuration etc.) in the tested router(s), rather than an outcome which is beyond our control (e.g. a link break or an external traffic burst). NOTE: the particular set of MIB counters will be specific to each router vendor (and possibly router model). NetSight (a special customised development version) and SNMP walkers or MRTG will be used. These tools can periodically poll specified MIB counters at specified intervals (5 minutes by default and 1 minute as a minimum).
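Such counters can also be polled with a simple script. Below is a minimal sketch, assuming the net-snmp command-line tools are installed and the router allows SNMPv2c read access; a standard IF-MIB discard counter is used as a stand-in for the vendor-specific QoS MIB objects, and the router address, community string and ifIndex are illustrative only.

import subprocess, time

ROUTER = "192.0.2.1"                    # hypothetical router under test
COMMUNITY = "public"                    # assumed read-only community
OID = "IF-MIB::ifOutDiscards.3"         # output drops on ifIndex 3 (assumed)

def poll_counter():
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", ROUTER, OID],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

previous = poll_counter()
while True:
    time.sleep(60)                      # 1-minute polling, the minimum used in the plan
    current = poll_counter()
    print(time.strftime("%H:%M:%S"), "drops in last interval:", current - previous)
    previous = current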

Section 4 of this document describes in detail the traffic-generating and monitoring infrastructure to be deployed.

4. Testing schemes

A possible scheme for LAN testing is shown in fig. 1.

ND/ATG/QOS/DOC/017 5

Page 6: Qos Testplan Final

Fig.1. LAN testing scheme – an example. (The original figure shows an application generator and an application receiver (two Cisco IP phones), an IPERF client and server, an SAA probe measuring reflected traffic and an SAA responder, two load generators (Load1 and Load2) and a load destination, all connected via hubs/switches on either side of the router under test, with NetSight/MRTG polling QoS MIB objects on the monitored queues every minute.)

This scheme is designed specifically for testing the IP Premium service using the VoIP application. Artificial congestion is created on the output interface of the router under test using several traffic generators (noted as Load1 and Load2 in the figure above).

The IPERF client and SAA responder are measurement tools that add synthetic packets to the application traffic so that the IPERF server and SAA probe can measure the QoS metrics at the other end of the network. SAA traffic is generated by the SAA probe, reflected by the SAA responder, and the probe then measures the QoS metrics of the reflected traffic. The direct SAA traffic flow from the probe is not shown in fig. 1.

Please note that the above scheme provides only one-way measurement. To measure QoS metrics in both directions the scheme would need to be mirrored (or each node would need to play a dual role, e.g. a workstation would need to act as both the IPERF client and the IPERF server).

During the experiments, NetSight/MRTG will be used to poll QoS MIB counters from the router's output and input ports. Polling should be synchronised from the commencement of the experiment. Each MIB counter is an accumulated value rather than a history of values, so the effects of a test are evaluated by comparing successive polls; it would therefore be preferable for a poll to be taken shortly after every change in the tested traffic (see fig. 2).

ND/ATG/QOS/DOC/017 6

Page 7: Qos Testplan Final

Fig.2. Polling synchronised with traffic changes during testing of BE traffic pre-emption by Premium traffic. (The original figure plots a BE flow, a Premium flow and the BE packet loss counter against time; the counter shows slow growth, then fast growth, then slow growth again, with the poll times marked.)

This synchronisation of traffic changes and polling is necessary to ensure the comparability of results and a better understanding of the causes and effects of the various tested processes. The synchronisation can be carried out manually (but the accuracy will be very poor) or by means of NTP timing (this, however, needs further investigation).

Taking into account that the minimum polling interval is 1 minute, changes should not be made more frequently than this. The longer the period of traffic stability, the simpler it is to ensure synchronisation. So, if manual synchronisation is used, experiments which require changing traffic parameters (e.g. increasing the load) should be carried out separately for every value of the changing parameter. Fig. 2 shows an example of an experiment for one value (one point) of Premium load.
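For example (illustrative figures only): if the offered load is stepped from 300 Mbit/s to 400 Mbit/s at 10:05:00 and polls are taken at 10:05:50 and 10:06:50, the difference between those two counter readings can be attributed to the 400 Mbit/s condition alone; if the load had been changed again between the two polls, the effects of the two conditions could not be separated.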

An example of the scheme for WAN tests is shown in fig. 3 (please note that while it covers both natural and controlled tests, background load generators are only necessary for the latter).

ND/ATG/QOS/DOC/017 7

Page 8: Qos Testplan Final

Fig 3. An example of the WAN testing scheme. (The original figure shows the same elements as fig. 1 – application generator and receiver, IPERF client and server, SAA probe and responder, load generators Load1 and Load2, a load destination, hubs/switches and the router under test, with NetSight/MRTG polling QoS MIB objects every minute – distributed across the WAN.)

The machines creating the load can be connected either remotely (as seen in the diagram in fig. 3) or locally to the router that is to be tested. The possibility of the remote version depends on the topology, i.e. whether or not several paths to the tested router exist.

In the remote version, the load boxes will be located at the sites of different QoS testing participants.

5. Types of Test

In addition to categorising tests as LAN, WAN-natural and WAN-controlled, it is also useful to classify them according to their expected effects, as described below.

The following kinds of tests could be conducted:

5.1 Check configuration accuracy

Before any QoS metrics can be measured, it is useful to make sure that the configuration for all the paths between the end points (Points of Testing, PoT) is correct. This test involves checking the correctness of the following statements (a minimal verification sketch is given at the end of this subsection):

- All packets keep the DSCP values assigned by the source;

- Artificial traffic generators create packets of the required size, inter-packet interval and DSCP value.


- Packets marked with the appropriate DSCP values are served by the appropriate queue in each router/switch.

Tests of this kind will be conducted for LAN, WAN-natural and WAN-controlled tests.
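The DSCP preservation check can be made with the measurement tools themselves, or with a small probe receiver such as the sketch below (not part of the project's tooling). It reports the DSCP actually carried by arriving UDP packets, so the marking applied by the sender can be compared with what reaches the far PoT; the port is illustrative, and the IP_RECVTOS option number given is the Linux value, since Python does not always expose it as a constant.

import socket

IP_RECVTOS = getattr(socket, "IP_RECVTOS", 13)   # 13 is the Linux value

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.IPPROTO_IP, IP_RECVTOS, 1)  # ask the kernel to deliver the TOS byte
rx.bind(("", 9000))                              # hypothetical probe port

while True:
    data, ancdata, flags, addr = rx.recvmsg(2048, socket.CMSG_SPACE(1))
    for level, ctype, cdata in ancdata:
        if level == socket.IPPROTO_IP:           # ancillary item carrying the TOS byte
            print("from", addr[0], "DSCP =", cdata[0] >> 2)

A sender marks its test packets by setting the IP_TOS socket option to the DSCP value shifted left by two bits (see the sketch in the Classification part of section 6.3).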

5.2. Benefits of the QoS service

These tests aim to assess the benefits of the QoS service for a number of applications, as their traffic will be treated differently. The tests will involve comparing the QoS metrics of an application's traffic when treated as BE and when treated as its target class (Premium or LBE), under medium congestion (60%-70% utilisation) that corresponds to real conditions on the production network.

To evaluate these benefits, both the application and the synthetic traffic should be sent to the network twice, marked differently each time: first as BE, and then either as Premium or LBE, according to the QoS class of service required. For both tests the congestion conditions should be identical where possible.

Tests of this kind will be conducted for LAN, WAN-natural and WAN-controlled tests.

5.3. Protection of each class of the QoS service

The QoS service model must work correctly under any conditions, including extreme ones. To verify the degree of protection appropriate for a particular class of the QoS service, it is necessary to conduct some testing by creating an artificial load in the other classes and measuring the degradation (in terms of QoS metrics) of the traffic class in question.

• The Premium class should be verified against BE and LBE overloads by injecting an artificial load of approximately 100% of the link capacity.

• The BE class should be verified against the pre-emptive behaviour of the Premium class. The BE class should also be verified against LBE overload.

• The LBE class should be examined to assess its capability to keep within a specified minimum rate of traffic during congestion when there is an overload of BE and Premium marked traffic.

Tests of this kind will be conducted as LAN and WAN-controlled tests (for the latter, an extreme load will be applied at negotiated points on the University and Regional networks).

5.4. Evaluation of policing and scheduling parameters

All affected routers are configured with values for a number of parameters, which include the following:

- Type of scheduling (priority, WRR queues) for each traffic class;

- The interfaces to which differentiated scheduling should be applied (output only, or both output and input);

- Parameters of scheduling (namely, weight for WRR), drop thresholds and probabilities for RED;

- Policing limits (rate limits, bucket depths).


During the experiments, some exploration of the efficiency of these parameters will be required, such as examining the limit of the Premium class that can be accepted without degrading the performance of the Premium class of service. For example, the technological limit of the Premium rate is quite important (a definition of this term is given in the Policy Framework document, written by Chris Cooper).

The results obtained from the tests will help in developing the Service Level Specifications.

6. Applications to be tested

The QoS prototype service will initially consist of three classes of service provided on the JANET backbone – IP Premium, BE and LBE – of which the IP Premium and LBE classes of service will be tested.

The applications that will test the IP Premium class of service are IP Videoconferencing and Voice over IP (VoIP); the Less than Best Efforts (LBE) class of service will be tested using the AccessGRID application which uses multicast.

Ownership of the tests for each application has been agreed by the JANET QoS Partners; see the table below:

Application to test | Owner of the test | Test Participants

IP Videoconferencing | University of Wales, Swansea | Imperial College, Lancaster University, UKERNA

Voice over IP | University of Manchester | University of Wales Swansea, Imperial College, University of Manchester, Lancaster University, UKERNA, University of Southampton

AccessGRID (LBE and Multicast traffic) | University of Southampton | University of Manchester, Lancaster University, Imperial College

The PoTs of all participants are shown in fig. 4.


Fig.4. Points of Testing. (The original figure, amended by Victor Olifer for the purpose of the JANET QoS Development Project, shows the test participants: Manchester, Lancaster, Imperial, Soton, Swansea and UKERNAdev.net.)

Full details of the testing to be carried out for each application are outlined in sections 6.1, 6.2 and 6.3.

6.1 IP Videoconferencing (IP Premium)

It is certainly the case that we have seen instances of high incoming traffic affecting institutions' networks and videoconferencing. It should be noted that in many cases it is not the bandwidth utilisation per se that has detrimental consequences – rather it is the load that the traffic places on the institutions' core network equipment. Cisco 7206VXR/NPE400 or NPE-G1 and Cisco 6509/MSFC-II/PFC-II/Sup-II are not immune to having their CPU consumed by high incoming traffic of certain types – at levels sometimes below those which would be expected to cause any issues.

It is also the case of course that the offending traffic can come from within the campus, rather than from outside. In these cases physical and logical network engineering as well as traffic engineering have roles to play within the campus itself.

When videoconferencing from an institution to the outside world, instantaneous load on links can cause glitches that will be subjectively noticed as audio or video break-up – and it is these that the provision of QoS is aimed at clearing, rather than very high general link loading.

The South Wales MAN (SWMAN) has already provisioned QoS on all Site Access Routers. This was based on testbed implementation and testing. Further testing of the production network was withheld and will tie in with the wider JANET testing phase.

The focus will primarily be on areas where link speed reduces – i.e. RN core ↔ institution links.


The testing will aim to prove that instantaneous loading on links, deliberate or accidental, will not interrupt the smooth flow of a videoconference. In parallel, the monitoring infrastructure should show that the provision of QoS measurably improves the traffic throughput for EF flows under high link load.

Initial configuration of the RN equipment will once again take place in a testbed environment. Three traffic classes will be configured:

- Premium – EF based priority limited to 20% of overall link bandwidth

- Assured – AF based limited to 15% of overall link bandwidth

- BE – unlimited bandwidth
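For example, on a 1Gbit/s RN ↔ institution link these limits correspond to at most 200Mbit/s of EF traffic and 150Mbit/s of AF traffic, with BE free to use whatever capacity remains.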

These classes match those already configured at all RN edge equipment – note that once configuration begins on core equipment in the testbed, it is possible – or even likely – that limitations may be present that force some changes to the ideal configuration.

Testing in the testbed will commence by configuring the test-core router without any queuing implementation and verifying that behaviour is as expected with H.323 traffic – i.e. as bandwidth output to a link exceeds the available bandwidth the videoconference will degrade and eventually fail.

At this point QoS queuing will be configured for EF traffic and the test conducted again – this time the videoconference is expected to be unaffected – at least up to the point that the packet switching limit of the platform is reached.

QoS will also then be configured for the AF class and be tested to ensure that packet loss does not occur whilst loading increases in the BE class above 100% of link speed.

Finally the tests will be combined to ensure that both EF and AF classes continue to provide throughput simultaneously while BE increases above 100% link speed.

Detailed configuration will then be passed to the RN operations group at Cardiff University who will confirm and implement the configuration on at least one of the SWMAN routers.

Testing will then be carried out with the agreement of participating sites, at predetermined times, to simply verify that the QoS provision operates on the production network as it did in the testbed. These tests are likely to have a maximum duration of a few seconds. The institutions involved, and the RN will be aware of the testing and a ticket will be placed with the JANET Operations Desk as necessary.

Network monitoring will be put in place to monitor different traffic classes between participating sites in the RN. This will also be done for UK wide participating sites. It is intended to run BE and Premium measurement probes continuously across the network to detect differences, if any, in performance during normal network operation. In principle some additional loading could be applied to certain paths to see if performance diverges at data rates well below link speed. Divergence between BE and premium class monitoring probes would indicate the correct functioning of the Premium class - thereby providing the ability to non-destructively test QoS implementation in the future.

H.323 testing will then take place on the production network in parallel with the monitoring. Participating sites would, at times to be agreed and ticketed, be targeted with enough traffic to overload agreed locations – nominally their access link. H.323 videoconferences running at the same time that the load traffic is delivered would be analysed subjectively for any impact. If possible the test would be run twice: once with the videoconferencing traffic marked as BE and once marked as Premium, to prove the functioning of the queuing mechanisms. These tests should be done in conjunction with VoIP testing to minimise disruption.

It will, of course, be necessary to take the outcomes from these tests and make assumptions that the rest of the network will function in the same manner. It will not be possible to test all links and all routers.

The outcome from these tests is expected to be a confirmation that correctly configured priority queuing will protect H.323 traffic from instantaneous traffic bursts that would normally cause tail drop in routers, and hence audio and video degradation in H.323 videoconferences.

These tests can then be repeated with sites participating in the UK wide testing programme – to ensure that configuration at their edges is also correct and to ensure that DSCP marking is not disturbed while transiting the JANET network.

6.2. Voice over IP (IP Premium)

Introduction

Telephony systems have traditionally run on circuit switched networks. Circuit switched networks offer fixed bandwidth and consistent transmission rate once the call has been connected. This is desirable for interactive voice traffic and has been shown to give an acceptable service. Delays have been made small enough to make two way conversations possible. The concept of jitter does not really exist in circuit-switched networks. In effect, for the duration of a call, a fixed bandwidth is reserved on a circuit from end-to-end.

In contrast, efforts to run two-way conversations over packet switched networks run into problems due to delay, jitter and possible packet loss which could make call quality unacceptable. The problems arise because packet switched networks do not even necessarily guarantee to deliver packets never mind guarantee bandwidth, ordered arrival or constant transmission rates.

The intention of these experiments is to show if Quality of Service technology might be used successfully within a congested IP network in order to emulate the success of circuit switched networks for interactive two-way voice conversations.

Objective and subjective measurements will be taken to assess the suitability of the underlying network for carrying two-way voice conversations under different loads while employing QoS techniques to try to provide preferential treatment for voice traffic.

Objective Metrics

Metrics reported by the handsets and/or the Call Manager software [jitter, delay, packet loss] will be recorded under different congestion conditions and between different participating sites.

• Packet loss in voice traffic leads to noise, crackles and spikes.


• Delays in voice traffic lead to stilted human conversations and to potentially confusing delayed echoes.

• Jitter leads to noise, distortion and clipping.

Handsets can mitigate the effects of jitter and of out-of-sequence packets to some extent by buffering incoming voice packets. However, any buffering adds to the overall delay. Handsets may also attempt to mitigate a lack of full two-way instantaneous bandwidth by detecting silences (in normal voice conversation only one person is speaking at a time, so handsets may conserve bandwidth by not transmitting packets when silence is detected). Handsets may also make some attempt to defeat echo problems caused by excessive delay.
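As a toy illustration of the jitter-buffer trade-off (not a tool used in these tests), the sketch below takes the fastest packet's transit time as the playout reference: any packet arriving more than the chosen playout delay later than that reference misses its playout slot, so a larger buffer tolerates more jitter at the cost of added mouth-to-ear delay. The numbers are purely illustrative.

def late_packet_ratio(transit_times_ms, playout_delay_ms):
    # transit_times_ms: one-way transit time of each packet, in milliseconds
    base = min(transit_times_ms)          # fastest packet sets the playout reference
    late = sum(1 for t in transit_times_ms if t - base > playout_delay_ms)
    return late / len(transit_times_ms)

print(late_packet_ratio([20, 22, 60, 25, 90, 21], playout_delay_ms=30))  # -> 0.33...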

Because handsets attempt to mask network performance problems in these ways, it may be interesting to compare the statistics reported by the Call Manager or handsets with samples of the underlying behaviour of the network, free of the masking effects of buffering by the handsets. Underlying network metrics [jitter, delay, packet loss] can be sampled by SAA or IPERF under different congestion conditions and between different participating sites.

Further details about the monitoring infrastructure can be found in section 7.

Subjective Metrics

Subjective assessment of the call quality under different network conditions could be made using some or all of the following criteria:

− How long is the wait for a dial-tone?

− Are there variations in the time taken to set up calls?

− Are there perceived delays in speech?

− Are there echoes of speech coming back to the originator?

− Is there noise? Distortion?

− Is there clipping? Are the beginnings of words clipped?

(More subjective metrics from “Enterprise Network Assessment for VoIP Call Quality – Is it Adequate ?” by Hermann, Martell, Mattei, Regini)

LAN Tests

The handsets at the University of Manchester will be used to talk directly to each other locally. This will give us more opportunity initially to create more extreme local congestion while testing call quality. (Initial handset configuration and subsequent call set-up is done via the Call Manager at the University of Wales, Swansea. After this initial call set-up the actual conversation traffic goes direct between the IP phone handsets)

The handsets will be deployed on either side of a pair of Cisco 6500 routers at the University of Manchester (fig. 5), and objective and subjective tests will be done locally, assessing network and call quality first with no QoS in place and then with QoS configured on the path between the two phone handsets and with the handsets allowed to use the premium queues. In both scenarios, the background load on the links between the phones at A and B can be ramped up by producing traffic between several Linux boxes running IPERF (or other traffic generation software).

As well as making the subjective and objective measurements of this network to assess its suitability for IP calls, there is also the objective of confirming that the Premium traffic is protected from both BE and LBE, and of assessing how accurately the configuration of the Cisco QoS parameters allows us to define the properties of the various queues.

Note: The links between the two handsets in Manchester will not be carrying any production traffic in this scenario so the increase in background load can be done without affecting the users.

Fig 5: Manchester Local Phone Testing across Local Gigabit Lan. (The original figure shows a Cisco 7960 and a Cisco 7940 IP phone connected across the local gigabit LAN.)

For each of the scenarios below in table 1 the following subjective and objective observations would be recorded:

• Jitter from SAA simulated VoIP traffic

• Delay from SAA simulated VoIP traffic

• Packet loss from SAA simulated VoIP traffic

• Jitter from Call Manager software or handsets

• Delay from Call Manager software or handsets

• Packet loss from Call Manager software or handsets

• Subjective observations

The objective of these tests would be:


• Show the degradation in VoIP performance as background traffic grows when no QoS is deployed.

• Gain some feeling of the loading conditions which degrade/break VoIP with no QoS deployed.

• Prove that the QoS configuration employed does protect the premium traffic from competing BE traffic

• Gain some feeling of the loading conditions which degrade/break VoIP with QoS deployed.

Table 1: VoIP Lan Experiments

VoIP Lan Experiment 1: First with no QoS enabled (all traffic treated equally) and varying background load

QoS Configured ? Extra loading between routers A/B VoIP treated as: Observations recorded with the network in this state:

VoIP Lan experiment 1a No None BE Jitter/delay/loss from SAA + CM plus subjective obs.

VoIP Lan experiment 1b No 300 mbps BE

VoIP Lan experiment 1c No 400 mbps BE

VoIP Lan experiment 1d No 500 mbps BE

VoIP Lan experiment 1e No 600 mbps BE

VoIP Lan experiment 1f No 700 mbps BE

VoIP Lan experiment 1g No 800 mbps BE

VoIP Lan experiment 1h No 900 mbps BE

VoIP Lan experiment 1i No 1000 mbps and over BE

VoIP Lan Experiment 2: Then with QoS enabled and varying BE background traffic while VoIP traffic is treated as IP Premium

QoS Configured ? Extra loading between routers A/B VoIP treated as: Observations recorded with the network in this state:

VoIP Lan experiment 2a Yes None Premium

VoIP Lan experiment 2b Yes 300 mbps BE Premium

VoIP Lan experiment 2c Yes 400 mbps BE Premium

VoIP Lan experiment 2d Yes 500 mbps BE Premium

VoIP Lan experiment 2e Yes 600 mbps BE Premium

VoIP Lan experiment 2f Yes 700 mbps BE Premium

VoIP Lan experiment 2g Yes 800 mbps BE Premium

VoIP Lan experiment 2h Yes 900 mbps BE Premium

VoIP Lan experiment 2i Yes 1000 mbps and over BE Premium

VoIP Lan Experiment 3: Then with QoS enabled and mixed background traffic (fixed BE but varying premium) while VoIP traffic is treated as IP Premium

QoS Configured ? Extra loading between routers A/B VoIP treated as: Observations recorded with the network in this state:

VoIP Lan experiment 3a Yes 500 mbps BE/no premium Premium

VoIP Lan experiment 3b Yes 500 mbps BE/10mbps premium Premium

VoIP Lan experiment 3c Yes 500 mbps BE/20mbps premium Premium

VoIP Lan experiment 3d Yes 500 mbps BE/30mbps premium Premium

VoIP Lan experiment 3e Yes 500 mbps BE/40mbps premium Premium

VoIP Lan experiment 3f Yes 500 mbps BE/50mbps premium Premium

VoIP Lan experiment 3g Yes 500 mbps BE/60mbps premium Premium

VoIP Lan experiment 3h Yes 500 mbps BE/70mbps premium Premium

VoIP Lan experiment 3i Yes 500 mbps BE/80mbps premium Premium

VoIP Lan experiment 3j Yes 500 mbps BE/90mbps premium Premium


WAN Tests

For wider-ranging WAN tests between diverse geographical sites we do not have the luxury of being able to control the levels of congestion so readily. However, we can take advantage of the natural traffic on the production network. There is quite a large difference in the traffic levels we see at 8am compared to the levels we see at mid-afternoon. There may additionally be a possibility of injecting some extra congestion traffic during one or two Tuesday morning at-risk periods. There are concerns over whether this can be done in a controlled manner in the short time slots available. Ironically, the 'at-risk' period during which we would like to add extra congestion traffic coincides with the lowest naturally occurring traffic levels. Apart from the management and technical challenges arising from the problem of injecting congestion in a controlled manner, there is a limit to the number of IP telephony calls which could be humanly placed, measured and observed in the limited time slot available.

Because of the potential problems with generating controlled extra background traffic on the production network, we may first concentrate on tests using the naturally occurring traffic as background. First some tests would be done with either no QoS configured at the edge networks or, if that is difficult to arrange, with the VoIP traffic marked as BE.

For each of the scenarios below in table 2 the following subjective and objective observations would be recorded:

• Jitter from SAA simulated VoIP traffic

• Delay from SAA simulated VoIP traffic

• Packet loss from SAA simulated VoIP traffic

• Jitter from Call Manager software or handsets

• Delay from Call Manager software or handsets

• Packet loss from Call Manager software or handsets

• Subjective observations

The objective of these tests would be:

• Show the degradation in VoIP performance as background traffic grows when no QoS is deployed.

• Gain some feeling of the loads which degrade/break VoIP with no QoS deployed.

• Prove that the QoS configuration employed end-to-end does protect the premium traffic from competing BE traffic

• Gain some feeling of the loading conditions which degrade/break VoIP with QoS deployed


Table 2: VoIP WAN Experiments

VoIP WAN Experiment 1:

First with no QoS enabled (all traffic treated equally) and naturally occurring background load during quiet network activity (8-9am)

QoS Configured ? Natural loading between sites A/B VoIP treated as: Observations recorded with the network in this state:

VoIP WAN experiment 1a

Manchester - Swansea No Attempt to quantify the loading of relevant links at time of call here…. BE

VoIP WAN experiment 1b

Manchester – Lancaster No BE

VoIP WAN experiment 1c

Manchester - Imperial No BE

VoIP WAN experiment 1d

Manchester - Southampton No BE

VoIP WAN experiment 1e

Manchester - UKERNA No BE

VoIP WAN experiment 1f

Swansea - Lancaster No BE

VoIP WAN experiment 1g

Swansea – Imperial No BE

VoIP WAN experiment 1h

Swansea – Southampton No BE

VoIP WAN experiment 1i

Swansea - UKERNA No BE

VoIP WAN experiment 1j

Lancaster – Imperial No BE

VoIP WAN experiment 1k

Lancaster - Southampton No BE

VoIP WAN experiment 1l

Lancaster - UKERNA No BE

VoIP WAN experiment 1m

Imperial – Southampton No BE

VoIP WAN experiment 1n

Imperial – UKERNA No BE

VoIP WAN experiment 1o

Southampton - UKERNA No BE

VoIP WAN Experiment 2:

Then with no QoS enabled (all traffic treated equally) and naturally occurring background traffic at peak periods (mid afternoon)

QoS Configured ? Natural loading between routers A/B VoIP treated as: Observations recorded with the network in this state:

VoIP WAN experiment 2a

Manchester – Swansea No BE

VoIP WAN experiment 2b

Manchester – Lancaster No BE

VoIP WAN experiment 2c

Manchester – Imperial No BE

VoIP WAN experiment 2d

Manchester - Southampton No BE

VoIP WAN experiment 2e

Manchester – UKERNA No BE

VoIP WAN experiment 2f

Swansea – Lancaster No BE

VoIP WAN experiment 2g

Swansea – Imperial No BE

VoIP WAN experiment 2h

Swansea – Southampton No BE

VoIP WAN experiment 2i

Swansea – UKERNA No BE

VoIP WAN experiment 2j

Lancaster – Imperial No BE

VoIP WAN experiment 2k

Lancaster – Southampton No BE

VoIP WAN experiment 2l

Lancaster – UKERNA No BE

VoIP WAN experiment 2m

Imperial – Southampton No BE

VoIP WAN experiment 2n

Imperial – UKERNA No BE

VoIP WAN experiment 2o

Southampton – UKERNA No BE

VoIP WAN Experiment 3: Then with QoS enabled and naturally occurring background traffic at ‘quiet’ times while VoIP traffic is treated as IP Premium

QoS Configured ? Natural loading between routers A/B VoIP treated as: Observations recorded with the network in this state:

15 site combinations as above Yes Premium

VoIP WAN Experiment 4: Then with QoS enabled and naturally occurring background traffic at ‘busy’ times while VoIP traffic is treated as IP Premium

QoS Configured ? Natural loading between routers A/B VoIP treated as: Observations recorded with the network in this state:

15 site combinations as above Yes Premium

VoIP WAN Experiment 5: Then with QoS enabled and extra background traffic injected during scheduled ‘at-risk’ periods while VoIP traffic is treated as IP Premium

QoS Configured ? Extra loading between routers A/B VoIP treated as: Observations recorded with the network in this state:

15 site combinations as above Yes To be confirmed…. Premium

Requirements from Participating Sites

• understand their local QoS configurations

• have some vision/insight/monitoring into what is happening in their own queues and networks

• commit to getting the handsets connected and working with the Swansea Call Manager application

• be able to manage/alter the category of traffic coming from the IP Phone handset IP address

• have an IPERF and SAA server available which accepts connections from other participants


• commit some time to make/receive short calls to participating sites during mornings/afternoons

6.3 AccessGRID (LBE)

Overview

This section describes the Differentiated Services (DiffServ) Less than Best Effort (LBE) experiments to be performed during the testing phase in March 2004. The focus of the LBE tests is to demonstrate LBE as a class of service that allows low priority traffic (lower than the Best Efforts class of service) to consume available bandwidth on a link without impacting adversely on higher priority traffic (Best Efforts or better) when congestion occurs on that particular link (possibly on a specific interface).

Significant work in the LBE area has been carried out by the members of the TERENA TF-NGN, in the context of QoS in the backbone [1]. As a result of this work, the GÉANT network has deployed LBE. The TF-NGN LBE work was inspired by and based on the original work carried out by the Internet2 community for the Scavenger service [2] and on efforts to standardise the model within the IETF (which in recent weeks have been completed and published as RFC 3662 [3]). The TF-NGN work was primarily undertaken on the Juniper router platform, but DiffServ (including LBE) configuration also exists for other platforms, including Cisco routers.

Expected results

Within the JANET QoS experiments the focus is to apply LBE at the edge of the access networks, rather than in the core/backbone networks. The aim is to validate the behaviour of LBE as a non-disruptive means of utilising “spare” capacity for low priority applications.

It is up to the site administrator to decide on the policy for the application/traffic priority, e.g. LBE could be used for high-bandwidth, low priority file transfers, just-in-time network backups, or to reduce the priority of traffic coming from a student halls of residence network.

LBE can be deployed incrementally, as it is not required end-to-end – providing that the DiffServ marking is retained, LBE policing can be deployed purely at the points of congestion (e.g. into and out of site networks).

Classification

Traffic is tagged as LBE within a site network, either by classification of IP traffic at an LBE-configured router, or by setting the correct Differentiated Services Code Point (DSCP) value in the application. As a result of cooperation with the GÉANT and Internet2 community, a DSCP value of 8 has been agreed as a common value to use to indicate LBE or Scavenger traffic. For end-to-end applicability of LBE, it is important that intermediate routers do not alter the DSCP value in transit.
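Where an application marks its own traffic, this amounts to setting the TOS byte on its sockets: DSCP 8 occupies the upper six bits of the byte, so the option value is 8 << 2 = 0x20. A minimal sketch, assuming a Unix/Linux host; the address and port are illustrative only.

import socket

DSCP_LBE = 8
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_LBE << 2)   # mark traffic as LBE
sock.sendto(b"bulk transfer payload ...", ("192.0.2.20", 9001))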

[1] http://archive.dante.net/tf-ngn/D9.9-lbe.pdf
[2] http://qbone.internet2.edu/qbss/
[3] “A Lower Effort Per-Domain Behavior (PDB) for Differentiated Services”, http://www.ietf.org/rfc/rfc3662.txt


Within the JANET QoS test phase, the intention is to primarily generate bulk LBE traffic by using a traffic generator (see below), which is configured to set the DSCP value of the traffic to LBE.

Applications

The applications that will be used to produce bulk LBE traffic during the tests are listed below. For each application we are testing LBE congestion at the edge, i.e. we assume that the core network is lightly loaded (or may even be protected with IP Premium).

• AccessGrid [4] – this is a high-bandwidth, multi-channel videoconferencing system, based on the vic and rat tools [5] from UCL. It has been developed and enhanced into the new AG v2.0 by the Argonne community. A typical AG session will generate multiple video (vic) and audio (rat) sources per participating site. With 6 or 7 sites in a session, 30-40Mbit/s of traffic would not be unusual. AG is the primary demonstrator application for LBE in these experiments. It primarily uses IP multicast, but can be run unicast through an appropriate unicast-multicast reflector.

• VoIP – there are Cisco handsets, which are primarily used for the IP Premium tests, but can also be used for calls in the presence of bulk LBE background traffic.

• FTP – this is a simple application to use for throughput testing; we only plan to use this for internal site tests. It is IPv6 capable.

• AWM – this is the Application Workload Modeler [6] traffic generator from IBM, which we will use to simulate simple application traffic (e.g. HTTP) for internal tests. It is IPv6 capable.

• iPerf [7] – this can be seen as an application or an objective testing tool; we plan to use both iPerf and SAA where possible, and compare the results from both measurement infrastructures. It is IPv6-capable. (Note: full details about the monitoring infrastructure are given in section 7 of this document.)

For the IPv6-capable applications, we only plan to test internally, due to the unavailability of a native IPv6 network infrastructure across the regional networks involved.

[4] http://www.accessgrid.org
[5] http://www-mice.cs.ucl.ac.uk/multimedia/software/
[6] http://www.ibm.com/software/network/awm/
[7] http://dast.nlanr.net/Projects/Iperf/


Multicast considerations

We are not aware of explicit demonstrations of LBE with multicast applications elsewhere. This may be because multicast applications do not tend to have the ability to back off in the presence of congestion, in the way a typical unicast TCP application can. However, in our tests we are not (as a focus of the experiment) classifying the multicast traffic as LBE, rather showing that IP multicast traffic can flow in the presence of congestion.

One issue here is that we might hit multicast forwarding limitations in our tests; it is thus important that we calibrate multicast performance before introducing competing LBE bulk traffic on an interface.

Most AccessGrid (AG) sites use native IP multicast, some use multicast tunnels (mbone), others use unicast and a multicast reflector (most commonly QuickBridge [8]). In our tests, we plan to use native multicast, but may also be able to test the reflector configuration.

Note that IP multicast provisioning may be more complex in some deployments where exit paths depend dynamically on the multicast group memberships.

Tests

We plan to run both initial internal and then subsequent external tests.

Intra-site

The primary aim is to show AG working in the presence of LBE background traffic, but we also plan to test other applications internally, including AWM and FTP.

The key infrastructure for the tests is a Cisco 7206 series router, which has three 10/100/1000Mbit/s interfaces, plus two 100Mbit/s interfaces. The test configuration will be:

• External interface to campus and LeNSE network

• Internal interface to loading network (primary source of bulk LBE traffic)

• Internal interface to application/monitoring network (including the AG node and Cisco SAA node).

For the intra-site tests, the external interface can host receiving applications.

For AG tests, LBE can be used on regular AG sessions without a requirement for active participation from other sites in the session, because we are concerned with performance at the site border itself.

The University of Southampton plan to add their Nokia IP740 firewall in series after successful initial tests. An interesting issue here is whether the Cisco policing should be inside or outside the firewall (i.e. whether it is more useful to drop LBE traffic inbound or outbound); the key is where the congestion is occurring, or is likely to occur if the use of LBE-enabled applications is promoted. (It should also be noted that in an RN, all participating universities should agree on the use of LBE if it is policed at a JANET BAR, otherwise the university using LBE is at a disadvantage.)

[8] http://www.accessgrid.org/agdp/howto/quickbridge.html


Inter-site

The candidate sites for AG testing are NetNorthWest, Lancaster University and Imperial College.

There is no built-in method to measure performance of AG sessions; thus only subjective AG measurements may be taken.

We plan to use VoIP between sites, as a means to validate that VoIP continues to operate in the presence of bulk LBE traffic at the site boundary. The VoIP application has some built-in measurement tools.

We plan to use iperf between sites for objective measurements.

The relevant link capacities are:

• The Soton (University of Southampton)-LeNSE link is 1Gbit/s.

• The LeNSE-JANET BAR link is 2.5Gbit/s.

• The ECS-Soton link will initially be set to 100Mbit/s for the experiments.

The main requirements on other sites are to install the applications to be tested, to apply the QoS methods where appropriate (in particular LBE policing), to help ensure transparent transport of DSCP values between sites, and to run and inspect the measurement/monitoring tools.

Objective tests

The key observations to make are round trip time (RTT) and packet loss, both of which are available from the Cisco SAA or iperf. We are also interested in jitter measurements (which may be derived from the measurement infrastructure or the application in the case of VoIP) and any observed packet reordering (which is important as it can affect TCP throughput, as noted in the TF-NGN LBE tests).

We believe that one-way delay (OWD) measurements are not required; these would also require more sophisticated NTP and synchronisation setups.

The key goal is to assess whether BE or LBE traffic is starved during congestion.

The main measurement tools are Cisco SAA and iperf. We plan to compare results from these infrastructures.

We also plan to record/monitor with MRTG, though this may be a problem if the observed link is congested.

Subjective tests

The general principle here is to run the LBE congestion during either a test AG session between the participating sites, or during a real “production” session by Soton researchers with other AG sites, and to make observations on the session quality.

For the video or audio we may see delays, signal break-up, etc, or synchronisation issues with the video and audio.

The basic metrics would be agreed prior to the tests being run. Experience from previous UKERNA Videoconferencing over IP Project tests would be used.


Traffic generation

To generate LBE congestion, we plan to use rude [9] (Real-time UDP Data Emitter).
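For illustration only, the sketch below is a minimal constant-rate UDP load generator offered as a stand-in for rude: it sends fixed-size datagrams at a steady rate with the traffic marked as LBE (DSCP 8). The destination address, port and rate are hypothetical, and a dedicated tool such as rude will pace traffic far more accurately than a simple script.

import socket, time

DST, PORT = "192.0.2.20", 9001       # hypothetical sink beyond the congested interface
PKT_SIZE = 1000                      # bytes of UDP payload
PKTS_PER_S = 5000                    # about 40 Mbit/s of payload at this packet size
DSCP_LBE = 8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_LBE << 2)

payload = b"\x00" * PKT_SIZE
interval = 1.0 / PKTS_PER_S
next_send = time.monotonic()
while True:
    sock.sendto(payload, (DST, PORT))
    next_send += interval
    delay = next_send - time.monotonic()
    if delay > 0:
        time.sleep(delay)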

LBE behaviour for voice/video in AccessGrid

As a secondary test, we plan to classify AG video channels as LBE, leaving audio as BE, such that the video would degrade before audio in the case of congestion.

Usage of DSCP values

The agreed DSCP value for LBE is 8.

End-to-end LBE

The availability of end-to-end DSCP transport without modification can be tested with the NANOG traceroute package. This tool will be used to verify compliance in our tests, to ensure the DSCP value is retained end-to-end.

LBE is applied most commonly either on egress from a source or on ingress to the destination, but it may also be used on any link on the path, if the end-to-end marking property holds.

IPv6

We will undertake local tests with IPv6, using the AWM and FTP applications that support IPv6, subject to suitable IOS images being available from Cisco. iperf supports IPv6 for measurement; we do not know the status of Cisco SAA and IPv6, but iperf should be sufficient. These tests will also be fed into the EC-funded 6NET [10] project.

7. Monitoring Infrastructure

A separate monitoring infrastructure is required to verify the behaviour of the prototype service and carry out objective performance testing of the applications that will be used during the testing phase.

The partners of the JANET QoS project agreed that a minimum of two different types of monitoring tool should be used, both to carry out the monitoring and to allow the project to compare the tools selected and cross-check the test results obtained from them. The tools that have been selected are Cisco's Service Assurance Agent (SAA) and IPERF. These tools are low cost (or free) and provide an effective way to gather the statistical information required.

Within this project, Steve Williams (University of Wales, Swansea) is responsible for assisting the project partners to get the SAA monitoring infrastructure established; and Tim Chown (University of Southampton) is responsible for assisting a number of the project partners to establish the IPERF monitoring infrastructure.

For the SAA monitoring infrastructure, Cisco 805 routers were purchased to act as SAA agents.

[9] http://rude.sourceforge.net/
[10] http://www.6net.org


Some of the links on the network are under-utilised; it is therefore necessary to create some artificial load to congest some router/switch interfaces. To do that we need to:

- choose a traffic generator tool (or tools)

- choose interfaces where congestion will be applied

- choose points on the network where traffic generators will be deployed

During the experiments, the parameters of the applications will need to be measured, as well as the parameters of the routers where the congestion is being applied (packet loss, queue length, etc.). The project group will measure these by manually polling SNMP counters or by using other software.

7.1 Cisco’s Service Assurance Agent

The Service Assurance Agent (SAA) is embedded software within Cisco IOS devices that performs active network monitoring by running probes between SAA enabled routers. Cisco SAA provides data on latency (RTT and OWD), loss and jitter.

The SAA infrastructure will consist of Monitoring Points (MPs) at all participating sites. The MPs at sites will run on a Cisco 805 router which will be connected next to, and should have the same QoS provisioned as, the machine running the test application. Each SAA will be configured as probe and responder (with known IP addresses for all partners) at the same time. Probes will run in a mesh between institutions and can be set with DSCP values as appropriate for the testing. Multiple probes can be run concurrently between MPs, allowing simultaneous monitoring of e.g. LBE, BE and Premium service behavior.

Cisco SAA holds the statistical information from the last probe, which is then gathered by a Collection Agent and stored in an SQL database for analysis. Some work needs to be done on the data analysis and display system to allow it to display near real-time results from the probes.

The MPs will be deployed at all the sites that will be actively participating in the testing phase. Each site will manage and have access to its router in order to carry out the necessary configuration, and the site where a router is deployed will be responsible for maintaining it throughout the duration of the JANET QoS project.

A total of five routers will be purchased and deployed at various sites. Imperial will configure SAA on an available Cisco router, and Swansea already have SAA deployed and will make it available for the QoS project.

The results obtained from the SAA monitoring boxes can be made available on the JANET public website.

The chosen structure for the monitoring system is shown in fig. 6. The system has a hybrid structure, combining centralised star connectivity with partial mesh connectivity. A central SAA probe running on a Cisco 7206 router located at the Reading C-PoP sends probe traffic to peripheral SAAs located at participants' sites and acting as responders. The probe traffic paths in this case do not follow the tested application traffic paths exactly, but as the probe packets pass through the same congested interfaces of the tested routers as the application packets do, the results of the measurements are adequate. The centralised structure is a very economical way of measuring QoS parameters, as all the data are stored in one SAA probe and can be obtained by polling only that probe. Polling will be conducted by the software at 1-minute intervals and the data obtained will be stored in a centralised SQL database in Swansea. Besides this centralised probe structure, several point-to-point probe-responder pairs will be established: between Manchester and Lancaster, between Swansea and Manchester, and between Swansea and Lancaster. This partial mesh of connectivity will be used for reliability and as a check on the centralised structure.

7.2 IPERF

IPERF is a freeware tool distributed by NLANR (http://dast.nlanr.net/).

IPERF can be used to measure the performance of both TCP and UDP traffic. In this project IPERF will be used in UDP mode.

IPERF can measure the average value of one-way jitter. For this purpose IPERF should be run on the destination node as a UDP server: iperf -s -u, or iperf -s -u -D to run as a daemon (UNIX) or service (Windows).

To measure one-way jitter it is then necessary to start IPERF on the source node in client mode: iperf -c <iperf-server-ip-address> -u. By default the IPERF client sends packets for 10 seconds, producing 1 Mbit/s of traffic with the TOS field set to zero.

The IPERF client allows some traffic parameters to be specified:

- duration of packet generation

- size of UDP datagram

- TOS value

- destination port

The IPERF server measures the jitter of arriving packets on the basis of its own clock. After receiving the last packet, the server sends the client a reply containing the average jitter value.
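(The iperf client typically exposes the parameters above as options such as -t for duration, -l for datagram size, -S for the TOS value, -p for the destination port and -b for the UDP rate, although the exact options vary between releases.) The jitter figure reported in UDP mode is computed with the smoothed interarrival jitter estimator used by RTP (RFC 1889/3550), from successive packet transit times. A minimal sketch of that calculation, assuming a list of per-packet transit times (receive time minus the sender timestamp carried in each datagram):

def interarrival_jitter(transits):
    # transits: receive_time - sender_timestamp for each datagram, in arrival order.
    # The absolute clock offset between the hosts cancels out because only the
    # differences between successive transit times are used.
    jitter, prev = 0.0, None
    for t in transits:
        if prev is not None:
            d = abs(t - prev)
            jitter += (d - jitter) / 16.0   # RFC 3550 smoothing factor
        prev = t
    return jitter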

7.3 NetSight

NetSight supports a web-based user interface and allows us to define particular MIB objects to be traced.

For the purpose of the project we will deploy a NetSight box at UKERNA and use only part of its functionality, namely polling MIB objects, storing the results in a database and presenting them in graphical form on web pages.


NTP synchronisation will allow the results generated by the Cisco SAAs to be compared with the QoS MIB values registered by NetSight during the testing phase.

8. Timescales

The timescales for the testing phase are as follows:

Work Items | To Complete By

UKERNA to produce the overall draft QoS testplan document | End August 2003

Partners to submit their appropriate subsections for the QoS testplan | End November 2003

Define monitoring equipment requirements | End November 2003

Implement monitoring infrastructure | End January 2004

UKERNA to circulate the first complete draft of the QoS testplan document to the project group | Mid February 2004

Final version of the QoS testplan (publish on JANET public website) | End June 2004

Conduct application trialling (commencing in January 2004) | End April 2004

- LAN tests | January – February 2004

- WAN-natural tests | February – March 2004

- WAN-controlled tests | March – April 2004

Final report with the results of the testing phase | July 2004

9. Post Testing

Following the testing phase (which includes both LAN and WAN testing), a report will be produced by UKERNA; it will include the results of the test phase, together with contributions by each of the project partners that participated in the testing.

The report will be disseminated to the community via the web and at the JANET QoS event on 14th July 2004.


Fig. 6. SAA Monitoring Infrastructure. (The original figure shows the central SAA probe at Reading exchanging probes and responses with SAA responders or probe/responders at the application/site edge of each participating site – UKERNA, Manchester, Swansea, Lancaster, Imperial College and Southampton – with the results from the SAA probes collected by a Collection Agent into an SQL database.)