The Kent Ridge Advanced Network (KRAN)
Lek-Heng NGOH PhD
Deputy Director, SingAREN &
Research Manager
Institute of Infocomm Research
A*STAR, Singapore
APAN Meeting, Fukuoka, Japan, 24th January 2003
Goal
To research and develop an advanced IP-over-optical network infrastructure with
support for grid computing
Approach
Work focuses on the following layers:
Advanced IP Layer
Optical Layer
Grid Middleware Layer
Approach
- Design and set up the optical testbed
- Test and evaluate three emerging LAN/WAN technologies: GE, POS and RPR
- Trial and study optical-plane signalling and control solutions
- Evaluate and test KRAN with grid middleware & applications
- Conclusion
Timeline
- 1 Mar: KRAN launch (KRAN formation)
- 15 Mar: A*STAR grant
- 12 Apr: Closed tender (network design)
- 25 Apr: KRAN kick-off, 1st SCM
- 13 May: CSCO/SCS solution implemented (tender process & optical technologies selection)
- 1 Jun: BII trainee (project planning)
- 1 Jul: Official staging
- 10 Jul: Equipment arrival
- 11 Jul: 1st power-up test (detailed test plans & logistics planning)
- 18 Jul: 2nd SCM (network connectivity, IP addressing and configuration; staging tests)
Time Schedule
- Complete RPR indoor tests: Early Oct
- Complete POS indoor tests: Early Nov
- Complete GE indoor tests: Early Dec
- Deployment: Late Dec to Early Jan 03
- Complete outdoor tests: Early Mar 03
- Application tests: End Aug 03
Items shown in red on the original slide are completed. The KRAN project is on schedule.
KRAN Project Working Group
- Wong Yew Fai (CC)
- Wong Chiang Yoon (LIT)
- Nigel Teow Teck Ming (BII-CC)
- Cisco Systems
- SCS (Singapore) Ltd
Detailed Physical Map
[Diagram: physical map of optical nodes, IP/Layer-2 nodes and optical fibre links connecting NUS CC, I2R/BII, NUS SoC, NUS EE and IMCB]
The IP Layer
Staging Connections
[Diagram: staging setup – three 10720 routers and one ONS 15194 connected in a ring through attenuators, with SmartBits (SMB) generators attached and AC-DC power]
Deployment Connections
[Diagram: deployment setup – three 10720 routers and one ONS 15194 at the NUS sites, linked by 1 km and 0.75 km fibre runs plus a fibre drum, with SmartBits (SMB) generators attached and AC-DC power]
Addressing and Naming
[Diagram: addressing layout – core switch (.1) linking CC (.2), I2R (.3) and SOC (.4); /26 site subnets and /30 point-to-point backbone links under 172.18.36.0 (e.g. 172.18.36.1, 172.18.36.252), with an uplink to NUSNET]

Address plan for 172.18.44.0/24:
- Loopbacks: .0 to .31
- Backbone: .32 to .63
- CC: .64 to .127
- I2R: .128 to .191
- SOC: .192 to .254
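The split falls on clean /27 and /26 boundaries. A minimal sketch of the plan using Python's standard ipaddress module (presentation only; the block names mirror the slide):

```python
# The KRAN address plan for 172.18.44.0/24 expressed with Python's
# standard ipaddress module. Block boundaries follow the slide; the
# SOC block formally ends at .255 (broadcast), the slide lists .254.
import ipaddress

plan = {
    "Loopback": ipaddress.ip_network("172.18.44.0/27"),    # .0   - .31
    "Backbone": ipaddress.ip_network("172.18.44.32/27"),   # .32  - .63
    "CC":       ipaddress.ip_network("172.18.44.64/26"),   # .64  - .127
    "I2R":      ipaddress.ip_network("172.18.44.128/26"),  # .128 - .191
    "SOC":      ipaddress.ip_network("172.18.44.192/26"),  # .192 - .254
}

for name, net in plan.items():
    print(f"{name:8s} {str(net):18s} {net[0]} - {net[-1]}")
```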
Project Plans
Nine major items to test:
- Throughput/Delay/Loss/Jitter
- QoS
- Fault Recovery
- Service Provisioning
- Network Management
- IP support
- Multicast
- MPLS
- Others
Apparatus used:
- SmartBits as traffic generator
- SmartFlow software to drive SmartBits
- 3 x 10720 routers
- 1 x ONS 15194 IP traffic aggregator
- 6 x 15 km fibre drums
- 6 x 10 dB attenuators
- Relevant fibre patch cords
- Optional: Catalyst 3550 switches
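The throughput results later in this deck use the usual zero-loss definition ("throughput = packets sent without loss"). As a rough illustration of that methodology, not of SmartFlow's actual API, here is a minimal RFC 2544-style binary search with a simulated trial standing in for a SmartBits run (the 3 Mpps router ceiling is illustrative):

```python
# Minimal sketch of an RFC 2544-style throughput search:
# throughput = the highest offered load with zero frame loss.
# simulate_trial() stands in for one SmartFlow-driven SmartBits run;
# here it models a router with an illustrative packet-rate ceiling.

WIRE_OVERHEAD = 20  # bytes of Ethernet preamble + inter-frame gap per frame

def frames_offered(rate_bps, frame_size, duration_s):
    return int(rate_bps * duration_s / ((frame_size + WIRE_OVERHEAD) * 8))

def simulate_trial(rate_bps, frame_size, duration_s, pps_ceiling=3.0e6):
    """Stand-in for a real generator trial: drop whatever exceeds the ceiling."""
    sent = frames_offered(rate_bps, frame_size, duration_s)
    return min(sent, int(pps_ceiling * duration_s))  # frames received

def throughput(frame_size, line_rate_bps=2.6e9, duration_s=10.0, tol_bps=1e6):
    lo, hi = 0.0, line_rate_bps
    while hi - lo > tol_bps:
        rate = (lo + hi) / 2
        sent = frames_offered(rate, frame_size, duration_s)
        if simulate_trial(rate, frame_size, duration_s) >= sent:
            lo = rate   # zero loss: push the load up
        else:
            hi = rate   # loss seen: back off
    return lo

for size in (64, 512, 1518):
    print(size, f"{throughput(size) / 1e9:.2f} Gb/s")
```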
Test Matrices
Item 1- Throughput…
Test matrix (GE, POS and RPR, each at 1 km and 15 km): throughput, delay, jitter and loss for raw, multicast and unicast traffic, over UDP and TCP, with single and multiple connections. (Subject to change.)
Network Performance (4)
[Chart: CC (2 GE) and LIT (6 FE) towards SOC – throughput over loading (with ONS 15194 and fibre drums); x-axis: frame size (64 to 1518 bytes); y-axis: load (Gb/s, 0 to 3); series: Packets Sent and Throughput]
- Throughput = packets sent without loss
- Throughput is better for large frame sizes
- The limiting factor is the router's ability to handle a high packet-per-second rate
- For large frame sizes, throughput approaches line rate
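A quick back-of-envelope check of the packet-rate argument, assuming the standard 20 bytes of Ethernet preamble and inter-frame gap per frame:

```python
# Packets per second the routers must switch at 2.6 Gb/s offered load,
# per frame size; the 20-byte per-frame wire overhead is standard Ethernet.
LOAD_BPS = 2.6e9

for size in (64, 96, 128, 192, 256, 512, 1024, 1518):
    pps = LOAD_BPS / ((size + 20) * 8)
    print(f"{size:5d} bytes -> {pps / 1e6:5.2f} Mpps")
# 64-byte frames require ~3.87 Mpps while 1518-byte frames need only
# ~0.21 Mpps -- small frames hit the routers' packet-rate ceiling first.
```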
Network Performance (5)
- As loading increases, frame loss increases
- For small frame sizes, frame loss is large and begins at low loading
- Same reasoning: the router's packet-per-second limitation
- For large frame sizes (>512 bytes), loss is about 7% at 2.6 Gb/s loading
[Chart: CC (2 GE) and LIT (6 FE) towards SOC – frame loss over loading (with ONS 15194 and fibre drums); x-axis: loading (0.1 to 2.5 Gb/s); y-axis: frames lost (%); one series per frame size, 64 to 1518 bytes]
Network Performance (6)
- As loading increases, latency increases
- Again, large frame sizes outperform small ones
- Three plateaus are visible; the lowest corresponds to the minimum time packets take to traverse about 22.5 km of fibre
- The two other plateaus correspond to queuing (e.g. interface and processor queues)
[Chart: CC (2 GE) and LIT (6 FE) towards SOC – latency over loading (with ONS 15194 and fibre drums); x-axis: loading (0.1 to 2.5 Gb/s); y-axis: latency (microseconds, log scale); one series per frame size, 64 to 1518 bytes]
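A rough sanity check on the lowest plateau, assuming light propagates in fibre at about 2.0 x 10^8 m/s (c divided by a refractive index of roughly 1.5):

```python
# Propagation delay over ~22.5 km of fibre, ignoring serialization
# and switching time -- a lower bound on the observed latency.
FIBRE_SPEED_M_PER_S = 2.0e8
distance_m = 22_500

delay_us = distance_m / FIBRE_SPEED_M_PER_S * 1e6
print(f"propagation delay ~ {delay_us:.0f} microseconds")  # ~113 us
```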
Network Performance (7)
- As loading increases, latency deviation increases (intuitive)
- Similarly, large frame sizes outperform small ones (router packet-rate limitation)
- The plateaus are also evident, due to queuing; this carries over from the latency graphs earlier
[Chart: CC (2 GE) and LIT (6 FE) towards SOC – latency standard deviation over loading (with ONS 15194 and fibre drums); x-axis: loading (0.1 to 2.5 Gb/s); y-axis: deviation (microseconds, log scale); one series per frame size, 64 to 1518 bytes]
Network Performance (8)
Only a portion of the experiments conducted is presented here. Other experiments include:
- Using attenuators instead of fibre drums
- Stressing the GE/FE module instead of the RPR module
- Driving symmetric traffic (1.3 + 1.3 Gb/s) rather than asymmetric traffic (2 + 0.6 Gb/s)
- TCP/UDP/IP testing
Network Performance (9)
Some conclusions:
- Fibre drum (7 dB) results are better than attenuator (10 dB) results
- The GE/FE module cannot handle 2.6 Gb/s of input traffic and becomes a bottleneck even before packets can be sent out of the RPR interface
- TCP and UDP show no difference in frame loss, latency or latency standard deviation
- Multiple TCP flows and single TCP flows perform the same
Throughput Test
[Chart: throughput performance comparison between RPR, POS and GE – throughput (%) vs. frame size (64 to 1518 bytes); series: RPR, POS, GE (switches), GE (routers)]
- POS results are poor (hardware-card related)
- RPR is better for larger frame sizes
- GE is seemingly better for smaller frame sizes
- GE (routers) is worse than GE (switches) because of IP processing overheads
Frame Loss Test
[Chart: frame loss against loading for RPR, POS, GE (switching) and GE (routing) over varying frame sizes; x-axis: loading (4% to 100% of line rate); y-axis: frame loss (% of packets sent)]
- Related to the throughput results
- RPR performs best at large frame sizes
- GE (switching) is generally better than the other technologies (except RPR at large frame sizes)
- POS results are the worst, again due to the hardware card
Item 2 - QoS
Test matrix (GE, POS and RPR, each at 1 km and 15 km): QoS for voice, video and data traffic. (Subject to change.)
Example Test Item – RPR QoS
[Diagram: RPR QoS test setup – SRP queues configured on all three 10720 routers on the 2.4 Gb/s RPR ring; each 10720 maps SRP bits to the appropriate traffic class; 0.48 Gb/s offered load; SmartBits (SMB) measures throughput, delay, jitter and loss. Details in KRAN07-R2-QoS-RPR.doc]
Layer 2 QoS Testing (4)
The Layer 2 QoS pipeline under test:
- Slicer: splits traffic into Class High (priorities 5, 6, 7; 80% of the load) and Class Default (priorities 0 to 4; 20%)
- Scheduler: CBWFQ
- Mapper: maps Class High to SRP 7 and Class Default to the default SRP 0
- SRP transmit interface: SRP 5 to 7 go to the HI queue; the rest go to the default LO queue
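As a rough illustration of the CBWFQ stage, here is a minimal two-class weighted scheduler in Python. The 80/20 split and class names follow the slide; the deficit-round-robin logic is a generic approximation, not the 10720's actual implementation:

```python
# Minimal two-class weighted scheduler in the spirit of CBWFQ.
# Class High gets 80% of the service credit, Class Default 20%,
# matching the slide. Generic deficit-round-robin, not Cisco's code.
from collections import deque

class WeightedScheduler:
    def __init__(self):
        self.queues = {"high": deque(), "default": deque()}
        self.weights = {"high": 0.8, "default": 0.2}
        self.deficit = {"high": 0.0, "default": 0.0}
        self.quantum = 1500  # bytes of credit per round, per unit weight

    def enqueue(self, cls, frame_bytes):
        self.queues[cls].append(frame_bytes)

    def dequeue(self):
        """Return (class, frame size) of the next frame, or None if idle."""
        while any(self.queues.values()):
            # One round: every backlogged class earns weighted credit first.
            for cls, q in self.queues.items():
                if q:
                    self.deficit[cls] += self.quantum * self.weights[cls]
                else:
                    self.deficit[cls] = 0.0  # idle classes keep no credit
            # Serve the first class whose head frame fits its credit.
            for cls, q in self.queues.items():
                if q and q[0] <= self.deficit[cls]:
                    frame = q.popleft()
                    self.deficit[cls] -= frame
                    return cls, frame
        return None

# Demo: under sustained backlog, service converges to an ~80/20 split.
sched = WeightedScheduler()
for _ in range(100):
    sched.enqueue("high", 1500)
    sched.enqueue("default", 1500)
count = {"high": 0, "default": 0}
for _ in range(50):
    cls, _size = sched.dequeue()
    count[cls] += 1
print(count)  # roughly {'high': 40, 'default': 10}
```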
Item 3 - Fault
Test matrix (GE, POS and RPR, each at 1 km and 15 km): fault recovery time for node, single-link and multiple-link failures. (Subject to change.)
Example Test Item – RPR Fault
[Diagram: RPR fault test setup – three 10720 routers on the 2.4 Gb/s RPR ring; SmartBits (SMB) measures throughput, delay, jitter and loss during the induced fault. Details in KRAN10-R1-QoS-fault.doc]
Fault Recovery
- RPR (IPS) recovers in less than 5 ms, well within the 50 ms telecom standard for voice
- POS recovers in about 7.5 s
- GE (STP) recovers in almost 1 minute
- GE (RSTP) recovers in about 1.65 s
- RPR is the clear winner
[Chart: fault recovery times (milliseconds, log scale) for RPR (IPS), POS (IP), GE (STP) and GE (RSTP), with and without fibre drums; approximate values – RPR 4.53 to 4.54 ms, POS 7555 to 7562 ms, GE (STP) about 57,400 ms, GE (RSTP) about 1651 ms]
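Recovery times like these are typically derived from frame loss during the failure: at a constant offered packet rate, each lost frame accounts for 1/rate of outage. A minimal sketch of that conversion (the numbers in the example are illustrative, not measured values):

```python
# Fault recovery time derived from frame loss, the standard
# traffic-generator methodology: at a constant offered rate,
# every lost frame corresponds to 1/rate seconds of outage.

def recovery_time_ms(frames_lost, offered_pps):
    return frames_lost / offered_pps * 1e3

# Illustrative numbers: at 1 Mpps, an RPR-like 4.5 ms outage
# shows up as ~4500 lost frames.
print(recovery_time_ms(4500, 1e6))  # -> 4.5 (ms)
```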
Item 4 – Service Provisioning
Test matrix (GE, POS and RPR, each at 1 km and 15 km): service provisioning – ease of node addition, removal and auto-configuration. (Subject to change.)
Item 5 – Network Management
Test matrix (GE, POS and RPR, each at 1 km and 15 km): network management via SNMP MIBs. (Subject to change.)
Item 6 – IP Support
Test matrix (GE, POS and RPR, each at 1 km and 15 km): IP support – multicast, QoS and reroute. (Subject to change.)
Item 7 – Multicast
Test matrix (GE, POS and RPR, each at 1 km and 15 km): Layer 2 multicast. (Subject to change.)
Item 8 - MPLS
Test matrix (GE, POS and RPR, each at 1 km and 15 km): MPLS VPN at Layer 2 and Layer 3, and fast reroute (?). (Subject to change.)
Item 9 – Others
Test matrix (GE, POS and RPR, each at 1 km and 15 km): others – spatial reuse and bandwidth fairness. (Subject to change.)
Optional Items
- IPv6
- Security features
- Jumbo frame support
Time Table
- Mid-Jul to End Aug (5 wks): Staging
- Early Sep to Mid Nov (10 wks): RPR, POS and GE tests – throughput/delay, SNMP, SRP, QoS, IP QoS
- Mid Nov to Mid Jan (8 wks): MPLS, service provisioning, fault, IP multicast, multicast, IP reroute, MPLS VPN
- Mid Jan to End Feb (6 wks): Deployment – switch-over, deploy the best network
- Mar 03 to Aug 03 (6 mths): Application layer projects
Deliverables
1 x Safety Document (end July) - Done
1 x RPR indoor Test Report (mid Oct) - Done
1 x POS indoor Test Report (mid Nov) - Done
1 x GE indoor Test Report (mid Dec) – Almost Done
1 x Staging Test Report (early Jan) – in progress
1 x Final Report (End Apr)
Inferring from the experimental results:
- GE is strong in network stress, QoS and pricing
- POS is strong in multicast
- RPR is strong in QoS and fault recovery
If not for fault recovery, GE may be a good choice for many networks.
Evaluation
However, a more systematic approach was used to determine the best of the three technologies (RPR, POS, GE):
- For each category (e.g. stress, QoS, fault recovery), a ranking was given.
- Weights are assigned to each category depending on network requirements (e.g. if the requirement is strict on fault recovery times, the fault recovery category receives a higher weightage than the other categories).
Evaluation
Fault Recovery
            RPR     POS    GE
Data        4.5 ms  7.5 s  1.65 s
Ranking     3       1      2

A rank of 3 is better than 2, and 2 is better than 1.
Evaluation
Other categories (QoS, stress, etc.) are ranked similarly. The table below gives a brief illustration; the actual ranking has more detail.

            RPR  POS  GE
QoS          3    1    2
Stress       2    1    3
Multicast    2    3    1

NB: a rank of 3 is better than 2, and 2 is better than 1.
Evaluation

Weights (example weights shown in parentheses after each category; in blue on the original slide) are assigned to each category depending on its importance to the user network.

              RPR  POS  GE
Fault (2)      3    1    2
QoS (2)        3    1    2
Stress (3)     2    1    3
Multicast (1)  2    3    1
Costs (4)      1    1    3
Evaluation

The preferred technology is chosen by score, computed as the product of the two matrices (the weight matrix and the technology evaluation matrix).

              RPR  POS  GE
Fault (2)      3    1    2
QoS (2)        3    1    2
Stress (3)     2    1    3
Multicast (1)  2    3    1
Costs (4)      1    1    3
Score         24   14   30
Evaluation
The table indicates that GE has the highest score, 30 (vs. 24 for RPR and 14 for POS), and is the most desirable technology for the given weights.
Had the weights favoured fault recovery timings over pricing, RPR would have been the winner.
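The score is just a weighted sum per technology. A minimal sketch reproducing the slide's numbers, with an illustrative re-weighting that favours fault recovery over pricing:

```python
# Weighted evaluation score: for each technology,
# score = sum over categories of (category weight) x (rank in category).
# Rankings and the example weights are taken from the slides.
WEIGHTS = {"Fault": 2, "QoS": 2, "Stress": 3, "Multicast": 1, "Costs": 4}
RANKS = {
    "RPR": {"Fault": 3, "QoS": 3, "Stress": 2, "Multicast": 2, "Costs": 1},
    "POS": {"Fault": 1, "QoS": 1, "Stress": 1, "Multicast": 3, "Costs": 1},
    "GE":  {"Fault": 2, "QoS": 2, "Stress": 3, "Multicast": 1, "Costs": 3},
}

def score(weights):
    return {t: sum(weights[c] * r[c] for c in weights) for t, r in RANKS.items()}

print(score(WEIGHTS))
# -> {'RPR': 24, 'POS': 14, 'GE': 30}: GE wins with the example weights.

# Illustrative alternative weights favouring fault recovery over pricing
# flip the winner to RPR, as the slide notes:
print(score({"Fault": 5, "QoS": 2, "Stress": 3, "Multicast": 1, "Costs": 1}))
# -> {'RPR': 30, 'POS': 14, 'GE': 27}
```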
Conclusion
- All indoor tests have been completed.
- Experimental results were presented (fault recovery, stress test, QoS, multicast).
- All 10720 routers have been deployed at CC, SOC and I2R.
- Backbone connectivity between the deployed nodes is up.
- Half the milestones have been achieved and more than half of the deliverables completed.
- Outdoor tests will commence next.
- An evaluation comparing the three technologies was presented: GE -> QoS, stress, pricing; POS -> multicast; RPR -> QoS, fault recovery.
Optical Plane
Objectives
To experiment with and identify suitable optical network signalling and control software solutions (GMPLS, OGSI) for the following cross-layer activities:
- Traffic engineering / QoS management
- Fault protection and recovery
- Support for data-in-network research
GMPLS-based Control Plane Functions
[Diagram: GMPLS-based control plane functions – VIN, traffic engineering, and protection & recovery, running over the IP channel (KRAN) and the optical channel (ONFIG-GMPLS)]
GMPLS Software
[Diagram: GMPLS software architecture – two node-resident modules, each containing an RSVP-TE/CR-LDP-TE module, an OSPF-TE/IS-IS-TE module and an LMP module, plus a management and monitoring module]
KRAN Optical Plane
[Diagram: KRAN optical plane – optical switches (O), media converters (MC) and repeaters (R) arranged in the VIN and normal ring configurations]
KRAN Optical Node
[Diagram: KRAN optical node – a 2x2 optical switch with four port pairs; media converters perform optical (1000BaseSX, 850 nm) to electrical conversion between the switch and the two GE interfaces]
Grid Middleware Testing
Objectives
1.) To develop test methodologies and instrumentation techniques for the measurement and evaluation of Grid middleware performance over KRAN
2.) To further quantify key network parameters (fault recovery, QoS etc.) for the purpose of supporting Grid middleware and applications
Thank You!
Questions?