ETSI NFV ISG
Andy Reid, BT

Founding Member of ETSI NFV ISG

Background to ETSI NFV ISG

• Many carriers have independently been progressing research on NFV technology

– they independently concluded that the technology is ready, but would not be commercialised quickly for scale deployment without industry cooperation and support

• Cooperation amongst the carriers began with informal discussions in April 2012

• With wide carrier support, started informal discussions on convening an industry forum

• A meeting in Sept 2012 decided - after consideration of several options - to parent under ETSI as an “Industry Specification Group”

• The joint white paper on Network Functions Virtualisation was published to coincide with presentations at the OpenFlow/SDN World Congress, Darmstadt (Oct 2012), and the ETSI Board approved creation of the NFV ISG (Nov 2012)

• Founding members:

– AT&T, BT, Deutsche Telekom, Orange, Telecom Italia, Telefonica, Verizon

• First formal meeting in ETSI HQ, Sophia Antipolis, Jan 2013

Why we believe NFV is the future for Networks


• Standard high volume servers have sufficient packet processing performance to cost effectively virtualise network appliances.

• The hypervisor need not be a bottleneck.

• LINUX need not be a bottleneck.

• TCO advantages are scenario specific, but significant benefits are expected.

• Plus a significant reduction in energy consumption.

ETSI NFV

The basic concept

[Diagram: Classical Network Appliance Approach vs. NFV Approach]

Classical Network Appliance Approach:

• Dedicated appliances from dedicated vendors: CDN, WAN Acceleration, Message Router, Session Border Controller, BRAS, Firewall, DPI, Tester/QoE monitor, Radio Access Network Controller, Carrier Grade NAT, PE Router, SGSN/GGSN.

• Fragmented non-commodity hardware.
• Physical install per appliance per site.
• Hardware development a large barrier to entry for new vendors, constraining innovation & competition.

NFV Approach:

• Network functions delivered as software by independent software vendors (ISVs) in a competitive & innovative ecosystem.

• Running on NFV Infrastructure: high volume standard servers, high volume standard storage and high volume Ethernet switches.

• Orchestrated, automatic & remote install.

NFV Organization and Structure

[Diagram: Technical management structure - a Technical Steering Committee, chaired by the Technical Manager, oversees the Working Groups (WGs) and Expert Task Groups.]

NFV Scope


[Diagram: Basic Domain Architecture - domains connected by numbered container interface reference points, with links to Carrier Management and the Existing Network.]

Domains:
• NFV Applications Domain
• Orchestration and Management Domain
• Hypervisor Domain
• Compute Domain
• Infrastructure Network Domain

Container interfaces:
• Virtual Machine Container Interface
• Compute Container Interface
• Virtual Network Container Interface
• VI Container Interface
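As an aid to reading the figure, here is a minimal sketch (an illustration only, not part of the ETSI NFV material) recording, for three of the container interfaces above, which domain provides the container and which domain runs inside it. The provider/consumer pairings are my reading of the figure; the VI Container Interface towards Carrier Management is omitted because its endpoints are not clear from the extracted slide.

```c
/*
 * Illustrative sketch only (not from the ETSI NFV documents): the container
 * interfaces of the Basic Domain Architecture as a small table recording
 * which domain provides each container and which domain runs inside it.
 * The provider/consumer pairings are my reading of the figure.
 */
#include <stdio.h>

struct container_interface {
    const char *name;       /* container interface named in the figure */
    const char *provider;   /* domain that provides the container */
    const char *consumer;   /* domain that runs inside the container */
};

static const struct container_interface ifs[] = {
    { "Compute Container Interface",
      "Compute Domain",                "Hypervisor Domain" },
    { "Virtual Machine Container Interface",
      "Hypervisor Domain",             "NFV Applications Domain" },
    { "Virtual Network Container Interface",
      "Infrastructure Network Domain", "NFV Applications Domain" },
};

int main(void)
{
    for (size_t i = 0; i < sizeof ifs / sizeof ifs[0]; i++)
        printf("%s: provided by the %s, hosts the %s\n",
               ifs[i].name, ifs[i].provider, ifs[i].consumer);
    return 0;
}
```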

Example Use Cases

• Mobile networks:
– HLR/HSS, MME, SGSN, GGSN/PDN-GW, Base Station, vEPC

• NGN signalling:
– SBCs, IMS

• Switching elements:
– BNG, CG-NAT, routers

• Security functions:
– Firewalls, virus scanners, intrusion detection systems, spam protection

• Tunnelling gateway elements:
– IPSec/SSL VPN gateways

• Home environment:
– home router, set top box

• Application-level optimisation:
– CDNs, Cache Servers, Load Balancers, Application Accelerators

• Converged and network-wide functions:
– AAA servers, policy control and charging platforms

• Traffic analysis/forensics:
– DPI, QoE measurement

• Traffic monitoring:
– Service Assurance, SLA monitoring, Test and Diagnostics

Benefits

• Reduced equipment costs (CapEx) through equipment consolidation and economies of scale.

• Reduced operational costs (OpEx): labour, power, space.

• Increased speed of time to market by minimising the typical network operator cycle of innovation.

• Availability of network appliance multi-version and multi-tenancy, which allows use of a single platform for different applications, users and tenants.

• Flexibility to easily, rapidly and dynamically provision and instantiate new services in various locations (no need for new equipment installs).

• Improved operational efficiency by taking advantage of the higher uniformity of the physical network platform and its homogeneity with other support platforms.

• Encouraging innovation to bring new services and generate new revenue streams.

• Mobility of skillset and talent (people and skills can be moved around as needed).

ETSI NFV ISG

• Carrier-led Industry Specification Group (ISG) under the auspices of ETSI (20 carriers and mobile operators). Wide industry support (more than 50 vendors).

• Open membership to everyone

– ETSI members sign the “Member Agreement”

– Non-ETSI members sign the “Participant Agreement”

• Operates by consensus (formal voting only when required)

• Deliverables: White papers addressing challenges and operator requirements, as input to standardisation bodies


• Face-to-face meetings quarterly

• Currently four (4) WGs and two (2) expert groups (EG)

– WG1: Infrastructure Architecture
– WG2: Management and Orchestration
– WG3: Software Architecture
– WG4: Reliability & Availability
– EG: Security
– EG: Performance and Portability

• Network Operators Council (NOC)

– governing and technical advisory body

• Technical Steering Committee:

– Technical Manager

– WG Chairs, EG Leaders

Do join and contribute


EXTRAS

Implementing Hierarchical-QoS in Software

Progress

• The January 2012 vBRAS test implemented Priority QoS; implementing Hierarchical-QoS in software was seen as a barrier.

• BT & Intel® initiated a project to implement high performance H-QoS in software.

• Currently implemented a hierarchical scheduler with 5 levels, 64K queues, traffic shaping, strict priority and weighted round robin.

• Preliminary performance per CPU core is close to line rate for hierarchical scheduling and packet transmission for one 10GbE port at 64-byte packet size, i.e. 13.3 Mpps (see the line-rate calculation below).

• Hardware: 2x Intel Xeon E5-2680 CPUs @ 2.7 GHz (8 cores, 20 MB L3 cache, 8 GT/s QPI, 4x DDR3 memory channels each); 32 GB DDR3 memory (2x 2 GB DIMMs per each of the 4 memory channels of each CPU); 1x Intel X520-SR2 dual-port 10 Gbps Ethernet controller connected to CPU0 through one PCI-Express Gen2 x8 slot.

• Software: Fedora release 16 (Verne) with Linux kernel 3.1.0-7.fc16.x86_64. Kernel boot-time configuration: 16x 1 GB huge memory pages reserved (8 pages for each CPU), CPU isolation enabled to restrict the kernel scheduler to CPU0 core 0; Intel DPDK 1.4 Early Access Release 1.

• Subject to further development and testing.

• H-QoS may be included in the Intel® DPDK.
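For context, a standard line-rate calculation (not from the original slide): a 10GbE port carrying 64-byte packets, with 20 bytes of preamble and inter-frame gap per frame, has a theoretical maximum of

\[
\frac{10 \times 10^{9}\ \text{bit/s}}{(64 + 20)\ \text{bytes} \times 8\ \text{bit/byte}} \approx 14.88\ \text{Mpps},
\]

so the 13.3 Mpps quoted above corresponds to roughly 89% of line rate per core.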

[Chart: Hierarchical Scheduler Performance, in Gbps per CPU core.]
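To make the scheduler terms above concrete, the following is a minimal sketch of one scheduling node combining a strict-priority queue with packet-count weighted round robin over best-effort queues. It is an illustration only, not the BT/Intel implementation and not the Intel DPDK API; queue sizes, weights and packet ids are invented for the demo, and a real H-QoS scheduler would repeat this decision at every level of the 5-level hierarchy and add token-bucket shaping per node.

```c
/*
 * Minimal sketch of one H-QoS scheduling node: a strict-priority queue plus
 * packet-count weighted round robin (WRR) over best-effort queues.
 * Illustration only - not the BT/Intel implementation and not the DPDK API.
 * Queue sizes, weights and packet ids are invented for the demo.
 */
#include <stdio.h>

#define NUM_BE_QUEUES 3   /* best-effort queues under this node (assumed) */
#define QLEN 8            /* tiny ring buffer standing in for real queues */

struct queue {
    int head, tail;
    int pkts[QLEN];
    unsigned weight;      /* WRR weight: packets served per round */
    unsigned credit;      /* packets remaining in the current round */
};

static int q_pop(struct queue *q)
{
    if (q->head == q->tail)
        return -1;                            /* queue empty */
    int p = q->pkts[q->head];
    q->head = (q->head + 1) % QLEN;
    return p;
}

static void q_push(struct queue *q, int pkt)
{
    q->pkts[q->tail] = pkt;
    q->tail = (q->tail + 1) % QLEN;
}

struct sched_node {
    struct queue prio;                        /* strict-priority queue */
    struct queue be[NUM_BE_QUEUES];           /* WRR best-effort queues */
    size_t next;                              /* current WRR queue */
};

/* One dequeue decision: strict priority first, then WRR. A real hierarchical
 * scheduler repeats this per level and adds token-bucket shaping per node. */
static int node_dequeue(struct sched_node *n)
{
    int p = q_pop(&n->prio);
    if (p >= 0)
        return p;                             /* priority traffic wins */

    for (size_t scanned = 0; scanned < NUM_BE_QUEUES; scanned++) {
        struct queue *q = &n->be[n->next];
        if (q->credit == 0)
            q->credit = q->weight;            /* start a new WRR round */
        p = q_pop(q);
        if (p >= 0) {
            if (--q->credit == 0)             /* share used up: move on */
                n->next = (n->next + 1) % NUM_BE_QUEUES;
            return p;
        }
        q->credit = 0;                        /* empty queue forfeits its round */
        n->next = (n->next + 1) % NUM_BE_QUEUES;
    }
    return -1;                                /* nothing to send */
}

int main(void)
{
    struct sched_node n = { .be = { { .weight = 2 },
                                    { .weight = 1 },
                                    { .weight = 1 } } };

    /* Queue a few packets: ids 100+ are priority, the rest best effort. */
    q_push(&n.prio, 100); q_push(&n.prio, 101);
    for (int i = 0; i < 4; i++) {
        q_push(&n.be[0], i);
        q_push(&n.be[1], 10 + i);
        q_push(&n.be[2], 20 + i);
    }

    for (int p; (p = node_dequeue(&n)) >= 0; )
        printf("sent packet %d\n", p);
    return 0;
}
```

Compiled as C99, the demo emits the two priority packets first and then interleaves the best-effort queues in a 2:1:1 pattern according to their weights.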

Virtualising Content Distribution Networks

• Ran a Verivue (now Akamai Aura) HyperCache node and an IneoQuest adaptive stream monitor (measuring video QoE) as virtual machines on VMware ESXi 5.0 on an HP BL460 G8 server with 2x 10GigE ports.

Results shown below

• The video traffic from the virtual HyperCache node was “mirrored” to the virtual IneoQuest ASM using the standard VMware vSwitch. Currently investigating bottlenecks and testing a new version of the ASM.

• For BT’s UK network, the virtualised solution’s 8 Gbps level of performance would be sufficient for 77% of Metro nodes. Virtualisation reduces box count, saving CAPEX & OPEX.


[Chart: Peak network throughput of the cache (Gbit/s) for All PDL, All ABR VoD and All ABR Live traffic, with the ASM off (18.7 / 16.6 / 10.7), with the ASM on (8 in each case), and the ASM monitor capacity (4.5). Running on 1x HP BL460c G8.]

[Chart: UK Metro nodes in rank order of cache traffic (Mbps per node): nodes carrying <8 Gbps are >77% of nodes in 2013/14; <32 Gbps covers >77% of nodes in 2017/18.]

PDL = Progressive DownLoad. ABR = Adaptive Bit Rate. VoD = Video on Demand. Live = live linear TV. ASM = Adaptive Stream Monitor from IneoQuest.

Where Virtualisation Improves Performance

• It is widely accepted that virtualisation reduces performance compared to running on “bare metal”, but here is a real application where it improves performance:

• Scalable IPsec solutions are required for FONera roaming WiFi and LTE services.

Investigated lowest cost IPsec solution for BT’s FON WiFi service.

Requirements: Null encryption, 3DES IKE, ~80 Kbps/tunnel, millions of tunnels, high tunnel set-up rate.

• Tested the KAME solution bundled in the Linux kernel (Ubuntu 10.04 LTS): achieved 7K tunnels.

[Chart: IPsec tunnels per DL360 server vs. number of E5-2667 cores, with and without virtualisation, at a tunnel set-up rate of 100/sec. Annotations on the chart: 3.2 Gbps, 3.8 Gbps, and packets dropped at 1.2 Gbps.]

• The bottleneck was a single core being used to terminate all IPsec tunnels.

• How to use more CPU cores?
– Rewrite the code, or
– use KVM and run multiple virtual Linux kernels to load-share the IPsec tunnels across multiple cores ☺ (see the sketch below)

• Used KVM (Red Hat 6.3) with Ubuntu 10.04 LTS virtual machines.
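The load-sharing idea can be sketched as follows (my illustration of the general approach, not BT's actual configuration): with one single-core guest kernel pinned per physical core, hashing each IPsec peer to a guest spreads tunnel termination across all cores instead of concentrating it on one. The guest count, hash function and example addresses below are assumptions made for illustration.

```c
/*
 * Illustrative sketch only (not BT's actual configuration): spreading IPsec
 * tunnels across N single-core guest kernels by hashing the peer address,
 * so that no single kernel terminates every tunnel. The guest count, hash
 * function and example addresses are assumptions made for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_GUESTS 8   /* e.g. one KVM guest pinned per physical core */

/* FNV-1a hash over the 4 bytes of an IPv4 peer address. */
static uint32_t fnv1a(uint32_t x)
{
    uint32_t h = 2166136261u;
    for (int i = 0; i < 4; i++) {
        h ^= (x >> (8 * i)) & 0xffu;
        h *= 16777619u;
    }
    return h;
}

/* Pick which guest (and hence which core) terminates this peer's tunnels. */
static unsigned guest_for_peer(uint32_t peer_ipv4)
{
    return fnv1a(peer_ipv4) % NUM_GUESTS;
}

int main(void)
{
    /* Example peers: 10.0.0.1, 10.0.0.2, 192.168.1.1, 192.168.1.100. */
    uint32_t peers[] = { 0x0a000001u, 0x0a000002u, 0xc0a80101u, 0xc0a80164u };

    for (size_t i = 0; i < sizeof peers / sizeof peers[0]; i++)
        printf("peer 0x%08x -> guest %u\n",
               (unsigned)peers[i], guest_for_peer(peers[i]));
    return 0;
}
```

In practice the peer-to-guest mapping would be applied in front of the guests, for example by giving each guest kernel its own service address and steering peers to it at tunnel set-up time.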
