JUNIPER METAFABRIC: Westcon 5-Day Training
Washid Lootfun
Sr. Pre-Sales Engineer
FEBRUARY, 2014
METAFABRIC ARCHITECTURE PILLARS

Smart: save time, improve performance
• Elastic (scale-out) fabrics
• QFabric
• Virtual Chassis
• Virtual Chassis Fabric

Simple: easy to deploy and use
• Mix-and-match deployment
• One OS
• Universal building block for any network architecture
• Seamless 1GE/10GE/40GE/100GE upgrades

Open: maximize flexibility
• Open standards-based interfaces: L2, L3, MPLS
• Open SDN protocol support: VXLAN, OVSDB, OpenFlow
• IT automation via open interfaces: VMware, Puppet, Chef, Python
• Junos scripting & SDK
• Standard optics
METAFABRIC ARCHITECTURE PORTFOLIO
• Switching: flexible building blocks; simple switching fabrics
• Routing: universal data center gateways
• Management: smart automation and orchestration tools
• SDN: simple and flexible SDN capabilities
• Data Center Security: adaptive security to counter data center threats
• Solutions & Services: reference architectures and professional services
EX SWITCHES
EX SERIES PRODUCT FAMILY
One JUNOS
Network Director
FIXED (ACCESS)
• EX2200, EX2200-C, EX3300: entry-level access switches
• EX4200: proven access switch
• EX4300: versatile access switch
• EX4550: powerful aggregation switch

MODULAR (AGGREGATION/CORE)
• EX6210: dense access/aggregation switch
• EX8208, EX8216: core/aggregation switch
• EX9204, EX9208, EX9214: programmable core/distribution switch
EX4300 SERIES SWITCHES
Product Description
• 24/48x 10/100/1000 TX access ports
• 4x 1/10G (SFP/SFP+) uplink ports
• 4x 40G (QSFP+) VC / uplink ports
• PoE / PoE+ options
• Redundant / field-replaceable components (power supplies, fans, uplinks)
• DC power options

Notable Features
• L2 and basic L3 (static, RIP) included
• OSPF, PIM available with enhanced license
• BGP, IS-IS available with advanced license
• Virtual Chassis: 10 members, 160-320 Gbps VC backplane
• 12 hardware queues per port
• Front-to-back and back-to-front airflow options

Target Applications
• Campus data closets
• Top-of-rack data center / high-performance 1G server attach applications
• Small network cores
SKU # Ports PoE/PoE+ Ports PoE power budget
EX4300-24P 24 24 550 W
EX4300-24T 24 - -
EX4300-48P 48 48 900 W
EX4300-48T 48 - -
EX4300-48T-AFI 48 - -
EX4300-48T-DC 48 - -
EX4300-48T-DC-AFI 48 - -
AFI AFO
• L2, L3 switching
• MPLS & VPLS /EVPN*
• ISSU
• Junos Node Unifier
• 1M MAC addresses
• 256K IPv4 and 256K IPv6 routes
• 32K VLANs (bridge domains)
• Native programmability (Junos image)
• Automation toolkit
• Programmable Control/Management planes and SDK (SDN, OpenFlow, etc.)
• 4, 8 & 14 slots; 240G/slot
• 40x1GbE, 32x10GbE, 4x40GbE & 2x100GbE
• Powered by Juniper One Custom Silicon
INTRODUCING THE EX9200 ETHERNET SWITCH
AVAILABLE MARCH 2013
Juniper One Custom Silicon
Roadmap
EX9204
EX9208
EX9214
EX9200 LINE CARDS
• 1GbE line cards (EX9200-40F/40T): 40 x 10/100/1000BASE-T or 40 x 100FX/1000BASE-X SFP
• 10GbE line card (EX9200-32XS): 32 x 10GbE SFP+, up to 240G throughput
• 40GbE line card (EX9200-4QS): 4 x 40GbE QSFP+, up to 120G throughput
• 100GbE line card (EX9200-2C-8XS): 2 x 100G CFP + 8 x 10GbE SFP+, up to 240G throughput
EX9200 FLEXIBILITY VIRTUAL CHASSIS
Management
Access Switch
Access Switch
High Availability: redundant REs, redundant switch fabric, redundant power/cooling
Performance and Scale: modular configuration, high-capacity backplane
Easy to Manage: single image, single config, one management IP address
Single Control Plane: single protocol peering, single RT/FT
Virtual Chassis, a notch up: scale ports and services beyond one chassis; physical placement flexibility; redundancy beyond one chassis; one management and control plane
13.2R2
Requires dual REs per chassis
ACCESS
DISTRIBUTION
CORE
ON ENTERPRISE SWITCHING ARCHITECTURES
Problem: Existing architectures lack scale and flexibility, and are operationally complex.

Multi-Tier
Solution: Virtual Chassis at both access and distribution layers
Benefit: management simplification, reduced opex

Collapsed Distribution & Core
Solution: collapse core and distribution; Virtual Chassis at access layer
Benefit: simplification through consolidation, scale, aggregation, performance

Distributed Access
Solution: Virtual Chassis at access layer across wiring closets
Benefit: flexibility to expand and grow, scale, simplification

Network Director
COLLAPSE A VERTICAL BUILDING
WLA
VIRTUAL CHASSIS DEPLOYMENT ON ENTERPRISE
CONNECT WIRING CLOSETS
Span Horizontal or Vertical
EX Series Virtual Chassis
CLOSET 2
EX4300
Aggregation/Core
Access
10GbE/40GbE uplinks
10/40GbE
40G VCP
CLOSET 1
BUILDING A BUILDING B
WLA
WLA
EX4300 VC-2a
WLA
EX4300 VC-3a
WLA
EX3300 VC-1a
LAG
EX4550 VC-1a
LAG
WLA
EX6200-1b
SRX Series Cluster
LAG
WLA
App Servers
Centralized DHCP and
other services
LAG
EX9200 VC-1b
WLC Cluster
Internet
WLA
WLA
Private MPLS Campus Core with VPLS
or L3VPN
DEPLOYING MPLS AND VPN ON ENTERPRISE: METRO/DISTRIBUTED CAMPUS
Stretch the Connectivity for a Seamless Network
Core Switch (PE)
Access Switches (CE)
MPLS
VLAN
Access Switches (CE)
Wireless Access Point
Wireless Access Point
SITE 1
Core Switch (PE)
VLAN1
VLAN2 R&D VPN
Marketing/ Sales VPN
Finance/ Business Ops VPN
Core Switch (PE)
Access Switches (CE)
MPLS
VLAN
Access Switches (CE)
Wireless Access Point
Wireless Access Point
SITE 3
Core Switch (PE)
Core Switch (PE)
Access Switches (CE)
MPLS
VLAN
Access Switches (CE)
Wireless Access Point
Wireless Access Point
SITE 2
Core Switch (PE)
VLAN3
JUNIPER ETHERNET SWITCHING
Simple. Reliable. Secure.
#3 market share in 2 years
20,000+ switching customers
Enterprise & Service Providers
23+ Million ports deployed
Copyright © 2013 Juniper Networks, Inc.
QFX5100 PLATFORM
QFX5100 SERIES
• Next-generation top-of-rack switches
– Multiple 10GbE/40GbE port count options
– Supports multiple data center switching architectures
• New innovations:
– Topology-independent In-Service Software Upgrade (ISSU)
– Analytics
– MPLS
– GRE tunneling
Low Latency
Rich L2/L3 features including MPLS
SDN ready
QFX5100 NEXT GENERATION TOR
Low latency │ Rich L2/L3 feature set │ Optimized FCoE
QFX5100-48S
48 x 1/10GbE SFP+
6 x 40GbE QSFP uplinks
1.44 Tbps throughput
1U fixed form factor
QFX5100-96S
96 x 1/10GbE SFP+
8 x 40GbE QSFP uplinks
2.56 Tbps throughput
2U fixed form factor
QFX5100-24Q
24 x 40GbE QSFP
8 x 40GbE expansion slots
2.56 Tbps throughput
1U fixed form factor
QFX5100-48S
Each 40GbE QSFP interface can be converted to 4 x 10GbE interfaces without reboot
Maximum 72 x 10GbE interfaces, 720Gbps
CLI to change port speed:
set chassis fpc <fpc-slot> pic <pic-slot> port <port-number> channel-speed 10G
set chassis fpc <fpc-slot> pic <pic-slot> port-range <low> <high> channel-speed 10G
Q4CY2013
48 x 1/10GbE SFP+ interfaces 6 x 40GbE QSFP interfaces
Mgmt0(RJ45)
Mgmt1(SFP)
Console USB
4+1 redundant fan trays, color-coded (orange: AFO, blue: AFI), hot-swappable
1+1 redundant 650 W power supplies, color-coded, hot-swappable
Front side (port side) view
QFX5100-96S
Supports two port configuration modes:
96 x 10GbE SFP plus 8 x 40GbE interfaces
104 x 10GbE interfaces
1.28Tbps (2.56Tbps full duplex) switching performance
New 850W 1+1 redundant color-coded hot-swappable power supplies
2+1 redundant color-coded hot-swappable fan tray
Q1CY2014
96 x 1/10GbE SFP+ interfaces 8 x 40GbE QSFP interfaces
Front side (port side) view
QFX5100-24Q
Port configuration has four modes; a mode change requires a reboot.
1. Default (fully subscribed) mode:
– Does not support QICs
– Maximum 24x 40GbE or 96x 10GbE interfaces; line-rate performance for all packet sizes
2. 104-port mode:
– Only the first 4x 40GbE QIC is supported, with its last two 40GbE interfaces disabled; the first two QSFPs work as 8x 10GbE
– The second QIC slot cannot be used; no native 40GbE support
– All base ports can be changed to 4x 10GbE ports (24x4 = 96), for a total of 104x 10GbE interfaces
3. 4x 40GbE PIC mode:
– All base ports can be channelized
– Only the 4x 40GbE QIC is supported; it works in both QIC slots but cannot be channelized
– 32x 40GbE, or 96x 10GbE + 8x 40GbE
4. Flexi PIC mode:
– Supports all QICs, but QICs cannot be channelized
– Only base ports 4-24 can be channelized; also supports a 32x 40GbE configuration
Q1CY2014
24 x 40GbE QSFP interfaces Two hot-swappable 4x40GbE QSFP modules
Front side (port side) view (same FRU-side configuration as the QFX5100-48S)
ADVANCED JUNOS SOFTWARE ARCHITECTURE
Provides the foundation for advanced functions
• ISSU (In-Service Software Upgrade) enables hitless upgrades
• Other Juniper applications for additional services in a single switch
• Third-party applications
• Much faster system bring-up

Architecture: Linux kernel (CentOS) with a host network bridge and KVM, running an active and a standby Junos VM alongside Juniper and third-party applications.
QFX5100 HITLESS OPERATIONS
DRAMATICALLY REDUCES MAINTENANCE WINDOWS
[Chart: Data Center Efficiency During Switch Software Upgrade; network performance vs. network resiliency, comparing QFX5100 topology-independent ISSU with competitive ISSU approaches]
High-Level QFX5100 Architecture
• x86 hardware running a Linux kernel with Kernel-based Virtual Machines (KVM)
• Broadcom Trident II PFEs
Flexible, hitless, simple
Benefits:
• Seamless upgrade
• No traffic loss
• No performance impact
• No resiliency risk
• No port flap
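The upgrade sequence those benefits rest on can be sketched as a toy state machine (all class and field names here are hypothetical; the real mechanism replicates kernel and protocol state between the Junos VMs via GRES/NSR):

```python
# Illustrative sketch of topology-independent ISSU on a virtualized RE:
# the PFE keeps forwarding while the routing-engine VM is replaced.

class Switch:
    def __init__(self, version: str):
        self.master = {"image": version, "state": "synced"}
        self.backup = None
        self.forwarding = True          # PFE state, independent of the REs

    def issu(self, new_version: str):
        # 1. Spin up a backup Junos VM running the new image
        self.backup = {"image": new_version, "state": "syncing"}
        # 2. Replicate state to the backup VM (GRES/NSR in the real system)
        self.backup["state"] = "synced"
        # 3. Switch mastership; the data plane never stops forwarding
        self.master, self.backup = self.backup, None
        assert self.forwarding

sw = Switch("13.2R1")
sw.issu("13.2R2")
assert sw.master["image"] == "13.2R2" and sw.forwarding
```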
INTRODUCING VCF ARCHITECTURE
Leafs: integrated L2/L3 gateways
Connects to Virtual and bare metal servers
Local switching
Any to Any connections
Single Switch to Manage
OVM VM VM
vSwitch
Virtual Server
OVM VM VM
vSwitch
Virtual Server Bare Metal
Spine Switches
1 RU, 48 SFP+ & 1 QIC
Leaf switches
Spines: integrated L2/L3 switches
Connect leafs, core, WAN and services
Services GW
Any to Any connections
PLUG-N-PLAY FABRIC
OVM VM VM
vSwitch
Virtual Server
OVM VM VM
vSwitch
Virtual Server Bare Metal
1 RU, 48 SFP+ & 1 QIC
Services GWWAN/Core
New leafs are auto-provisioned, with automatic configuration and image sync. Any non-factory-default node is treated as a network device.
QFX5100-24Q
EX9200
Virtual Chassis Fabric (VCF) – 10G/40G
1 RU, 48 SFP+ & 1 QIC
QFX5100-48S EX4300
10G access Existing 1G access
QFX3500
Existing 10G access
VIRTUAL CHASSIS FABRIC DEPLOYMENT OPTION
QFX5100 – SOFTWARE FEATURES
Q4 2013 Q1 2014
Planned FRS features*:
• L2: xSTP, VLAN, LAG, LLDP/LLDP-MED
• L3: static routing, RIP, OSPF, IS-IS, BGP, vrf-lite, GRE
• Multipath: MC-LAG, L3 ECMP
• IPv6: neighbor discovery, router advertisement, static routing, OSPFv3, BGPv6, IS-ISv6, VRRPv3, ACLs
• MPLS, L3VPN, 6PE
• Multicast: IGMPv2/v3, IGMP snooping/querier, PIM-Bidir, ASM, SSM, Anycast, MSDP
• QoS: classification, CoS/DSCP rewrite, WRED, SP/WRR, ingress/egress policing, dynamic buffer allocation, FCoE/lossless flow, DCBX, ETS, PFC, ECN
• Security: DAI, PACL, VACL, RACL, storm control, control plane protection
• 10G/40G FCoE, FIP snooping
• Micro-burst monitoring, analytics
• sFlow, SNMP
• Python

Planned post-FRS features:
• Virtual Chassis mixed mode
• 10-member Virtual Chassis: mix of QFX5100, QFX3500/QFX3600, EX4300
• Virtual Chassis Fabric: 20 nodes at FRS with a mix of QFX5100, QFX3500/QFX3600, and EX4300
• Virtual Chassis features: parity with standalone
• HA: NSR, NSB, GR for routing protocols, GRES
• ISSU on standalone QFX5100 and all-QFX5100 Virtual Chassis / Virtual Chassis Fabric
• NSSU in mixed-mode Virtual Chassis or Virtual Chassis Fabric
• 64-way ECMP
• VXLAN gateway*
• OpenStack, CloudStack integration*

* After the Q1 time frame. Please refer to the release notes and manuals for the latest information.
New
Virtual Chassis Fabric
Up to 20 members
QFX5100
Spine-Leaf
…
Virtual Chassis
Improved
Up to 10 members
QFabric
Improved
Managed as a Single Switch
Layer 3 Fabric
L3 Fabric
QFX5100
… Up to 128 members
VCF OVERVIEW
Simple: single device to manage, predictable performance, integrated RE, integrated control plane
Automated: plug-n-play, analytics for traffic monitoring, Network Director
Available: 4x integrated REs, GRES/NSR/NSB, ISSU/NSSU, any-to-any connectivity, 4-way multipath
Flexible: up to 768 ports, 1/10/40G, 2-4 spines, 10G and 40G spine, L2, L3 and MPLS
CDBU SWITCHING ROADMAP SUMMARY
2T2013  3T2013  1T2014  2T2014
Hardware
Software
EX4300
QFX5100 (24SFP+)
QFX5100 10GBASE-T
Solutions
VXLAN Gateway Opus
Future
EX4550 10GBASE-T
EX4550 40GbE Module
EX9200 2x100G LC
QFX5100 (48SFP+)
QFX5100 (96SFP+)
VXLAN Routing EX9200
EX9200 6x40GbE LC
EX9200 400GbE per slot
Virtual Chassis w/ QFX Series
QFX3000-M/G 10GBASE-T Node
DC 1.0: Virtualized IT DC
DC 1.1: ITaaS & VDI
QFX3000-M/G L3 Multicast
40GbE
QFX3000-M/G QinQ, MVRP
ISSU on Opus
OpenFlow 1.3
ND 1.5
Analytics
DC 2.0: IaaS w/ Overlay
EX9200 MACsec
ND 2.0
Opus PTP
QFX5100 (24QSFP+)
QFX3000-M/G QFX5100 (48 SFP+)
Node
V20
EX4300 Fiber
Campus 1.0
MX SERIES
SDN AND THE MX SERIES Delivering innovation inside and outside of the data center
Flexible SDN-enabled silicon to provide seamless workload mobility and connections between private and public cloud infrastructures:

ORE (Overlay Replication Engine): a hardware-based, high-performance services engine for broadcast and multicast replication within SDN overlays
USG (Universal SDN Gateway): the most advanced and flexible SDN bridging and routing gateway
EVPN (Ethernet VPN): next-generation technology for connecting multiple data centers and providing seamless workload mobility
VMTO (VM Mobility Traffic Optimizer): creating the most efficient network paths for mobile workloads
VXLAN PART OF UNIVERSAL GATEWAY FUNCTION ON MX
Bridge-Domain.N, VLAN-ID: N
LAN interface #N
LAN interface #K
VTEP #N
VNID N
IRB.N
VPLS, EVPN, L3VPN
Bridge-Domain.1, VLAN-ID: 1002
LAN interface #3
LAN interface #4
VTEP #1
VNID 1
Bridge-Domain.0, VLAN-ID: 1001
LAN interface #1
LAN interface #2
VTEP #0
VNID 0
IRB.1
IRB.0
• High-scale multi-tenancy
– VTEP tunnels per tenant
– P2P, P2MP tunnels
• Ties into full L2 and L3 functions on MX
– Unicast, multicast forwarding
– IPv4, IPv6
– L2: bridge domain, virtual switch
• Gateway between LAN, WAN and overlay
– Ties all media together
– Gives migration options to the DC operator
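The per-tenant VNID mapping above can be made concrete with a short sketch of VXLAN encapsulation as defined in RFC 7348. The helper name and the placeholder inner frame are illustrative, not Junos code; in practice the result rides inside an outer UDP/IP packet addressed to the remote VTEP.

```python
import struct

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend a VXLAN header (RFC 7348) to an inner Ethernet frame.

    Header layout: 8 flag bits (0x08 = VNI is valid), 24 reserved bits,
    a 24-bit VNI, and 8 more reserved bits.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24                 # I-flag set, other bits reserved
    header = struct.pack("!II", flags, vni << 8)
    return header + inner_frame

# Tenant 0 traffic gets VNID 0, tenant 1 gets VNID 1, and so on.
frame = b"\x00" * 14                   # placeholder inner Ethernet header
pkt = vxlan_encap(1, frame)
assert pkt[0] == 0x08                              # I-flag
assert int.from_bytes(pkt[4:7], "big") == 1        # VNI field
```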
1H 2014
Tenant #0: virtual DC #0
Tenant #1, virtual DC #1
Tenant #N, virtual DC #N
DC GW
USG(Universal SDN Gateway)
Bare Metal Servers
• Databases
• HPC
• Legacy Apps
• Non x86
• IP Storage
• Firewalls
• Load Balancers
• NAT
• Intrusion Detection
• VPN Concentrator
L4 – 7 Appliances
• NSX ESXi
• NSX KVM
• SC HyperV
• Contrail KVM
• Contrail Xen
SDN ServersVirtualized Servers
• ESX
• ESXi
• HyperV
• KVM
• Xen
NETWORK DEVICES IN THE DATA CENTER
USG (UNIVERSAL SDN GATEWAY)
Introducing four new options for SDN enablement
Layer2 USG: SDN to IP (Layer 2)
Provides SDN-to-non-SDN translation, same IP subnet
Remote Data Center
Branch Offices
Internet
Layer3 USG: SDN to IP (Layer 3)
Provides SDN-to-non-SDN translation, different IP subnet
SDN USG: SDN to SDN
Provides SDN-to-SDN translation, same or different IP subnet, same or different overlay

WAN USG: SDN to WAN
Provides SDN-to-WAN translation, same or different IP subnet, same or different encapsulation
USG(Universal SDN Gateway)
USGs INSIDE THE DATA CENTER
DATA CENTER 1
Legacy Pods
SDN Pod 1
Layer2 USG
Layer3 USG
SDN USG
WAN USG
L4–7 Services
USG(Universal SDN Gateway)
Using Layer 2 USGs to bridge between devices that reside within the same IP subnet:
1. Bare metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs
2. Layer 4–7 services such as load balancers, firewalls, Application Delivery Controllers, and Intrusion Detection/Prevention gateways
USGs INSIDE THE DATA CENTER
DATA CENTER 1
Legacy Pods
SDN Pod 1
Layer3 USG
SDN USG
WAN USG
L4–7 Services
USG(Universal SDN Gateway)
Using Layer 3 USGs to route between devices that reside within different IP subnets:
1. Bare metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs
2. Layer 4–7 services such as load balancers, firewalls, Application Delivery Controllers, and Intrusion Detection/Prevention gateways
Layer2 USG
NSX SDN Pod 2
USGs INSIDE THE DATA CENTER
DATA CENTER 1
SDNPod 1
Layer2 USG
Layer3 USG
SDN USG
WAN USG
Using SDN USGs to communicate between islands of SDN:
1. NSX to NSX – Risk, scale, change control, administration
2. NSX to Contrail – Multi-vendor, migrations
USG(Universal SDN Gateway)
Contrail SDN Pod 1
BRANCH OFFICES
NSX SDN Pod 2
Internet
USGs FOR REMOTE CONNECTIVITY
DATA CENTER 1
SDNPod 1
Layer2 USG
Layer3 USG
SDN USG
WAN USG
USG(Universal SDN Gateway)
DATA CENTER 2
Using USGs to communicate with resources outside the local data center:
1. Data Center Interconnect – SDN to [VPLS, EVPN, L3VPN]
2. Branch Offices – SDN to [GRE, IPSec]
3. Internet – SDN to IP (Layer 3)
Internet
Contrail SDN Pod 1
L4–7 Services
NSX SDN Pod 2
UNIVERSAL GATEWAY SOLUTIONS
DATA CENTER 1
SDNPod 1
Layer2 USG
Layer3 USG
WAN USG
USG(Universal SDN Gateway)
Legacy Pods
DATA CENTER 2
SDN Pod 2
BRANCH OFFICES
SDN USG
USG COMPARISONS

Layer 2 USG: provides SDN-to-non-SDN translation, same IP subnet. Platforms: QFX5100, MX Series/EX9200. Use case: NSX or Contrail talk Layer 2 to non-SDN VMs, bare metal and L4-7 services.

Layer 3 USG: provides SDN-to-non-SDN translation, different IP subnet. Platform: MX Series/EX9200. Use case: NSX or Contrail talk Layer 3 to non-SDN VMs, bare metal, L4-7 services and the Internet.

SDN USG: provides SDN-to-SDN translation, same or different IP subnet, same or different overlay. Platform: MX Series/EX9200. Use case: NSX or Contrail talk to other pods of NSX or Contrail.

WAN USG: provides SDN-to-WAN translation, same or different IP subnet. Platform: MX Series/EX9200. Use case: NSX or Contrail talk to other remote locations (branch, DCI).

X86 Appliance ✔ ✔
Competing ToRs ✔
Competing Chassis ✔
EVPN (Ethernet VPN)
Next-generation technology for connecting multiple data centers and providing seamless workload mobility
PRIVATE MPLS WAN without EVPN
VLAN 10
PRE-EVPN: LAYER 2 STRETCH BETWEEN DATA CENTERS
DATA CENTER 1
VLAN 10
DATA CENTER 2
Without EVPN

Data plane:
• Only one path can be active at a given time
• Remaining links are put into standby mode

Control plane:
• Layer 2 MAC tables are populated via the data plane (similar to a traditional L2 switch)
• Results in flooding of packets across the WAN due to out-of-sync MAC tables
MAC: AA
Server 1
xe-1/0/0.10
xe-1/0/0.10 xe-1/0/0.10
xe-1/0/0.10
MAC: BB
Server 2
ge-1/0/0.10
ge-1/0/0.10
MAC VLAN Interfaces
BB 10 xe-1/0/0.10
Router 2’s MAC Table
ge-1/0/0.10
ge-1/0/0.10
MAC VLAN Interfaces
AA 10 xe-1/0/0.10
Router 1’s MAC Table
PRIVATE MPLS WAN with EVPN
VLAN 10
POST-EVPN: LAYER 2 STRETCH BETWEEN DATA CENTERS
EVPN(Ethernet VPN)
DATA CENTER 1
VLAN 10
DATA CENTER 2
With EVPN

Data plane:
• All paths are active
• Inter-data center traffic is load-balanced across all WAN links

Control plane:
• Layer 2 MAC tables are populated via the control plane (similar to QFabric)
• Eliminates flooding by maintaining MAC table synchronization between all EVPN nodes
MAC VLAN Interfaces
AA 10 xe-1/0/0.10
BB 10 ge-1/0/0.10
Router 1’s MAC Table
MAC: AA
Server 1
xe-1/0/0.10
xe-1/0/0.10 xe-1/0/0.10
xe-1/0/0.10
MAC: BB
Server 2
ge-1/0/0.10
ge-1/0/0.10
MAC VLAN Interfaces
BB 10 xe-1/0/0.10
AA 10 ge-1/0/0.10
Router 2’s MAC Table
ge-1/0/0.10
ge-1/0/0.10
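The control-plane learning that keeps the two MAC tables above in sync can be sketched in a few lines. The class and names are illustrative, not Juniper code, and EVPN's BGP MAC-route machinery (RFC 7432) is reduced to a direct peer update:

```python
# Sketch: EVPN-style control-plane MAC distribution between PE routers.

class EvpnPe:
    """A PE router that advertises locally learned MACs to its peers,
    the way EVPN distributes MAC routes over BGP."""

    def __init__(self, name: str):
        self.name = name
        self.peers = []
        self.mac_table = {}   # MAC -> (vlan, next hop: local ifl or remote PE)

    def learn_local(self, mac: str, vlan: int, interface: str):
        # Local learning still happens in the data plane...
        self.mac_table[mac] = (vlan, interface)
        # ...but the entry is then pushed to every peer via the control
        # plane, so remote tables never go stale and unknown-unicast
        # flooding across the WAN is eliminated.
        for peer in self.peers:
            peer.mac_table[mac] = (vlan, self.name)

r1, r2 = EvpnPe("router1"), EvpnPe("router2")
r1.peers.append(r2)
r2.peers.append(r1)

r1.learn_local("AA", 10, "xe-1/0/0.10")   # Server 1 behind Router 1
r2.learn_local("BB", 10, "xe-1/0/0.10")   # Server 2 behind Router 2

# Both tables now hold both MACs: AA locally on r1, via "router1" on r2.
assert r2.mac_table["AA"] == (10, "router1")
assert r1.mac_table["BB"] == (10, "router2")
```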
VMTO(VM Mobility Traffic Optimizer)
Creating the most efficient network paths for mobile workloads
VMTO(VM Mobility
Traffic Optimizer)
PRIVATE MPLS WAN
VLAN 10VLAN 10
Scenario without VMTO
THE NEED FOR L2 LOCATION AWARENESS
DC1 DC2
Scenario with VMTO enabled
PRIVATE MPLS WAN
VLAN 10 VLAN 10
DC1 DC2
DC 2VLAN 10
10.10.10.100/24
DC 3
10.10.10.200/24
VLAN 10
VLAN 20
Server 2 Server 3
Server 1
PRIVATE MPLS WAN
DC 1
20.20.20.100/24
Active VRRPDG: 10.10.10.1
Standby VRRPDG: 10.10.10.1
Standby VRRPDG: 10.10.10.1
Standby VRRPDG: 10.10.10.1
WITHOUT VMTO: EGRESS TROMBONE EFFECT
Task: Server 3 in Data Center 3 needs to send packets to Server 1 in Data Center 1.
Problem: Server 3's active default gateway for VLAN 10 is in Data Center 2.
Effect:
1. Traffic must travel via Layer 2 from Data Center 3 to Data Center 2 to reach VLAN 10's active default gateway.
2. The packet must reach the default gateway in order to be routed towards Data Center 1. This results in duplicate traffic on WAN links and suboptimal routing, hence the "Egress Trombone Effect."
VMTO(VM Mobility
Traffic Optimizer)
DC 2VLAN 10
10.10.10.100/24
DC 3
10.10.10.200/24
VLAN 10
VLAN 20
Server 2 Server 3
Server 1
PRIVATE MPLS WAN
DC 1
20.20.20.100/24
Active IRBDG: 10.10.10.1
Active IRBDG: 10.10.10.1
Active IRBDG: 10.10.10.1
Active IRBDG: 10.10.10.1
WITH VMTO: NO EGRESS TROMBONE EFFECT
Task: Server 3 in Datacenter 3 needs to send packets to Server 1 in Datacenter 1.
Solution: Virtualize and distribute the default gateway so it is active on every router that participates in the VLAN.
Effect:
1. Egress packets can be sent to any router on VLAN 10, allowing routing to be done in the local datacenter. This eliminates the "Egress Trombone Effect" and creates the most optimal forwarding path for inter-DC traffic.
VMTO(VM Mobility
Traffic Optimizer)
DC 2VLAN 10
10.10.10.100/24
DC 3
10.10.10.200/24
VLAN 10
VLAN 20
Server 2 Server 3
Server 1
PRIVATE MPLS WAN
DC 1
20.20.20.100/24
WITHOUT VMTO: INGRESS TROMBONE EFFECT
Task: Server 1 in Datacenter 1 needs to send packets to Server 3 in Datacenter 3.
Problem: Datacenter 1's edge router prefers the path to Datacenter 2 for the 10.10.10.0/24 subnet. It has no knowledge of individual host IPs.
Effect:
1. Traffic from Server 1 is first routed across the WAN to Datacenter 2 due to the lower-cost route for the 10.10.10.0/24 subnet.
2. The edge router in Datacenter 2 then sends the packet via Layer 2 to Datacenter 3.

DC 1's edge router table without VMTO:
Route        Mask  Cost  Next Hop
10.10.10.0   24    5     Datacenter 2
10.10.10.0   24    10    Datacenter 3
VMTO(VM Mobility
Traffic Optimizer)
DC 2VLAN 10
10.10.10.100/24
DC 3
10.10.10.200/24
VLAN 10
VLAN 20
Server 2 Server 3
Server 1
PRIVATE MPLS WAN
DC 1
20.20.20.100/24
WITH VMTO: NO INGRESS TROMBONE EFFECT
Task: Server 1 in Datacenter 1 needs to send packets to Server 3 in Datacenter 3.
Solution: In addition to advertising the 10.10.10.0/24 summary route, the datacenter edge routers also advertise /32 host routes representing the location of local servers.
Effect:
1. Ingress traffic destined for Server 3 is sent directly across the WAN from Datacenter 1 to Datacenter 3. This eliminates the "Ingress Trombone Effect" and creates the most optimal forwarding path for inter-DC traffic.

DC 1's edge router table with VMTO:
Route          Mask  Cost  Next Hop
10.10.10.0     24    5     Datacenter 2
10.10.10.0     24    10    Datacenter 3
10.10.10.100   32    5     Datacenter 2
10.10.10.200   32    5     Datacenter 3
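The effect of the host routes in the table above follows from ordinary longest-prefix matching, which a short sketch can demonstrate. The route entries mirror the example; the lookup helper and its tie-breaking rule are illustrative.

```python
import ipaddress

# Routing table from the VMTO example: a /24 summary from two DCs plus
# /32 host routes that pin each server to its actual location.
routes = [
    (ipaddress.ip_network("10.10.10.0/24"), 5,  "Datacenter 2"),
    (ipaddress.ip_network("10.10.10.0/24"), 10, "Datacenter 3"),
    (ipaddress.ip_network("10.10.10.100/32"), 5, "Datacenter 2"),
    (ipaddress.ip_network("10.10.10.200/32"), 5, "Datacenter 3"),
]

def lookup(dst: str) -> str:
    dst_ip = ipaddress.ip_address(dst)
    matches = [r for r in routes if dst_ip in r[0]]
    # Longest prefix wins; lower cost breaks ties among equal-length prefixes.
    best = max(matches, key=lambda r: (r[0].prefixlen, -r[1]))
    return best[2]

# The /32 host route steers traffic for Server 3 straight to Datacenter 3,
# even though the /24 summary prefers Datacenter 2.
assert lookup("10.10.10.200") == "Datacenter 3"
assert lookup("10.10.10.100") == "Datacenter 2"
# A host with no /32 route falls back to the cheaper /24 via Datacenter 2.
assert lookup("10.10.10.50") == "Datacenter 2"
```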
VMTO(VM Mobility
Traffic Optimizer)
NETWORK DIRECTOR
SMART NETWORK MANAGEMENT FROM A SINGLE PANE OF GLASS
Virtual Networks
Physical Networks
Network Director
API
Visualize: physical and virtual visualization
Analyze: smart and proactive networks
Control: lifecycle and workflow automation
CONTRAIL SDN CONTROLLER
SDN Controller
Configuration Analytics
Control
Virtualized Server
VM VM VM
Virtualized Server
VM VM VM
IP fabric (underlay network)
Juniper QFabric/QFX/EX or third-party underlay switches
Juniper MX or third-party gateway routers
Tenant VMs
BGP Federation
Horizontally scalable
Highly available
Federated BGP clustering
JunosV Contrail Controller
KVM hypervisor + JunosV Contrail vRouter/agent (L2 & L3)
REST
XMPP
MPLS over GRE or VXLAN
SDN CONTROLLER
Control
Orchestrator
OVERLAY ARCHITECTURE
XMPP; BGP + NETCONF
METAFABRIC ARCHITECTURE: WHAT WILL IT ENABLE?
SIMPLE, SMART, OPEN
www.juniper.net/metafabric
Accelerated time to value and increased value over time
THANK YOU