
Deploying and Configuring MPLS Virtual Private Networks in IP Tunnel Environments


White Paper

© YEAR Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.


Russell Kelly [email protected]

Craig Hill [email protected]


Table of Contents

Introduction
MPLS Over GRE Deployment Examples
MPLS L2VPN Deployment – EoMPLS over GRE
Tunnel Scenarios
Forwarding Plane Label Stack Information
Detailed Packet and Label Path Information
    Complete GRE Encapsulated IP Packet
    L3 VPN Control Plane Information
    L3 VPN Forwarding Plane Information
    L2 VPN (EoMPLS) Forwarding and Control Plane Information
Fragmentation in MPLS over GRE Networks
Resolving IP Fragmentation, MTU, MSS, and PMTUD Issues with GRE
Example Configuration of an L3 MPLS VPN over GRE PE Router
Inter-AS over GRE – Scaling MPLS over GRE Hub and Spoke Designs
    VPNv4 Route Distribution Between ASBRs (Option B)
    Application of Inter-AS Option B to VPN Hub and Spoke Designs
Encryption in MPLS over GRE Networks
QoS in MPLS over GRE Networks
Design Notes for MPLS over GRE
LxVPNoGRE Case Study
    Case Overview
    The L2VPN Case
    The L3VPN Case
    Case Study Router Configurations


Introduction

Service providers (SPs) and enterprises alike are migrating from existing ATM, Frame Relay (FR), and Time Division Multiplexing (TDM) infrastructures to an IP-based backbone. Current IP backbones can no longer be designed simply to transport IP packets. Instead, Next Generation (NG) Internet Protocol (IP) backbones must be capable of providing multiple IP services over a single physical infrastructure, using techniques such as differentiated quality of service (QoS) and secure transport. In addition, NG IP backbones should provide Layer 2/3 VPNs, IP multicast, IPv6, and granular traffic-engineering capabilities.

Ultimately, these IP backbones should be scalable and flexible enough to support the mission-critical, time-sensitive applications that all modern networks require and to meet new demands for applications, services, and bandwidth. Multiprotocol Label Switching (MPLS), when used on an IP backbone, provides the mechanism to offer rich IP services and transport capabilities to the routing infrastructure.

Additionally, the capability to offer MPLS-based VPNs over a non-MPLS-capable IP core provides an extremely flexible, cost-efficient virtualized WAN design that is simple to configure, while at the same time maintaining support for core infrastructure services such as security and QoS.

An example of a converged tunneled MPLS VPN architecture providing L2 and L3 VPN services is shown in the diagram below. This deployment example is well suited to a high-bandwidth deployment running tunneled MPLS between regional locations, where the number of tunnels is relatively small but the throughput required for each tunnel may be in the 1–10 Gbps range.

Figure 1. Converged L2 & L3 MPLS VPN over GRE Deployment Example

This white paper examines the advanced Virtual Private Network (VPN) capabilities in next-generation, application-aware WAN designs, focusing specifically on MPLS VPN over an IP-only core, that is, the deployment of MPLS VPN over IP (GRE) tunnels. It examines the benefits, deployment options, and configurations, as well as the associated technologies such as IPSec, QoS, and fragmentation.


MPLS Over GRE Deployment Examples

The implementation assumes that either the enterprise or the service provider has procured a Layer 3-based service, such as Layer 3 VPNs, from a provider interconnecting the MPLS MANs, or, as is the case for many customers, needs to interconnect MPLS MANs across an "inconsistent" IP transport with different MTUs, encryption, and tunneling capabilities, where a viable option is to encapsulate in GRE. The MANs may have multiple connections between them to provide load balancing and/or redundancy.

Below are two common examples of MPLS over GRE topologies: site-to-site and hub-and-spoke.

As shown in Figure 2, the WAN edge router used for interconnecting the MANs plays the role of a P device even though it is a CE for the SP VPN service. It is expected to label-switch packets between MAN1 and MAN2 across the SP network. Note: The GRE encapsulating or de-encapsulating router can be either a P or a PE router.

Figure 2. Site to Site Tunneled MPLS VPN Deployment Example

A point-to-point GRE tunnel is set up between each edge router pair (if a full mesh is desired). From a control plane perspective, the following protocols should be run within the GRE tunnels (a minimal configuration sketch follows the list):

• IGP such as EIGRP or OSPF for MPLS device reachability (P/PE/RR)

• LDP for label distribution

• MP-iBGP for VPN route/label distribution
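A minimal configuration sketch of these three control-plane sessions running over a GRE tunnel is shown below. The interface names, addresses, and AS number are illustrative assumptions rather than values taken from the referenced design guide; Loopback0 (192.0.2.1) is assumed to be the local LDP/BGP source and 192.0.2.2 the remote peer's loopback.

! GRE tunnel toward the remote MAN edge router
interface Tunnel0
 ip address 10.255.0.1 255.255.255.252
 ! Enable LDP over the tunnel for label distribution
 mpls ip
 tunnel source GigabitEthernet0/0/0
 tunnel destination 172.16.2.1
!
! IGP inside the tunnel for P/PE/RR loopback reachability
router ospf 10
 network 10.255.0.0 0.0.0.3 area 0
 network 192.0.2.1 0.0.0.0 area 0
!
! MP-iBGP for VPN route and label distribution
router bgp 65001
 neighbor 192.0.2.2 remote-as 65001
 neighbor 192.0.2.2 update-source Loopback0
 !
 address-family vpnv4
  neighbor 192.0.2.2 activate
  neighbor 192.0.2.2 send-community extended

The same template applies per tunnel in a full-mesh design; only the tunnel destination and neighbor addresses change.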


Figure 3. Hub and Spoke Tunneled MPLS VPN Deployment Example

Multiple point-to-point GRE tunnels are configured on the hub, or mGRE is used (if using NHRP/2547oDMVPN). From a control plane perspective, the following protocols should be run within the (m)GRE tunnel(s); a minimal branch-side sketch follows the list:

• Routing protocol of the provider to learn the branch and head-end physical interface addresses (tunnel source addresses). Static routes could also be used if these are easily summarized.

• GRE tunnel between the branch PE and the head-end P router.

• IGP running in the enterprise global space over the GRE tunnel to learn the remote PEs' and RRs' loopback addresses.

• LDP session over the GRE tunnel with label allocation/advertisement for the GRE tunnel address by the branch router.

• MP-iBGP session with the RR, where the branch router's BGP source address is the tunnel interface address; this forces the BGP next-hop lookup for the VPN route to be associated with the tunnel interface.
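A minimal sketch of the corresponding branch PE, with the MP-iBGP session sourced from the tunnel interface as described in the last bullet, might look as follows. The addresses, interface names, and AS number are illustrative assumptions; 192.0.2.10 stands in for the RR loopback.

interface Tunnel1
 ip address 10.255.1.2 255.255.255.252
 ! LDP to the head-end runs over the tunnel
 mpls ip
 ! Physical address learned via the provider routing (or a static route)
 tunnel source GigabitEthernet0/0
 tunnel destination 172.16.0.1
!
router bgp 65001
 neighbor 192.0.2.10 remote-as 65001
 ! Sourcing the session from the tunnel ties the VPN next hop to the tunnel
 neighbor 192.0.2.10 update-source Tunnel1
 !
 address-family vpnv4
  neighbor 192.0.2.10 activate
  neighbor 192.0.2.10 send-community extended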

Many more details on these and other deployment examples along with configurations can be found at the following location:

http://www.cisco.com/en/US/docs/solutions/Enterprise/WAN_and_MAN/ngwane.pdf


MPLS L2VPN Deployment – EoMPLS over GRE

EoMPLS technology is currently the solution that best addresses the problem of Layer 2 extension over long distances. However, it has traditionally required an enterprise to migrate the core to MPLS switching, which is often a burden when the core is not dedicated to L2VPN extension.

Figure 4. EoMPLS over GRE Tunnel For L2VPN Transport Over an IP Core

MPLS requires specific expertise for deployment; maintenance and migration of the existing IP core to a new MPLS core can be complex. To ease the adoption of Layer 2 extension, the solution is to encapsulate the EoMPLS traffic over a GRE tunnel. This allows for the transport of all Layer 2 flows over the existing IP core, eliminating the need for a complex migration.

This solution creates a high-performance, hardware-switched GRE tunnel that encapsulates the EoMPLS frames. This allows IP tunneling of L2 MPLS VPN frames at gigabit-per-second rates; in the case of the ASR 1000, up to 20 Gbps, or up to the bandwidth of the forwarding engine (also known as the ESP) installed in the platform.

The L2VPN over IP design is identical to the deployment over MPLS: EoMPLS port-mode xconnect is the default option for a point-to-point connection.
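As a minimal sketch of what this looks like on a PE (the VC ID, addresses, and interface names are illustrative assumptions, with 192.0.2.21 standing in for the remote PE loopback), a port-mode pseudowire over GRE needs little more than LDP on the tunnel and an xconnect on the attachment circuit:

interface Tunnel0
 ip address 10.255.2.1 255.255.255.252
 ! Targeted LDP to the remote PE signals the pseudowire over this tunnel
 mpls ip
 tunnel source Loopback0
 tunnel destination 192.0.2.22
!
! Attachment circuit in port mode: the whole port is cross-connected
interface GigabitEthernet0/1
 no ip address
 xconnect 192.0.2.21 100 encapsulation mpls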

In the following EoMPLSoGRE design, the GRE connection is established between the two datacenter core routers, and the MPLS LSP is then established over this tunnel. This provides an extremely flexible datacenter interconnect with up to 10 Gbps of bidirectional forwarding (with the ASR 1000 ESP20), whereby smaller, distributed datacenters can be integrated at Layer 2 over vanilla IP networks.


Figure 5. IP Tunneled L2 VPN For Datacenter Interconnect

Additionally, with platforms such as the ASR 1000, IPSec can be used to encrypt the GRE tunnels. This further expands the use cases for this deployment in that these L2 transports can now be tunneled securely over untrusted IP backbones or even the Internet.


Tunnel Scenarios

There are three scenarios, described below, where L2VPN and L3VPN over GRE are typically deployed by customers on PE or P routers.

As shown in Figure 6, the customer has not transitioned any part of their core to MPLS but would like to offer EoMPLS and basic MPLS VPN services. Hence, GRE tunneling of the MPLS-labeled traffic is done between PEs. This is the most common scenario seen in customer networks.

Figure 6. PE-PE GRE Tunnels

The second scenario, shown in Figure 7, is one where MPLS has been enabled between PE and P routers but the network core may contain non-MPLS-aware routers or IP encryption devices. In this scenario, GRE tunneling of the MPLS-labeled packets is done between P routers.

Figure 7. P-P GRE Tunnel

Figure 8 demonstrates a network where the P-P nodes are MPLS aware, while GRE tunneling is done between PE and P routers across the non-MPLS network segments.


Figure 8. PE-P GRE Tunnels

Forwarding Plane Label Stack Information

The following section outlines the label stack and forwarding operation, without encryption, in the three scenarios outlined previously. The figures that follow detail the topology in a branch-to-hub configuration; however, the label imposition/swapping is the same whether the topology is branch/MAN or MAN/MAN. The important consideration is the header differences and operations. The configuration is the same in all cases with respect to the IOS CLI.

Figure 9. MPLS over GRE—Forwarding Plane for P-P Router Tunnel

As shown in Figure 9, the P router receives a labeled packet for the MPLS-enabled MAN (LDP1). It swaps this label with the appropriate LDP label (LDP2) advertised by the P for the destination next-hop address (the tunnel destination address). It then encapsulates the labeled packet in a GRE tunnel with the hub P as the destination before sending it to the provider. Since in this example the SP is providing a Layer 3 VPN service, it further prepends its own VPN and LDP labels for transport within its network. The hub P receives a GRE-encapsulated labeled packet. It de-encapsulates the tunnel headers before label-switching the packet out the appropriate outgoing interface in the MPLS MAN so that it can reach the eventual PE destination.


Figure 10. MPLS over GRE—Forwarding Plane for P-PE Router Tunnel

As shown in Figure 10, the branch router attaches the appropriate VPN label for the destination along with the LDP label advertised by the hub P for the destination next-hop address. It then encapsulates the labeled packet in a GRE tunnel with the hub P as the destination before sending it to the provider. Since in this example the SP is providing a Layer 3 VPN service, it further prepends its own VPN and LDP labels for transport within its network. The hub P receives a GRE-encapsulated labeled packet. It de-encapsulates the tunnel headers before label-switching the packet out the appropriate outgoing interface in the MPLS MAN so that it can reach the eventual PE destination.

Figure 11. MPLS over GRE—Forwarding Plane for PE-PE Router Tunnel

As shown in Figure 11, the branch router attaches the appropriate VPN label for the destination, as advertised by the hub PE router. It then encapsulates the labeled packet in a GRE tunnel with the hub PE as the destination before sending it to the provider. Since in this example the SP is providing a Layer 3 VPN service, it further prepends its own VPN and LDP labels for transport within its network. The hub PE receives a GRE-encapsulated labeled packet. It de-encapsulates the tunnel headers before forwarding the packet out the appropriate outgoing interface based on the VPN label information and the VRF routing table.

As can be seen from the headers, this adds a large amount of overhead relative to the MTU. The two SP headers are for the SP-provided VPN and, in the context of this MPLSoGRE discussion, are not considered further; to the SP core, the customer's traffic appears as "vanilla" IP traffic (sourced from the tunnel source interface, an IPv4 loopback, or another IPv4 interface advertised within the IP core).


Detailed Packet and Label Path Information

Complete GRE Encapsulated IP Packet

The figure below expands the packet format for an MPLS VPN labeled packet tunneled over GRE between two PE routers that are connected through the GRE tunnel. One can see from the diagram that the VPN label is added to the original packet, but not the IGP/LDP label as well, because the PEs are effectively directly connected (there is no P core). Additionally, the GRE header and new IP header are added for the GRE transport.

Figure 12. Detail of the Encapsulated VPN Traffic in a GRE Tunnel
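As a worked example of that encapsulation overhead for the PE-PE case shown in Figure 12 (assuming a basic GRE header with no key or sequence options, an outer IPv4 header without options, and a single VPN label), the added headers total:

Outer IPv4 header        20 bytes
GRE header                4 bytes
MPLS VPN label            4 bytes
Total added overhead     28 bytes

With a 1500-byte core MTU, the largest original IP packet that avoids fragmentation is therefore 1500 - 28 = 1472 bytes; each additional label (for example, an LDP label when a P core is present) consumes another 4 bytes.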

L3 VPN Control Plane Information

The following three diagrams detail the control and forwarding paths for L2 and L3 VPN traffic over GRE.

Figure 13. Detail of the Control Plane Communication and Messaging For L3VPN Over GRE


L3 VPN Forwarding Plane Information

Figure 14. Detail of the Forwarding Plane Communication For L3VPN Over GRE

L2 VPN (EoMPLS) Forwarding and Control Plane Information

Figure 15. Detail of the Forwarding and Control Plane Messaging for L2VPN Over GRE


Fragmentation in MPLS over GRE Networks

In general, the rules for dealing with fragmentation in MPLS over GRE environments are the same as for pure IP traffic over GRE. The most important caveat with MTU when MPLS is being used is that one cannot override the MPLS MTU on a GRE tunnel interface; the interface command mpls mtu is blocked on tunnel interfaces. The effect of this restriction is that if an MPLS packet arrives on a tunnel interface with a size greater than the MPLS MTU of the tunnel interface and it has the DF bit set, it will be dropped. This is an issue because there is no way to clear the DF bit on a P router, and the only way to increase the MPLS MTU is to raise the MTU of the physical egress interface. This may not be an option in some tunneling environments, where the provider dictates the core MTU. This is a corner case to be aware of in P-P tunneling environments. It is not an issue in provider edge (PE-P or PE-PE) tunneled environments, as Policy Based Routing (PBR) can be used on ingress to clear the DF bit on the IP traffic.
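A minimal sketch of the PBR approach mentioned above, applied on the customer-facing ingress interface of the encapsulating edge router (the ACL, route-map name, and interface are illustrative assumptions):

! Match the traffic entering the VPN on this interface
access-list 101 permit ip any any
!
route-map CLEAR-DF permit 10
 match ip address 101
 ! Clear the DF bit before MPLS/GRE encapsulation
 set ip df 0
!
interface GigabitEthernet0/1
 ip policy route-map CLEAR-DF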

It is best practice for all routers in the MPLS domain to honor the same MTU, as not doing so forces the receiving router to reassemble fragments. The same best practice holds for GRE tunneled environments, whereby receiving routers are forced to reassemble GRE packets if fragmentation post-encapsulation is occurring. The best approach of course is to control the ip mtu of the sending clients pre-MPLS and GRE encapsulation, as there are performance ramifications with having to reassemble on the router. The main issue is that the router has to account for all streams to all hosts and keep track of all fragments before reassembling and forwarding on to the multitude of end hosts.

On most routing platforms the reassembly of packets is not done in hardware, instead it is sent to the control plane for reassembly. The main reason for this design is simple: in the past the hardware forwarding engines did not have the intelligence or memory management capabilities to track and reassemble packets. The major downside of doing reassembly in the control plane was the lack of performance.

The Cisco ASR1000 series, with the Quantum Flow Processor (QFP) now has the ability to reassemble GRE in hardware, giving an order of magnitude greater performance than any other current routing platform for even fragmented MPLS over GRE packets.

Overall, one needs to consider the VPN backbone as a whole, find the low-water mark (that is, the lowest MTU in the backbone), and design for it. Hosts that send packets larger than the IP MTU of the tunnel will receive an ICMP "fragmentation needed" message back so that they can lower their MTU accordingly. Then, even if they still set the DF bit, once they are sending at, for example, 1400 bytes, this bit can at least be cleared for the IPSec packet, allowing IPSec fragmentation in the core and reassembly at the remote/receiving router.
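Where IPSec is part of the encapsulation, IOS also exposes global controls for exactly this behaviour. The following is a sketch of the relevant commands, shown here as global configuration; availability and per-interface variants depend on platform and release:

! Clear the DF bit copied into the outer IPSec header so the encrypted
! packet can be fragmented in the core if required
crypto ipsec df-bit clear
!
! Fragment after encryption, so reassembly is performed by the remote
! crypto router rather than the end host
crypto ipsec fragmentation after-encryption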


Resolving IP Fragmentation, MTU, MSS, and PMTUD Issues with GRE

In general, it is best practice to avoid fragmentation across a GRE tunnel (or any tunnel, for that matter), so that no packet has to be fragmented, and especially reassembled, on a routing platform, whether that means reassembling GRE, IPSec, or IP-in-IP packets.

There are numerous methods to avoid fragmentation, including setting the end hosts' MTU to a value low enough to account for the VPN overhead, be it GRE, IPSec, a combination of the two, or another encapsulation protocol.

To manage TCP traffic, one can employ TCP MSS adjustment (ip tcp adjust-mss) as well as setting the IP MTU on the tunnel interface. To account for non-TCP traffic in an IP environment, policy-based routing can be used to clear any DF bit set by an application (the DF bit is set by default for PMTUD-capable hosts).
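A minimal sketch of these tunnel-level settings is shown below; the 1400-byte MTU and 1360-byte MSS are common conservative example values chosen to leave room for GRE, MPLS, and IPSec overhead, not mandated figures, and the addresses are illustrative.

interface Tunnel1
 ip address 10.255.3.1 255.255.255.252
 ! Leave headroom for the outer IP/GRE/MPLS (and optionally IPSec) headers
 ip mtu 1400
 ! Clamp the TCP MSS of sessions crossing the tunnel so that full-size
 ! segments still fit within the reduced IP MTU
 ip tcp adjust-mss 1360
 tunnel source Loopback0
 tunnel destination 172.16.4.1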

These methods and a further explanation are covered in the links below:

http://www.cisco.com/en/US/tech/tk827/tk369/technologies_white_paper09186a00800d6979.shtml
http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tech_note09186a0080093f1f.shtml


Example Configuration of an L3 MPLS VPN over GRE PE Router

In this example configuration, EIGRP is used to advertise the GRE endpoints and the loopback interfaces for the LDP peering. This is the most common implementation; however, in the PE-PE use case one does not necessarily need the IGP over the tunnel to advertise the loopbacks or tunnel endpoints, and the "external" IGP can be used to advertise the tunnel endpoints and, additionally, the MP-BGP peering addresses over the tunnel. If the IGP can be excluded, essentially only BGP and LDP run over the GRE tunnel; this allows for greater control plane scale, as the potential scaling issues inherent in IGP peering in hub-and-spoke topologies are removed.

Figure 16. L3 VPN Over GRE Configuration

This can be further enhanced to run only BGP peering between the hub and multiple spokes by running Inter-AS over GRE, whereby labeled BGP is simply used to pass the VPN information to the spokes. This is particularly useful if one wants to scale the number of remote sites and adhere to a strict hub-and-spoke topology. This is covered in the following Inter-AS over GRE section.


Inter-AS over GRE – Scaling MPLS over GRE Hub and Spoke Designs

One additional method of scaling an MPLS over IP topology is to utilize Inter-AS VPN over GRE to pass the MPLS VPN information between the hub location and the numerous remote sites. In this scenario both the hub and all of the spokes act as PE and ASBR routers – and the topology is pure hub and spoke.

The major benefit of using Inter-AS over GRE in this manner is that the control plane only has to manage N eBGP sessions, as opposed to a BGP, LDP, and IGP session to every remote site. This makes the solution very scalable and well suited to tunneled VPN hub-and-spoke deployments, as a single eBGP peering over GRE carries all customer VPNv4 routes across the AS boundary. In this case, an eBGP peering session is all that is required between the hub (the core PE in one AS) and the numerous site routers (remote PE routers in a different AS). Additionally, any QoS and encryption requirements work just as they would in the traditional MPLSoGRE solution.

VPNv4 Route Distribution Between ASBRs (Option B)

Traditionally, the MPLS VPN Inter-AS feature provides a method of interconnecting VPNs between different MPLS VPN service providers. This allows a customer's sites to exist on several carrier networks (autonomous systems) and have seamless VPN connectivity between these sites.

Figure 17 below illustrates the operation of Inter-AS Option B, where two ASBRs share VPNv4 routes and VPN labels to provide Inter-AS MPLS VPN reachability.

Figure 17. Example of Inter-AS Option B Operation

In Option B, the AS border routers (ASBRs) peer with each other using an eBGP session. Each ASBR also performs the function of a PE router and therefore peers with every other PE router in its AS. The ASBR does not hold any VRFs but instead holds all, or a subset (those that need to be passed to the other AS), of the VPNv4 routes from every other PE router. The VPNv4 routes are kept unique on the ASBR by use of the route distinguisher.

The ASBR can control which VPNv4 routes it accepts through the use of route targets. The ASBR then exchanges the VPNv4 routes, plus the associated VPN labels, with the other ASBR using eBGP.


This procedure requires more coordination between carriers, such as the eBGP peering and the route targets that will be accepted.

Application of Inter-AS Option B to VPN Hub and Spoke Designs

The control and data forwarding path detailed previously can also be used in a hub-and-spoke topology where the hub is in one AS and all the spokes are in another AS. The hub peers with all spokes, but the spokes only peer with the hub router.

The solution additionally assumes that the backbone network does not carry MPLS customer traffic and hence, in this case, provides only native IP connectivity from the aggregating ASR 1000 to the remote routers. In order to provide the MPLS functionality for the overlay network, GRE tunnels run as a transport mechanism over the IP backbone and are configured between the aggregation router and the remote sites for primary and backup routing.

Once the GRE endpoints are reachable and the GRE tunnels are established, an eBGP session is set up from the ASR hub router to each spoke; a sketch of this peering follows the tunnel configurations below. To enable the sending and receiving of MPLS packets over these GRE tunnels when BGP is used to advertise the MPLS VPNs, simply configure mpls bgp forwarding on the tunnel interface.

interface Tunnel1
 description Primary tunnel to ASR
 ip address 10.0.0.1 255.255.255.252
 mpls bgp forwarding
 qos pre-classify
 tunnel source GigabitEthernet0/0
 tunnel destination 192.168.1.1

Table 1. Remote Site GRE Tunnel Configuration

interface Tunnel1
 description Primary tunnel
 ip address 10.0.0.2 255.255.255.252
 mpls bgp forwarding
 qos pre-classify
 tunnel source Loopback0
 tunnel destination 192.168.0.1

Table 2. Aggregation GRE Tunnel Configuration
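The eBGP VPNv4 session referred to above, shown here from the aggregation (hub) router toward the spoke at the far end of Tunnel1, might then be sketched as follows. The AS numbers are illustrative assumptions; no bgp default route-target filter is included because an Option B ASBR typically holds VPNv4 routes for which it has no local VRF.

router bgp 65000
 ! Accept VPNv4 routes even though no local VRF imports them (Option B ASBR)
 no bgp default route-target filter
 ! Spoke PE/ASBR reached over Tunnel1 (addressing from Tables 1 and 2)
 neighbor 10.0.0.1 remote-as 65100
 !
 address-family vpnv4
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.1 send-community extended

The spoke configuration mirrors this, peering with 10.0.0.2 in the hub AS.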


Encryption in MPLS over GRE Networks

Some integrated services platforms can additionally offer encryption of these IP tunnels, enabling tunneling over even untrusted IP backbones or the Internet. One of the key advantages of the ASR 1000 and its integrated services is that the GRE tunnels can be encrypted at gigabit data rates, by simply applying a crypto map to the egress interface or, more commonly, tunnel protection directly to the tunnel interface.

An example configuration is provided below.

mpls ldp router-id Loopback0 force
!
crypto isakmp policy 1
 encr 3des
 authentication pre-share
 group 2
!
crypto isakmp key cisco123 address 192.168.0.2
!
crypto ipsec transform-set 3DES esp-3des esp-md5-hmac
 mode transport
!
crypto ipsec profile ASR
 set transform-set 3DES
!
interface Tunnel1
 ip address 10.10.0.1 255.255.255.0
 mpls ip
 tunnel source 10.0.0.1
 tunnel destination 10.1.0.2
 tunnel protection ipsec profile ASR
!
interface Loopback0
 ip address 2.2.2.2 255.255.255.255

Table 3. Example Configuration For MPLSoGREoIPSec

As can be seen, to enable encryption on the IP tunnel all that needs to be configured is a transform set and IPSec profile and for this profile to be applied to the IP Tunnel. This will then ensure all the traffic traversing the IP Core is protected, including the label information.

Cisco routers also provide capabilities to ensure that fragmentation can be avoided in the core with PMTU discovery; this is especially relevant when IPSec is involved, as it can add an additional 70+ bytes to the IP header. The ability to fragment IPSec packets either before or after encryption, to allow for MPLS packets with the DF bit set or to spare legacy applications from having to reassemble even with this degree of encapsulation, is therefore very valuable in a core routing platform.

A full configuration for an L3VPN provided over an encrypted (protected) GRE tunnel is detailed in Table 4 below.


Table 4. Configuration For L3VPN over GRE With IPSec

ip vrf vpn1
 rd 100:1
 route-target export 100:1
 route-target import 100:1
!
crypto isakmp policy 1
 encr 3des
 authentication pre-share
 group 2
!
crypto isakmp key cisco123 address 192.168.0.2
!
crypto ipsec transform-set 3DES esp-3des esp-md5-hmac
 mode transport
!
crypto ipsec profile ASR
 set transform-set 3DES
!
interface Tunnel1
 ip address 10.1.0.1 255.255.255.0
 mpls ip
 tunnel source 192.168.0.1
 tunnel destination 192.168.0.2
 tunnel protection ipsec profile ASR
!
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
!
mpls ldp router-id Loopback0 force
!
interface GigabitEthernet0/2/4
 no ip address
 negotiation auto
 no cdp enable
!
interface GigabitEthernet0/2/4.1
 encapsulation dot1Q 21
 ip vrf forwarding vpn1
 ip address 10.0.0.1 255.255.255.0
!
interface GigabitEthernet0/2/7.1
 encapsulation dot1Q 20
 ip address 192.168.0.1 255.255.255.0
!
router ospf 100
 log-adjacency-changes
 network 2.2.2.2 0.0.0.0 area 0
 network 10.1.0.1 0.0.0.0 area 0
!
router bgp 100
 no synchronization
 bgp log-neighbor-changes
 neighbor 10.1.0.2 remote-as 100
 no auto-summary
 !
 address-family vpnv4
  neighbor 10.1.0.2 activate
  neighbor 10.1.0.2 send-community extended
 !
 address-family ipv4 vrf vpn1
  no synchronization
  neighbor 10.0.0.2 remote-as 20
  neighbor 10.0.0.2 activate
 exit-address-family


QoS in MPLS over GRE Networks

In environments where MPLS is being tunneled over GRE, the customer can use the automatically reflected ToS value in the GRE packet header. The diagram below illustrates the ToS/precedence reflection that can be propagated to the IPSec header if cryptography is enabled, or to the GRE IP header. The MPLS EXP value (precedence) is derived from the initial IP ToS setting. When the packet is decrypted or de-encapsulated at the remote tunnel endpoint, the MPLS EXP value is still set on the MPLS packet and can be acted on accordingly.

Figure 18. TOS Reflection In MPLSoGRE and MPLSoGREoIPSec Configurations

A service policy can also be applied to the egress physical interface to explicitly set the IP precedence, as shown below.

As an extension of the above schema, there is the option, on the ingress IP interface, to set both a qos-group and a marking on the IP traffic, such that both can be matched in the child egress service policy to identify traffic types per VRF. Additionally, the tunnel itself can be shaped by matching each tunnel with an ACL that matches the GRE source and destination IP addresses. Up to 255 tunnels can be shaped per physical (or logical, e.g. VLAN sub-interface) interface in IOS XE Release 2.3.0. This QoS design is an adaptation of the DMVPN QoS model detailed in the QoS and VPN Deployment Guide version 1 white paper.

class-map match-all exp2
 match mpls experimental topmost 2
!
policy-map exp2
 class exp2
  set ip precedence 2
!
interface Tunnel1
 ip address 192.168.0.1 255.255.255.0
 qos pre-classify
 mpls ip
 tunnel source 10.0.0.1
 tunnel destination 10.1.0.1
!
interface GigabitEthernet0/1/1
 service-policy output exp2


Figure 19. Example QoS Policy for an MPLSoGRE or MPLSoGREoIPSec Topology

This configuration can be scaled such that, on any single interface or sub-interface, up to 255 tunnels can be matched at the parent level, and the MPLS VPN traffic within each site can be allocated bandwidth or prioritized accordingly. Table 5 outlines this configuration.

For the EoMPLSoGRE case, one can employ a similar schema: setting the qos-group, setting the MPLS EXP bits, and/or policing on ingress, and then using the qos-group and EXP on egress to allocate bandwidth on a per-pseudowire basis within the shaped tunnel. See the configuration in Table 6 as an example.


Table 5. QoS Configuration for Figure 16 – For a Single Tunnel

class-map match-all vrf1-high
 match qos-group 40
 match mpls experimental topmost 5
class-map match-all vrf1-medium
 match qos-group 40
 match mpls experimental topmost 4
class-map match-all vrf1-low
 match mpls experimental topmost 0
 match qos-group 40
class-map match-all vrf2-high
 match qos-group 30
 match mpls experimental topmost 5
class-map match-all vrf2-medium
 match qos-group 30
 match mpls experimental topmost 4
class-map match-all vrf2-low
 match mpls experimental topmost 0
 match qos-group 30
class-map match-all Site1
 match access-group name Site1
class-map match-any control
 match ip precedence 6 7
 match mpls experimental topmost 6 7
!
policy-map child
 class vrf1-high
  priority level 1
  police 1000000
 class vrf2-high
  priority level 2
  police 500000
 class vrf1-medium
  bandwidth remaining ratio 8
 class vrf2-medium
  bandwidth remaining ratio 15
 class control
  bandwidth remaining ratio 1
 class vrf1-low
  bandwidth remaining ratio 2
 class vrf2-low
  bandwidth remaining ratio 4
!
policy-map Parent
 class Site1
  shape average 5120000
  service-policy child
!
interface GigabitEthernet0/0/0
 ! Egress physical interface
 service-policy output Parent
!
ip access-list extended Site1
 permit gre host <tunnel source> host <tunnel destination>


Ingress:
=====
class-map match-any BestEffort-EoMPLS
 match cos 0 7
class-map match-any Business-EoMPLS
 match cos 1 2
class-map match-any Multimedia-EoMPLS
 match cos 3 4
class-map match-any Realtime-EoMPLS
 match cos 5
!
policy-map Ingress-EoMPLS
 class Realtime-EoMPLS
  police cir 128000 bc 8000 be 8000 conform-action set-mpls-exp-transmit 5 exceed-action drop
 class Multimedia-EoMPLS
  police cir 128000 bc 8000 be 8000 conform-action set-dscp-transmit 3 exceed-action drop
 class Business-EoMPLS
  police cir 128000 bc 8000 be 8000 conform-action set-qos-transmit 2 exceed-action drop
 class BestEffort-EoMPLS
  set qos-group 1

Egress:
=====
class-map match-any BestEffort-EoMPLS-Egress
 match qos-group 1
class-map match-any Business-EoMPLS-Egress
 match qos-group 2
class-map match-any Multimedia-EoMPLS-Egress
 match dscp 3
class-map match-any Realtime-EoMPLS-Egress
 match ip precedence 5
class-map match-all GRE_DCI_Tunnel1
 match access-group name GRE_DCI_Tunnel1
!
ip access-list extended GRE_DCI_Tunnel1
 permit gre host <tunnel source> host <tunnel destination>
!
policy-map Egress-EoMPLS-Child
 class Realtime-EoMPLS-Egress
  set mpls exp 5
  priority level 1
 class Multimedia-EoMPLS-Egress
  set mpls exp 3
  priority level 2
 class Business-EoMPLS-Egress
  set mpls exp 2
 class BestEffort-EoMPLS-Egress
  set mpls exp 2
!
policy-map Egress-EoMPLS-Parent
 class GRE_DCI_Tunnel1
  shape average 600000000
  service-policy Egress-EoMPLS-Child

Table 6. QoS Configuration Example for EoMPLSoGRE


It is important to note that when the GRE tunnel is encrypted, this same QoS policy still works, as the ability to look at the inner MPLS EXP value exists whether the outer header is GRE or IPSec.

For further detail on these and other QoS features available on the ASR 1000, follow the link below:

http://www.cisco.com/en/US/prod/collateral/routers/ps9343/solution_overview_c22-449961_ps9343_Product_Solution_Overview.html


Design Notes for MPLS over GRE

The solution of running MPLS VPNs over a GRE and protected GRE infrastructure has obvious cost saving and flexible network design advantages. There are some important points/restrictions to take note of – these mostly pertain to high availability designs.

• As mentioned, the ability to alter the MPLS MTU on an IP tunnel interface is currently restricted in IOS. The MPLS MTU is always derived from the physical egress interface, minus the GRE (encapsulation) header and label overhead. This is of vital importance with EoMPLS, because the EoMPLS service cannot be fragmented; given the inability to raise the MPLS MTU, the egress interface must therefore have an MTU large enough to accommodate the EoMPLS frame plus the GRE header (a worked example follows this list).

• There is currently no support for IP Tunnel HA in IOS, therefore during an RP switchover all tunnels will go down and have to be re-initialized; this subsequently causes all IPSec tunnels to have to be re-established as well as IGP and LDP adjacencies.

• There is currently no BFD support on tunnel interfaces in IOS, so fast peer-down detection cannot be designed into the IGP running over the tunnels.

• The feature IGP-LDP sync is not supported on tunnel interfaces.

• There is no support for tunnel keepalives when using tunnel protection (see the following link):

http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tech_note09186a008048cffc.shtml#t7
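As a rough worked example of the first design note above, assume a standard 1500-byte Ethernet payload on the attachment circuit, a 4-byte EoMPLS control word, a single VC label (PE-PE tunnel, so no transport label), and no GRE options or IPSec:

Customer Ethernet frame (1500-byte payload + 14-byte header)   1514 bytes
EoMPLS control word                                               4 bytes
VC (VPN) label                                                    4 bytes
GRE header                                                        4 bytes
Outer IPv4 header                                                20 bytes
Resulting GRE/IP packet, and minimum egress MTU required       1546 bytes

Any additional label, or IPSec overhead, raises this figure further.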


LxVPNoGRE Case Study

Case Overview

The information below demonstrates a case study of EoMPLSoGRE and L3VPNoGRE scenarios running simultaneously on Cisco PEs using the existing IOS implementation. (This was tested with a SIP-400 on a 7600 and on an ASR 1000.)

The topology shown in Figure 20 is used for this case study.

Figure 20. Case Study Set-up

In the above scenario, the following configuration rules are used:

• A GRE tunnel is created between each PE router.
• The PE's core-facing interface address is used as the GRE tunnel source.
• The tunnel endpoint IP addresses are reachable via the core-facing interface; the PEs use static routing via the core-facing interface to reach the GRE tunnel destination.
• LDP is enabled on the GRE tunnel but not on any interfaces in the IP core.
• A static route pointing to the remote PE LDP-ID via the GRE tunnel is used.
• Tunnel keepalive is enabled (as there is no IPSec in this scenario).
• The MTU of the core-facing interface is set to allow forwarding of jumbo frames.
• The attachment circuit configuration for EoMPLS port and VLAN modes uses MPLS as the encapsulation protocol.
• The L3VPN VRFs run eBGP between PE and CE.
• No QoS is applied to the VRFs or attachment circuits.


The L2VPN Case:

The VC label bindings for the attachment circuits are distributed by the PEs via a targeted LDP session across the GRE tunnel. Since the PE routers are penultimate hops to each other over the GRE tunnel, their label binding for each other's LDP-ID will be implicit-null. The next-hop PE of each EoMPLS pseudowire is learned via the GRE tunnel, as shown later in the verification procedures. All EoMPLS traffic is forwarded via the GRE tunnel.

However, it is expected that some customers may choose to map specific pseudowires or pseudowire classes to unique GRE tunnels.

The L3VPN Case:

The VPNv4 prefixes, labels, and next hops are learned by the remote PEs through MP-iBGP and are not known to the non-MPLS core. This is achieved by defining a static route to the BGP next-hop PE through a GRE tunnel across the non-MPLS network. When routes are learned from the remote PE, they will have a next hop of the GRE tunnel; thus, all traffic across the IP core is sent using the GRE tunnel.
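Although the full verification output is not reproduced here, the state described in both cases can be checked with standard IOS show commands along the following lines (a sketch of a typical procedure, using the case study addressing, rather than an exhaustive list):

! Targeted LDP session to the remote PE, formed across the GRE tunnel
show mpls ldp neighbor
! EoMPLS pseudowire state and the imposed label stack
show mpls l2transport vc detail
! MP-iBGP (VPNv4) session to the remote PE
show ip bgp vpnv4 all summary
! Remote PE LDP-ID/loopback reachable via Tunnel0 (static route in the case study)
show ip route 10.10.10.11
! CEF entry for a VPN prefix, showing the VPN label and the tunnel adjacency
show ip cef vrf VPN1 192.168.2.0 detail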


Case Study Router Configurations

PE1 Configuration

vrf definition VPN1
 rd 100:1
 address-family ipv4
  route-target both 100:1
 exit-address-family
!
mpls label protocol ldp
mpls ldp neighbor 10.10.10.11 targeted
mpls ldp router-id Loopback0 force
!
interface Tunnel0
 ip address 10.9.1.1 255.255.255.0
 mpls label protocol ldp
 mpls ip
 keepalive 10 3
 tunnel source TenGigabitEthernet2/1/0
 tunnel destination 10.1.3.2
!
interface Loopback0
 ip address 10.10.10.10 255.255.255.255
!
interface TenGigabitEthernet2/1/0
 mtu 9216
 ip address 10.2.1.1 255.255.255.0
!
interface TenGigabitEthernet9/1
 no ip address
!
interface TenGigabitEthernet9/1.11
 vrf forwarding VPN1
 encapsulation dot1Q 300
 ip address 192.168.1.1 255.255.255.0
!
interface TenGigabitEthernet9/2
 mtu 9216
 no ip address
 xconnect 10.10.10.11 200 encapsulation mpls
!
router bgp 65000
 bgp log-neighbor-changes
 neighbor 10.10.10.11 remote-as 65000
 neighbor 10.10.10.11 update-source Loopback0
 neighbor 192.168.1.2 remote-as 100
 !
 address-family vpnv4
  neighbor 10.10.10.11 activate
  neighbor 10.10.10.11 send-community extended
 !
 address-family ipv4 vrf VPN1
  no synchronization
  neighbor 192.168.1.2 remote-as 100
  neighbor 192.168.1.2 activate
  neighbor 192.168.1.2 send-community extended
!
ip route 10.10.10.11 255.255.255.255 Tunnel0
ip route 10.1.3.0 255.255.255.0 10.2.1.2


PE2 Configuration

vrf definition VPN1
 rd 100:1
 address-family ipv4
  route-target both 100:1
 exit-address-family
!
mpls ldp neighbor 10.10.10.10 targeted
mpls label protocol ldp
mpls ldp router-id Loopback0 force
!
interface Tunnel0
 ip address 10.9.1.2 255.255.255.0
 mpls label protocol ldp
 mpls ip
 keepalive 10 3
 tunnel source TenGigabitEthernet3/3/0
 tunnel destination 10.1.1.1
!
interface Loopback0
 ip address 10.10.10.11 255.255.255.255
!
interface TenGigabitEthernet2/1
 mtu 9216
 no ip address
 xconnect 10.10.10.10 200 encapsulation mpls
!
interface TenGigabitEthernet2/3
 mtu 9216
 no ip address
!
interface TenGigabitEthernet2/3.11
 vrf forwarding VPN1
 encapsulation dot1Q 300
 ip address 192.168.2.1 255.255.255.0
!
interface TenGigabitEthernet3/3/0
 mtu 9216
 ip address 10.3.1.2 255.255.255.0
!
router bgp 65000
 bgp log-neighbor-changes
 neighbor 10.10.10.10 remote-as 65000
 neighbor 10.10.10.10 update-source Loopback0
 neighbor 192.168.2.2 remote-as 200
 !
 address-family vpnv4
  neighbor 10.10.10.10 activate
  neighbor 10.10.10.10 send-community extended
 exit-address-family
 !
 address-family ipv4 vrf VPN1
  no synchronization
  neighbor 192.168.2.2 remote-as 200
  neighbor 192.168.2.2 activate
  neighbor 192.168.2.2 send-community extended
 exit-address-family
!
ip route 10.10.10.10 255.255.255.255 Tunnel0
ip route 10.1.1.0 255.255.255.0 10.3.1.1