
CONTRAIL ENTERPRISE MULTICLOUD REFERENCE ARCHITECTURE

Reference Architecture

©2019, Juniper Networks, Inc.


TABLE OF CONTENTS

Contrail Enterprise Multicloud Overview

Contrail Command and Controller

Data Center Fabric Management: Green and Brownfield Underlays and the MP-IBGP Overlay

Contrail Networking and the Data Plane: VXLAN Tunnels and vRouters

Contrail Networking: Data Center Interconnect for Multiple Sites and Extension to Public Cloud (Multicloud)

AppFormix for Multicloud Telemetry and Reporting

Conclusion

About Juniper Networks


Contrail Enterprise Multicloud Overview

The Contrail Enterprise Multicloud solution is based on a centralized controller driven by an intent-based UI that manages both the data center fabric (a collection of switches and links that forms the underlay of basic IP connectivity) and the Multiprotocol IBGP (MP-IBGP) overlay used to advertise Ethernet VPN (EVPN) routes that instantiate Virtual Extensible LAN (VXLAN) tunnels. These tunnels connect tenants and extend the data center to public cloud offerings. The solution comprises Contrail Networking with a data center fabric management feature, Contrail Security to secure both local and remote workloads, and AppFormix (Contrail Analytics) for end-to-end network and application-level telemetry.

The single point of control, or “single pane of glass,” aspect of Contrail Enterprise Multicloud is one of its major benefits. These capabilities are not exclusive to the Juniper Contrail Networking™ Controller; Juniper offers similar capabilities in a controllerless solution. The goal of the controller is to simplify and automate configuration tasks that are considered complex and prone to human error and which, in large data centers, occur frequently enough to interfere with the operator’s ability to maintain timely service levels.

The Contrail Networking Controller abstracts composable service building blocks such as “tenants,” “subnets,” “policies,” “gateways,” “service chains,” and others that are translated into underlying network building blocks based on EVPN with VXLAN. The fabric management capabilities provide Day 1 plug-and-play configuration and onboarding of the physical network fabric. Once the fabric is up and running, the controller lets the operator perform Day 2 functions at an intent/service level rather than at a technology level—for example, “create fabric,” “create tenant,” “add subnet to tenant,” “attach endpoint to subnet,” “build and apply policy to tenant,” and so on. These “intent-level” operations are performed via controller UI (or intent-level APIs) rather than via low-level vendor-specific CLI commands. It’s worth stressing that Contrail Enterprise Multicloud is not an element management system (EMS) for Juniper devices; it is a fabric and network virtualization overlay (NVO) manager.
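To make the contrast with device-level CLI concrete, the short sketch below shows what an intent-level operation such as “add subnet to tenant” might look like when driven through a REST call. The endpoint path, port, payload fields, and token handling are hypothetical placeholders for illustration only, not Contrail Command’s actual API schema.

```python
# Illustrative sketch only: the endpoint path and payload fields below are
# hypothetical stand-ins for an intent-level REST API, not Contrail's real schema.
import requests

API = "https://contrail-command.example.net:9091"   # hypothetical Contrail Command address
TOKEN = "replace-with-auth-token"                    # auth token obtained out of band

def add_subnet_to_tenant(tenant, subnet_cidr, vni):
    """Express the intent ('add subnet to tenant') as one API call rather than
    per-device CLI changes; the controller translates it into EVPN/VXLAN config."""
    payload = {
        "tenant": tenant,        # e.g. "green"
        "subnet": subnet_cidr,   # e.g. "192.168.10.0/24"
        "vni": vni,              # VXLAN network identifier for this virtual network
    }
    resp = requests.post(
        f"{API}/intent/virtual-networks",            # hypothetical path
        json=payload,
        headers={"X-Auth-Token": TOKEN},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(add_subnet_to_tenant("green", "192.168.10.0/24", 5120))
```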

EXECUTIVE SUMMARY

This reference architecture describes Juniper® Contrail® Enterprise Multicloud, a controller-based solution that simplifies and automates data center management tasks and extends connectivity to other data centers or public clouds.

The solution’s UI and intent-based policy model abstracts the complexity of the underlying network infrastructure, essentially providing an “easy button” for operations staff that focuses on the desired connectivity rather than implementation details.

This document includes a functional description of the solution itself, as well as the components that comprise the solution. It is assumed the reader has IP networking knowledge and a basic understanding of data center technologies and network automation.


Figure 1 depicts the various pillars and components of the Contrail Enterprise Multicloud reference architecture, which will be the focus of the rest of this document.

Figure 1: The Contrail Multicloud reference architecture (the figure shows an operator using Contrail Command as a single interface for a multicloud architecture spanning Kubernetes, OpenShift, OpenStack, and VMware orchestration, vRouters with security, AppFormix, and public cloud VPCs such as AWS VPC-1 and GCP VPC-2; the workflow checklist reads: build fabric, provide hybrid connectivity, build PODs, apply network/security policies, and monitor/troubleshoot, corresponding to the numbered areas 1 through 5)

The numbered areas in Figure 1 call out key aspects of the Contrail Enterprise Multicloud architecture. These are:

1. The Contrail Networking Controller, command user interface, and integration with orchestration systems

2. Contrail Networking: Data center fabric management (underlay and overlay)

3. Contrail Networking: VXLAN data plane, vRouters, and application-/tag-based security

4. Contrail Networking: Data Center Interconnect (DCI) for multiple sites and extension to public cloud (multicloud)

5. Juniper AppFormix® for multicloud telemetry and reporting

When combined, these pillars form a solution for managing the local data center and for interconnecting it to remote locations or public clouds, all under the control of a single UI. Figure 1 shows the workflow associated with a Day 1, or “greenfield,” data center and how the UI simplifies data center management. The process starts at the upper left, with the user logging in to the Contrail Command server, and proceeds through several high-level workflows, some of which are wizard-based with automation that simplifies and streamlines the work. Everything you need to build a data center from the ground up (Day 1) as well as manage its day-to-day operations (Day 2) is included in the Contrail Enterprise Multicloud solution.

The first step is to build the fabric underlay and overlay (terms which describe the physical vs. logical topologies detailed later), a process that includes defining the roles of fabric devices: for instance, a leaf device with bridging vs. a spine device with centralized routing.

Next, compute resources (virtual machines and bare-metal servers) are defined, along with security policies, to ensure that only the desired connectivity is permitted. Various sub-screens are presented for each of these steps, offering pull-down menus and fields that allow you to customize your installation. With cloud connectivity and security established, the final screen denotes the end-to-end telemetry and analytics that allow you to visualize the performance of your fabric and cloud-based applications whether they are running locally, in a remote fabric, or in a public cloud service.


The following sections explore each of the numbered areas from Figure 1 and relate them back to the overall Contrail Enterprise Multicloud platform.

Contrail Command and Controller

The Contrail Enterprise Multicloud platform is based on a centralized SDN controller that provides a single touchpoint for managing both the local data center and multicloud connectivity. The controller’s architecture supports clustering for horizontal scale buildout. Contrail Command provides a single UI into the controller for fabric management, VLAN/tenant configuration, policy-based networking/security, and operational telemetry for networking as well as the application and compute components.

As noted, for scalability and high availability (HA), the controller can consist of multiple compute resources (physical or VMs); the solution is designed to scale horizontally in this regard. In contrast, the Contrail Command server is currently implemented on a single machine (physical or VM), since it does not impose a high processing load (see Figure 2).

Figure 2: Contrail Networking Controller scaling and high availability (clustering); the figure contrasts a low-scale/POC deployment, with Contrail Command alongside an “all-in-one” (AIO) Contrail cluster, against a high-scale/HA deployment with separate Contrail Command, controller, compute, and services nodes

The left side of Figure 2 shows a two-server (physical or VM) control cluster consisting of a Contrail Command server and an all-in-one (AIO) server, so named because it houses all remaining cluster components (implemented as containerized microservices) on a single, high-performance machine. The dual-server AIO model is suitable for small to medium-sized fabrics (roughly 32 fabric nodes). Though not shown, a single-server model that houses both the Contrail Command and AIO cluster is supported for lab-based Proof of Concept (POC) validation, demonstrations, and training.

The right side of Figure 2 shows the progression to a distributed cluster suitable for use in a large-scale deployment (up to 256 fabric devices) given the correspondingly higher performance demands. To provide HA, three AIO servers or conventional clusters are normally deployed to ensure that a quorum can still be reached despite the loss of any one server or cluster.

Table 1 lists the current server requirements for the command, AIO, and traditional cluster components:

Table 1: Hardware Requirements for Contrail Enterprise Multicloud

Contrail Command: a VM or physical server with:
• 4 vCPUs
• 8 GB RAM
• 300 GB disk, of which 256 GB is allocated to the /root directory

Contrail All-in-One: a VM or physical server with:
• 16 vCPUs
• 64 GB RAM
• 300 GB disk, of which 256 GB is allocated to the /root directory

Contrail, AppFormix: HA deployment:
• Contrail Networking Controller (3 nodes): 8 vCPUs, 64 GB memory, 300 GB storage
• OpenStack Controller (3 nodes): 4 vCPUs, 32 GB memory, 100 GB storage
• Contrail Services Node (CSN): 4 vCPUs, 16 GB memory, 500 GB storage
• Compute nodes: dependent on the workloads


Figure 3 provides a functional view of the Contrail Networking Controller and its relationship to the Contrail Command UI and networking components.

Figure 3: Contrail Networking Controller functional view (the figure shows the Contrail Command GUI and the orchestration systems reaching the Contrail system’s configuration, analytics, and control functions through REST APIs; an east-west BGP peering interface between control nodes; BGP plus NETCONF toward the gateway routers and the IP fabric underlay; XMPP toward the vRouters on virtualized servers hosting VMs; and overlay tunnels using MPLS over GRE, MPLS over UDP, or VXLAN)

The Contrail Networking Controller appears in the middle of Figure 3 with its northbound, east/west, and southbound interfaces, along with its functional relationships to the data center orchestration systems, the Contrail Command UI, and the data center fabric elements.

On the northbound interface, a REST API is used to support the Contrail Command user interface. Significantly, the figure also shows integration with popular orchestration systems such as Red Hat OpenShift, Kubernetes, and VMware vCenter. In the multicloud solution, the OpenStack components are used by the controller to manage virtual routers (vRouters), which are installed on a workload/compute resource and handle traffic forwarding and security under the direction of the controller.

The east/west interface is used for BGP peering with additional control nodes in a scaled-out cluster.

The southbound side of the controller is used to configure and interact with the data center fabric using BGP and Network Configuration Protocol (NETCONF). In contrast, the Extensible Messaging and Presence Protocol (XMPP) is used to interact with vRouters installed on compute nodes. The fabric underlay (often EBGP, though any routing protocol that provides loopback reachability can be used) provides the basic IP connectivity needed to support the establishment of an MP-IBGP overlay, which in turn carries the EVPN route exchanges needed to establish VXLAN data plane tunnels. It’s worth noting that the controller participates in the IBGP overlay but not in the fabric underlay, since it uses a default route for underlay connectivity.

While VXLAN tunnels are detailed in a later section, for now suffice it to say that they span virtual tunnel endpoints (VTEPs) which can be housed in top-of-rack switches or in VMs, where vRouters provide the needed VTEP functionality.


An important architectural point is that IBGP route reflection is used to scale the EVPN control plane (much as with service provider L3VPN offerings). Starting with Contrail Enterprise Multicloud release 5.1, IBGP route reflection is supported on the fabric spine, as detailed in a later section.

The NETCONF protocol is used to push configuration changes into the data center fabric elements and data center gateway to effect the connectivity requirements expressed by the user through the Contrail Command UI. The ability to translate user intent into one or more configuration changes, which are then pushed into the fabric, is a key aspect of the Contrail Enterprise Multicloud platform’s simplicity. It abstracts the details of the fabric elements, shielding users from having to interact with vendor-specific configuration syntax and device operation.
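As an illustration of this push model, the sketch below uses the open-source ncclient library to load a Junos-style VLAN/VNI snippet into a fabric device over NETCONF and commit it. It is a minimal, generic example of NETCONF-driven configuration under assumed values; the device address and credentials are placeholders, and the XML shown is not the payload that Contrail itself generates.

```python
# A minimal, generic NETCONF push using ncclient; the device address, credentials,
# and the Junos-style VLAN/VNI snippet are illustrative placeholders, not the
# actual configuration that Contrail generates.
from ncclient import manager

VLAN_VNI_CONFIG = """
<config>
  <configuration>
    <vlans>
      <vlan>
        <name>green</name>
        <vlan-id>120</vlan-id>
        <vxlan>
          <vni>5120</vni>
        </vxlan>
      </vlan>
    </vlans>
  </configuration>
</config>
"""

def push_vlan_to_leaf(host):
    """Open a NETCONF session to a fabric device, load the change into the
    candidate datastore, and commit it."""
    with manager.connect(
        host=host,
        port=830,
        username="contrail-svc",       # placeholder service account
        password="********",
        hostkey_verify=False,
        device_params={"name": "junos"},
    ) as m:
        m.edit_config(target="candidate", config=VLAN_VNI_CONFIG)
        m.commit()

push_vlan_to_leaf("192.168.100.16")    # e.g., the leaf qfx5100-7 from Figure 6
```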

XMPP is used to install data plane forwarding state (VXLAN tunnels) in vRouters, providing compute nodes (typically VMs in a hypervisor host) with networking and security support. While Figure 3 focuses on vRouters acting as the VTEP, in many cases the VTEP function is placed on the top-of-rack switch when vRouter functionality is not desired in the attached compute resource, as is typical for a bare-metal server (BMS). The VXLAN tunnels not only form the fabric overlay and provide L2 connectivity for intra-tenant (same VLAN) flows; they also isolate inter-tenant flows (different VLANs) and enforce application-level security policies (based on labels/tags) for intra-tenant flows, which are key aspects of Juniper Contrail Platform security.

Figure 4 expands on the integration between the controller and the orchestration system, given that this is a critical aspect of Contrail Enterprise Multicloud. The orchestration system manages compute resources and their related workloads, while the controller provides the desired connectivity (and security) between those workloads by pushing out VTEP functionality in the compute-housed vRouters or top-of-rack switches.

Figure 4: Integration of orchestration and VXLAN connectivity (the figure shows the orchestrator’s networking service and compute agent, the controller’s fabric manager, and a hypervisor whose vRouter agent maintains per-VM VRFs behind a fabric VRF, all connected to a leaf/spine DC fabric with a VTEP serving a BMS; the numbered steps are: 1. create networks and policies; 2. launch VM; 3. VM creation request; 4. new VM notification; 5. install vRouter and exchange MAC/IP routes via XMPP; 6. MP-IBGP EVPN route updates to the overlay; 7. encapsulation tunnels such as VXLAN)

Orchestration systems typically fall into two categories: those that manage VM life cycles, and those that manage containerized applications running on those VMs. Examples of the former include VMware’s vCenter and OpenStack, while the latter includes popular solutions such as OpenShift, Nutanix, and Kubernetes. In many cases, a data center will have multiple (nested) orchestrators; one handles the VMs, while the others are reserved for container/pod management on those VMs.


Figure 4 begins at step 1, where a new VM has its parameters defined on the orchestrator while the desired VM connectivity (VNI assignment) and any security policies are defined on the controller. Again, in the Contrail Enterprise Multicloud model, the orchestrator controls the specifics of the machine (CPU, RAM, OS, and so on) while the controller handles the desired connectivity for that machine. Although not shown, things are similar for containerized workloads such as a Kubernetes cluster; the Kubernetes master handles placement of the workload while the controller handles the networking.

At steps 2 and 3, the orchestration system signals the instantiation of the new VM, which is created in the hypervisor host. At step 4, the orchestrator uses a fabric manager plugin to notify the controller that the VM has been created. Now that it knows the specifics of the VM, the controller installs a vRouter and, at step 5, uses XMPP to program its L2 and L3 virtual routing and forwarding (VRF) tables based on the results of security policy applied to the routes received from the VXLAN overlay. The next hops for these routes point to an overlay tunnel and a fabric next hop to reach the associated VTEP, thus forming the tunnel overlay.

Step 6 shows the controller advertising the related EVPN routes into the overlay to facilitate remote connectivity to the new vRouter and its associated VMs/containers. Step 7 ends the process with the data plane tunnels established between VTEPs based on the associated security policy. This example focuses on vRouter usage in a VM to provide VTEP functionality, but it also shows that, when desired, the controller can use NETCONF to configure top-of-rack switches to provide the VTEP function. Such changes may also be pushed out to spine switches, for example to adjust routing when a centrally routed service is also applied.

Contrail Enterprise Multicloud lets you choose best-of-breed orchestration for your compute infrastructure while using industry-leading networking and security between those resources. Integration with the controller means that connectivity and security automatically track changes as these compute resources come, go, or are shifted between physical locations under the control of the orchestrator.

Data Center Fabric Management: Green and Brownfield Underlays and the MP-IBGP Overlay

Modern data centers are designed around a nonblocking Clos architecture (a network design from the 1950s for telephony switches, named after its creator, Dr. Charles Clos) that addresses the dual needs of high bandwidth between compute nodes and the ability to grow the number of compute nodes as needed. These modern data center fabrics are referred to as “Layer 3 fabrics” because they are based on IP packet switching. The use of an IP-only fabric means there is no need to rely on vendor-specific implementations to provide high availability, scale, and redundancy.

For very large data centers, this fabric can include thousands of switch ports spread over hundreds of switches. At these numbers, manually building out an IP fabric to support a VXLAN overlay can be a daunting task. Fortunately, simplifying data center fabric management is a critical part of the Contrail Enterprise Multicloud platform.

The same Command Server UI used to provision connectivity in the overlay also supports tools that automate and manage the provisioning of a new fabric, known as a “greenfield underlay” (GrU), and the “onboarding” of a pre-existing fabric, known as a “brownfield underlay” (BrU). Once the fabric underlay is configured/onboarded, the controller is free to establish the fabric overlay, which consists of IBGP peering for EVPN route exchange to support the VXLAN tunnels used in the data plane.

Figure 5 depicts a small data center fabric that is not yet configured with IP parameters, making it a GrU. In this simplified example, a single spine device is shown for clarity; Clos-based fabrics typically have more than one spine device for reliability.


Figure 5: A greenfield underlay (the figure shows a QFX10002 spine connected to two QFX5110 leaves whose loopback and fabric interface addresses are not yet assigned and whose ports are still in default Ethernet switching mode; Contrail Command and the AIO cluster node share the 172.25.120.0/24 out-of-band management network with the fabric, which also has a connection to the Internet)

In Figure 5, a Juniper Networks QFX10002 Ethernet Switch is deployed as a spine, connected to two Juniper Networks QFX5110 Ethernet Switches functioning as leaf devices. Note that none of the switches have loopback or interface addresses configured, nor is there any routing protocol in operation. The diagram also details an out-of-band management network that is shared between the fabric devices and the Contrail Enterprise Multicloud cluster, shown here as an AIO deployment. The fabric lacks the IP-based infrastructure (IP addressing and a routing protocol) needed to support the logical MP-IBGP overlay.

In the GrU case, Contrail Command uses a library of automation scripts to add the new spine and leaf devices to a fabric, discover the topology, assign device roles, create device-specific configurations, and push the configurations to each device in order to form the GrU.

For users with an existing IP-based data center fabric, Contrail Enterprise Multicloud can learn the existing topology, assign the device roles (leaf/spine), and then push the required configuration changes to complete the fabric overlay in a process known as “fabric onboarding,” discussed in detail in the next section. Figure 6 illustrates a more realistic fabric (multiple spine switches) in the brownfield state.


Figure 6: A brownfield underlay (the figure shows two QFX10002 spines and four leaf devices, QFX5100, QFX10002, and QFX5110 switches, with loopback addresses in 192.168.255.0/24, /31 point-to-point fabric addressing, and EBGP peering on all links; the Contrail Command server, controller, compute node, service node, BMSs, and an ESXi VM attach to the 192.168.100.0/24 management network, and the management connection for the rest of the fabric is not shown)

The sample BrU is based on two QFX10002 switches functioning in the spine role with four leaf devices composed of QFX5100 and QFX10000 switches. Note the presence of loopback and interface addressing on all links, along with an underlay routing protocol, which in this example is EBGP peering over the fabric interfaces (not loopbacks); the presence of a working IP fabric is what makes this a BrU.

Recall that the role of the physical underlay network is to provide an IP fabric—that is to say, it must provide unicast IP connectivity from any physical device (server, storage device, router, or switch) to any other physical device. An ideal underlay network provides uniform low-latency, nonblocking, high-bandwidth connectivity from any point to any other point in the network.

It’s worth mentioning that Figure 6 shows a more traditional controller cluster as opposed to the AIO approach shown in Figure 5. Functionally there is no difference; spreading the controller over multiple machines simply improves performance and scaling. Note that for clarity, only one fabric element is shown attached to the management network, and that some of the cluster nodes also have fabric-facing (data) interfaces based on their role—for example, when providing vRouter/gateway services (compute nodes) or to provide network services like Dynamic Host Configuration Protocol (DHCP) (the services node).

To onboard a BrU, the user simply employs the Contrail Command UI to define the existing device roles so that Contrail Enterprise Multicloud can onboard the existing fabric. The controller then pushes out the configuration changes to complete the EVPN-VXLAN overlay. Currently, Contrail Enterprise Multicloud does not support onboarding a brownfield (pre-existing) overlay (BrO), as the overlay must be configured by the controller.


Figure 7: IBGP route reflection in the overlay (the same brownfield fabric as Figure 6, with both spines now acting as MP-IBGP route reflectors and each leaf maintaining MP-IBGP sessions to the two route reflectors)

Figure 7 shows the resulting IBGP overlay.

Whether you start with a greenfield or brownfield underlay, once the underlay fabric is onboarded, Contrail Enterprise Multicloud establishes VXLAN tunnels for the data plane; VXLAN is the de facto open standard for both IP and Ethernet multitenant networking in the cloud. A key part of the overlay is the use of MP-IBGP peering between the fabric elements.

BGP route reflection is a well understood technology used for years in service and cloud provider networks to dramatically improve BGP scaling by eliminating the need for a full mesh. Route reflection is fully supported by Contrail Enterprise Multicloud beginning with Release 5.1.

As shown in Figure 7, the spine devices function as reflectors such that each leaf requires only two IBGP sessions, one to each reflector. The result is a dual hub-and-spoke topology that reduces BGP state geometrically when compared to the requirements of full-mesh peering. The total number of BGP sessions required in a full mesh is determined using the formula N(N-1)/2, with each node in the mesh having to support N-1 sessions. Using this formula, a full mesh between the seven devices shown requires 7 x 6/2, or 21 BGP sessions, with each device in the mesh terminating six sessions.

In contrast, a topology with route reflection scales in a linear manner using the formula N-1, such that only six sessions are needed among the same set of nodes. The reflector must support N-1 sessions, a load equal to that carried by each member in the full-mesh case, but its clients require only one session each. This dramatically reduces processing load on a network-wide basis and significantly reduces the number of sessions needed on leaf devices. The use of dual reflectors is common for redundancy; this roughly doubles the number of sessions required, with each leaf now supporting two connections, one to each reflector.
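The arithmetic is easy to check. The short sketch below reproduces the full-mesh and single-reflector session counts used in this example; the dual-reflector case is left as a comment, since the exact total depends on which reflector-to-reflector and controller sessions are counted.

```python
def full_mesh_sessions(n):
    """IBGP full mesh: every pair of the n speakers peers directly -> N(N-1)/2."""
    return n * (n - 1) // 2

def single_reflector_sessions(n):
    """One route reflector: each of the other N-1 speakers needs one session to it."""
    return n - 1

n = 7  # the seven devices in the example above
print(full_mesh_sessions(n))         # 21 sessions, with each device terminating 6
print(single_reflector_sessions(n))  # 6 sessions, one per client
# With dual reflectors (as in Figure 7), each client holds one session to each
# reflector, roughly doubling the single-reflector count while still scaling linearly.
```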


Support for route reflection ensures that the fabric overlay can scale horizontally, even among a large set of nodes. Regardless of total fabric size, normally only two route reflectors are defined. Spine switches that are not assigned the route reflector role function as route reflector clients, just like leaf devices.

With IBGP sessions established to the route reflectors, the overlay control plane is ready for EVPN route exchange. Before this can happen, local VLAN membership must be defined in Contrail Command. This requires using the Contrail Command UI to define the various servers (VMs and BMS) attached to the fabric, along with their VLANs and virtual network membership. In Contrail Enterprise Multicloud Release 5.1, virtual port groups (VPGs) are supported to simplify the process of assigning top-of-rack ports to a virtual network by eliminating the need to specify the attached devices’ media access control (MAC) addresses and port identifiers.

Figure 8 shows the DC fabric, now with a working IBGP overlay (solid green lines) and VNI membership defined.

Figure 8: Fabric overlay IBGP peering (the figure shows the fabric with the IBGP overlay established and a green virtual network, 192.168.10.0/24, with members behind Leaf 2 and Leaf 4 joined by a green VXLAN tunnel; the 192.168.100.0/24 management network is also shown)

Note that in this case, there is a shared virtual network, i.e., a common VNI for the green virtual network at Leaf 2 and Leaf 4. It’s worth noting that in BGP EVPN, VLAN tags are locally significant and are stripped at the VTEP; VXLAN uses the VNI alone to denote virtual network membership. Still, it is best practice to match VLAN IDs at both ends where possible.


With local VNI state known, the VTEPs use the IBGP overlay to exchange EVPN route information. The received EVPN routes are used to automatically establish VXLAN tunnels to remote VTEPs with shared VNI state. The resulting VXLAN tunnel for the green VLAN is shown as a dashed line between Leaf 2 and Leaf 4 in Figure 8. Once the VXLAN tunnels are established, communication within the green VLAN is possible. Note that an explicit security policy is not needed to permit communications when ports share a VNI, as with traditional bridging and VLANs.

Routing Between VLANs: Central or Edge Routing and Bridging

Figure 8 showed bridging between devices on the same VLAN/VNI. As is always the case, routing is needed to facilitate communications between VLANs/VNIs. Contrail Enterprise Multicloud supports inter-VLAN routing through the construct of an integrated routing and bridging (IRB) interface maintained by fabric devices assigned a VXLAN routing role. The question becomes, where do you place the IRB/VXLAN routing function: in the spine or at the leaf devices?

Figure 9 details the options for inter-VLAN routing in the Contrail Enterprise Multicloud reference architecture.

Figure 9: Central vs. edge routing (the left side shows a centrally routed design with the L3 VXLAN gateway in the spine and bridging-only leaves; the right side shows an edge routed design with a lean spine and VXLAN routing at the leaf; in both cases a hypervisor host with a Contrail vRouter carries the blue and green VLANs for VM1 and VM2)

In Figure 9, the reference architecture (centrally routed with edge bridging) is shown on the left, with the edge routed and bridged approach shown on the right. The choice of edge or spine routing is made during the provisioning of the logical overlay when device roles are assigned, such as “ERB-UCAST-Gateway” vs. “CRB Gateway.” Here, the former term denotes a leaf routing function while the latter indicates bridging only at the leaf.

Both edge and central routing architectures have pros and cons; it’s up to users to decide which best addresses their specific needs and concerns. For example, edge routing allows for a “lean spine,” which tends to have a lower “blast radius” in the event of a failure, since routing state is distributed over many leaf devices rather than being concentrated in a few spine devices. The downside to edge routing is that it may require a more sophisticated switch at the leaf layer; not all leaf switches can support the VXLAN routing function. A list of fabric devices that support edge routing can be found at www.juniper.net/documentation/en_US/contrail5.1/topics/concept/erb-for-qfx-switches.html. Some users prefer to put a few higher-end switches in the spine and let them handle the routing so as to reduce overall edge state and complexity. While central routing can save on leaf switch costs by limiting edge devices to bridging only, it comes at the cost of an extra hop through the spine when routing between VLANs on the same leaf (as shown on the left side of Figure 9).

Note that when a vRouter is present in a host, routing between internal VMs is performed locally—that is, within the vRouter, whether central or edge-based routing is deployed.


Contrail Networking and the Data Plane: VXLAN Tunnels and vRouters

A key component of Contrail Enterprise Multicloud is the use of VXLAN tunnels in the data plane to provide Layer 2 stretch (logical L2 networks) over an IP fabric. These tunnels form the basis of user isolation, as each virtual network is associated with a unique VNI that limits connectivity to members of the same virtual network, much as an MPLS label does in the case of L3VPNs. The only way to leave one virtual network and enter another is through VXLAN routing; this is governed by user policy that can restrict all or certain traffic types between virtual networks.

VXLAN has a 24-bit virtual network ID (VNID) space, which allows for 16 million logical networks. Implemented in hardware, VXLAN supports transport of native Ethernet packets inside an encapsulated tunnel. VXLAN has become the de facto standard for overlays terminated on physical switches and is supported in Juniper Networks QFX5100 and QFX10000 switches, EX9200 Ethernet Switches, and MX Series 5G Universal Routing Platforms.

A VXLAN tunnel encapsulates the original Ethernet frame into an outer UDP and IP packet to accommodate routing over the IP fabric.

Figure 10: VXLAN encapsulation (the frame format comprises an outer MAC header of 14 bytes plus an optional 4-byte VLAN tag, an outer IP header carrying protocol 0x11 for UDP, an 8-byte UDP header with the VXLAN destination port, an 8-byte VXLAN header with flag and reserved fields and the 24-bit VNID, the original L2 frame, and the FCS)

VXLAN encapsulation, detailed in Figure 10, is well documented in other sources. For our purposes, the use of UDP provides good entropy for load balancing over equal-cost multipath (ECMP). Additionally, the 24-bit VNID field eliminates concerns about running out of VLAN IDs, which can happen in a large data center when relying on the standard 12-bit VLAN ID field.
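For readers who want to see the framing rather than just the diagram, the following sketch packs the 8-byte VXLAN header defined in RFC 7348 and prepends it to a dummy inner Ethernet frame. The standard UDP port 4789 and the example VNI are assumptions for illustration, and the outer MAC/IP/UDP headers are left to the sending VTEP.

```python
# A minimal sketch of VXLAN framing (RFC 7348), shown only to make the header
# layout from Figure 10 concrete; not how any particular VTEP implements it.
import struct

VXLAN_PORT = 4789            # IANA-assigned VXLAN UDP destination port
VXLAN_FLAG_VNI_VALID = 0x08

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def vxlan_encapsulate(vni: int, original_l2_frame: bytes) -> bytes:
    """UDP payload of a VXLAN packet: VXLAN header + the original Ethernet frame.
    The outer MAC/IP/UDP headers (with a per-flow source port for ECMP entropy)
    would be added by the sending VTEP."""
    return vxlan_header(vni) + original_l2_frame

payload = vxlan_encapsulate(5120, b"\x00" * 64)  # dummy 64-byte inner frame
print(len(payload), payload[:8].hex())           # 72 bytes; header carries VNI 5120
```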

The use of vRouters is another critical aspect of the Contrail Enterprise Multicloud platform. Figure 11 provides a high-level view of the vRouter concept.


Figure 11: The vRouter (the figure shows the vRouter agent and the vRouter forwarding plane on a compute node, with a per-tenant routing instance, FIB, and flow table for tenants A, B, and C serving the tenants’ virtual machines, and overlay tunnels using MPLS over GRE, MPLS over UDP, or VXLAN)

vRouters are critical to security, as they are the point of policy enforcement for compute resources. Unmanaged applications or devices, such as a BMS, have their policies enforced by the access switch under the control of a policy pushed out by the controller. vRouters are installed on compute resources (VMs, containers, and life-cycle managed BMS), typically as a kernel module, although other options exist that offer higher performance (SR-IOV or smart-NIC based).

In many ways, vRouters function as remote line cards in a traditional router such as the Juniper Networks MX2000 line of 5G Universal Routing Platforms. You can think of the controller as the Routing Engine (RE), which hosts dynamic routing protocols to maintain the Routing Information Base (RIB), also known as the routing table. The vRouters function as the RE’s distributed line cards (MPCs), where the forwarding information base (FIB), also known as the forwarding table, and the next-hop rewrite actions reside.
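The following toy model, which is not Contrail code, captures that division of labor under the stated analogy: a “controller” object holds tenant routes and programs per-tenant forwarding tables on registered vRouters, much as an RE programs FIBs on its line cards.

```python
# Toy model of the RE/line-card split described above; names and next hops are
# invented for illustration, and XMPP is replaced by a direct method call.
from collections import defaultdict

class VRouter:
    def __init__(self, name):
        self.name = name
        # tenant -> {prefix: next_hop}; one forwarding table (FIB) per tenant VRF
        self.vrfs = defaultdict(dict)

    def program_route(self, tenant, prefix, next_hop):
        """Called by the controller (over XMPP in the real system)."""
        self.vrfs[tenant][prefix] = next_hop

    def forward(self, tenant, dest_prefix):
        # Tenant isolation: a lookup only ever consults that tenant's own VRF.
        return self.vrfs[tenant].get(dest_prefix, "drop (no route in tenant VRF)")

class Controller:
    def __init__(self):
        self.vrouters = []

    def register(self, vrouter):
        self.vrouters.append(vrouter)

    def advertise(self, tenant, prefix, next_hop):
        """Push the same tenant route to every managed vRouter."""
        for vr in self.vrouters:
            vr.program_route(tenant, prefix, next_hop)

ctrl, vr1 = Controller(), VRouter("compute-1")
ctrl.register(vr1)
ctrl.advertise("tenant-A", "192.168.10.0/24", "vxlan tunnel to VTEP 192.168.255.4")
print(vr1.forward("tenant-A", "192.168.10.0/24"))  # resolves to the tunnel next hop
print(vr1.forward("tenant-B", "192.168.10.0/24"))  # isolated: no route in tenant-B VRF
```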

The vRouter handles data plane encapsulation and decapsulation, and it maintains separate routing/bridging/forwarding tables (VRFs) for each tenant, as shown in Figure 11. The resulting isolation between tenants is a key aspect of the reference architecture’s inherent security. All this talk of controllers, tunnels, and vRouters can be a bit hard to follow. Figure 12 provides a big-picture view of the controller and its interactions with the fabric and VMs, and it shows how VXLAN tunnels, vRouters, and the EVPN overlay work together in the context of a small fabric supporting centralized routing and edge bridging (in other words, spine routed).

Be sure to take the time to understand this figure. It captures many key aspects of the Contrail Enterprise Multicloud platform, and understanding how the physical network relates to its logical equivalent is strong evidence that you understand how Contrail Enterprise Multicloud works.


Figure 12: Putting it together, physical and logical views (in the physical view, the controller uses NETCONF toward the leaf and spine and XMPP toward an ESXi hypervisor host whose vRouter it installed; the spine acts as MP-IBGP route reflector and L3 VXLAN gateway, the leaf provides VXLAN bridging for BMS1 and BMS2, VXLAN tunnels run VTEP to VTEP, and EVPN Type 2 (MAC) and Type 5 (IP prefix) routes are exchanged via the route reflector; in the logical view, red and green VLAN/VNI bridge domains connect BMS1, VM1, and BMS2, with inter-VLAN traffic passing through a VRF)

Figure 12 shows a simplified fabric with a single leaf and spine; the leaf has two BMS attached along with a hypervisor host and its VM. The host has a vRouter, installed as a kernel module in the hypervisor by the controller (though in the case of an ESXi hypervisor [vSphere], the vRouter is installed as a VM itself). There are two virtual networks/VNIs: red and green. The ESXi host houses a local VM with membership in the red VLAN; the same VLAN/VNI is shared by BMS1, while, in contrast, BMS2 is assigned to the green VLAN.

The fabric underlay and VXLAN overlay are in place. In the case of the latter, note that the spine switch is acting as a route reflector and has exchanged EVPN routes of all types (Type 2 and Type 5, for example) with both of its clients (i.e., the leaf and the controller). The controller is shown in the upper left, with dotted lines showing how it interacts with the fabric using NETCONF to push configuration changes as well as making changes to the vRouter’s forwarding state using XMPP. This example demonstrates the centrally routed/edge bridged reference model, given that the VXLAN routing function is shown in the spine.

Because of the red/green VLAN/bridge domain definitions (via the Contrail Command UI) and the resulting EVPN route exchanges in the overlay, VXLAN tunnels are established between the various VTEPs. Note that the vRouter in the ESXi host functions as a VTEP, as do both the leaf and spine nodes. The result is a green and red VXLAN tunnel between leaf and spine, and a red VXLAN tunnel between the leaf and ESXi host.


This design leverages several different EVPN route types that serve different functions—for example, unicast vs. multicast reachability or bridging vs. routing. Type 2 EVPN routes advertise MAC reachability and are used for bridging (L2). Bridging occurs within a VLAN and at the edge/leaf (edge bridging); as such, the traffic between BMS1 and VM1 does not flow through the spine. The vRouter in the ESXi host maintains separate tables for each VLAN/VNI configured. Here, only a single VN instance is configured, providing isolation between the red and green tenants, should a green VM be added later. At the spine, which is enabled for VXLAN routing, the IRB direct routes, along with any received Type 5 (IP prefix) routes, are used to populate a Layer 3 VRF that accommodates routing (L3) between VLANs when enabled by group policy.
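The split between the two route types can be summarized in a few lines of illustrative Python. The structures below are simplified stand-ins rather than the actual EVPN NLRI encoding, with example VNIs and addresses borrowed loosely from the figures.

```python
# Simplified illustration of how the two route types are consumed: Type 2 (MAC)
# routes feed the L2 bridging tables at the leaves, while Type 5 (IP prefix)
# routes feed the L3 VRF at the VXLAN-routing spine. Not real EVPN encoding.
from dataclasses import dataclass

@dataclass
class EvpnType2:          # MAC advertisement route
    vni: int
    mac: str
    vtep: str             # remote VTEP that owns this MAC

@dataclass
class EvpnType5:          # IP prefix route
    vni: int
    prefix: str
    vtep: str

def build_tables(routes):
    mac_table, l3_vrf = {}, {}
    for r in routes:
        if isinstance(r, EvpnType2):
            mac_table[(r.vni, r.mac)] = r.vtep   # bridging: MAC -> tunnel endpoint
        elif isinstance(r, EvpnType5):
            l3_vrf[r.prefix] = r.vtep            # routing: prefix -> tunnel endpoint
    return mac_table, l3_vrf

routes = [
    EvpnType2(vni=5110, mac="00:11:22:33:44:55", vtep="192.168.255.2"),    # example red MAC
    EvpnType5(vni=5120, prefix="192.168.20.0/24", vtep="192.168.255.101"), # example green subnet
]
print(build_tables(routes))
```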

The spine switch instantiates an IRB interface for each VLAN; this interface serves as the default gateway for nodes in the respective VLANs. When BMS1 wants to talk to BMS2, the inter-VLAN traffic must be routed via the IRBs housed in the spine.

The final point to emphasize in the figure is the physical vs. logical views, which at first glance appear to have no relationship to one another. The key here is that the physical network (the underlay) enables the VXLAN overlay, and it’s the overlay that provides the logical (or functional) view of the resulting connectivity. This example is pretty basic; there is a red VN with L2 connectivity (bridging) among the BMS1 and VM1 members, and a green VN consisting of a single node isolated from the red VN. As expected, a router/IRB function is needed to interconnect members of different VLANs (VNIs); here, this function is performed in the spine switch, which again is an example of central routing with edge bridging.

While the physical network can seem a bit daunting, it must be stressed that this is where Contrail Command comes in. Its intuitive UI makes it easy to define the desired connectivity, and then the controller works to “make it so.” As discussed in a later section, the integrated telemetry and reporting of Contrail Enterprise Multicloud allows you to confirm that your intentions, as executed through the controller’s interaction with the fabric overlay and vRouters, are meeting your expectations.

Application-Based Security

Traditional security mechanisms such as access control lists (ACLs) or firewall filters quickly become hard to manage in a modern data center. This is in part because they are often tied to IP addresses or data center perimeters that are often geography-based. The problem is that the applications themselves are dynamic, coming, going, and roaming as they wish under control of the orchestration system. Most modern applications follow a development-staging-production life cycle. Where the application runs often changes as the application moves through its life cycle. Contrail Enterprise Multicloud solves this dilemma with security tags, sometimes called “labels,” which can be applied to applications, projects, networks, vRouters, VMs, and interfaces, either individually or in combination, to meet the desired security policy and service chaining requirements.


Figure 13 shows how tags can be used to secure an application tier regardless of where it runs or its development stage.

Figure 13: Application security through tags (the figure shows the same three-tier Web/App/db application, App1, tagged with deployments Dev, Staging, and Prod, and again as Dev-AWS, Dev-K8s, Dev-OpenShift, and Staging-BMS on bare-metal servers; the same policy applies across all of them with no policy rewrite needed)

Figure 13 depicts a typical three-tier Web application along with its life cycle, which includes development, staging, and ultimately production deployment. In this example, tags are applied to the application name, its tier, and its stage of deployment (Dev vs. Staging vs. Prod). The key is that the Web can talk to the app and the app can talk to the database only when their deployment tags match. As a result, the Web tier from the Dev Deployment on the left can only talk to an App tier with a matching Dev Deployment tag. It cannot talk to the App tier in either the Staging or Prod environments.

Policies that reference these tags provide a simple way of ensuring that an application’s components are confined to their respective tiers and areas of deployment; you certainly don’t want your development application interacting with your production environment, and there is no need for a Web front end to interact with the database tier directly. As a result, the same policy can be applied regardless of where the application is staged. For example, no policy modifications are needed if the application’s development stage is shifted into a public cloud, or if it shifts from being hosted in a VM to a container. It’s the same application, with the same tiers, and it still needs to be confined to a specific deployment environment. With application-level security based on tags, it does not matter where or how the application is hosted.
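A minimal sketch of the tag-matching idea follows, using hypothetical tag names. It is not Contrail’s policy engine, but it shows why the same rule set keeps working when the workload moves between environments.

```python
# Toy tag-matching check: a tier may talk to the next tier only when the
# application and deployment tags line up. Tag names are hypothetical.
ALLOWED_TIER_PAIRS = {("web", "app"), ("app", "db")}   # web never reaches db directly

def allowed(src, dst):
    return (
        src["app"] == dst["app"]
        and src["deployment"] == dst["deployment"]      # Dev never talks to Prod
        and (src["tier"], dst["tier"]) in ALLOWED_TIER_PAIRS
    )

web_dev  = {"app": "App1", "tier": "web", "deployment": "Dev"}
app_dev  = {"app": "App1", "tier": "app", "deployment": "Dev"}
app_prod = {"app": "App1", "tier": "app", "deployment": "Prod"}
db_dev   = {"app": "App1", "tier": "db",  "deployment": "Dev"}

print(allowed(web_dev, app_dev))    # True:  same app/deployment, permitted tier pair
print(allowed(web_dev, app_prod))   # False: deployment tags differ
print(allowed(web_dev, db_dev))     # False: web may not reach the db tier directly
```

Because the check depends only on tags, the same policy gives the same answer whether the workload is a VM on premises, a pod in Kubernetes, or an instance in AWS.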

The same tag-based policy construct is used to support service chaining with virtualized or physical network functions (VNF/PNF) such as firewalls, deep packet inspection, or load balancers. Figure 14 shows how service chaining and other security functions can be applied in the Contrail Enterprise Multicloud platform.


Figure 14: End-to-end application-based security (the figure shows Contrail Enterprise Multicloud and Contrail Command managing Web, App, and DB tiers spread across compute nodes, VMs, containers, and a BMS; Layer 4 policies are applied at the points marked 1, a load balancer/firewall service chain is inserted between the App and DB tiers at the point marked 2, and a host-based firewall is applied within the App tier at the point marked 3)

A service chain is formed when a network policy specifies that traffic between two networks must flow through one or more network services. This is shown in the area marked “2” in the figure, where a service chain has been inserted between the App and DB tiers. The controller achieves this by converting user policy into one or more steering rules (next hops) inserted into the overlay to direct traffic to the specified service chain—in this case, consisting of load balancing and firewalling. A Layer 4-based policy (ACL) is applied at the areas marked “1” to ensure that the Web tier can only communicate with the App tier. The area marked “3” shows traffic between the VM and containerized processes in the App tier being subjected to a host-based firewall as an example of a simple VNF service chain within the same tier.
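Conceptually, the controller’s job here is to expand a policy that names a service chain into an ordered set of steering next hops. The sketch below illustrates that expansion with made-up instance names; it is not Contrail’s internal data model.

```python
# Conceptual sketch of service chain insertion: a policy that names services
# becomes an ordered list of steering next hops. Instance names are invented.
def expand_policy(policy):
    """Turn 'src -> dst via [services]' into an ordered list of forwarding hops."""
    hops = list(policy.get("service_chain", []))   # e.g. ["lb-instance-1", "fw-instance-1"]
    hops.append(policy["dst"])                     # final hop: the destination network
    return {"match": (policy["src"], policy["dst"]), "next_hops": hops}

policy = {
    "src": "app-tier-vn",
    "dst": "db-tier-vn",
    "service_chain": ["lb-instance-1", "fw-instance-1"],   # point 2 in Figure 14
}
print(expand_policy(policy))
# {'match': ('app-tier-vn', 'db-tier-vn'),
#  'next_hops': ['lb-instance-1', 'fw-instance-1', 'db-tier-vn']}
```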

With Contrail Enterprise Multicloud, operators can uniformly orchestrate and manage policies across all environments where an application can execute from a single, central location, such that the policy automatically follows the workload wherever it launches. This includes support for workloads running in VMs or containers on premises, in private clouds, or in a public cloud (Amazon Web Services [AWS], Azure, and Google Cloud, among others).

Contrail Networking: Data Center Interconnect for Multiple Sites and Extension to Public Cloud (Multicloud)

Many enterprises deploy data centers (fabrics) at multiple locations (sites). The Contrail Enterprise Multicloud reference architecture supports the extension of the Contrail Networking Controller’s functionality between multiple data centers/fabrics using a construct known as a Data Center Interconnect (DCI) gateway. Figure 15 provides a high-level overview of the DCI method, which currently supports L3 stretch via Type 5 EVPN route exchange between the DCI gateways at each site.


Figure 15: Data Center Interconnect for multisite connectivity (the figure shows Data Center Left, ASN 61000, which hosts the Contrail Controller, and Data Center Right, ASN 62000; in each site, centrally routed spines hold the DCI gateway role with red and green logical routers and their IRBs, EBGP peering runs between the sites (IBGP if they share an ASN), and VXLAN tunnels provide the L3 stretch)

In the DCI method, a single Contrail Controller (and Command UI) is deployed at the primary data center location in Data Center Left using ASN 61000. Once the IP underlay has been extended between the two fabrics (normally via an EBGP session between the route reflectors in each spine), the Command UI is used to define logical systems that provide the desired client connectivity—in this example, the green and red VNs within each fabric. The resulting logical system configuration creates VRFs on the spine devices that are performing central routing, each with its IRB. To provide DCI between the left and right data centers, you assign the role of DCI gateway to at least one spine device in each fabric, then create a DCI object that binds the logical system and DCI gateway functions in both fabrics.

Once activated, BGP peering is established between the DCI gateways, extending the fabric overlay through the advertisement of Type 5 EVPN routes that result in L3 VXLAN tunnels between the sites. Support is offered for EBGP or IBGP peering, depending on whether the data centers have the same (IBGP) or different (EBGP) autonomous system numbers (ASNs), as shown in the example.

In addition to extending controller functions between sites, Contrail Enterprise Multicloud also supports the extension of the data center into a public cloud offering, a capability referred to as multicloud.

While building a high-performance private cloud spread over multiple sites is certainly feasible, many smaller enterprises find themselves caught in the economic balance between private vs. public cloud models. The public cloud option locks them into a provider with high recurring costs, while the private cloud requires a well-designed, high-capacity data center to support peak load and provide tolerance for network and equipment failures.

This “all or nothing” dilemma is the problem that Contrail Enterprise Multicloud was designed to solve. The ability to dynamically extend a private data center into a public cloud using open, standards-based secure protocols under the control of a simple user interface is a game changer. The resulting hybrid capability offers the best of both worlds. For example, most processing can be done in the private cloud to reduce costs, but when additional capacity or specific services are needed, they can be instantiated in the public cloud using a pay-per-use model. Alternatively, the extra capacity provided by the public cloud could be a standard part of the data center design—for example offloading all e-mail to Azure. Or it may be held in reserve to accommodate peak load or for disaster recovery by moving or extending compute loads between the local data center and the public cloud.


Figure 16 illustrates the Contrail Enterprise Multicloud platform’s multicloud capability.

Figure 16: An enterprise multicloud (the figure shows a Contrail Enterprise Multicloud-enabled data center with a QFX5110 leaf layer, a QFX10002 spine layer acting as route reflectors, BMS/VM endpoints, the Contrail Controller, and a vRouter-based multicloud gateway that builds IPsec tunnels across the Internet to AWS VPC-1, where the API gateway, a VPN gateway, and EC2 instances host the red and green subnets)

The figure shows a Contrail Enterprise Multicloud-enabled data center using a vRouter to provide a multicloud gateway function to extend the private cloud into a public cloud offering (AWS or Azure) using secure (IPsec) tunnels via the Internet. The AWS service is expanded to show how its API gateway is first used to instantiate compute resources (VMs) in a virtual private cloud (VPC) instance. The controller then installs a vRouter in the VM to support connectivity through the VPC gateway, providing overlay connectivity (tunnels) between the private and public clouds.
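To give a sense of what the public cloud side of this workflow involves, the sketch below uses the AWS boto3 SDK to create a VPC, a VPN gateway, a customer gateway for the on-premises multicloud gateway, and an IPsec VPN connection. Contrail automates the equivalent steps; the exact calls it issues are not documented here, and all identifiers and addresses are placeholders.

```python
# A hand-rolled sketch of the kind of public cloud API calls this step involves
# (boto3/AWS shown as one example); not what Contrail actually issues.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. A VPC to host the cloud-side workloads (the "vpc" in Figure 16).
vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]

# 2. A VPN gateway on the VPC side of the IPsec tunnel.
vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)

# 3. The on-premises multicloud gateway (vRouter or SRX) as the customer gateway.
cgw_id = ec2.create_customer_gateway(
    BgpAsn=61000,              # the data center ASN from the DCI example
    PublicIp="203.0.113.10",   # placeholder public address of the multicloud gateway
    Type="ipsec.1",
)["CustomerGateway"]["CustomerGatewayId"]

# 4. The IPsec VPN connection that stitches the private and public clouds together.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw_id, VpnGatewayId=vgw_id, Type="ipsec.1"
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```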

The Contrail Enterprise Multicloud architecture also supports the use of Juniper Networks SRX Series Services Gateways as the multicloud gateway, as well as direct connect methods where the customer equipment is colocated in the public cloud provider's data center with a direct cable attachment. While IPsec is shown in the figure, SSL with DTLS is also supported in the reference architecture.

Regardless of which method is used for multicloud connectivity, the use of open protocols means the enterprise is free to consume public cloud services without the risk of being locked into a specific provider's tools or automation. This makes it possible to pick and choose best-of-breed providers based on the services needed, for example, using AWS for generic compute and Microsoft Azure for Office 365-related applications and services.

AppFormix for Multicloud Telemetry and Reporting

With Contrail Enterprise Multicloud managing your data center fabric, providing secure connectivity for VMs and BMS, and with its multicloud capabilities letting you shift compute loads between private and public clouds, it's time for the final pillar of the Contrail Enterprise Multicloud architecture to shine: Contrail Analytics.

The Juniper AppFormix component of the Contrail Enterprise Multicloud reference architecture provides end-to-end telemetry, reporting, and visualization for the data center fabric and compute resources, extending all the way to individual applications, to help you understand how your data center and user applications are performing. Built-in AI/machine learning and automation also allow AppFormix to react to and recover from service outages, as shown in Figure 17.

As shown in the figure, Contrail and AppFormix:

• Monitor the health and performance of the clouds based on user- and operator-defined policies, covering the physical infrastructure, the virtual infrastructure and management services, and VNFs and virtual networking services (e.g., firewalls and load balancers)

• Send notifications to the service orchestrator and perform corrective actions to ensure service availability, such as migrating workloads away from a failing host or failing over to a backup service chain when the active path is unresponsive

• Generate alerts for SLA violations and anomalies, both in real time and as the result of predictive analysis

• Report resource usage trends and predictive capacity forecasts based on current and historical data

Figure 17: AppFormix machine learning and predictive AI
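
To make the policy-based alerting in Figure 17 concrete, here is a self-contained conceptual sketch of threshold-based SLA monitoring. The metric names, thresholds, and sample data are hypothetical, and the code does not use the AppFormix API; it only illustrates the idea of evaluating operator-defined policies against telemetry.

# Conceptual sketch of policy-based health monitoring: raise an alert when a
# monitored metric violates an operator-defined SLA threshold. Metric names,
# thresholds, and samples are hypothetical, not AppFormix objects or APIs.
from statistics import mean

sla_policies = {
    "host.cpu.utilization": 85.0,      # percent, operator-defined ceiling
    "host.memory.utilization": 90.0,
}

samples = {
    "host.cpu.utilization": [62.0, 71.5, 93.2, 95.8],
    "host.memory.utilization": [70.1, 72.4, 74.0, 73.2],
}

for metric, threshold in sla_policies.items():
    recent = mean(samples[metric][-3:])    # smooth over the most recent samples
    if recent > threshold:
        # In the reference architecture this is where a notification would be
        # sent to the orchestrator, e.g., to migrate workloads off the host.
        print(f"ALERT {metric}: {recent:.1f} exceeds SLA {threshold:.1f}")
    else:
        print(f"OK {metric}: {recent:.1f} within SLA {threshold:.1f}")

In the real system, the alert would feed the notification and corrective-action workflow described in the figure rather than a print statement.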

AppFormix continues to evolve; for example, it will soon be able to instruct the controller to take corrective action when problems are detected, such as shifting applications off a failing server to another host or moving that workload to a public cloud. Figure 18 details AppFormix's role in the Contrail Enterprise Multicloud reference architecture.

(Figure: Contrail analytics and topology discovery span the local data center fabric and VPCs in AWS, Azure, and GCP.)

Figure 18: AppFormix integration with Contrail Enterprise Multicloud

Figure 18 highlights AppFormix's ability to provide analytics on the data center fabric underlay as well as the VXLAN overlay. AppFormix also monitors and reports on VM compute and application performance, for both local resources and those running in public clouds. With Contrail Enterprise Multicloud, you get end-to-end and bottom-up visibility into how your network is operating, along with built-in intelligence to keep it up and running.

Conclusion

Juniper Contrail Enterprise Multicloud is a controller-based solution that provides the tools you need to simplify and automate your data center now and into the future.

The Contrail Enterprise Multicloud reference architecture is based on several pillars that, when combined, provide a simplified way to build, configure, manage, secure, and visualize your data center while allowing you to extend your private cloud into public cloud offerings. The highly scalable controller-based solution’s UI simplifies all aspects of managing a modern data center, including:

• Tools to automate and simplify a greenfield underlay

• Tools to automate and simplify onboarding of brownfield underlays

• Intent-based automation of the IBGP overlay and VXLAN tunnel data plane

• Integration with popular VM orchestration systems

• vRouter support for VMs and BMS with containerized applications, extending VTEP/security and telemetry to endpoints

• Easy-to-understand application tag-based security policies for intent-based security in the face of shifting compute loads

• Support for central or edge routing (lean spine) data center architectures

• Support for extension of local data centers into public cloud with secure tunnels

• End-to-end telemetry, reporting, alarming, and AI-based predictive/corrective actions through AppFormix integration

It should be noted that Juniper also offers a controllerless solution for deploying EVPN-based VXLAN overlays. Additional information on the controllerless reference architecture can be found in the IP Fabric EVPN-VXLAN Reference Architecture.

About Juniper Networks

Juniper Networks brings simplicity to networking with products, solutions, and services that connect the world. Through engineering innovation, we remove the constraints and complexities of networking in the cloud era to solve the toughest challenges our customers and partners face daily. At Juniper Networks, we believe that the network is a resource for sharing knowledge and human advancement that changes the world. We are committed to imagining groundbreaking ways to deliver automated, scalable, and secure networks to move at the speed of business.