VMware® Network Virtualization Design Guide
Technical White Paper, January 2013




Table of Contents

Intended Audience
Overview
Components of the VMware Network Virtualization Solution
    vSphere Distributed Switch
    Logical Network (VXLAN)
    vCloud Networking and Security Edge
    vCloud Networking and Security Manager
    vCloud Director
VXLAN Technology Overview
    Standardization Effort
    Encapsulation
    VXLAN Packet Flow
        Intra-VXLAN Packet Flow
        Inter-VXLAN Packet Flow
Network Virtualization Design Considerations
    Physical Network
        Network Topologies with L2 Configuration in the Access Layer
        Network Topologies with L3 Configuration in the Access Layer
    Logical Network
        Scenario 1 – Greenfield Deployment: Logical Network with a Single Physical L2 Domain
        Scenario 2 – Logical Network: Multiple Physical L2 Domains
        Scenario 3 – Logical Network: Multiple Physical L2 Domains with vMotion
        Scenario 4 – Logical Network: Stretched Clusters Across Two Datacenters
    Managing IP Addresses in Logical Networks
    Scaling Network Virtualization
Consumption Models
    In vCloud Director
    In vCloud Networking and Security Manager
    Using API
Troubleshooting and Monitoring
    Network Health Check
    VXLAN Connectivity Check – Unicast and Broadcast Tests
    Monitoring Logical Flows – IPFIX
    Port Mirroring
Conclusion


Intended Audience
This document is targeted toward virtualization and network architects interested in deploying VMware® network virtualization solutions.

Overview
The IT industry has gained significant efficiency and flexibility as a direct result of virtualization. Organizations are moving toward a virtual datacenter (VDC) model, and flexibility, speed, scale and automation are central to their success. Although compute and memory resources are pooled and automated, networks and network services, such as security, have not kept pace. Traditional network and security operations not only reduce efficiency but also limit the ability of businesses to rapidly deploy, scale and protect applications. VMware vCloud® Networking and Security™ offers a network virtualization solution to overcome these challenges.

Figure 1. Server and Network Virtualization Analogy (just as the server hypervisor decouples virtual machines from x86 compute hardware, the network virtualization platform decouples virtual networks, with L2, L3 and L4–7 network services, from the physical IP transport)

Figure 1 draws an analogy between compute and network virtualization. Just as VMware vSphere® abstracts compute capacity from the server hardware to create virtual pools of resources, network virtualization abstracts the network into a generalized pool of network capacity. The unified pool of network capacity can then be optimally segmented into logical networks directly attached to specific applications. Customers can create logical networks that span physical boundaries, optimizing compute resource utilization across clusters and pods. Unlike legacy architectures, logical networks can be scaled without reconfiguring the underlying physical hardware. Customers can also integrate network services—such as firewalls, VPNs and load balancers—and deliver them exactly where they are needed. “Single pane of glass” management for all these services further reduces the cost and complexity of datacenter operations.


The VMware network virtualization solution addresses the following key needs in today’s datacenter:

• Increasing compute utilization by pooling compute clusters

• Enabling noncontiguous cluster expansion

• Leveraging capacity across multiple racks in the datacenter

• Overcoming IP-addressing challenges when moving workloads

• Avoiding VLAN sprawl in large environments

• Enabling multitenancy at scale without encountering VLAN scale limitations

By adopting network virtualization, customers can effectively address these issues as well as realize the following business benefits:

• Drive faster provisioning of network and services, enabling business agility

• Improve infrastructure utilization, leading to significant CapEx savings

– Increase compute utilization by 30 percent by efficiently pooling compute resources

– Increase network utilization by 40 percent due to compute pooling and improved traffic management

• Decouple logical networks from physical networks, providing complete flexibility

• Isolate and segment network traffic at scale

• Provide multitenancy without increasing the administrative burden

• Automate repeatable network and service provisioning workflows, translating to 30 percent or more in OpEx savings on network operations alone

Components of the VMware Network Virtualization Solution
Several components bundled in the vCloud Networking and Security suite, together with components of the core vSphere layer, are used to deploy VMware network virtualization:

1. VMware vSphere Distributed Switch™ 5.1 (VDS)

2. VMware vSphere logical network (VXLAN)

3. VMware vCloud Networking and Security Edge™ 5.1

4. VMware vCloud Networking and Security Manager™ 5.1

5. VMware vCloud Director® 5.1 (not part of the vCloud Networking and Security suite)

6. VMware vCenter Server™ 5.1 (not part of the vCloud Networking and Security suite; shown as part of item 4 in Figure 2)


Figure 2. VMware VXLAN Solution Components (components 1–5 from the preceding list, deployed over a physical IP network)

vSphere Distributed Switch
VDS abstracts the physical network and provides access-level switching in the vSphere hypervisor. It is central to network virtualization because it enables logical networks that are independent of physical constructs such as VLAN. Keep in mind the following key points:

• VDS facilitates massive scale, with support for up to 500 physical hosts.

• Multiple features, such as Port Mirroring, NetFlow/IPFIX, Configuration Backup and Restore, Network Health Check, QoS and LACP, provide a comprehensive toolkit for traffic management, monitoring and troubleshooting within a virtual network.

For specific feature details, refer to the What’s New in VMware vSphere 5.1 – Networking white paper at http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf.

Logical Network (VXLAN)
VMware network virtualization is built using Virtual eXtensible Local Area Network (VXLAN) overlay networking technology, an industry standard that VMware developed jointly with major networking vendors. The logical network enables the following capabilities:

• Creation of a flexible logical layer 2 (L2) overlay network over existing IP networks, working on the existing physical network infrastructure without the need to rearchitect any of the datacenter networks

• Communication (east–west and north–south) while maintaining isolation between tenants

• Application workloads that remain agnostic of the overlay network; all L2-to-VXLAN translation is performed transparently in the host

See the following sections for more details on VXLAN technology, architecture components and packet flows.


vCloud Networking and Security Edge
vCloud Networking and Security Edge serves as a VXLAN gateway, translating traffic between the logical network and a physical VLAN- or IP-based network. In addition, it provides services to the logical network such as DHCP, NAT, static routing, firewall, VPN and load balancing. It is deployed in a virtual appliance form factor, supports full active–standby HA functionality and can support up to 9Gbps of traffic.

The following are key points to consider for the vCloud Networking and Security Edge VXLAN gateway and the network services offered in network virtualization:

• It acts as an L3 gateway to translate between VXLAN and physical networks and is primarily used for north–south traffic.

• It provides inter-VXLAN routing.

• Each VXLAN segment requires a separate vCloud Networking and Security Edge interface to ensure isolation.

• It is available in three sizes: compact, full and x-large; it offers options to scale up for higher performance or scale out using multiple virtual appliances.

• vCloud Networking and Security Edge firewall services can be applied on a per–VXLAN segment basis.

• In multitenant deployments, individual pools of IP addresses per tenant can be provided using vCloud Networking and Security Edge DHCP services.

vCloud Networking and Security Manager
vCloud Networking and Security Manager is the centralized network and security management component of the vCloud Networking and Security product suite. It is installed from an open virtualization appliance (OVA) file as a virtual machine by using VMware vSphere Client™.

Keep in mind the following important points about vCloud Networking and Security Manager:

• Using the vCloud Networking and Security Manager user interface, administrators can install, configure and maintain network and network services components.

• vCloud Networking and Security Manager exposes APIs that can be used to integrate with existing cloud management systems or for scripting. These are also termed northbound APIs.

• vCloud Director requires vCloud Networking and Security Manager to offer simple workflows for consumption of virtual networks and services.

• The VMware vCenter Server™ plug-in for vCloud Networking and Security Manager enables customers to perform VXLAN configuration from vCenter Server as part of the Network Virtualization tab.

vCloud Director
The vCloud Director virtual datacenter container is a highly automatable abstraction of the pooled virtual infrastructure. Network virtualization is fully integrated in vCloud Director workflows, enabling rapid self-service provisioning within the context of the application workload. vCloud Director uses vCloud Networking and Security Manager in the back end to provision network virtualization elements. vCloud Director is not part of vCloud Networking and Security; it is a separately purchased component. It is not mandatory for deploying a network virtualization solution, but it is highly recommended to achieve the complete operational flexibility and agility discussed previously. See the "Consumption Models" section for all available consumption choices for VMware network virtualization.


VXLAN Technology Overview

Standardization Effort
VXLAN is an Internet Engineering Task Force (IETF) Internet draft formulated in collaboration with leading networking vendors including Cisco, Arista and Broadcom. It provides a framework for creating L2 overlay networks over L3 networks. Each L2 overlay network is called a VXLAN segment (or "virtual wire") and is uniquely identified by a 24-bit segment ID. This enables customers to create up to 16 million unique VXLAN segments, each of which is an isolated logical network.

Encapsulation
VXLAN makes use of an encapsulation, or tunneling, method to carry the L2 overlay network traffic on top of L3 networks. A special kernel module running on the vSphere hypervisor host, along with a vmknic, acts as the virtual tunnel endpoint (VTEP). Each VTEP is assigned a unique IP address that is configured on the vmknic virtual adapter associated with the VTEP.

The VTEP on the vSphere host handles all encapsulation and deencapsulation of traffic for all virtual machines running on that host. A VTEP encapsulates the MAC and IP packets from the virtual machines with a VXLAN+UDP+IP header and sends the packet out as an IP unicast or multicast packet. The latter mode is used for broadcast and unknown-destination MAC frames originated by the virtual machines that must be sent across the physical IP network.

Figure 3 shows the VXLAN frame format. The original packet between the virtual machines communicating on the same VXLAN segment is encapsulated with an outer Ethernet header, an outer IP header, an outer UDP header and a VXLAN header. The encapsulation is done by the source VTEP, and the packet is sent to the destination VTEP. At the destination VTEP, the packet is stripped of its outer headers and is passed on to the destination virtual machine if the segment ID in the packet is valid.

VXLAN encapsulation headers: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP DA | Outer IP SA | Outer UDP | VXLAN Header (8 bytes), carrying the original Ethernet frame: Inner MAC DA | Inner MAC SA | Optional Inner 802.1Q | Original Ethernet Payload | CRC

Figure 3. VXLAN Frame Format

The destination MAC address in the outer Ethernet header can be the MAC address of the destination VTEP or that of an intermediate L3 router. The outer IP header carries the corresponding source and destination VTEP IPs. The association of the virtual machine's MAC address to the VTEP's IP is discovered via source learning; more details on the forwarding table are provided in the "VXLAN Packet Flow" section. The outer UDP header contains source port, destination port and checksum information. The source port of the UDP header is a hash of the inner Ethernet frame's header; this provides a level of entropy for ECMP/load balancing of the virtual machine–to–virtual machine traffic across the VXLAN overlay. The VXLAN header is an 8-byte field that has 8 bits to indicate whether the VXLAN Network Identifier (VNI) is valid, 24 bits for the VXLAN segment ID/VXLAN VNI, and the remaining 32 bits reserved.
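To make the header layout and overhead concrete, here is a minimal Python sketch. It is illustrative only: the function names are our own, CRC32 stands in for whatever hash a particular implementation uses for source-port entropy, and the port constant reflects the pre-IANA value commonly used by deployments of this era.

```python
import struct
import zlib

VXLAN_UDP_PORT = 8472        # port used by early deployments; IANA later assigned 4789
VXLAN_FLAG_VNI_VALID = 0x08  # the "I" bit: VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: 8 flag bits, 24 reserved bits,
    a 24-bit VNI, and 8 more reserved bits."""
    assert 0 <= vni < 2**24, "VNI is 24 bits (up to ~16 million segments)"
    return struct.pack("!BBHI", VXLAN_FLAG_VNI_VALID, 0, 0, vni << 8)

def udp_source_port(inner_frame: bytes) -> int:
    """Derive the outer UDP source port from a hash of the inner Ethernet
    header so that physical ECMP/LACP hashing spreads VM-to-VM flows."""
    return 49152 + (zlib.crc32(inner_frame[:14]) % 16384)  # ephemeral range

# Per-packet overhead added by the outer headers, which is the reason the
# physical network MTU must grow by at least 50 bytes (e.g., 1500 -> 1550):
OVERHEAD = 14 + 20 + 8 + 8   # outer Ethernet + outer IPv4 + outer UDP + VXLAN
assert OVERHEAD == 50
assert len(vxlan_header(5001)) == 8
```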


VXLAN Packet Flow
The following flow describes the handling of ARP on a VXLAN segment (for the purposes of discussion, it is a typical ARP packet from a virtual machine (MAC1) connected to a logical L2 network, VXLAN 5001):

• Figure 4 shows two virtual machines connected to a logical L2 network. The virtual machines don't detect any difference in communicating with the external world. They continue to use standard IP protocols to communicate with the destination. The traffic flows through the VTEP interface defined on the host.

• Each logical L2 network is associated with an IP multicast group. In this example, VXLAN 5001 is associated with IP multicast group address 239.1.1.1, and both vSphere hosts (VTEPs) have joined that multicast group.

• The ARP broadcast frame from the virtual machine is encapsulated within an IP multicast frame by the VTEP on which the virtual machine is running.

• The multicast frame is then sent to the multicast group associated with the logical L2 network segment ID.

• The multicast frame is received by the target VTEPs. The destination VTEPs then validate the logical L2 network segment ID, deencapsulate the packet, and forward it if there are virtual machines on that host that are connected to this L2 network.

• The destination virtual machine then responds to the ARP request with a unicast packet. The VTEP on the host on which this destination virtual machine is running establishes a point-to-point tunnel with the VTEP where the virtual machine MAC1 is hosted.

NOTE:

• The number of multicast groups supported in the physical infrastructure dictates whether there can be a one-to-one mapping to logical L2 network segment IDs. However, in the scenario where there are more logical networks than multicast groups, mapping multiple logical networks to one multicast group is supported.

• Multicast frames are generated only when a broadcast packet is detected on the logical L2 network, or when the VTEP's forwarding table does not have a virtual machine MAC-to-VTEP IP mapping for the destination MAC address, also called an unknown unicast packet. This is similar to the transparent bridging operation of L2 switches or bridges, where packets are flooded if there is no entry in the MAC forwarding table that matches the destination MAC address of a frame. After the virtual machine MAC address–to–VTEP IP address entry has been discovered and added to the forwarding table, any future requests for communication to that particular virtual machine are handled by the source host VTEP by establishing a point-to-point (stateless) tunnel to the destination VTEP where the virtual machine is hosted. (A minimal sketch of this learn-or-flood behavior follows Figure 4.)

• The IP multicast protocol acts as a control plane that helps build the forwarding table with virtual machine MAC address and VTEP IP address mappings. Figure 4 shows the packet encapsulation and a forwarding table entry in one of the VTEPs.

VTEP MAC addresses are detected during the multicast packet exchange that occurs when a virtual machine is connected to a virtual wire. No standard ARP request is sent out from the VXLAN kernel module to detect the VTEP MAC address, so there is no proxy ARP configuration requirement on the first-hop router.


Forwarding table entry on the VTEP: VM MAC = MAC1, VTEP IP = 10.20.10.10, Segment ID = 5001. The original L2/IP frame is carried inside an outer IP/UDP/VXLAN encapsulation between VTEP 10.20.10.10 and VTEP 10.20.10.11 across the L2/L3 network infrastructure.

Figure 4. VXLAN Encapsulation and Forwarding Table Example
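The learn-or-flood behavior described in the notes above can be summarized in a few lines of Python. This is a conceptual model with invented names, not vSphere code: the table maps (segment ID, VM MAC) pairs to remote VTEP IPs, known unicasts are tunneled point to point, and broadcasts or unknown unicasts go to the segment's multicast group.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Vtep:
    ip: str
    # (segment_id, vm_mac) -> remote VTEP IP, populated by source learning
    table: Dict[Tuple[int, str], str] = field(default_factory=dict)

    def learn(self, segment_id: int, src_mac: str, src_vtep_ip: str) -> None:
        """Source learning: on decapsulation, remember which VTEP hosts src_mac."""
        self.table[(segment_id, src_mac)] = src_vtep_ip

    def outer_destination(self, segment_id: int, dst_mac: str, mcast: str) -> str:
        """Pick the outer destination IP for an encapsulated frame."""
        if dst_mac == "ff:ff:ff:ff:ff:ff":
            return mcast                              # broadcast (e.g., ARP request)
        remote = self.table.get((segment_id, dst_mac))
        return mcast if remote is None else remote    # flood unknown, else unicast

vtep = Vtep(ip="10.20.10.11")
vtep.learn(5001, "MAC1", "10.20.10.10")               # learned from the ARP exchange
assert vtep.outer_destination(5001, "MAC1", "239.1.1.1") == "10.20.10.10"
assert vtep.outer_destination(5001, "MAC9", "239.1.1.1") == "239.1.1.1"
```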

The next part of this section describes packet flow in the following VXLAN deployments:

1) Intra-VXLAN packet flow; that is, two virtual machines on the same logical L2 network

2) Inter-VXLAN packet flow; that is, two virtual machines on two different logical L2 networks


Intra-VXLAN Packet Flow
Figure 5 shows two traffic flows:

• A virtual machine communicating with another virtual machine on the same logical L2 network (red dotted line)

• A virtual machine communicating with an external device on the Internet (green dotted line)

Figure 5. VXLAN Traffic Flow – Same Logical L2 and External Traffic (VXLAN Blue, 192.168.1.0/24, with virtual machines 192.168.1.10 and 192.168.1.11; the vCloud Networking and Security Edge gateway at 192.168.1.1 connects to the external network 172.26.10.0/24 toward the Internet)

In the case of virtual machine–to–virtual machine communication on the same logical L2 network, the following two traffic flow examples illustrate possibilities that depend on where the virtual machines are deployed:

1) Both virtual machines are on the same vSphere host.

2) The virtual machines are on two different vSphere hosts.

In the first case, traffic remains on one vSphere host; in the second case, the virtual machine packet is encapsulated with a new UDP header by the source VTEP on one vSphere host and is sent through the external IP network infrastructure to the destination VTEP on another vSphere host. In this process, the external switches and routers do not detect anything about the virtual machines' IP (192.168.1.10/192.168.1.11) and MAC addresses, because those are embedded in the payload of the new UDP encapsulation.

In the scenario where the virtual machine is communicating with the external world, as shown by the green dotted line, it first sends the traffic to gateway IP address 192.168.1.1; the vCloud Networking and Security Edge gateway then sends unencapsulated traffic over its external-facing interface to the Internet.
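Both flows reduce to a simple next-hop decision at the source. The following sketch is purely illustrative, with hypothetical names and data, and mirrors the two dotted flows in Figure 5 rather than any actual vSphere code path.

```python
import ipaddress

SUBNET = ipaddress.ip_network("192.168.1.0/24")      # VXLAN Blue
GATEWAY = ipaddress.ip_address("192.168.1.1")        # Edge internal interface

def next_hop(dst_ip: str, local_vm_ips: set) -> str:
    """Classify a frame leaving a VM on VXLAN Blue (illustrative only)."""
    dst = ipaddress.ip_address(dst_ip)
    if dst not in SUBNET:
        # Green flow: the VM ARPs for its default gateway, so the frame is
        # L2-addressed to the Edge, which routes it out unencapsulated.
        return f"L2 frame to gateway {GATEWAY}; Edge routes to external network"
    if dst_ip in local_vm_ips:
        return "delivered locally by the VDS on the same vSphere host"
    return "VTEP-encapsulated in UDP/IP toward the destination host's VTEP"

print(next_hop("192.168.1.11", {"192.168.1.10"}))    # red flow, remote host
print(next_hop("172.26.10.10", {"192.168.1.10"}))    # green flow
```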


Inter-VXLAN Packet Flow
In the example shown in Figure 6, there are two logical L2 networks, VXLAN Blue and VXLAN Orange. The virtual machines connected to these networks are isolated from each other. The two networks are assigned two different IP subnets, 192.168.1.0/24 and 192.168.2.0/24. The vCloud Networking and Security Edge gateway acts as the router/gateway between these two isolated logical L2 networks.

The traffic flow between the two virtual machines on different logical networks depends on where the virtual machines and the vCloud Networking and Security Edge gateway appliance are deployed. The following are possible scenarios:

1) All the virtual machines and the vCloud Networking and Security Edge gateway are on the same vSphere host.

2) The virtual machines are on different vSphere hosts, and the vCloud Networking and Security Edge gateway appliance is deployed on one of the vSphere hosts.

3) All the virtual machines and the vCloud Networking and Security Edge gateway appliance are on different vSphere hosts.

The first case is simple to describe because the traffic remains on the same host. The virtual machines direct the traffic to the respective gateway IP addresses of the logical network subnets, 192.168.1.1 and 192.168.2.1. The vCloud Networking and Security Edge gateway receives the traffic on its different interfaces and, based on the firewall rules, makes the routing decision between the two interfaces.

The second and third cases of traffic flow involve encapsulated packets that traverse the physical network infrastructure before they reach the vCloud Networking and Security Edge gateway, which then routes the packet to the appropriate destination.

Figure 6. VXLAN Traffic Flow – Different Logical L2 (VXLAN Blue, 192.168.1.0/24, and VXLAN Orange, 192.168.2.0/24, routed by the vCloud Networking and Security Edge gateway via internal interfaces 192.168.1.1 and 192.168.2.1, with external interface 172.26.10.10 toward the Internet)
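Conceptually, the Edge gateway's inter-VXLAN decision is an interface lookup followed by a firewall check. This hedged sketch uses invented interface names and a stubbed firewall rule; it models the decision described above, not the appliance's implementation.

```python
import ipaddress

# Interfaces of the Edge gateway in Figure 6 (names are our own).
INTERFACES = {
    "vxlan-blue":   ipaddress.ip_network("192.168.1.0/24"),
    "vxlan-orange": ipaddress.ip_network("192.168.2.0/24"),
    "external":     ipaddress.ip_network("172.26.10.0/24"),
}

def firewall_allows(src_if: str, dst_if: str) -> bool:
    return True  # placeholder for the per-rule check mentioned above

def route(src_if: str, dst_ip: str) -> str:
    """Return the egress interface for a packet arriving on src_if."""
    dst = ipaddress.ip_address(dst_ip)
    for name, net in INTERFACES.items():
        if dst in net:
            return name if firewall_allows(src_if, name) else "drop"
    return "external"  # default route toward the datacenter router

assert route("vxlan-blue", "192.168.2.10") == "vxlan-orange"   # inter-VXLAN
assert route("vxlan-orange", "8.8.8.8") == "external"          # to the Internet
```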


Network Virtualization Design Considerations
VMware network virtualization can be deployed on top of existing datacenter networks. In this section, we discuss how logical networks using VXLAN can be deployed over common datacenter network topologies. We first discuss requirements for the physical network, followed by logical network deployment options.

Physical Network
The physical datacenter network varies across customer environments in terms of which network topology is used in the datacenter. Hierarchical network design provides the required high availability and scalability to the datacenter network. This section assumes that the reader has some background in network topologies utilizing traditional L3 and L2 network configurations; readers are encouraged to consult the design guides from their physical network vendor of choice. We will examine some common physical network topologies and how to enable network virtualization in them.

Network Topologies with L2 Configuration in the Access Layer
In this topology, access layer switches connect to the aggregation layer over an L2 network. Aggregation switches are the VLAN termination points, as shown in Figure 7. Spanning Tree Protocol (STP) is traditionally used to avoid loops. Routing protocols run between the aggregation and core layers.

Figure 7. Datacenter Design – L2 Configuration in Access Layer with STP (a single subnet, VLAN 100, spans racks 1–10; L2 trunks with STP connect access to aggregation, and routed L3 links connect aggregation to core; IGMP snooping is enabled on the access switches, with an IGMP querier at the aggregation layer)

In such deployments, with a single subnet (VLAN 100) configured across different racks, enabling network virtualization based on VXLAN requires the following:

• Enable IGMP snooping on the L2 switches.

• Enable the IGMP querier feature on one of the L2/L3 switches in the aggregation layer.

• Increase the end-to-end MTU by a minimum of 50 bytes to accommodate the VXLAN header. The recommended size is 1,550 bytes, or jumbo frames.


To overcome the slower convergence times and lower link utilization of STP, most datacenter networks today use technologies such as Cisco vPC/VSS (or MLAG, MCE, SMLT, and so on). From the VXLAN design perspective, there is no change to the previously stated requirements.

When the physical topology has an access layer with multiple subnets configured (for example, VLAN 100 in Rack 1 and VLAN 200 in Rack 10 in Figure 8), the aggregation layer must have Protocol-Independent Multicast (PIM) enabled to ensure that multicast routes across multiple subnets are exchanged.

All the VXLAN requirements previously discussed apply to leaf-and-spine datacenter architectures as well.

Network Topologies with L3 Configuration in the Access Layer
In this topology, access layer switches connect to the aggregation layer over an L3 network. Access switches are the VLAN termination points, as shown in Figure 8. Key advantages of this design are better utilization of all the links using Equal-Cost Multipathing (ECMP) and the elimination of STP.

From the VXLAN deployment perspective, the following requirements must be met:

• Enable PIM on access switches.

• Ensure that no VLAN is configured during the VXLAN preparation process. This ensures that the VDS doesn't perform VLAN tagging, also called virtual switch tagging (VST) mode.

• Increase the end-to-end MTU by a minimum of 50 bytes to accommodate the VXLAN header. The recommended size is 1,550 bytes, or jumbo frames.

Figure 8. Datacenter Design – L3 Configuration in Access Layer with ECMP (PIM is enabled on the access switches; routed L3 links with ECMP connect access, aggregation and core across racks 1–10)


Logical Network
After the physical network has been prepared, logical networks are deployed with VXLAN, with no ongoing changes to the physical network. The logical network design differs based on the customer's needs and the type of compute, network and storage components in the datacenter. The following aspects of the virtual infrastructure should be taken into account before deploying logical networks:

• A cluster is a collection of vSphere hosts and associated virtual machines with shared resources. One cluster can have a maximum of 32 vSphere hosts.

• A VDS is a datacenter-wide virtual switch that can span up to 500 hosts in the datacenter. Best practice is to use one VDS across all clusters to enable simplified design and cluster-wide VMware vSphere vMotion® migration.

• With VXLAN, a new traffic type is added to the vSphere host: VXLAN transport traffic. As a best practice, the new VXLAN traffic type should be isolated from other virtual infrastructure traffic types. This can be achieved by assigning a separate VLAN during the VXLAN preparation process.

• A VMware vSphere ESXi™ host's infrastructure traffic, including vMotion migration, VMware vSphere Fault Tolerance, management, and so on, is not encapsulated and is independent of the VXLAN-based logical network. These traffic types should be isolated from each other, and enough bandwidth should be allocated to them. As of this release, VMware does not support placing infrastructure traffic such as vMotion migration on VXLAN-based virtual networks; only virtual machine traffic is supported on logical networks.

• To support vMotion migration of workloads between clusters, all clusters should have access to all storage resources.

• The link aggregation method configured on the vSphere hosts also impacts how VXLAN transport traffic traverses the host NICs. The VDS VXLAN port group's teaming can be configured as failover, LACP active mode, LACP passive mode or static EtherChannel.

a. When LACP or static EtherChannel is configured, the upstream physical switch must have an equivalent port channel or EtherChannel configured.

b. Also, if LACP is used, the physical switch must have 5-tuple hash distribution enabled.

c. Virtual port ID and load-based teaming are not supported with VXLAN.

Next, the design is discussed for the following three deployment types:

• Greenfield deployment – A datacenter built from scratch.

• Brownfield deployment – An existing operational datacenter with virtualization.

• Stretched cluster – Two datacenters separated by a short distance.

Scenario 1 – Greenfield Deployment: Logical Network with a Single Physical L2 Domain
In a greenfield deployment, the recommended design is to have a single VDS stretching across all the compute clusters within the same vCenter Server. All hosts in the VDS are placed on the same L2 subnet (a single VLAN on all uplinks). In Figure 9, VLAN 10 spanning the racks is switched, not routed, creating a single L2 subnet. This single subnet serves as the VXLAN transport subnet, and each host receives an IP address from this subnet, used in VXLAN encapsulation. Multicast and other requirements are met based on the physical network topology; refer to "Network Topologies with L2 Configuration in the Access Layer" (Figure 7) for details on the multicast-related configuration.


Figure 9. Greenfield Deployment – One VDS (cluster 1 in Rack 1 and cluster 2 in Rack 10 share VLAN 10 and a single VDS; logical networks VXLAN 5001 and VXLAN 5002, each with its own virtual wire port group, span both clusters)

Keep in mind the following key points while deploying:

• The VDS VXLAN port group must be in the same VLAN across all hosts in all clusters. This configuration is handled through the vCloud Networking and Security Manager plug-in in vCenter Server.

• VDS, VLAN, teaming and MTU settings must be provided as part of the VXLAN configuration process.

• A VTEP IP address is assigned either via DHCP or statically via vCenter Server.

• Virtual machines communicating outside the logical network (to the Internet or to nonlogical networks within the datacenter) require a VXLAN gateway.

vMotion Boundary
The vMotion boundary, or the workload migration limit, in a VXLAN deployment is dictated by the following two criteria:

1) vMotion migration is limited to hosts managed by a single vCenter Server instance.

2) vMotion migration is not possible across two VDS.

In this scenario, where all the hosts are part of the same VDS, vMotion migration will work across all hosts as long as the shared storage requirement is satisfied across the two clusters.

Scenario 2 – Logical Network: Multiple Physical L2 Domains
In brownfield deployments, clusters are typically deployed with multiple VDS, one per cluster. Each VDS is on a different subnet, terminated on an aggregation router. Logical L2 networks can span across these subnet boundaries. The main difference as compared to scenario 1 is that VXLAN transport traffic is routed instead of being switched within the same subnet. Multicast and ECMP requirements depend on the physical topology; refer to "Network Topologies with L3 Configuration in the Access Layer" (Figure 8) for details on the multicast-related configuration.


Figure 10. Brownfield Deployment – Two VDS (cluster 1 on VLAN 10 in Rack 1 and cluster 2 on VLAN 20 in Rack 10, each with its own VDS; logical networks VXLAN 5001 and VXLAN 5002 span both clusters across a routed fabric)

Keep in mind the following key points while deploying:

• VTEPs in different subnets can route traffic to each other.

• A VTEP IP address is assigned either via DHCP or statically via vCenter Server.

• Applications running in virtual machines cannot detect the physical topology and remain in the same subnet.

• Virtual machines communicating outside the logical network (to the Internet or to nonlogical networks within the datacenter) require a VXLAN gateway. (See appendix 2 for packet flows.)

vMotion Boundary
In this two-VDS VXLAN deployment, the vMotion boundary is limited to one VDS. Workloads deployed on a logical L2 network cannot be moved to a host connected to a different VDS. However, if workload placement alone is the goal, this design enables the choice of any cluster for the deployment of a workload, even if the clusters are on different physical VLANs.

Scenario 3 – Logical Network: Multiple Physical L2 Domains with vMotion
If vMotion migration across clusters is an important requirement, the following modified design should be used. Here, a single VDS spans multiple clusters, enabling vMotion migration across clusters. The following are some of the key differences in this design:

• No VLAN ID is configured during the VXLAN preparation. The VDS will not perform VLAN tagging for the VXLAN traffic going out on the uplinks (no VST).

• Dedicated uplinks are required on the hosts to carry the untagged VXLAN traffic.

• The physical-switch ports where the host uplinks are connected are configured as access ports with the appropriate VLAN. For example, as shown in Figure 11, access switch ports of cluster 1 are configured with VLAN 10; those of cluster 2 are configured with VLAN 20.


Figure 11. Brownfield Deployment – Single VDS to Enable vMotion Migration (one VDS spans cluster 1, on access VLAN 10, and cluster 2, on access VLAN 20; the hosts send untagged VXLAN traffic, no VST, while logical networks VXLAN 5001 and VXLAN 5002 span both clusters)

Because the storage network is parallel to and independent of the logical network, it is assumed that both clusters can reach the shared storage. Standard vMotion migration distance limitations and single-vCenter requirements still apply. Because the moved virtual machine is still in the same logical L2 network, no IP readdressing is necessary, even though the physical hosts might be on different subnets.

Scenario 4 – Logical Network: Stretched Clusters Across Two Datacenters
Stretched clusters offer the ability to balance workloads between two datacenters. This nondisruptive workload mobility enables migration of services between geographically adjacent sites. A stretched cluster design helps pool resources in two datacenters and enables workload mobility. Virtual machine–to–virtual machine traffic stays within the same logical L2 network, enabling L2 adjacency across datacenters; the traffic dynamics are the same as those previously described. In this section, we discuss the impact of this design on north–south traffic (a virtual machine communicating outside the logical L2 network), because that is the main difference as compared to previous scenarios.

Figure 12 shows two sites, site A and site B, with two hosts deployed in each site, along with the storage and replication setup. Here, all hosts are managed by a single vCenter Server and are part of the same VDS. In general, a stretched cluster design must meet the following requirements:

• The two datacenters must be managed by one vCenter Server, because the VXLAN scope is limited to a single vCenter Server.

• vMotion support requires that the datacenters have a common stretched VDS (as in scenario 3). A multiple-VDS design, discussed in scenario 2, can also be used, but vMotion migration will not work.


Figure 12. Stretched Cluster (a single cluster and VDS span site A and site B over a WAN with FC/IP storage replication; storage A holds the read/write LUN, replicated to a read-only LUN at site B; VXLAN 5002 spans both sites, and a VM is shown at site B after vMotion)

In this design, the vCloud Networking and Security Edge gateway is pinned to one of the datacenters (site A in this example). In the vCloud Networking and Security 5.1 release, each VXLAN segment can have only one vCloud Networking and Security Edge gateway. This has the following implications:

• All north–south traffic from the second datacenter (site B) in the same VXLAN (5002) must transit the vCloud Networking and Security Edge gateway in the first datacenter (site A).

• Also, when a virtual machine is moved from site A to site B, all north–south traffic returns to site A before reaching the Internet or other physical networks in the datacenter.

• Storage must support a "campus cluster" configuration.

These implications raise obvious concerns regarding bandwidth consumption and latency, so an active–active multidatacenter design is not recommended. This design is mainly targeted toward the following scenarios:

• Datacenter migrations that require no IP address changes on the virtual machines. After the migration has been completed, the vCloud Networking and Security Edge gateway can be moved to the new datacenter, requiring a change in external IP addresses on the vCloud Networking and Security Edge only. If the virtual machines have public IP addresses and are not behind vCloud Networking and Security Edge gateway network address translation (NAT), more changes are needed.

• Deployments that require limited north–south traffic. Because virtual machine–to–virtual machine traffic does not cross the vCloud Networking and Security Edge gateway, the stretched cluster limitation does not apply.

These scenarios also benefit from elastic pooling of resources and initial workload placement flexibility. If virtual machines are in different VXLANs, the limitations do not apply.


Managing IP Addresses in Logical Networks
In a large cloud environment with multiple tenants, IP address management is a critical task. In this section, we focus on IP address management for the virtual machines deployed on VXLAN logical L2 networks. Each logical L2 network created with VXLAN is a separate L2 broadcast domain. This L2 broadcast domain can be associated with a separate subnet using private IP space or publicly routable IP space. Depending on whether private or publicly routable IP space is assigned to the logical networks, customers must choose either the NAT or the non-NAT option on the vCloud Networking and Security Edge gateway. So the IP address assignment depends on whether the virtual machine is connected to a logical L2 network through a NAT or non-NAT configuration. Let's take a look at the following two deployments:

1) Using the NAT and DHCP services of the vCloud Networking and Security Edge gateway

2) Not using the NAT and DHCP services of the vCloud Networking and Security Edge gateway

With Network Address Translation
In deployments where customers have limited IP address space, NAT is used to provide address translation from private IP space to the limited public IP addresses. By utilizing vCloud Networking and Security Edge gateway services, customers can provide individual tenants with the ability to create their own pools of private IP addresses, which ultimately get mapped to the publicly routable external IP address of the external vCloud Networking and Security Edge gateway interface.

Figure 13 shows a three-tenant deployment, with each tenant's virtual machines connected to separate logical L2 networks. The blue, green and purple virtual wires (VXLAN segments) are connected to the three internal interfaces of the vCloud Networking and Security Edge gateway; the external interface of the vCloud Networking and Security Edge is connected to the Internet via a datacenter router.

Figure 13. NAT and DHCP Configuration on vCloud Networking and Security Edge Gateway (VXLAN 5000 = 192.168.1.0/24, VXLAN 5001 = 192.168.2.0/24 and VXLAN 5002 = 192.168.3.0/24 on internal interfaces 192.168.1.1, 192.168.2.1 and 192.168.3.1, with standard NAT configuration and DHCP service; external network 172.26.10.0/24 with gateway 172.26.10.1 toward the Internet)


The following are some configuration details of the vCloud Networking and Security Edge gateway:

• The blue, green and purple virtual wires (VXLAN segments) are associated with separate port groups on a VDS. Internal interfaces of the vCloud Networking and Security Edge gateway connect to these port groups.

• The vCloud Networking and Security Edge gateway interface connected to the blue virtual wire is configured with IP 192.168.1.1.

• DHCP service is enabled on this internal interface of the vCloud Networking and Security Edge by providing a pool of IP addresses, for example, 192.168.1.10 to 192.168.1.50.

• All the virtual machines connected to the blue virtual wire receive an IP address on that subnet from the DHCP service configured on the Edge.

• The NAT configuration on the external interface of the vCloud Networking and Security Edge gateway allows virtual machines on a virtual wire to communicate with devices on the external network. This communication is allowed only when the requests are initiated by the virtual machines connected to the internal interface of the vCloud Networking and Security Edge.
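The DHCP-plus-SNAT behavior just listed can be modeled in a few lines. This is a hypothetical sketch of the mechanics, not the product's implementation; the pool boundaries match the example above, and all names are invented.

```python
import ipaddress
import itertools

DHCP_POOL = [ipaddress.ip_address("192.168.1.10") + i for i in range(41)]  # .10-.50
EXTERNAL_IP = "172.26.10.10"   # Edge external interface in Figure 13
_leases = {}                   # vm_mac -> internal IP
_nat = {}                      # (internal_ip, src_port) -> external source port
_ports = itertools.count(1024)

def dhcp_offer(vm_mac: str) -> str:
    """Hand out the next free address from the pool on the blue virtual wire."""
    if vm_mac not in _leases:
        _leases[vm_mac] = str(DHCP_POOL[len(_leases)])
    return _leases[vm_mac]

def snat(internal_ip: str, src_port: int):
    """Outbound packets are rewritten to the Edge external IP; the mapping is
    remembered so that only VM-initiated flows receive return traffic."""
    key = (internal_ip, src_port)
    if key not in _nat:
        _nat[key] = next(_ports)
    return EXTERNAL_IP, _nat[key]

print(dhcp_offer("00:50:56:aa:bb:01"))   # -> 192.168.1.10
print(snat("192.168.1.10", 33000))       # -> ('172.26.10.10', 1024)
```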

In situations where overlapping IP and MAC address support is required, one vCloud Networking and Security Edge gateway per tenant is recommended. Figure 14 shows an overlapping IP address deployment with two tenants and two separate vCloud Networking and Security Edge gateways.

Figure 14. Overlapping IP and MAC Addresses (tenant 1 on VXLAN 5001 and tenant 2 on VXLAN 5000 both use 10.10.1.0/24, each behind its own vCloud Networking and Security Edge gateway, with external interfaces 10.10.10.1 and 10.10.20.1 toward the 10.10.0.0/16 external network and IP core)

Without Network Address Translation
Customers who are not limited by routable IP addresses, have virtual machines with public IP addresses or do not want to deploy NAT can use static routing on the vCloud Networking and Security Edge.


Figure 15. Routable IP Assignments to the Logical Networks (VXLAN 5000 = 172.26.1.0/24, VXLAN 5001 = 172.26.2.0/24 and VXLAN 5002 = 172.26.3.0/24 on internal interfaces 172.26.1.1, 172.26.2.1 and 172.26.3.1; external network 172.26.10.0/24 with gateway 172.26.10.1 toward the Internet)

In the deployment shown in Figure 15, the vCloud Networking and Security Edge gateway is not configured with the DHCP and NAT services. However, static routes are set up between the different interfaces of the vCloud Networking and Security Edge gateway.

Other Network Services
• In a multitenant environment, the vCloud Networking and Security Edge firewall can also be used to segment intertenant and intratenant traffic.

• The vCloud Networking and Security Edge load balancer can be used for load balancing external-to-internal Web traffic, for example, when multiple Web servers are deployed on the logical network. Static routes must be configured on the upstream router to properly route inbound traffic to the vCloud Networking and Security Edge external interface.

• vCloud Networking and Security Edge also provides DNS relay functionality to resolve domain names. The DNS relay configuration should point to an existing DNS server in the physical network. Alternatively, a DNS server can be deployed in the logical network itself.

Scaling Network Virtualization
In this section, we present design considerations for the different components while planning the scaling of VXLAN networks and associated network services. The following key components and parameters should be taken into account:

1) VDS:

• One vCenter Server can have 128 VDS.

• One VDS can span up to 500 hosts.

• One VDS can support 10,000 port groups. Because a new port group is created for every logical L2 network, this number dictates the number of logical L2 networks that can be created.


2) vCloud Networking and Security Edge gateway:

• Each vCloud Networking and Security Edge gateway can have a maximum of 10 interfaces, each of which can be configured to connect to an internal or external network. The number of logical networks requiring gateway services determines the number of gateway instances that must be deployed, based on the 10-interfaces-per-gateway maximum. For example, if one interface per gateway is connected to an external network (leaving 9 for internal networks), the number of gateway instances required for 90 logical L2 networks would be 90/9, that is, 10 vCloud Networking and Security Edge gateway devices. (A small sizing sketch follows this list.)

• It is available in three different sizes, based on capacity.

3) VXLAN Traffic:

• The planned virtual machine consolidation ratio should take into consideration the amount of virtual machine traffic that the VTEP must handle.

• Meet the bandwidth requirements for the VXLAN traffic by assigning sufficient NICs to it. To optimally utilize the uplinks, use link aggregation methods on the physical switches.

4) Multicast:

• Each VXLAN logical network is uniquely identified by a combination of a number called the segment ID (determined from a range defined by the user) and the configured multicast group. The multicast group–to–VXLAN segment ID mapping is handled by the vCloud Networking and Security Manager. There is no need to have a one-to-one mapping between segment IDs and multicast groups; in the case of a limited number of multicast groups, vCloud Networking and Security Manager maps multiple logical networks (segment IDs) to one multicast group.
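The two sizing calculations in this list, gateway count and segment-to-multicast-group mapping, are easy to script. In this sketch the gateway arithmetic follows the 90/9 example above, while the modulo mapping is only an assumed stand-in for vCloud Networking and Security Manager's actual assignment logic.

```python
import math

INTERFACES_PER_EDGE = 10  # maximum interfaces per Edge gateway (from the text)

def edge_gateways_needed(logical_networks: int, external_ifaces: int = 1) -> int:
    """Gateways needed when each Edge reserves `external_ifaces` uplinks and
    uses the remaining interfaces for internal (VXLAN) networks."""
    internal = INTERFACES_PER_EDGE - external_ifaces
    return math.ceil(logical_networks / internal)

def multicast_group_for(segment_id: int, groups: list) -> str:
    """Many-to-one mapping when multicast groups are scarce (assumed scheme)."""
    return groups[segment_id % len(groups)]

print(edge_gateways_needed(90))                                # -> 10, as above
print(multicast_group_for(5001, ["239.1.1.1", "239.1.1.2"]))   # shared group
```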

Consumption Models
After the VXLAN configuration has been completed, customers can create and consume logical L2 networks on demand. Depending on the type of vCloud Networking and Security bundle purchased, they have the following three options:

1) Use the vCloud Director interface.

2) Use the vCloud Networking and Security Manager interface.

3) Use REST APIs offered by vCloud Networking and Security products.

In vCloud Director
vCloud Director creates a VXLAN network pool implicitly for each provider VDC backed by VXLAN-prepared clusters. The total number of logical networks that can be created using a VXLAN network pool is determined by the configuration at the time of VXLAN fabric preparation. A cloud administrator can in turn distribute this total number to the various organization VDCs backed by the provider VDC. The quota allocated to an organization VDC determines the number of logical networks (organization VDC/VMware vSphere vApp™ networks) backed by VXLAN that can be created in that organization VDC.

In vCloud Networking and Security Manager
Customers who don't have a vCloud Director deployment can consume the logical L2 networks through the vCloud Networking and Security Manager Web interface or through the vSphere Client network virtualization plug-in.


Using API
In addition to vCloud Director and vCloud Networking and Security Manager, vCloud Networking and Security components can be managed using the REST APIs provided by VMware. For detailed information on how to use the APIs, refer to the vCloud Networking and Security 5.1 API Programming Guide at https://www.vmware.com/pdf/vshield_51_api.pdf.
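As a rough illustration of calling these northbound REST APIs from a script, the following Python sketch authenticates to vCloud Networking and Security Manager and lists VXLAN network scopes. The host and credentials are placeholders, and the endpoint path reflects our reading of the vShield 5.1 API; verify it against the programming guide linked above before relying on it.

```python
import requests  # third-party HTTP client (pip install requests)

VSM = "https://vsm.example.com"   # placeholder Manager hostname
AUTH = ("admin", "password")      # placeholder credentials

# List VXLAN network scopes. The /api/2.0/vdn/scopes path is our reading of
# the vShield 5.1 API programming guide; confirm it before use.
resp = requests.get(f"{VSM}/api/2.0/vdn/scopes", auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.text)  # XML describing the configured network scopes
```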

Troubleshooting and Monitoring
The following are some of the important tools that customers should use to troubleshoot and monitor the VXLAN network. These tools provide the required visibility into the encapsulated VXLAN traffic and also help manage the overall logical network infrastructure.

Network Health Check
Network Health Check enables proactive reporting of virtual and physical network configuration inconsistencies, reducing the operational costs involved in troubleshooting and fixing errors. It checks the following three parameters:

• VLAN IDs

• MTU settings

• Teaming configuration

VXLAN Connectivity Check – Unicast and Broadcast Tests
The unicast and broadcast tests available through the vCloud Networking and Security Manager enable customers to test the configuration across the virtual and physical infrastructure. They also enable verification that all VTEP configurations are correct and that each VTEP can reach the other VTEPs. A gateway address on the VTEP is required for this functionality to work. The VTEP IP address must be assigned using DHCP to configure the gateway, because static IP configuration on the VTEP via vCenter Server does not enable gateways to be configured. Proxy ARP on the upstream gateway/router is not a requirement.

Monitoring Logical Flows – IPFIX
NetFlow v10/IPFIX on VDS enables vendors to predefine custom NetFlow records. A new VXLAN template has been predefined to monitor traffic flows in logical networks. With this template, customers can monitor VXLAN flows at virtual machine–level granularity.

Port Mirroring
VDS provides multiple standard port mirroring features, such as SPAN, RSPAN and ERSPAN, that help in detailed traffic analysis.


Conclusion
The VMware network virtualization solution addresses the current challenges with the physical network infrastructure and brings flexibility, agility and scale through VXLAN-based logical networks. Along with the ability to create on-demand logical networks using VXLAN, the vCloud Networking and Security Edge gateway helps customers deploy various logical network services, such as firewall, DHCP, NAT and load balancing, on these networks. The operational tools provided as part of the solution help in the troubleshooting and monitoring of these overlay networks.


VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-WP-NETWORK-VIRT-GUIDE-USLET-101 Docsource: OIC - 12VM008.07