IMPLEMENTATION GUIDE
Copyright © 2011, Juniper Networks, Inc. 1
DESIGNING A LAYER 2 DATA CENTER NETWORK WITH THE QFABRIC ARCHITECTURE
How to Build a Data Center Network with the QFabric Products Acting as a Layer 2 Switch
Although Juniper Networks has attempted to provide accurate information in this guide, Juniper Networks does not warrant or guarantee the accuracy of the information provided herein. Third-party product descriptions and related technical details provided in this document are for information purposes only, and such products are not supported by Juniper Networks. All information provided in this guide is provided “as is”, with all faults, and without warranty of any kind, either expressed or implied or statutory. Juniper Networks and its suppliers hereby disclaim all warranties related to this guide and the information contained herein, whether expressed or implied or statutory including, without limitation, those of merchantability, fitness for a particular purpose and noninfringement, or arising from a course of dealing, usage, or trade practice.
IMPLEMENTATION GUIDE - Designing a Layer 2 Data Center Network with the QFabric Architecture
Table of Contents
Introduction
Scope
Design Considerations
QFabric Basics
    Node Groups
QFabric Architecture Configuration
    Defining Node Groups
        Example 1: SNG configuration
        Example 2: RSNG configuration
        Example 3: NNG configuration
    Interface Naming Conventions for QFabric Technology
    Interface Type Configuration
        Access Port
        Trunk Port
        Routed Interface
            Example 1: L3 routed port on NNG
            Example 2: RVI
        Interface Range
            Example 1: Unsupported interface range configuration
            Example 2: Supported interface range configuration
    LAG Configuration
        Example 1: Same member LAG configuration
        Example 2: Cross member LAG configuration
    VLAN Configuration
Design Use Cases
    Use Case 1: 10GbE Servers
        Common Configuration
            Step 1. Define the QFabric Node alias and NNG
            Step 2. Define 10 VLANs
            Step 3. LAG configuration for NNG connecting to MX Series and SRX Series
            Step 4. VLAN membership
            Step 5. FHR configuration
        Configuration for Rack Server Deployments or Blade Chassis with Pass-Through Module
            Configuration Steps for Dual-Attached Device
        Configuration for Blade Chassis with Blade Switches
            Configuration Steps for Single-Homed
            Configuration Steps for Dual-Homed, Active/Active
    Use Case 2: Mixture of 10GbE and 1GbE Connections
        Common Configuration
        Existing 1GbE Servers
        10GbE with 1GbE Management
        Server Configurations
Summary
About Juniper Networks
Table of Figures

Figure 1: Juniper’s data center solution with QFabric architecture, MX Series, SRX Series, vGW, and Junos Space
Figure 2: QFabric physical connection for data plane between QFabric Node and Interconnect and control plane through the control plane Ethernet (CPE)
Figure 3: LAG support between node groups
Figure 4: Different types of redundancy for rack servers
Figure 5: Different deployment scenarios with embedded blade switches in blade chassis
Figure 6: Use case 1 design diagram
Figure 7: Single-attached device
Figure 8: Dual-attached device
Figure 9: Dual-homed device
Figure 10: Single-homed with blade switches
Figure 11: Dual-homed (active/active) with blade switches
Figure 12: Use case 2 design diagram
Figure 13: EX4200 Virtual Chassis with RSNG
Figure 14: RSNG with EX3300 Virtual Chassis
Introduction
As people become more adept at employing virtualization technologies and as applications become more efficient,
the need for a high-performance and scalable data center infrastructure becomes ever more critical. Yet today’s
data center network architecture has too many layers and is often too rigid to meet those requirements. Juniper has
developed its new Juniper Networks® QFabric™ technology to address the inefficiencies of legacy data center networks.
QFabric technology reduces network complexity by reducing the number of switch layers and managed devices,
while providing optimal network utilization and a pay-as-you-grow model without compromising overall network
performance.
Scope
This document will discuss the design of a data center network where QFabric technology acts as the Layer 2 switch.
It will describe the overall network topology and provide relevant configuration templates implementing a QFabric
architecture.
The target audience for this document includes network architects, engineers, or operators, as well as individuals who
require related technical knowledge. Every effort has been made to make this document relevant to the widest possible
audience. It is assumed that the reader is familiar with Juniper Networks Junos® operating system and has a base level
of knowledge about QFabric architecture.
Design Considerations
One of the biggest challenges with today’s data center is keeping the network simple while enabling it to grow, and
doing this without making trade-offs. Adding new switches is the typical response to network growth, but that means
more devices to manage, and more switches can have a negative impact on network performance due to their location.
Juniper Networks has introduced the QFabric technology to address these challenges. QFabric technology has the
unique ability to reduce complexity by flattening the network to a single tier, providing any-to-any connectivity that
ensures no device is more than a single hop away from any other device. Increasing port counts within a QFabric
architecture does not increase complexity or add devices to manage, since all QFabric components are managed as a
single device.
Figure 1: Juniper’s data center solution with QFabric architecture, MX Series, SRX Series, vGW, and Junos Space
QFabric Basics
The QFabric architecture is composed of three separate devices—the Juniper Networks QFX3100 Director, QFX3008
Interconnect, and QFX3500 Node—that represent the basic components of a switch. Each component plays a vital role
in how the QFabric architecture works. The QFabric Director functions just like a Routing Engine in a modular switch,
where it is responsible for managing the other QFabric components as well as distributing forwarding tables to the
QFabric Nodes and QFabric Interconnects. The QFabric Interconnect is equivalent to a fabric, acting like the backplane
of the switch and providing a simple, high-speed transport that interconnects all of the QFabric Nodes in a full mesh
topology to provide any-to-any port connectivity. The QFabric Node is equivalent to a line card, providing an intelligent
edge that can perform routing and switching between connected devices.
Figure 2: QFabric physical connection for data plane between QFabric Node and Interconnect and control plane through the control plane Ethernet (CPE)

Node Groups
A node group is nothing more than an abstraction of a single QFabric Node, or a set of QFabric Nodes with similar attributes, that is logically grouped. Node groups are not bounded by physical location but by common traits. There are three different types of node groups: server node group (SNG), redundant server node group (RSNG), and network node group (NNG).

• An SNG is a single QFabric Node that is connected to servers, blade chassis, and storage devices; its ports may also be referred to as host-facing ports. Typically, host devices require only a subset of protocols¹ such as Link Aggregation Control Protocol (LACP) and Link Layer Discovery Protocol (LLDP), so SNGs only need to support host-type protocols. Layer 2 and Layer 3 networking protocols² such as Spanning Tree Protocol (xSTP) and OSPF are not supported and cannot be configured on SNG ports.

• An RSNG is similar to an SNG with a couple of differences. First, an RSNG requires two QFabric Nodes to be grouped. Second, it can support cross-member (node) link aggregation groups (LAGs), as shown in Figure 3.

• An NNG is a set of QFabric Nodes connected to WAN routers, other networking devices, or service appliances such as firewalls and server load balancers. Because such devices are connected to an NNG, all protocol stacks are available on these ports. The QFabric architecture requires at least one QFabric Node to be a member of an NNG; up to eight are allowed. The NNG designation does not limit connections to service appliances or networking devices; server and storage devices can also connect to an NNG.
Figure 3: LAG support between node groups
¹ Host-facing protocols are LLDP, LACP, Address Resolution Protocol (ARP), Internet Group Management Protocol (IGMP) snooping, and Data Center Bridging Exchange (DCBX).
² Networking-facing protocols are xSTP (denotes MSTP, RSTP, or VSTP), L3 unicast and multicast protocols, and IGMP.
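The per-type member limits above (one node for an SNG, exactly two for an RSNG, and one to eight for an NNG) can be captured in a short sketch. This is an illustrative Python helper of ours, not part of Junos or any Juniper tooling:

```python
# Illustrative check of the QFabric node group sizing rules described above.
# The limits come from this guide: SNG = exactly 1 node, RSNG = exactly 2,
# NNG = 1 to 8. This is a teaching sketch, not a Juniper tool.

MEMBER_LIMITS = {
    "SNG": (1, 1),   # single QFabric Node
    "RSNG": (2, 2),  # exactly two nodes, enables cross-member LAG
    "NNG": (1, 8),   # at least one node required, up to eight allowed
}

def validate_node_group(group_type: str, members: list[str]) -> bool:
    """Return True if the member count is legal for the group type."""
    lo, hi = MEMBER_LIMITS[group_type]
    return lo <= len(members) <= hi

print(validate_node_group("RSNG", ["row1-rack2", "row1-rack3"]))  # True
print(validate_node_group("SNG", ["row1-rack1", "row1-rack2"]))   # False
```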
Table 1: Node Groups Support Matrix

                                          SNG    RSNG   NNG
Max. members per node group                1      2      8
Max. node groups within the fabric        127     63     1
Same member LAG                           Yes    Yes    Yes
Cross-member LAG (active/active)          No     Yes    Yes
Host-facing protocols³                    Yes    Yes    Yes
Networking-facing protocols⁴              No     No     Yes
QFabric Architecture Configuration
This document will not cover the deployment or bring-up of the system; it is assumed that the QFabric architecture
has been brought up by a certified specialist and is ready to be configured. This section will cover defining node groups,
configuring port types (access or trunk), VLANs, LAGs, and VLAN membership.
All management and configuration is done through the QFabric Director. There is no need to go into individual QFabric
technology devices and configure them. The entire QFabric solution can be managed from a single IP address that is
shared by the QFabric Directors.
Defining Node Groups
Node groups are a new concept for the Junos operating system and are only relevant to QFabric technology. Therefore,
a new stanza has been introduced to help manage QFabric Nodes and node groups. By default, all QFabric Nodes
are identified by serial number, but managing the devices by serial number can be a challenge. To simplify the
management process, QFabric Nodes can be identified by more meaningful or intuitive descriptions such as physical
location (row and rack), as shown with the example below.
³ Host-facing protocols are LLDP, LACP, ARP, IGMP snooping, and DCBX.
⁴ Networking-facing protocols are xSTP, L3 unicast and multicast protocols, and IGMP.
[edit fabric]
netadmin@qfabric# set aliases node-device ABCD1230 row1-rack1
Just as in configuration mode, “fabric” has been introduced into the operational command to provide QFabric
architecture-related administrative show commands. Below is an example of a serial number-to-alias assignment. The
Connection and Configuration columns provide the current state of the QFabric Node.
netadmin@qfabric> show fabric administration inventory node-devices
Item          Identifier   Connection   Configuration
Node device
row1-rack1    ABCD1230     Connected    Configured
row1-rack2    ABCD1231     Connected    Configured
row1-rack3    ABCD1232     Connected    Configured
row21-rack1   ABCD1233     Connected    Configured
QFabric Nodes—even single devices—need to be assigned to a node group. Any arbitrary name can be assigned to an
SNG or RSNG. The NNG is the exception to this rule, as it already has a name, NW-NG-0, and it cannot be changed. A
QFabric Node can only be part of one node group; it cannot be part of two different node groups.
Typically, members within a node group are close in proximity, but that is not a requirement; members of a node group
can be in different parts of the data center.
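The one-group-per-node rule can be illustrated with a small sketch. The helper below is hypothetical (not a Junos command); it simply flags any QFabric Node that appears in more than one node group:

```python
# Illustrative check of the rule above: a QFabric Node may belong to
# exactly one node group. Hypothetical helper, not a Junos feature.

def find_conflicts(groups: dict[str, list[str]]) -> set[str]:
    """Return the nodes that appear in more than one node group."""
    seen, conflicts = set(), set()
    for members in groups.values():
        for node in members:
            if node in seen:
                conflicts.add(node)
            seen.add(node)
    return conflicts

groups = {
    "SNG-1": ["row1-rack1"],
    "RSNG-1": ["row1-rack2", "row1-rack3"],
    "NW-NG-0": ["row21-rack1"],
}
print(find_conflicts(groups))  # set() -> no node is in two groups
```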
Example 1: SNG configuration
[edit fabric]
netadmin@qfabric# set resources node-group SNG-1 node-device row1-rack1
Example 2: RSNG configuration
[edit fabric]
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack2
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack3
Note: Up to two QFabric Nodes can be part of an RSNG.
Example 3: NNG configuration
[edit fabric]
netadmin@qfabric# set resources node-group NW-NG-0 network-domain
netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1
Note: Up to eight QFabric Nodes can be part of an NNG.
A corresponding “show” command, shown below, provides overall node group membership and status.
netadmin@qfabric> show fabric administration inventory node-groups
Item          Identifier   Connection   Configuration
Node group
NW-NG-0                    Connected    Configured
  row21-rack1 ABCD1233     Connected    Configured
RSNG-1                     Connected    Configured
  row1-rack2  ABCD1231     Connected    Configured
  row1-rack3  ABCD1232     Connected    Configured
SNG-1                      Connected    Configured
  row1-rack1  ABCD1230     Connected    Configured
Another helpful command, “show fabric administration inventory,” combines both node device and node groups.
Interface Naming Conventions for QFabric Technology
The standard Junos OS port naming convention is a three-level identifier—interface_name-fpc/pic/port_no. The Flexible
PIC Concentrator (FPC) is the first level, and it provides slot location within the chassis. For QFabric architecture, the
three-level identification poses a big challenge for management because QFabric technology can scale to include up to
128 QFabric Nodes, and there is no concept of a “slot” with QFabric Nodes. Therefore, the interface naming convention
has been enhanced for QFabric technology to four levels, where a chassis-level identifier is added. The new interface
naming scheme is QFabric Node:interface_name-fpc/pic/port. The QFabric Node can either be the serial number or the
alias name that has been assigned.
netadmin@qfabric> show interfaces row1-rack1:xe-0/0/10
Physical interface: row1-rack1:xe-0/0/10, Enabled, Physical link is Up
  Interface index: 49182, SNMP ifIndex: 7340572
  Link-level type: Ethernet, MTU: 1514, Speed: 10Gbps, Duplex: Full-Duplex, BPDU Error: None,
  MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled
  Interface flags: Internal: 0x0
  CoS queues     : 12 supported, 12 maximum usable queues
  Current address: 84:18:88:d5:b3:42, Hardware address: 84:18:88:d5:b3:42
  Last flapped   : 2011-09-06 21:10:51 UTC (04:20:44 ago)
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)
Note: This interface naming convention only applies to physical interfaces. For logical interfaces such as LAGs, use
node-group:interface_name-fpc/pic/slot. Routed VLAN interfaces (RVIs) follow the standard naming convention used
by Juniper Networks EX Series Ethernet switches, which is vlan.x.
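To make the four-level scheme concrete, the sketch below parses a QFabric interface name into its components. The parser and its field names are our own illustration, not part of Junos:

```python
# Illustrative parser for the QFabric four-level interface name described
# above: node:interface_name-fpc/pic/port (e.g. "row1-rack1:xe-0/0/10").
# A teaching sketch only; not a Junos utility.

from typing import NamedTuple

class QFabricInterface(NamedTuple):
    node: str      # QFabric Node serial number or alias (the added fourth level)
    if_type: str   # media type, e.g. "xe" for 10GbE
    fpc: int
    pic: int
    port: int

def parse_interface(name: str) -> QFabricInterface:
    node, ifname = name.split(":", 1)          # node prefix comes first
    if_type, rest = ifname.split("-", 1)       # then the media type
    fpc, pic, port = (int(x) for x in rest.split("/"))
    return QFabricInterface(node, if_type, fpc, pic, port)

print(parse_interface("row1-rack1:xe-0/0/10"))
```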
Interface Type Configuration
The next few sections will cover common configurations: ports and VLANs. The QFabric architecture follows the same
configuration context as EX Series switches. Those who are familiar with configuring EX Series switches will find the
next few sections very familiar, with the only difference being the interface naming convention.
There are three different interface types: access, trunk, and routed interface. Just as with any other Junos OS platform,
interface configurations are done under the interfaces stanza. Access and trunk ports can be configured on any node
group. Routed interfaces are limited to RVIs or NNG ports.
Access Port
[edit interfaces]
netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching port-mode access
Note: Port-mode access is optional. If port mode is not defined, the default port mode is “access.”
The standard “show interfaces” command is available. Another helpful interface command for an L2 port is “show
ethernet-switching interfaces <QFabric Node:interface_name-fpc/pic/slot>.” An example output is shown below.
netadmin@qfabric> show ethernet-switching interfaces row1-rack1:xe-0/0/0 detail
Interface: row1-rack1:xe-0/0/0.0, Index: 82, State: up, Port mode: Access
Ether type for the interface: 0x8100
VLAN membership: default, untagged, unblocked
Number of MACs learned on IFL: 0
Trunk Port

[edit interfaces]
netadmin@qfabric# set row1-rack1:xe-0/0/1.0 family ethernet-switching port-mode trunk

Below is a sample “show” output on a trunk interface.

netadmin@qfabric> show ethernet-switching interfaces row1-rack1:xe-0/0/1 detail
Interface: LC2:xe-0/0/1.0, Index: 89, State: down, Port mode: Trunk
Ether type for the interface: 0x8100
Number of MACs learned on IFL: 0

Routed Interface

As mentioned earlier, routed interfaces can either be RVIs or L3 ports on an NNG. An RVI is a logical L3 interface that provides
routing between different networks. The following examples show a physical L3 interface configuration on an NNG
and an RVI configuration.

Example 1: L3 routed port on NNG

[edit interfaces]
netadmin@qfabric# set row21-rack1:xe-0/0/0.0 family inet address 1.1.1.1/24

Below is a sample “show interfaces” output for an L3 routed interface on an NNG.

netadmin@qfabric> show interfaces row21-rack1:xe-0/0/0
Physical interface: row1-rack4:xe-0/0/0, Enabled, Physical link is Up
  Interface index: 131, SNMP ifIndex: 1311224
  Link-level type: Ethernet, MTU: 1514, Speed: 10Gbps, Duplex: Full-Duplex, BPDU Error: None,
  MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled
  Interface flags: Internal: 0x4000
  CoS queues     : 12 supported, 12 maximum usable queues
  Current address: 84:18:88:d5:e7:0c, Hardware address: 84:18:88:d5:e7:0c
  Last flapped   : 2011-09-07 12:53:59 UTC (00:21:30 ago)
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

  Logical interface row21-rack1:xe-0/0/0.0 (Index 86) (SNMP ifIndex 1311280)
    Flags: 0x4000 Encapsulation: ENET2
    Input packets : 0
    Output packets: 1
    Protocol inet, MTU: 1500
      Destination: 1.1.1/24, Local: 1.1.1.1, Broadcast: 1.1.1.255
Example 2: RVI
Step 1. Configuring the RVI interface
[edit interfaces]
netadmin@qfabric# set vlan.1250 family inet address 10.83.100.1/24
Step 2. Binding the RVI interface to the VLAN
[edit]
netadmin@qfabric# set vlans v1250 l3-interface vlan.1250

Below is a sample “show interfaces” output for an RVI.
netadmin@qfabric> show interfaces vlan
Physical interface: vlan, Enabled, Physical link is Up
  Interface index: 128, SNMP ifIndex: 1311221
  Type: VLAN, Link-level type: VLAN, MTU: 1518, Speed: 1000mbps
  Link type      : Full-Duplex
  Current address: 84:18:88:d5:ee:05, Hardware address: 00:1f:12:31:7c:00
  Last flapped   : Never
  Input packets  : 0
  Output packets : 0

  Logical interface vlan.1250 (Index 88) (SNMP ifIndex 2622001)
    Flags: 0x4000 Encapsulation: ENET2
    Input packets : 0
    Output packets: 1
    Protocol inet, MTU: 1500
      Destination: 10.83.100/24, Local: 10.83.100.1, Broadcast: 10.83.100.255
Interface Range
To simplify configuration, Junos OS allows grouping a range of identical interfaces that share the same configuration.
This reduces the time and effort required to configure a large set of interfaces. The range can be defined with a
start interface and an end interface, or with a regular expression. Either method is supported, but an interface range is limited
to a single QFabric Node; a range cannot span multiple QFabric Nodes. The examples below show both supported
and unsupported configurations. For more information on interface ranges, please refer to the Junos OS Technical
Documentation.
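The single-node restriction amounts to checking that both endpoints of a member-range carry the same QFabric Node prefix, as in this illustrative sketch (ours, not a Junos validation):

```python
# Illustrative check of the interface-range rule described above: a
# member-range may not span QFabric Nodes, so the node prefix of the
# start and end interfaces must match. A teaching sketch only.

def range_is_supported(start: str, end: str) -> bool:
    """True if both endpoints sit on the same QFabric Node."""
    start_node, _ = start.split(":", 1)
    end_node, _ = end.split(":", 1)
    return start_node == end_node

print(range_is_supported("row1-rack1:xe-0/0/0", "row1-rack1:xe-0/0/47"))  # True
print(range_is_supported("row1-rack1:xe-0/0/0", "row1-rack3:xe-0/0/15"))  # False
```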
Example 1: Unsupported interface range configuration
Notice the member range starts on the QFabric Node named “row1-rack1” and ends on the QFabric Node named
“row1-rack3.” QFabric Nodes are identified by serial number or alias name; there is no concept of slot (module) sequence.
interfaces {
    interface-range dev-cluster {
        member-range row1-rack1:xe-0/0/0 to row1-rack3:xe-0/0/15;
        unit 0 {
            family ethernet-switching;
        }
    }
}
Example 2: Supported interface range configuration

interfaces {
    interface-range dev-cluster {
        member-range row1-rack1:xe-0/0/0 to row1-rack1:xe-0/0/47;
        member-range row1-rack2:xe-0/0/0 to row1-rack2:xe-0/0/47;
        member-range row1-rack3:xe-0/0/0 to row1-rack3:xe-0/0/15;
        unit 0 {
            family ethernet-switching;
        }
    }
}

LAG Configuration

Link aggregation provides link redundancy as well as increased bandwidth. QFabric supports both static and dynamic
LAGs, which can be configured on any QFabric Node. There are two typical LAG deployments: same member and cross
member. Same member LAGs terminate all LAG child members on the same QFabric Node. Cross member LAGs split
child members between node group members. As discussed in the Defining Node Groups section, same member LAGs
can be configured on any node group, while cross member LAGs are only supported on RSNGs and NNGs.

Table 2: Node Groups LAG Support Matrix

NODE GROUP   SAME MEMBER LAG   CROSS-MEMBER LAG (ACTIVE/ACTIVE)
SNG          Yes               No
RSNG         Yes               Yes
NNG          Yes               Yes

Example 1: Same member LAG configuration

Step 1. Define the number of supported LAGs per node group

While the example below is for an SNG named SNG-1, the same configuration is applicable to an RSNG or NNG; the
configuration just needs to reflect the correct node group name. All node groups support same member LAG
configuration.

netadmin@qfabric# set chassis node-group SNG-1 aggregated-devices ethernet device-count 1
Step 2. Assign the interface to a LAG interface
Note: The chassis identifier name is the QFabric Node.
[edit interfaces]
netadmin@qfabric# set row1-rack1:xe-0/0/46 ether-options 802.3ad ae0
netadmin@qfabric# set row1-rack1:xe-0/0/47 ether-options 802.3ad ae0
Step 3. Configure the LAG interface
All common LAG parameters across child LAG members are centralized on the LAG interface itself. These include LACP,
speed, duplex, etc. While the example below is for a Layer 2 interface, for Layer 3 the “family” needs to change from
ethernet-switching to inet, as L3 is only supported on the NNG. For static LAGs, omit the LACP configuration. One thing to
note is that the node identifier is the node group, not the QFabric Node.
[edit interfaces]
netadmin@qfabric# set SNG-1:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set SNG-1:ae0 unit 0 family ethernet-switching port-mode trunk
Some relevant commands for LAG:
• show lacp ## applicable to dynamic LAG only ##
• show interfaces terse | match node_group:interface_name ## example – SNG-1:ae0 ##
• show interfaces node_group:interface_name
Example 2: Cross-member LAG configuration
Step 1. Define the number of supported LAGs per node group
The example below is for an RSNG. The same configuration is applicable to an NNG.
netadmin@qfabric# set chassis node-group RSNG-1 aggregated-devices ethernet device-count 10
Step 2. Assign the interface to a LAG interface
Note: The interface name is the QFabric Node.
[edit interfaces]
netadmin@qfabric# set row1-rack2:xe-0/0/0 ether-options 802.3ad ae0
netadmin@qfabric# set row1-rack3:xe-0/0/0 ether-options 802.3ad ae0
Step 3. Configure the LAG interface
All common LAG parameters across child LAG members are centralized in the LAG interface itself, including LACP,
speed, and duplex. While the example shown below is for a Layer 2 interface, for Layer 3 the family needs to change
from ethernet-switching to inet, as L3 is only supported on the NNG. For static LAGs, omit the LACP configuration.
Note that the node identifier is the node group and not the QFabric Node.
[edit interfaces]
netadmin@qfabric# set RSNG-1:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set RSNG-1:ae0 unit 0 family ethernet-switching port-mode trunk
Some relevant commands for cross-member LAG:
• show lacp ## applicable to dynamic LAG only ##
• show interfaces terse | match node_group:interface_name ## example – RSNG-1:ae0 ##
• show interfaces node_group:interface_name
VLAN Configuration
VLANs allow users to control the size of a broadcast domain and, more importantly, to group ports in a Layer 2
switched network into the same broadcast domain as if they were connected to the same switch, regardless of their
physical location.

The QFabric architecture is no exception. VLANs can be contained within a single node group or spread across the
same or different types of node groups. The steps below outline how to define VLANs and assign VLAN port membership.
Step 1. Define the VLAN
VLANs are defined under the VLAN stanza. The minimum configuration is a VLAN name and vlan-id.
[edit vlans]
netadmin@qfabric# set default vlan-id 1
Below is an example of "show vlans" output. The asterisk (*) denotes that the interface is up.
netadmin@qfabric> show vlans
Name      Tag     Interfaces
default   1       row1-rack1:xe-0/0/0.0*, row1-rack1:xe-0/0/0.1*,
                  row1-rack2:xe-0/0/3.0*, RSNG-1:ae0.0*, NW-NG-0:ae0.0*
Step 2. VLAN port membership
If VLAN membership is not explicitly configured on an access port, the port falls back to the "default" VLAN. For
trunk ports, explicit configuration is required. There are two methods for assigning a port to a VLAN: port centric
and VLAN centric. Either method is valid, but if an interface range or group profile is not being used, then for ease
of VLAN management, Juniper recommends configuring VLAN membership for access ports with the VLAN-centric method
and for trunk ports with the port-centric method.
Method 1: VLAN centric
[edit vlans]
netadmin@qfabric# set default interface row1-rack1:xe-0/0/0.0
Method 2: Port centric
Either the vlan-name or vlan-id (802.1Q) can be used.
[edit interfaces]
netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching vlan members 1
Trunk Port
On trunk ports, VLAN ranges are supported for ease of configuration, e.g., 1-100. For nonsequential VLANs, enclose
the membership in square brackets and use spaces for separation, e.g., [1-10 21 50-100].
[edit interfaces]
netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching port-mode trunk vlan members [1-10 21 50-100]
In the above configuration, all VLANs are tagged on the interface. For hybrid trunks carrying both untagged and
tagged traffic, use the "native-vlan-id" keyword for the untagged VLAN. Below is an example trunk interface
configured for VLAN 1 to be untagged and VLANs 2-25 to be tagged. Note that 1 is not part of the "vlan members"
configuration.
[edit interfaces]
netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching port-mode trunk native-vlan-id 1 vlan members [2-25]
Some helpful VLAN membership commands are:
• show vlans
• show vlans vlan-name detail
• show ethernet-switching interfaces brief
• show ethernet-switching interfaces node_identifier:interface_name-fpc/pic/port
Below is an example of viewing the media access control (MAC) address table for the QFabric architecture.
netadmin@qfabric> show ethernet-switching table
Ethernet-switching table: 3 entries, 1 learned
  VLAN      MAC address         Type     Age   Interfaces
  default   *                   Flood    -     NW-NG-0:All-members
  default   00:10:db:ff:a0:01   Learn    51    NW-NG-0:ae0.0
  default   84:18:88:d5:ee:05   Static   -     NW-NG-0:Router
Additional useful MAC address table commands are:
• show ethernet-switching table summary
• show ethernet-switching table interface node_identifier:interface_name-fpc/pic/port
• show ethernet-switching table vlan
Design Use Cases
This section describes various QFabric architecture L2 design use cases. For cable deployment, there are a few
options: top-of-rack (TOR), middle-of-row (MOR), or end-of-row (EOR), each of which has pros and cons associated
with it. QFabric technology offers benefits with all three types of deployments, including lower cabling costs,
modularity and deployment flexibility, fewer devices to manage (one logical device), and a simplified L2 topology
that is spanning-tree free. While the QFabric architecture can be deployed as TOR, EOR, or MOR, the following
design use cases assume a TOR deployment.
How the rack server or blade chassis is connected to the TOR depends on the high availability strategy, specifically
whether it is at the application, server/network interface card (NIC), or network level. For rack servers, there are
three different types of connections and levels of redundancy, which are explained below.
• Single-attached—The server only has a single link connecting to the switch. In this model, there is either no
redundancy, or the redundancy is built into the application.
• Dual-attached—The server has two links connecting to the same switch. NIC teaming is enabled on the servers,
where it can be either active/standby or active/active. The second link provides the second level of redundancy. The
more common deployment is active/active with a static LAG between the switch and rack server.
• Dual-homed—The server has two links that connect to two different switches/modules in either an active/standby or
active/active mode. This is a third level of redundancy; in addition to link redundancy there is spatial redundancy. If
one of the switches fails, then there is an alternative path. In order to provide an active/active deployment, the NICs
need to be in different subnets; if they are sharing the same IP/MAC, then some form of stacking or multichassis LAG
technology needs to be supported on the switches so a LAG can be configured between the switches and server.
Figure 4: Different types of redundancy for rack servers
Depending on how the server is connected and how NIC teaming is implemented, the QFabric Node should be
configured with the appropriate node group. The table below provides a matrix showing the relationship between node
group and server connections.
Table 3: Node Group Selection Matrix for Rack Servers or Blade Switches with Pass-Through Modules
                  ACTIVE/PASSIVE    ACTIVE/ACTIVE
Single-attached   SNG               N/A
Dual-attached     SNG               SNG
Dual-homed        RSNG              RSNG
Network redundancy is not specific to TOR deployment; it also exists for MOR or EOR. The same deployment
principle applies to TOR, EOR, and MOR, with a minor exception for MOR or EOR where, in a dual-homed connection
scenario using modular switches, the second link can be connected to either a different module or a different chassis,
depending on cost and rack space.
In the case where blade chassis are used instead of rack servers, physical connectivity may vary depending on the
blade chassis intermediary connection: pass-through modules or blade switches. Juniper recommends a pass-through
module, as it provides a direct connection between the servers and the QFabric architecture. This direct connection
eliminates the oversubscription and the additional switching layer seen with blade switches. The deployment
options for pass-through are exactly the same as described for rack servers.
As for embedded blade switches, whatever the vendor, they all have one thing in common: they represent another
device to manage, which adds complexity to the overall switching topology. The diagram below shows the common
network deployments between the blade switches and access switches.
Figure 5: Different deployment scenarios with embedded blade switches in blade chassis
• Single-homed—Each blade switch has a LAG connection into a single access switch. In this deployment, there are no
L2 loops to worry about or manage.
• Dual-homed (active/backup)—In this deployment, each access switch is a standalone device. Since there are
potential L2 loops, the blade switch should support some sort of L2 loop prevention technology, such as STP or an
active/backup-like technology, which will effectively block any redundant link to break the L2 loop.
• Dual-homed (active/active)—This is the most optimized deployment, as all links between the blade and access
switches are active and forwarding, providing network resiliency. The connection between the blade switch and
access switch is a LAG, which means the external switches must support either multichassis LAG or some form of
stacking technology. Since a LAG is a single logical link between the blade and external switch, there are no L2
loops to worry about or manage.
Note: Figure 5 assumes that the blade switches are separate entities and are not daisy-chained or logically grouped
through a stacking technology.
Since the QFabric architecture is a distributed solution that acts as a single logical switch, the two most likely
deployments are single-homed or dual-homed (active/active). The QFabric Nodes will be configured as an SNG for
single-homed and an RSNG for dual-homed (active/active).
Table 4: Node Group Selection Matrix for Blade Chassis with Embedded Blade Switches
                             ACTIVE/PASSIVE    ACTIVE/ACTIVE
Single-homed                 SNG               N/A
Dual-homed (active/backup)   SNG or RSNG       SNG or RSNG
Dual-homed (active/active)   RSNG              RSNG
The first hop router (FHR) can be either the QFabric Node or one of the Juniper Networks MX Series 3D Universal Edge
Routers. Placement of the FHR depends on whether the VLAN needs to span geographically separate data centers. If so,
then from a design perspective, it is desirable for all the advanced features, such as virtual private LAN service
(VPLS) and the FHR, to be centrally located on the MX Series router, and for the QFabric Node to operate as a pure
L2 switch. If a VLAN stretch is not required, then the QFabric Node can be the FHR.
A third possible design is a hybrid in which the FHR is split between the MX Series router and the QFabric Node,
but that level of detail is out of the scope of this document. A separate L3 design and implementation guide can be
found on the QFabric product page at www.juniper.net.
Juniper Networks SRX Series Services Gateways, MX Series routers, and any other network service appliances are
connected to the NNG as LAGs.
Use Case 1: 10GbE Servers
In this use case, all data center connections are 10GbE. All servers have multiple virtual machines, with each virtual
machine in its own VLAN. In addition, the server will have three separate VLANs for storage, management, and VMware
vMotion.
Each server has the following configuration:
• Dual 10GbE ports
• Eight virtual machines, with each in its own VLAN
The VLANs will span across all QFabric Nodes.
Figure 6: Use case 1 design diagram
Common Configuration
Step 1. Define the QFabric Node alias and NNG
[edit fabric]
netadmin@qfabric# set aliases node-device ABCD1252 row21-rack1a
netadmin@qfabric# set aliases node-device ABCD1253 row21-rack1b
netadmin@qfabric# set resources node-group NW-NG-0 network-domain
netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1a
netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1b
Step 2. Define 10 VLANs
[edit vlans]
netadmin@qfabric# set v1100 vlan-id 1100
netadmin@qfabric# set v1101 vlan-id 1101
...
netadmin@qfabric# set v1109 vlan-id 1109
Step 3. LAG configuration: NNG connecting to MX Series and SRX Series
netadmin@qfabric# set chassis node-group NW-NG-0 aggregated-devices ethernet device-count 24
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row21-rack1:xe-0/0/[0-1]
netadmin@qfabric# set interface-range LAG-ae0 member row21-rack2:xe-0/0/[0-1]
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row21-rack1:xe-0/0/[2-3]
netadmin@qfabric# set interface-range LAG-ae1 member row21-rack2:xe-0/0/[2-3]
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
netadmin@qfabric# set interface-range LAG-ae3 member row21-rack1:xe-0/0/[4-5]
netadmin@qfabric# set interface-range LAG-ae3 member row21-rack2:xe-0/0/[4-5]
netadmin@qfabric# set interface-range LAG-ae3 ether-options 802.3ad ae3
netadmin@qfabric# set interface-range LAG-ae4 member row21-rack1:xe-0/0/[6-7]
netadmin@qfabric# set interface-range LAG-ae4 member row21-rack2:xe-0/0/[6-7]
netadmin@qfabric# set interface-range LAG-ae4 ether-options 802.3ad ae4
netadmin@qfabric# set NW-NG-0:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set NW-NG-0:ae1 aggregated-ether-options lacp active
netadmin@qfabric# set NW-NG-0:ae3 aggregated-ether-options lacp active
netadmin@qfabric# set NW-NG-0:ae4 aggregated-ether-options lacp active
Step 4. VLAN membership
[edit interfaces]
netadmin@qfabric# set NW-NG-0:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set NW-NG-0:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set NW-NG-0:ae3.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set NW-NG-0:ae4.0 family ethernet-switching port-mode trunk vlan members 1100-1109
Step 5. FHR configuration
While this document will not provide the command-line interface (CLI) configuration for the FHR, the FHR
can be either the QFabric Node or the MX Series router. If it is the QFabric Node, then configure RVIs for all of the
VLANs. If the MX Series router is the FHR, then the QFabric Node will be a pure L2 switch and the MX Series router
will be configured with integrated routing and bridging (IRB) interfaces and Virtual Router Redundancy Protocol
(VRRP). The VRRP mastership should alternate between the pair of MX Series routers for L3 traffic load balancing.
Configuration for Rack Server Deployments or Blade Chassis with Pass-Through Module
Configuration Steps for Single-Attached Device
Figure 7: Single-attached device
Step 1. Define the QFabric Node alias and SNG
[edit fabric]
netadmin@qfabric# set aliases node-device ABCD1231 row1-rack1
netadmin@qfabric# set aliases node-device ABCD1232 row1-rack2
netadmin@qfabric# set aliases node-device ABCD1233 row1-rack3
netadmin@qfabric# set aliases node-device ABCD1234 row1-rack4
netadmin@qfabric# set resources node-group SNG-1 node-device row1-rack1
netadmin@qfabric# set resources node-group SNG-2 node-device row1-rack2
netadmin@qfabric# set resources node-group SNG-3 node-device row1-rack3
netadmin@qfabric# set resources node-group SNG-4 node-device row1-rack4
Step 2. Configure interface and assign VLAN membership for single-attached server
[edit interfaces]
netadmin@qfabric# set interface-range Pod-1-server-ports member row1-rack1:xe-0/0/[0-47]
netadmin@qfabric# set interface-range Pod-1-server-ports member row1-rack2:xe-0/0/[0-47]
netadmin@qfabric# set interface-range Pod-1-server-ports member row1-rack3:xe-0/0/[0-47]
netadmin@qfabric# set interface-range Pod-1-server-ports unit 0 family ethernet-switching port-mode trunk vlan members 1100-1109
Configuration Steps for Dual-Attached Device
Figure 8: Dual-attached device
Step 1. Define QFabric Node alias and SNG
[edit fabric]
netadmin@qfabric# set aliases node-device ABCD1231 row1-rack1a
netadmin@qfabric# set aliases node-device ABCD1232 row1-rack1b
netadmin@qfabric# set aliases node-device ABCD1233 row1-rack2a
netadmin@qfabric# set aliases node-device ABCD1234 row1-rack2b
netadmin@qfabric# set resources node-group SNG-1 node-device row1-rack1a
netadmin@qfabric# set resources node-group SNG-2 node-device row1-rack1b
netadmin@qfabric# set resources node-group SNG-3 node-device row1-rack2a
netadmin@qfabric# set resources node-group SNG-4 node-device row1-rack2b
Step 2. Configure LAG
netadmin@qfabric# set chassis node-group SNG-1 aggregated-devices ethernet device-count 24
[repeat for other SNGs]
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1:xe-0/0/[0-1]
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1:xe-0/0/[2-3]
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
[repeat as needed]
Step 3. Configure interface and assign VLAN membership for dual-attached server
[edit interfaces]
netadmin@qfabric# set SNG-1:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set SNG-1:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
[repeat as needed]
Configuration Steps for Dual-Homed with NIC Teaming Configured for Active/Active
Figure 9: Dual-homed device
Step 1. Define QFabric Node alias and RSNG
netadmin@qfabric# set aliases node-device ABCD1231 row1-rack1a
netadmin@qfabric# set aliases node-device ABCD1232 row1-rack1b
netadmin@qfabric# set aliases node-device ABCD1233 row1-rack2a
netadmin@qfabric# set aliases node-device ABCD1234 row1-rack2b
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack1a
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack1b
netadmin@qfabric# set resources node-group RSNG-2 node-device row1-rack2a
netadmin@qfabric# set resources node-group RSNG-2 node-device row1-rack2b
Step 2. Configure LAG
netadmin@qfabric# set chassis node-group RSNG-1 aggregated-devices ethernet device-count 48
[repeat for other RSNGs]
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1a:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1b:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1a:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1b:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
[repeat as needed]
netadmin@qfabric# set RSNG-1:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set RSNG-1:ae1 aggregated-ether-options lacp active
[repeat as needed]
Step 3. Configure interface and assign VLAN membership for dual-homed server
[edit interfaces]
netadmin@qfabric# set RSNG-1:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set RSNG-1:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
[repeat as needed]
Configuration for Blade Chassis with Blade Switches
The oversubscription from the blade switch to the QFabric Node will dictate the cabling design. If the
oversubscription is high, there are fewer physical connections; in this case, EOR or MOR will be the deployment of
choice. If the oversubscription is low, then a TOR deployment could be implemented as long as there are enough
connections on a per-rack basis to justify it.
Note: Disable STP on the blade switches; otherwise, the ports on the xSNG will go down due to bridge protocol data
unit (BPDU) protection. STP is not required because the connections between the switches are LAGs.
Configuration Steps for Single-Homed
Figure 10: Single-homed with blade switches
Step 1. Define the QFabric Node alias and SNG
[edit fabric]
netadmin@qfabric# set aliases node-device ABCD1231 row1-rack1
netadmin@qfabric# set aliases node-device ABCD1232 row1-rack2
netadmin@qfabric# set aliases node-device ABCD1233 row1-rack3
netadmin@qfabric# set aliases node-device ABCD1234 row1-rack4
netadmin@qfabric# set resources node-group SNG-1 node-device row1-rack1
netadmin@qfabric# set resources node-group SNG-2 node-device row1-rack2
netadmin@qfabric# set resources node-group SNG-3 node-device row1-rack3
netadmin@qfabric# set resources node-group SNG-4 node-device row1-rack4
Step 2. Configure LAG
netadmin@qfabric# set chassis node-group SNG-1 aggregated-devices ethernet device-count 24
[repeat for other SNGs]
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1:xe-0/0/[0-1]
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1:xe-0/0/[2-3]
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
[repeat as needed]
Step 3. Configure interface and assign VLAN membership for the single-homed blade switch
[edit interfaces]
netadmin@qfabric# set SNG-1:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set SNG-1:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
[repeat as needed]
Configuration Steps for Dual-Homed, Active/Active
Figure 11: Dual-homed (active/active) with blade switches
Step 1. Define QFabric Node alias and RSNG
netadmin@qfabric# set aliases node-device ABCD1231 row1-rack1a
netadmin@qfabric# set aliases node-device ABCD1232 row1-rack1b
netadmin@qfabric# set aliases node-device ABCD1233 row1-rack2a
netadmin@qfabric# set aliases node-device ABCD1234 row1-rack2b
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack1a
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack1b
netadmin@qfabric# set resources node-group RSNG-2 node-device row1-rack2a
netadmin@qfabric# set resources node-group RSNG-2 node-device row1-rack2b
Use Case 2: Mixture of 10GbE and 1GbE Connections
This second use case is ideal for data centers that have a mixture of 10GbE and 1GbE. The 10GbE connections will be
mainly for data, storage, and vMotion, while the 1GbE connections are strictly for management. Each server has the
following configuration:
• 3 NICs—two 10GbE and one 1GbE
• Eight virtual machines per server, with each in its own VLAN
Figure 12: Use case 2 design diagram
Step 2. Configure LAG
netadmin@qfabric# set chassis node-group RSNG-1 aggregated-devices ethernet device-count 48
[repeat for other RSNGs]
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1a:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1b:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1a:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1b:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
[repeat as needed]
netadmin@qfabric# set RSNG-1:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set RSNG-1:ae1 aggregated-ether-options lacp active
[repeat as needed]
Step 3. Configure interface and assign VLAN membership for dual-homed server
[edit interfaces]
netadmin@qfabric# set RSNG-1:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set RSNG-1:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
[repeat as needed]
Common Configuration
Step 1. Define the QFabric Node alias and NNG
[edit fabric]
netadmin@qfabric# set aliases node-device ABCD1252 row21-rack1a
netadmin@qfabric# set aliases node-device ABCD1253 row21-rack1b
netadmin@qfabric# set resources node-group NW-NG-0 network-domain
netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1a
netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1b
Step 2. Define 10 VLANs
[edit vlans]
netadmin@qfabric# set v1100 vlan-id 1100
netadmin@qfabric# set v1101 vlan-id 1101
...
netadmin@qfabric# set v1109 vlan-id 1109
Step 3. LAG configuration: NNG connecting to MX Series and SRX Series
netadmin@qfabric# set chassis node-group NW-NG-0 aggregated-devices ethernet device-count 24
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row21-rack1:xe-0/0/[0-1]
netadmin@qfabric# set interface-range LAG-ae0 member row21-rack2:xe-0/0/[0-1]
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row21-rack1:xe-0/0/[2-3]
netadmin@qfabric# set interface-range LAG-ae1 member row21-rack2:xe-0/0/[2-3]
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
netadmin@qfabric# set interface-range LAG-ae3 member row21-rack1:xe-0/0/[4-5]
netadmin@qfabric# set interface-range LAG-ae3 member row21-rack2:xe-0/0/[4-5]
netadmin@qfabric# set interface-range LAG-ae3 ether-options 802.3ad ae3
netadmin@qfabric# set interface-range LAG-ae4 member row21-rack1:xe-0/0/[6-7]
netadmin@qfabric# set interface-range LAG-ae4 member row21-rack2:xe-0/0/[6-7]
netadmin@qfabric# set interface-range LAG-ae4 ether-options 802.3ad ae4
netadmin@qfabric# set NW-NG-0:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set NW-NG-0:ae1 aggregated-ether-options lacp active
netadmin@qfabric# set NW-NG-0:ae3 aggregated-ether-options lacp active
netadmin@qfabric# set NW-NG-0:ae4 aggregated-ether-options lacp active
Step 4. VLAN membership
[edit interfaces]
netadmin@qfabric# set NW-NG-0:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set NW-NG-0:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set NW-NG-0:ae3.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set NW-NG-0:ae4.0 family ethernet-switching port-mode trunk vlan members 1100-1109
Step 5. FHR configuration on the MX Series
While this document will not provide the CLI configuration, the MX Series router will be configured with IRB
interfaces for routing, with VRRP. The VRRP mastership should alternate between the pair of MX Series routers for
traffic load balancing.
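As a rough sketch only, one VLAN's IRB and VRRP configuration on the first MX Series router might look like the following; the addresses, VRRP group number, and bridge domain name are illustrative assumptions rather than configuration from this guide:

```
set interfaces irb unit 1100 family inet address 10.1.100.2/24 vrrp-group 100 virtual-address 10.1.100.1
set interfaces irb unit 1100 family inet address 10.1.100.2/24 vrrp-group 100 priority 200
set bridge-domains v1100 vlan-id 1100 routing-interface irb.1100
```

The second MX Series router would carry the same virtual address with a lower priority; alternating the higher priority between the two routers on a per-VLAN basis spreads the L3 load.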
Existing 1GbE Servers
The existing 1GbE servers will be connected to a Juniper Networks EX4200 Ethernet Switch in a Virtual Chassis
configuration, which provides a modular, chassis-like solution that can connect up to 480 10/100/1000BASE-T ports.
For more information and best practice guidelines for EX4200 Virtual Chassis configurations, please refer to the
Virtual Chassis Technology Best Practices guide at
http://www.juniper.net/us/en/local/pdf/implementation-guides/8010018-en.pdf. Since the QFabric Node will be
configured as an RSNG and will be an aggregation point for all of the EX4200 switches in the Virtual Chassis
configuration, the RSNG should be racked at a centralized location.
Note: Disable xSTP on the uplink ports of the EX4200 Virtual Chassis configuration connecting to the QFabric Node;
otherwise, the interface on the xSNG will be disabled due to BPDU protection.
Figure 13: EX4200 Virtual Chassis with RSNG
Step 1. Define QFabric Node alias and RSNG
netadmin@qfabric# set aliases node-device ABCD1231 row21-rack1c
netadmin@qfabric# set aliases node-device ABCD1232 row21-rack1d
netadmin@qfabric# set resources node-group RSNG-1 node-device row21-rack1c
netadmin@qfabric# set resources node-group RSNG-1 node-device row21-rack1d
Step 2. Configure LAG
netadmin@qfabric# set chassis node-group RSNG-1 aggregated-devices ethernet device-count 48
[repeat for other RSNGs]
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row21-rack1c:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 member row21-rack1d:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row21-rack1c:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 member row21-rack1d:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
[repeat as needed]
netadmin@qfabric# set RSNG-1:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set RSNG-1:ae1 aggregated-ether-options lacp active
[repeat as needed]
Step 3. Configure interface and assign VLAN membership for the dual-homed EX4200 Virtual Chassis
[edit interfaces]
netadmin@qfabric# set RSNG-1:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set RSNG-1:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
[repeat as needed]
Step 4. Configure the uplink port connectivity on the EX4200 Virtual Chassis
This document will not cover the CLI configuration for the EX4200 Virtual Chassis. The uplink ports
on the EX4200 Virtual Chassis should have xSTP disabled and be configured as a LAG with the appropriate VLAN
membership.
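As a hedged sketch of what that EX4200-side uplink configuration might look like, assuming two uplink module ports and LACP (the port numbers and the aggregated interface name are assumptions, not taken from this guide):

```
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/1/0 ether-options 802.3ad ae0
set interfaces xe-1/1/0 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk vlan members 1100-1109
set protocols rstp interface ae0 disable
```

Disabling RSTP on only the uplink LAG keeps spanning tree available on the server-facing ports while preventing BPDUs from being sent toward the QFabric Node.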
10GbE with 1GbE Management
Each rack will have three switches: two QFabric Nodes in an RSNG to provide dual-homed connectivity to the servers,
and a 1GbE switch for 1GbE management. The 1GbE switch will be a Juniper Networks EX3300 Ethernet Switch in a
Virtual Chassis configuration. The EX3300 Virtual Chassis provides cost-effective 10/100/1000BASE-T connectivity
and allows up to six interconnected EX3300 switches to operate as a single, logical device. The connection from the
EX3300 Virtual Chassis configuration can be to any RSNG within that row, provided both uplinks connect to the same
RSNG. Figure 14 shows an MOR type of connection between the EX3300 Virtual Chassis and the RSNG because it offers
the shortest cabling distance.
Figure 14: RSNG with EX3300 Virtual Chassis
Step 1. Define the QFabric Node alias and RSNG
netadmin@qfabric# set aliases node-device ABCD1231 row1-rack1a
netadmin@qfabric# set aliases node-device ABCD1232 row1-rack1b
netadmin@qfabric# set aliases node-device ABCD1233 row1-rack2a
netadmin@qfabric# set aliases node-device ABCD1234 row1-rack2b
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack1a
netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack1b
netadmin@qfabric# set resources node-group RSNG-2 node-device row1-rack2a
netadmin@qfabric# set resources node-group RSNG-2 node-device row1-rack2b
Step 2. Configure LAG
netadmin@qfabric# set chassis node-group RSNG-1 aggregated-devices ethernet device-count 48
[repeat for other SNGs]
[edit interfaces]
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1a:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 member row1-rack1b:xe-0/0/0
netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1a:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 member row1-rack1b:xe-0/0/1
netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1
[repeat as needed]
netadmin@qfabric# set RSNG-1:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set RSNG-1:ae1 aggregated-ether-options lacp active
[repeat as needed]
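Once the LAG configuration is committed and the attached server has LACP enabled on its bond, the bundle state can be verified from operational mode. A sketch, assuming the interface naming used above:

netadmin@qfabric> show lacp interfaces RSNG-1:ae0
netadmin@qfabric> show interfaces RSNG-1:ae0 terse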
Step 3. Configure interface and assign VLAN membership for dual-homed server
[edit interfaces]
netadmin@qfabric# set RSNG-1:ae0.0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@qfabric# set RSNG-1:ae1.0 family ethernet-switching port-mode trunk vlan members 1100-1109
[repeat as needed]
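The trunk configuration above references VLANs 1100 through 1109, which must also be defined under the [edit vlans] hierarchy for the commit to succeed. A minimal sketch, using hypothetical VLAN names:

[edit vlans]
netadmin@qfabric# set vlan-1100 vlan-id 1100
netadmin@qfabric# set vlan-1101 vlan-id 1101
[repeat for VLANs 1102-1109]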
Step 4. Configure the uplink port connectivity on the EX3300 Virtual Chassis
This document does not cover the CLI configuration of the EX3300 Virtual Chassis. The uplink ports on the EX3300 Virtual Chassis should have xSTP disabled and be configured as a LAG with the appropriate VLAN membership.
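As a rough sketch of the uplink configuration described above (the interface names are assumptions; on the EX3300 the 10GbE uplink ports are typically xe-n/1/0 through xe-n/1/3, where n is the Virtual Chassis member number):

[edit]
netadmin@ex3300# set chassis aggregated-devices ethernet device-count 1
netadmin@ex3300# set interfaces xe-0/1/0 ether-options 802.3ad ae0
netadmin@ex3300# set interfaces xe-1/1/0 ether-options 802.3ad ae0
netadmin@ex3300# set interfaces ae0 aggregated-ether-options lacp active
netadmin@ex3300# set interfaces ae0 unit 0 family ethernet-switching port-mode trunk vlan members 1100-1109
netadmin@ex3300# set protocols rstp interface ae0 disable

Spreading the two LAG members across different Virtual Chassis members, as shown, keeps the uplink available if a single EX3300 member fails.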
Server Configurations
For server configurations, refer to the previous section for either rack servers or blade chassis. Then refer to the appropriate section depending on the type of connection between the server or blade chassis and the QFabric Node.
Summary
The Juniper Networks QFabric architecture with the Juniper Networks QFX Series products provides a flexible solution for deploying a fabric across the data center, enabling unique network designs that simplify the data center network while maintaining any-to-any connectivity. The QFabric architecture fundamentally simplifies the data center network by reducing the number of managed devices and connections and by centralizing management.

Successful QFabric technology deployments can be accomplished by following the guidance in this design and implementation guide. The designs suggested in this document help establish complete data center solutions by integrating MX Series, EX Series, SRX Series, and Juniper Networks vGW Virtual Gateway products into the QFabric architecture.
8010082-002-EN Oct 2011
Copyright 2011 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
EMEA Headquarters
Juniper Networks Ireland
Airside Business Park
Swords, County Dublin, Ireland
Phone: 35.31.8903.600
EMEA sales: 00800.4586.4737
Fax: 35.31.8903.601
APAC Headquarters
Juniper Networks (Hong Kong)
26/F, Cityplaza One
1111 King’s Road
Taikoo Shing, Hong Kong
Phone: 852.2332.3636
Fax: 852.2574.7803
Corporate and Sales Headquarters
Juniper Networks, Inc.
1194 North Mathilda Avenue
Sunnyvale, CA 94089 USA
Phone: 888.JUNIPER (888.586.4737)
or 408.745.2000
Fax: 408.745.2100
www.juniper.net
Printed on recycled paper
To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.
About Juniper Networks
Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers,
Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking.
The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.