
    IMPLEMENTATION GUIDE

    Copyright 2011, Juniper Networks, Inc.

DESIGNING A LAYER 3 DATA CENTER NETWORK WITH THE QFABRIC ARCHITECTURE

How to Build a Data Center Network with QFabric Products Acting as a Layer 3 Switch

Although Juniper Networks has attempted to provide accurate information in this guide, Juniper Networks does not warrant or guarantee the accuracy of the information provided herein. Third party product descriptions and related technical details provided in this document are for information purposes only and such products are not supported by Juniper Networks. All information provided in this guide is provided "as is", with all faults, and without warranty of any kind, either expressed or implied or statutory. Juniper Networks and its suppliers hereby disclaim all warranties related to this guide and the information contained herein, whether expressed or implied or statutory, including, without limitation, those of merchantability, fitness for a particular purpose and noninfringement, or arising from a course of dealing, usage, or trade practice.


Table of Contents

Introduction . . . . . . . . . . 4

Scope . . . . . . . . . . 4

Design Considerations . . . . . . . . . . 4

QFabric Basics . . . . . . . . . . 5

Node Groups . . . . . . . . . . 6

QFabric Configuration . . . . . . . . . . 6

Defining Node Groups . . . . . . . . . . 7

Example 1: SNG configuration . . . . . . . . . . 7

Example 2: RSNG configuration . . . . . . . . . . 7

Example 3: NNG configuration . . . . . . . . . . 7

Interface Naming Conventions for QFabric Architecture . . . . . . . . . . 8

Interface Type Configuration . . . . . . . . . . 8

Access Port . . . . . . . . . . 9

Trunk Port . . . . . . . . . . 9

Routed Interface . . . . . . . . . . 9

Layer 3 LAG Configuration . . . . . . . . . . 10

VLAN Configuration . . . . . . . . . . 12

Trunk Port . . . . . . . . . . 13

Design Use Cases . . . . . . . . . . 14

Connecting Layer 3 Device to QFabric Architecture . . . . . . . . . . 16

Route Lookup and Forwarding Decisions . . . . . . . . . . 16

QFabric and VRRP . . . . . . . . . . 16

Layer 3 Design Use Cases . . . . . . . . . . 17

Use Case 1: Static Default Route Configuration . . . . . . . . . . 17

Use Case 2: Putting QFabric Architecture into an OSPF Area . . . . . . . . . . 19

Use Case 3: Putting QFabric Architecture into OSPF Stub Area . . . . . . . . . . 22

Use Case 4: Connecting One-Armed SRX Series Device as Active/Active with QFabric Architecture . . . . . . . . . . 22

Use Case 5: Connecting One-Armed SRX Series as Active/Backup with QFabric Architecture . . . . . . . . . . 25

Use Case 6: Connecting One-Armed SRX Series Gateway to QFabric Architecture (VRF-based Steering Mode) . . . . . . . . . . 28

Use Case 7: QFabric Architecture Back-to-Back Extension with L3 LAG . . . . . . . . . . 29

Summary . . . . . . . . . . 31

About Juniper Networks . . . . . . . . . . 31


Table of Figures

Figure 1: Juniper's data center solution with QFabric architecture, MX Series, SRX Series, vGW Virtual Gateway, and Junos Space . . . . . . . . . . 5

Figure 2: QFabric logical and physical configuration . . . . . . . . . . 5

Figure 3: LAG support between node groups . . . . . . . . . . 6

Figure 4: Different types of redundancy for rack servers . . . . . . . . . . 14

Figure 5: Different deployment scenarios with embedded blade switches in blade chassis . . . . . . . . . . 15

Figure 6: Layer 3 devices can be located anywhere in the QFabric architecture . . . . . . . . . . 16

Figure 7: NNG connecting to MX Series with LAG . . . . . . . . . . 17

Figure 8: QFabric technology in OSPF area 0 . . . . . . . . . . 19

Figure 9: SRX Series one-armed deployment in a two-tier architecture . . . . . . . . . . 22

Figure 10: One-armed SRX Series active/active deployment with QFabric technology . . . . . . . . . . 23

Figure 11: One-armed SRX Series active/active deployment with QFabric architecture . . . . . . . . . . 26

Figure 12: Applying security policy to inter-VRF routing on QFabric architecture . . . . . . . . . . 28

Figure 13: Back-to-back extension with LAG . . . . . . . . . . 29


    Introduction

As people become more adept at employing virtualization technologies, and as applications become more efficient, the need for a high-performance and scalable data center infrastructure becomes increasingly critical. Today's data center network architecture has too many layers and is too rigid to meet those requirements. Juniper has developed a new technology called Juniper Networks QFabric architecture that addresses the inefficiencies of legacy data center networks. QFabric technology eliminates network complexity by reducing the number of switch layers and managed devices, while providing optimal network utilization and a pay-as-you-grow model that doesn't compromise overall network performance.

    Scope

This document will discuss the design of a data center network where the QFabric architecture acts as the Layer 3 switch. It will describe the overall network topology and provide relevant configuration templates for QFabric solutions.

The target audiences for this document are architects, network engineers or operators, and individuals who require technical knowledge, although every effort has been made to make this document appeal to the widest possible audience. It is assumed that the reader is familiar with the Juniper Networks Junos operating system and is knowledgeable about the QFabric family of products. Also, reading the "Designing a Layer 2 Data Center Network with the QFabric Architecture" implementation guide is highly recommended.

Design Considerations

One of the biggest challenges with today's data center is keeping the network simple while enabling it to grow without making uncomfortable trade-offs. Adding new switches is the typical response to network growth, but that means more devices to manage and, more importantly, a potentially negative impact on network performance due to switch locations.

Juniper Networks has introduced QFabric technology to address these challenges. QFabric technology has the unique ability to reduce complexity by flattening the network to a single tier, providing any-to-any connectivity that ensures every device is no more than a single hop away from any other device. Increasing port counts with QFabric architecture does not increase complexity or add devices to manage, since all QFabric solution components are managed as a single device.


Figure 1: Juniper's data center solution with QFabric architecture, MX Series, SRX Series, vGW Virtual Gateway, and Junos Space

QFabric Basics

Juniper Networks QFabric architecture is composed of three components: QFabric Director, QFabric Interconnect, and QFabric Node. Each component plays a vital role. The QFabric Director functions as a Routing Engine (RE) in a modular switch, where it is responsible for managing the overall QFabric system as well as distributing forwarding tables to the QFabric Nodes and QFabric Interconnects. The QFabric Interconnect is equivalent to a fabric, acting like the backplane of the switch and providing a simple, high-speed transport that interconnects all of the QFabric Nodes in a full-mesh topology to provide any-to-any port connectivity. The QFabric Node is equivalent to a line card, providing an intelligent edge that can perform routing and switching between connected devices.

Figure 2: QFabric logical and physical configuration



Node Groups

A node group is nothing more than an abstraction of a single QFabric Node or a set of QFabric Nodes that are logically grouped with similar attributes. Node groups are not bound by physical location but by common traits. There are three different types of node groups: server node group (SNG), redundant server node group (RSNG), and network node group (NNG).

SNG is a single QFabric Node that is connected to servers, blade chassis, and storage devices (its ports may also be referred to as host-facing ports). Typically, host devices require a subset of protocols¹ such as Link Aggregation Control Protocol (LACP) and Link Layer Discovery Protocol (LLDP). Therefore, SNGs only need to support host-type protocols. Layer 2 or Layer 3 networking protocols² such as Spanning Tree Protocol (xSTP) and OSPF are not supported and cannot be configured on SNG ports.

RSNG is similar to SNG with a couple of differences. First, an RSNG requires two QFabric Nodes to be grouped. Second, it can support cross-member (node) link aggregation groups (LAGs), as shown in Figure 3.

NNG is a set of QFabric Nodes connected to WAN routers, other networking devices, or service appliances such as firewalls or server load balancers. Because such devices will be connected to an NNG, all protocol stacks are available on these ports. The QFabric architecture requires at least one QFabric Node to be a member of an NNG (up to eight devices are allowed). While defined as an NNG, it does not limit connections to service appliances or networking devices; server and/or storage devices can also connect to an NNG.

Figure 3: LAG support between node groups

Table 1: Node Groups Support Matrix

NODE GROUP | MAX. NUMBER OF MEMBERS PER NODE GROUP | MAX. NUMBER OF NODE GROUPS WITHIN THE QFABRIC ARCHITECTURE | SAME-MEMBER LAG | CROSS-MEMBER LAG (ACTIVE/ACTIVE) | SUPPORT HOST-FACING PROTOCOLS³ | SUPPORT NETWORKING-FACING PROTOCOLS⁴
Single node group (SNG) | 1 | 127 | Yes | No | Yes | No
Redundant server node group (RSNG) | 2 | 63 | Yes | Yes | Yes | No
Network node group (NNG) | 8 | 1 | Yes | Yes | Yes | Yes

QFabric Configuration

This document will not go over the deployment or bring-up of the system. It is assumed that the QFabric architecture has already been brought up by a certified specialist and is ready to be configured. This section will cover how to define node groups and how to configure port types (access or trunk), VLANs, LAGs, and VLAN membership.

All management and configuration is done through the QFabric Director. There is no need to go into individual QFabric devices and configure them. The entire QFabric architecture can be managed from a single IP address that is shared by the QFabric Directors.

¹ Host-facing protocols are LLDP, LACP, Address Resolution Protocol (ARP), Internet Group Management Protocol (IGMP) snooping, and Data Center Bridging (DCBX).
² Network-facing protocols are xSTP, OSPF, L3 unicast and multicast protocols, and IGMP.
³ Host-facing protocols are LLDP, LACP, ARP, IGMP snooping, and DCBX.
⁴ Network-facing protocols are xSTP, L3 unicast and multicast protocols, and IGMP.



Defining Node Groups

Node groups are a new concept for the Junos operating system and are only relevant to QFabric technology. Therefore, a new stanza has been introduced to help manage QFabric Nodes and node groups. By default, all QFabric Nodes are identified by serial number. Serial numbers can be easily managed with a spreadsheet, and it is not humanly possible to manage them without one. QFabric Nodes can be aliased with a more meaningful name, such as the physical location of the QFabric Node (row and rack), as shown in the example below.

    [edit fabric]

    netadmin@qfabric# set aliases node-device ABCD1230 row1-rack1

Just as in configuration mode, fabric has been introduced into the operational commands to provide QFabric architecture-related administrative show commands. Below is an example of a serial number-to-alias assignment. The Connection and Configuration columns provide the current state of the QFabric Node.

    netadmin@qfabric> show fabric administration inventory node-devices

Item Identifier Connection Configuration

Node device

row1-rack1 ABCD1230 Connected Configured

row1-rack2 ABCD1231 Connected Configured

row1-rack3 ABCD1232 Connected Configured
row21-rack1 ABCD1233 Connected Configured

QFabric Nodes, even single devices, need to be assigned to a node group. Any arbitrary name can be assigned to an xNG. The NNG is the exception to this rule, as it already has a name (NW-NG-0) which cannot be changed. A QFabric Node can only be part of one node group type; it cannot be part of two different node groups.

Typically, members within a node group are close in proximity, but that is not a requirement. Members of a node group can be in different parts of the data center.

Example 1: SNG configuration

    [edit fabric]

    netadmin@qfabric# set resources node-group SNG-1 node-device row1-rack1

Example 2: RSNG configuration

    [edit fabric]

    netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack2

    netadmin@qfabric# set resources node-group RSNG-1 node-device row1-rack3

Note: Up to two QFabric Nodes can be part of an RSNG.

Example 3: NNG configuration

    [edit fabric]

    netadmin@qfabric# set resources node-group NW-NG-0 network-domain

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1

Note: Up to eight QFabric Nodes can be part of an NNG.


A corresponding show command, shown below, provides overall node group membership and status.

netadmin@qfabric> show fabric administration inventory node-groups

Item Identifier Connection Configuration

Node group

NW-NG-0 Connected Configured

row21-rack1 ABCD1233 Connected Configured

RSNG-1 Connected Configured

row1-rack2 ABCD1231 Connected Configured

row1-rack3 ABCD1232 Connected Configured

SNG-1 Connected Configured

row1-rack1 ABCD1230 Connected Configured

Another helpful command, show fabric administration inventory, combines both node devices and node groups.

Interface Naming Conventions for QFabric Architecture

The standard Junos OS port naming convention is a three-level identifier: interface_name-fpc/pic/port_no. The fpc is the first level, and it provides the slot location within the chassis. For QFabric architecture, the three-level identification poses a big challenge for management because QFabric technology can scale to include up to 128 QFabric Nodes, and there is no concept of a slot with QFabric Nodes. Therefore, the QFabric interface naming convention has been enhanced to include four levels, where a chassis-level identifier is added. The new interface name scheme is QFabric Node:interface_name-fpc/pic/port. The QFabric Node can be either the serial number or the alias name that has been assigned.

    netadmin@qfabric> show interfaces row1-rack1:xe-0/0/10

    Physical interface: row1-rack1:xe-0/0/10, Enabled, Physical link is Up

    Interface index: 49182, SNMP ifIndex: 7340572

    Link-level type: Ethernet, MTU: 1514, Speed: 10Gbps, Duplex: Full-Duplex,

    BPDU Error: None, MAC-REWRITE Error: None, Loopback: Disabled,

Source filtering: Disabled, Flow control: Disabled

Interface flags: Internal: 0x0

CoS queues : 12 supported, 12 maximum usable queues

Current address: 84:18:88:d5:b3:42, Hardware address: 84:18:88:d5:b3:42

Last flapped : 2011-09-06 21:10:51 UTC (04:20:44 ago)

Input rate : 0 bps (0 pps)
Output rate : 0 bps (0 pps)

Note: This interface naming convention only applies to physical interfaces. For logical interfaces such as LAGs, it is node-group:interface_name-fpc/pic/slot. Routed VLAN interfaces (RVIs) follow the standard naming convention used by Juniper Networks EX Series Ethernet Switches: vlan.x.
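For instance, the LAG and RVI interfaces configured later in this guide can be inspected with the usual operational commands. This is a minimal sketch; NW-NG-0:ae0 and vlan.1250 are simply the example names used in the LAG and RVI sections that follow:

netadmin@qfabric> show interfaces NW-NG-0:ae0 terse ## the LAG is owned by the node group, not by a single QFabric Node ##

netadmin@qfabric> show interfaces vlan.1250 terse ## the RVI uses the standard vlan.x naming ##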

Interface Type Configuration

The next few sections will cover common configurations: ports and VLANs. QFabric architecture follows the same configuration context as EX Series switches. Those who are familiar with configuring the EX Series will find the next few sections very familiar, with the only difference being the interface naming convention.

There are three different interface types: access, trunk, and routed interface. Just as with any other Junos OS platform, interface configurations are done under the interfaces stanza. Access and trunk ports can be configured on any node group. Routed interfaces are limited to RVIs or NNG ports.


Access Port

    [edit interfaces]

    netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching port-mode

    access

Note: Port mode access is optional. If the port mode is not defined, the default port mode is access.

The standard show interfaces command is available. Another helpful interface command for Layer 2 ports is show ethernet-switching interfaces. An example output is shown below:

    netadmin@qfabric> show ethernet-switching interfaces row1-rack1:xe-0/0/0 detail

    Interface: row1-rack1:xe-0/0/0.0, Index: 82, State: up, Port mode: Access

    Ether type for the interface: 0x8100

    VLAN membership:

    default, untagged, unblocked

    Number of MACs learned on IFL: 0

Trunk Port

[edit interfaces]

netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching port-mode trunk

Below is a sample show command output on a trunk interface:

    netadmin@qfabric> show ethernet-switching interfaces row1-rack1:xe-0/0/1 detail

    Interface: LC2:xe-0/0/1.0, Index: 89, State: down, Port mode: Trunk

    Ether type for the interface: 0x8100

    Number of MACs learned on IFL: 0

Routed Interface

As mentioned earlier, routed interfaces can either be RVIs or Layer 3 ports on the NNG. The RVI provides routing between VLANs as well as between physical routed interfaces on the NNG. The following example shows physical Layer 3 interface configurations on both the NNG and an RVI.

Example 1: L3 routed port on NNG

    [edit interfaces]

    netadmin@qfabric# set row21-rack1:xe-0/0/0.0 family inet address 1.1.1.1/24

Below is sample show interfaces output for a Layer 3 routed interface on an NNG:

    netadmin@qfabric> show interfaces row21-rack1:xe-0/0/0

    Physical interface: row1-rack4:xe-0/0/0, Enabled, Physical link is Up

    Interface index: 131, SNMP ifIndex: 1311224

    Link-level type: Ethernet, MTU: 1514, Speed: 10Gbps, Duplex: Full-Duplex,

BPDU Error: None, MAC-REWRITE Error: None, Loopback: Disabled,
Source filtering: Disabled, Flow control: Disabled

Interface flags: Internal: 0x4000

CoS queues : 12 supported, 12 maximum usable queues

Current address: 84:18:88:d5:e7:0c, Hardware address: 84:18:88:d5:e7:0c

Last flapped : 2011-09-07 12:53:59 UTC (00:21:30 ago)

    Input rate : 0 bps (0 pps)

    Output rate : 0 bps (0 pps)

    Logical interface row21-rack1:xe-0/0/0.0 (Index 86) (SNMP ifIndex 1311280)


    Flags: 0x4000 Encapsulation: ENET2

    Input packets : 0

    Output packets: 1

    Protocol inet, MTU: 1500

    Destination: 1.1.1/24, Local: 1.1.1.1, Broadcast: 1.1.1.255

Example 2: RVI

Step 1. Configure the RVI interface

[edit interfaces]

netadmin@qfabric# set vlan unit 1250 family inet address 10.83.100.1/24

Step 2. Bind the RVI interface to the VLAN

[edit]

netadmin@qfabric# set vlans v1250 l3-interface vlan.1250

Below is sample show interfaces output for an RVI:

    root@qfabric> show interfaces vlan

    Physical interface: vlan, Enabled, Physical link is Up

    Interface index: 128, SNMP ifIndex: 1311221

    Type: VLAN, Link-level type: VLAN, MTU: 1518, Speed: 1000mbps

    Link type : Full-Duplex

    Current address: 84:18:88:d5:ee:05, Hardware address: 00:1f:12:31:7c:00

Last flapped : Never

    Input packets : 0

    Output packets: 0

    Logical interface vlan.1250 (Index 88) (SNMP ifIndex 2622001)

    Flags: 0x4000 Encapsulation: ENET2

    Input packets : 0

    Output packets: 1

Protocol inet, MTU: 1500
Destination: 10.83.100/24, Local: 10.83.100.1, Broadcast: 10.83.100.255

Layer 3 LAG Configuration

Link aggregation provides link redundancy as well as increased bandwidth. QFabric architecture supports both static and dynamic LAGs, which can be configured on any QFabric Node. There are two typical LAG deployments: same member and cross member. Same member LAGs are those where all of the LAG child members terminate on the same QFabric Node. Cross member LAGs are those where the child member links are split between node group members. As discussed in the Defining Node Groups section, same member LAGs can be configured on any node group, while cross member LAGs are only supported on RSNGs and NNGs.

Table 2: Node Groups LAG Support Matrix

NODE GROUP | SAME-MEMBER LAG | CROSS-MEMBER LAG (ACTIVE/ACTIVE)
SNG | Yes | No
RSNG | Yes | Yes
NNG | Yes | Yes


Example 1: Same-member LAG configuration

Step 1. Define the number of supported LAGs per node group

While the example below is for an SNG named SNG-1, the same configuration is applicable to an RSNG or NNG; the configuration just needs to reflect the correct node group name. All node groups support same-member LAG configuration.

netadmin@qfabric# set chassis node-group SNG-1 aggregated-devices ethernet device-count 1

Step 2. Assign the interface to a LAG interface

Note: The chassis identifier name is the QFabric Node.

    [edit interfaces]

    netadmin@qfabric# set row1-rack1:xe-0/0/46 ether-options 802.3ad ae0

    netadmin@qfabric# set row1-rack1:xe-0/0/47 ether-options 802.3ad ae0

Step 3. Configure the LAG interface

All common LAG parameters across child LAG members, such as LACP, speed, duplex, and so on, are centralized to the LAG interface itself. While the example below is for a Layer 2 interface, for Layer 3 the family needs to change from ethernet-switching to inet (L3 is only supported on NNG). For static LAGs, omit the LACP configuration. One thing to note is that the node identifier is the node group, not the QFabric Node.

    [edit interfaces]

    netadmin@qfabric# set SNG-1:ae0 aggregated-ether-options lacp active

    netadmin@qfabric# set SNG-1:ae0 unit 0 family ethernet-switching port-mode trunk

Some relevant commands for LAG:

show lacp ## applicable to dynamic LAG only ##

show interface terse | match node_group:interface_name ## example SNG-1:ae0 ##

    show interface node_group:interface_name

Step 4. Assign an IP address to the LAG interface
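The guide does not list commands for this step, so the following is a minimal sketch; it assumes the Layer 3 LAG terminates on the network node group (the inet family is only supported on NNG ports) and reuses the illustrative address from the cross-member example later in this section:

[edit interfaces]

netadmin@qfabric# set NW-NG-0:ae0.0 family inet address 192.168.0.1/24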

Example 2: Cross-member LAG configuration

Step 1. Define the number of supported LAGs per network node group

    netadmin@qfabric# set chassis node-group NW-NG-0 aggregated-devices ethernet

    device-count 10

Step 2. Assign the interface to a LAG interface

Note: The interface name is the QFabric Node.

    [edit interfaces]

netadmin@qfabric# set row1-rack2:xe-0/0/0 ether-options 802.3ad ae0
netadmin@qfabric# set row1-rack3:xe-0/0/0 ether-options 802.3ad ae0


Step 3. Configure the LAG interface and assign it an IP address

All common LAG parameters across child LAG members, such as LACP, speed, duplex, and so on, are centralized to the LAG interface itself. While the example below is for a Layer 2 interface, for Layer 3 the family needs to change from ethernet-switching to inet (L3 is only supported on NNG). For static LAGs, omit the LACP configuration. One thing to note is that the node identifier is the node group, not the QFabric Node.

    [edit interfaces]

    netadmin@qfabric# set NW-NG-0:ae0 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae0 unit 0 family ethernet-switching port-mode

    trunk

Some relevant commands for LAG:

show lacp ## applicable to dynamic LAG only ##

show interface terse | match node_group:interface_name ## example NW-NG-0:ae0 ##

    show interface node_group:interface_name

Once the LAG interface is configured as a Layer 2 link, change the family to inet and assign an IP address.

    [edit interfaces]

    netadmin@qfabric# set NW-NG-0:ae0.0 family inet address 192.168.0.1/24

VLAN Configuration

VLANs allow users to control the size of a broadcast domain and, more importantly, to group ports in a Layer 2 switched network into the same broadcast domain as if they were connected to the same switch, regardless of their physical location.

QFabric architecture is no exception. VLANs can be contained to a single node group or spread across the same and/or different types of node groups. The steps below outline how to define VLANs and assign VLAN port membership.

Step 1. Define the VLAN

VLANs are defined under the vlans stanza. The minimum configuration is a VLAN name and vlan-id.

    [edit vlans]

    netadmin@qfabric# set default vlan-id 1

Below is an example of show vlans output. The asterisk denotes that the interface is up.

    netadmin@qfabric> show vlans

    Name Tag Interfaces

    default 1

row1-rack1:xe-0/0/0.0*, row1-rack1:xe-0/0/0.1*, row1-rack2:xe-0/0/3.0*,
RSNG-1:ae0.0*, NW-NG-0:ae0.0*

Step 2. VLAN port membership

If VLAN membership is not explicitly configured on an access port, the port reverts to the default VLAN. For trunk ports, explicit configuration is required. There are two methods for assigning a port to a VLAN: port centric and VLAN centric. Either method is valid, but if interface ranges or group profiles are not being used, then for ease of VLAN management Juniper recommends configuring VLAN membership under the VLAN method for access ports and under the port method for trunk ports.


Method 1: VLAN centric

    [edit vlans]

    netadmin@qfabric# set default interface row1-rack1:xe-0/0/0.0

Method 2: Port centric

Either the vlan-name or the vlan-id (802.1Q tag) can be used.

    [edit interfaces]

    netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching vlan

    members 1

Trunk Port

On trunk ports, VLAN ranges are supported for ease of configuration (e.g., 1-100). For nonsequential VLANs, enclose the membership in square brackets and use a space for separation (e.g., [1-10 21 50-100]).

    [edit interfaces]

    netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching port-mode

    trunk vlan members [1-10 21 50-100]

In the above configuration, all VLANs are tagged on the interface. For hybrid trunks carrying both untagged and tagged traffic, use the native-vlan-id keyword for the untagged VLAN. Below is an example trunk interface configured for VLAN 1 to be untagged and VLANs 2-25 to be tagged. Note that VLAN 1 is not part of the vlan members configuration.

    [edit interfaces]

    netadmin@qfabric# set row1-rack1:xe-0/0/0.0 family ethernet-switching port-mode

    trunk native-vlan-id 1 vlan members [2-25]

Some helpful VLAN membership commands are:

show vlans

show vlans vlan-name detail

show ethernet-switching interfaces brief

show ethernet-switching interfaces node_identifier:interface_name-fpc/pic/port

Below is an example of the media access control (MAC) address table for the QFabric:

    netadmin@qfabric> show ethernet-switching table

    Ethernet-switching table: 3 entries, 1 learned

    VLAN MAC address Type Age Interfaces

    default * Flood - NW-NG-0:All-members

    default 00:10:db::a0:01 Learn 51 NW-NG-0:ae0.0

    default 84:18:88:d5:ee:05 Static - NW-NG-0:Router

Additional useful MAC address table commands include:

show ethernet-switching table summary

show ethernet-switching table interface node_identifier:interface_name-fpc/pic/port

show ethernet-switching table vlan


Design Use Cases

This section will describe various Layer 3 design use cases deploying QFabric technology. For cabling deployment, there are a few options: top-of-rack (TOR), middle-of-row (MOR), or end-of-row (EOR), each of which has pros and cons. QFabric architecture offers benefits with all three types of deployments, including lower cabling costs, modularity and deployment flexibility, as well as fewer (one logical) devices to manage and a simplified STP-free Layer 2 topology. While QFabric architecture can be deployed as TOR, EOR, or MOR, for the following design use cases the deployment of choice will be TOR.

How the rack server or blade chassis is connected to the TOR depends on the high availability strategy; i.e., is it at the application, server/network interface card (NIC), or network level? For rack servers, there are three different types of connections and levels of redundancy, which are explained below.

Single-attached: The server only has a single link connecting to the switch. In this model, there is either no redundancy, or the redundancy is built into the application.

Dual-attached: The server has two links connecting to the same switch. NIC teaming is enabled on the servers, where it can be either active/standby or active/active. The second link provides the second level of redundancy. The more common deployment is active/active with a static LAG between the switch and the rack server.

Dual-homed: The server has two links that connect to two different switches/modules in either an active/standby or active/active mode. This is a third level of redundancy; in addition to link redundancy there is spatial redundancy. If one of the switches fails, then there is an alternate path. In order to provide an active/active deployment, the NICs need to be in different subnets. If they are sharing the same IP/MAC, then some form of stacking or multichassis LAG technology needs to be supported on the switches so that a LAG can be configured between the switches and the server.

Figure 4: Different types of redundancy for rack servers

Depending on how the servers are connected and how NIC teaming is implemented, the QFabric Node should be configured with the appropriate node group. The table below shows the relationship between node group and server connections.

Table 3: Node Group Selection Matrix for Rack Servers or Blade Switches with Pass-Through Modules

CONNECTION | ACTIVE/PASSIVE | ACTIVE/ACTIVE
Single-attached | SNG | N/A
Dual-attached | SNG | SNG
Dual-homed | RSNG | RSNG



Network redundancy is not specific to TOR deployments, as it also exists for MOR or EOR. The same deployment principles apply to TOR, EOR, and MOR, with minor exceptions for MOR or EOR where, in a dual-homed connection scenario using modular switches, the second link can be connected to either a different module or a different chassis, depending on cost and rack space.

In the case where blade chassis are used instead of rack servers, physical connectivity may vary depending on the blade chassis intermediary connection: pass-through module or blade switches. Juniper recommends the pass-through module as it provides a direct connection between the servers and the QFabric architecture. This direct connection eliminates any oversubscription and the additional switching layer that is seen with blade switches. The deployment options for pass-through are exactly the same as described for rack servers.

As for blade switches, depending on the vendor, they all have one thing in common: they represent another device to manage, which adds complexity to the overall switching topology. Figure 5 shows the common network deployments between blade switches and access switches.

Figure 5: Different deployment scenarios with embedded blade switches in blade chassis

Single-homed: Each blade switch has a LAG connection into a single access switch. In this deployment, there are no Layer 2 loops to worry about or manage.

Dual-homed (active/backup): In this deployment, each access switch is a standalone device. Since there are potential Layer 2 loops, the blade switch should support some sort of Layer 2 loop prevention (STP or an active/backup-like technology), which will effectively block any redundant link to break the Layer 2 loop.

Dual-homed (active/active): This is the most optimized deployment, as all links between the blade and access switches are active and forwarding, providing network resiliency. The connection between the blade switch and the access switch is a LAG, which means the external switches must support either multichassis LAG or some form of stacking technology. Since the LAG is a single logical link between the blade and external switches, there are no Layer 2 loops to worry about or manage.

Note: Figure 5 assumes that the blade switches are separate entities and are not daisy-chained or logically grouped through a stacking technology.

Since QFabric architecture is a distributed system that acts as a single logical switch, the two most likely deployments are single-homed or dual-homed (active/active). The QFabric Nodes will be configured as an SNG for single-homed and an RSNG for dual-homed (active/active).
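For the dual-homed (active/active) case, the two QFabric Nodes facing the blade switch are grouped into an RSNG and the downlinks are bundled into a cross-member LAG. The following is a minimal sketch that reuses RSNG-1 and the LAG syntax shown earlier; the port numbers and the ae0 bundle are illustrative:

netadmin@qfabric# set chassis node-group RSNG-1 aggregated-devices ethernet device-count 1

[edit interfaces]

netadmin@qfabric# set row1-rack2:xe-0/0/10 ether-options 802.3ad ae0
netadmin@qfabric# set row1-rack3:xe-0/0/10 ether-options 802.3ad ae0
netadmin@qfabric# set RSNG-1:ae0 aggregated-ether-options lacp active
netadmin@qfabric# set RSNG-1:ae0 unit 0 family ethernet-switching port-mode trunk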

Table 4: Node Group Selection Matrix for Blade Chassis with Embedded Blade Switches

CONNECTION | ACTIVE/PASSIVE | ACTIVE/ACTIVE
Single-homed | SNG | N/A
Dual-homed (active/backup) | SNG or RSNG | SNG or RSNG
Dual-homed (active/active) | RSNG | RSNG



In this document, the first hop router is the QFabric architecture. Use cases where a WAN edge router such as one of Juniper Networks MX Series 3D Universal Edge Routers, a security device such as one of Juniper Networks SRX Series Services Gateways, or any other service layer device (load balancer, WAN optimizer, service gateway) connects to the QFabric architecture as a Layer 3 device are discussed below.

Connecting Layer 3 Device to QFabric Architecture

When Layer 3 devices connect to the QFabric architecture, network node group ports must be used for the physical connection. Network node group members do not have to be deployed physically close together; they can span the data center. However, only eight QFabric Nodes can be in a network node group, and it is not possible to have multiple network node groups per QFabric architecture configuration.

Figure 6: Layer 3 devices can be located anywhere in the QFabric architecture

Route Lookup and Forwarding Decisions

In the QFabric architecture, all of the data plane intelligence is distributed to each QFabric Node. In other words, if a packet comes into one of the QFabric Nodes and requires a Layer 3 lookup, that QFabric Node consults its routing table and decides on the destination QFabric Node. The ingress QFabric Node then sends the packet over the 40 Gbps uplink. Once the egress QFabric Node receives the packet, it references its own Address Resolution Protocol (ARP) table to select the appropriate port.

QFabric and VRRP

In traditional data center architectures, Virtual Router Redundancy Protocol (VRRP) is typically required to provide gateway redundancy for Layer 3 devices. However, when moving onto the QFabric architecture, VRRP is not necessary, since a QFabric solution is a single logical switch, meaning that there is no need to have multiple devices running as gateways. Within a network node group, gateway high availability is already built in. For example, the SRX Series cluster in Figure 6 connects to two QFabric Nodes that are part of the NNG. To the SRX Series cluster, this is the same as connecting to different ports on different line cards of a single switch. These line cards and ports are fully synchronized at the QFabric Director level. There is no need to run protocols to ensure the switchover between devices; VRRP does not need to be configured within the network node group.



Layer 3 Design Use Cases

Use Case 1: Static Default Route Configuration

In the case where an MX Series device is present and provides most of the rich routing functionality, and the QFabric architecture just needs to provide basic routing, a static default route configuration applies. Three QFabric Nodes in the NNG provide inter-VLAN routing and upstream LAG access to redundant MX Series devices. On the MX Series side, there are two ways to provide one unique gateway IP to the QFabric architecture: one is Virtual Chassis technology on the MX Series, and the other is VRRP between the two devices.

Figure 7: NNG connecting to MX Series with LAG

Step 1. Define QFabric Node aliases and NNG


    [edit fabric]

    netadmin@qfabric# set aliases node-device ABCD1252 row21-rack1

    netadmin@qfabric# set aliases node-device ABCD1253 row21-rack2

    netadmin@qfabric# set aliases node-device ABCD1254 row21-rack3

    netadmin@qfabric# set resources node-group NW-NG-0 network-domain

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack2

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack3

Step 2. Define Layer 2 configuration

[edit vlans]

netadmin@qfabric# set v1100 vlan-id 1100

netadmin@qfabric# set v1101 vlan-id 1101

netadmin@qfabric# set v1102 vlan-id 1102

netadmin@qfabric# set v1103 vlan-id 1103

netadmin@qfabric# set v1104 vlan-id 1104


Step 3. Define the LAG configuration for the NNG connecting to the MX Series device

    [edit]

    netadmin@qfabric# set chassis node-group NW-NG-0 aggregated-devices ethernet

    device-count 24

    [edit interfaces]

    netadmin@qfabric# set interface-range LAG-ae0 member row21-rack1:xe-0/0/[0-1]

    netadmin@qfabric# set interface-range LAG-ae0 member row21-rack2:xe-0/0/[0-1]

    netadmin@qfabric# set interface-range LAG-ae0 member row21-rack3:xe-0/0/[0-1]

    netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0

    netadmin@qfabric# set interface-range LAG-ae1 member row21-rack1:xe-0/0/[2-3]

    netadmin@qfabric# set interface-range LAG-ae1 member row21-rack2:xe-0/0/[2-3]

    netadmin@qfabric# set interface-range LAG-ae1 member row21-rack3:xe-0/0/[2-3]

netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1

    netadmin@qfabric# set NW-NG-0:ae0 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae1 aggregated-ether-options lacp active

Step 4. Assign IP addresses to the LAG interfaces

    [edit interfaces]

    netadmin@qfabric# set NW-NG-0:ae0.0 family inet address 192.168.0.1/24

    netadmin@qfabric# set NW-NG-0:ae1.0 family inet address 192.168.0.2/24

Step 5. Configure RVIs for the five VLANs

    [edit interfaces]

netadmin@qfabric# set vlan unit 1100 family inet address 10.84.100.1/24

netadmin@qfabric# set vlan unit 1101 family inet address 10.85.100.1/24

netadmin@qfabric# set vlan unit 1102 family inet address 10.86.100.1/24

netadmin@qfabric# set vlan unit 1103 family inet address 10.87.100.1/24

netadmin@qfabric# set vlan unit 1104 family inet address 10.88.100.1/24

Step 6. Bind the RVI interfaces to the VLANs

[edit]

    netadmin@qfabric# set vlans v1100 l3-interface vlan.1100

    netadmin@qfabric# set vlans v1101 l3-interface vlan.1101

    netadmin@qfabric# set vlans v1102 l3-interface vlan.1102

    netadmin@qfabric# set vlans v1103 l3-interface vlan.1103

    netadmin@qfabric# set vlans v1104 l3-interface vlan.1104

Step 7. Configure a default route to the MX Series

[This assumes that 192.168.0.254 is the address of the MX Series Virtual Chassis configuration.]

[edit]

    netadmin@qfabric# set routing-options static route 0.0.0.0/0 next-hop

    192.168.0.254


Step 8. Verify the default route configuration

    netadmin@qfabric> show route terse

    inet.0: 16 destinations, 16 routes (16 active, 0 holddown, 0 hidden)

    + = Active Route, - = Last Active, * = Both

    A Destination P Prf Metric 1 Metric 2 Next hop AS path

    * 0.0.0.0/0 S 5 192.168.0.254

    * 10.84.100.0/24 D 0 NW-NG-0:vlan.1100

    * 10.84.100.1/32 L 0 Local

    * 10.85.100.0/24 D 0 NW-NG-0:vlan.1101

    * 10.85.100.1/32 L 0 Local

    * 10.86.100.0/24 D 0 NW-NG-0:vlan.1102

    * 10.86.100.1/32 L 0 Local

    * 10.87.100.0/24 D 0 NW-NG-0:vlan.1103

    * 10.87.100.1/32 L 0 Local

    * 10.88.100.0/24 D 0 NW-NG-0:vlan.1104

    * 10.88.100.1/32 L 0 Local

    * 192.168.0.0/24 D 0 NW-NG-0:ae0.0

    NW-NG-0:ae1.0

* 192.168.0.1/32 L 0 Local
* 192.168.0.2/32 L 0 Local

Note: The MX Series Virtual Chassis configuration will not be covered, since it is out of the scope of this document. Please visit www.juniper.net for more information about Virtual Chassis technology.

Use Case 2: Putting QFabric Architecture into an OSPF Area

Another use case is to run OSPF on the QFabric architecture. This scenario is applicable where the user wants more granular control over which routes are advertised and received. In the following example, QFabric technology is deployed in OSPF area 0, and the upstream MX Series devices advertise the default route.

Figure 8: QFabric technology in OSPF area 0



Step 1. Define QFabric Node aliases and NNG

    [edit fabric]

    netadmin@qfabric# set aliases node-device ABCD1252 row21-rack1

    netadmin@qfabric# set aliases node-device ABCD1253 row21-rack2

    netadmin@qfabric# set aliases node-device ABCD1254 row21-rack3

    netadmin@qfabric# set resources node-group NW-NG-0 network-domain

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack2

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack3

Step 2. Define five VLANs

[edit vlans]

netadmin@qfabric# set v1100 vlan-id 1100

netadmin@qfabric# set v1101 vlan-id 1101

netadmin@qfabric# set v1102 vlan-id 1102

netadmin@qfabric# set v1103 vlan-id 1103

netadmin@qfabric# set v1104 vlan-id 1104

Step 3. Define the LAG configuration for the NNG connecting to the MX Series device

    [edit]

    netadmin@qfabric# set chassis node-group NW-NG-0 aggregated-devices ethernet

    device-count 24

    [edit interfaces]

    netadmin@qfabric# set interface-range LAG-ae0 member row21-rack1:xe-0/0/[0-1]

    netadmin@qfabric# set interface-range LAG-ae0 member row21-rack2:xe-0/0/[0-1]

    netadmin@qfabric# set interface-range LAG-ae0 member row21-rack3:xe-0/0/[0-1]

    netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0

    netadmin@qfabric# set interface-range LAG-ae1 member row21-rack1:xe-0/0/[2-3]

    netadmin@qfabric# set interface-range LAG-ae1 member row21-rack2:xe-0/0/[2-3]

    netadmin@qfabric# set interface-range LAG-ae1 member row21-rack3:xe-0/0/[2-3]

netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1

    netadmin@qfabric# set NW-NG-0:ae0 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae1 aggregated-ether-options lacp active

Step 4. Assign IP addresses to the LAG interfaces

    [edit interfaces]

    netadmin@qfabric# set NW-NG-0:ae0.0 family inet address 192.168.0.2/30

    netadmin@qfabric# set NW-NG-0:ae1.0 family inet address 192.168.1.2/30

Step 5. Configure RVIs for the five VLANs

    [edit interfaces]

netadmin@qfabric# set vlan unit 1100 family inet address 10.84.100.1/24

netadmin@qfabric# set vlan unit 1101 family inet address 10.85.100.1/24

netadmin@qfabric# set vlan unit 1102 family inet address 10.86.100.1/24

netadmin@qfabric# set vlan unit 1103 family inet address 10.87.100.1/24

netadmin@qfabric# set vlan unit 1104 family inet address 10.88.100.1/24


Step 6. Bind the RVI interfaces to the VLANs

[edit]

    netadmin@qfabric# set vlans v1100 l3-interface vlan.1100

    netadmin@qfabric# set vlans v1101 l3-interface vlan.1101

    netadmin@qfabric# set vlans v1102 l3-interface vlan.1102

    netadmin@qfabric# set vlans v1103 l3-interface vlan.1103

    netadmin@qfabric# set vlans v1104 l3-interface vlan.1104

Step 7. Enable OSPF and include the LAG interfaces and RVIs in area 0

    [edit]

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae0.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae1.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface vlan.1100

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface vlan.1101

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface vlan.1102

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface vlan.1103

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface vlan.1104

Step 8. Verify OSPF neighbors

    [edit]

    root@SV-POC-QF> show ospf neighbor

Address Interface State ID Pri Dead

    192.168.0.3 NW-NG-0:ae0.0 Full 13.13.13.1 128 36

    192.168.0.4 NW-NG-0:ae1.0 Full 12.12.12.1 128 31

Step 9. Verify the routing table

    netadmin@qfabric> show route terse

    inet.0: 12 destinations, 12 routes (12 active, 0 holddown, 0 hidden)

    + = Active Route, - = Last Active, * = Both

    A Destination P Prf Metric 1 Metric 2 Next hop AS path

    * 10.84.100.0/24 D 0 NW-NG-0:vlan.1100

    * 10.84.100.1/32 L 0 Local

    * 10.85.100.0/24 D 0 NW-NG-0:vlan.1101

    * 10.85.100.1/32 L 0 Local

    * 10.86.100.0/24 D 0 NW-NG-0:vlan.1102

    * 10.86.100.1/32 L 0 Local

    * 10.87.100.0/24 D 0 NW-NG-0:vlan.1103

    * 10.87.100.1/32 L 0 Local

    * 10.88.100.0/24 D 0 NW-NG-0:vlan.1104

    * 10.88.100.1/32 L 0 Local

    * 192.168.0.0/24 D 0 NW-NG-0:ae0.0

    NW-NG-0:ae1.0

* 192.168.0.1/32 L 0 Local
* 192.168.0.2/32 L 0 Local

    * 0.0.0.0/0 O 10 1 >192.168.0.3

    192.168.0.4

    * 224.0.0.5/32 O 10 1 MultiRecv


Use Case 3: Putting QFabric Architecture into OSPF Stub Area

Another use case is to run the QFabric architecture in an OSPF stub area. This scenario is applicable where the user wants to minimize the routing table size.

Most of the configuration is the same as in Use Case 2. The only difference is to configure the OSPF area as a stub area at Step 7 and then add the RVI interfaces to the stub area. Note that it is also possible not to advertise summary routes into the stub area by adding the no-summaries option.

    [edit]

    netadmin@qfabric# set protocols ospf area 0.0.0.1 stub no-summaries

    netadmin@qfabric# set protocols ospf area 0.0.0.1 interface NW-NG-0:ae0.0

    netadmin@qfabric# set protocols ospf area 0.0.0.1 interface NW-NG-0:ae1.0

    netadmin@qfabric# set protocols ospf area 0.0.0.1 interface vlan.1100

    netadmin@qfabric# set protocols ospf area 0.0.0.1 interface vlan.1101

    netadmin@qfabric# set protocols ospf area 0.0.0.1 interface vlan.1102

    netadmin@qfabric# set protocols ospf area 0.0.0.1 interface vlan.1103

    netadmin@qfabric# set protocols ospf area 0.0.0.1 interface vlan.1104
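As a quick check (a minimal sketch; the exact output depends on how the area border routers are configured), the routing table of a totally stubby area should shrink to little more than the directly connected subnets plus the default route injected by the area border router:

netadmin@qfabric> show route protocol ospf terse ## expect an inter-area 0.0.0.0/0 default rather than the full route set ##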

Use Case 4: Connecting One-Armed SRX Series Device as Active/Active with QFabric Architecture

It is frequently required to connect firewalls to the core/aggregation device. The next two use cases will discuss how SRX Series Services Gateways can be deployed with QFabric solutions. The diagram below shows a typical deployment in which two Juniper Networks SRX5800 Services Gateway devices running in active/active mode connect to an EX Series/MX Series device in a one-armed fashion.

Figure 9: SRX Series one-armed deployment in a two-tier architecture

(Figure 9 shows SRX5800_A and SRX5800_B connected one-armed to an EX Series/MX Series core/edge tier, with EX4200 Virtual Chassis access switches below; one trunk carries VLANs 500, 1001, 1003, and 1005, and the other carries VLANs 600, 1000, 1002, and 1004.)


When a customer migrates to the QFabric architecture, the one-armed deployment appears as in Figure 10. There is no need to change the configuration on the SRX5800 side. The fundamental QFabric solution configuration is the same as on the EX Series/MX Series devices in Figure 9.

Figure 10: One-armed SRX Series active/active deployment with QFabric technology

In this example, SRX5800_A and SRX5800_B connect to the QFabric solution as one-armed devices, deployed as an active/active cluster.

    The first VLAN trunk handles VLANs 500, 1001, 1003, and 1005, while the second trunk handles VLANs 600, 1000, 1002, and 1004. This VLAN traffic is distributed across the SRX5800 cluster members SRX5800_A and SRX5800_B. A solid line denotes the primary link for a given VLAN, while a dotted line indicates the backup. With the virtual router functions of the QFabric architecture, inter-VLAN routing does not cross between the virtual routers, so the two groups remain completely isolated at the Layer 3 level, which is useful in a multi-tenant environment. VLANs 500 and 600 are used for uplink connections to the WAN edge router from the SRX Series under the set security zones security-zone uplink interfaces stanza. Here the first VLAN trunk is in virtual router instance 10 (VR10), while the second VLAN trunk is in VR20. In addition, RVI VLANs 500 and 600 are placed in the Core VR to provide the uplink connection to the WAN edge routers. Servers just need to send packets to the VRRP address on the SRX Series gateway in each RVI VLAN (1000 through 1005).

Note that SRX Series configuration details are not covered, since they are out of scope for this document. The following configuration examples focus on network node group configuration. Please review the previous use cases or the L2 design guide for server node group configuration information.
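
    For orientation only, a minimal, hypothetical sketch of the VRRP gateway that SRX5800_A could present in one server RVI VLAN is shown below. The xe-6/0/0 interface, VLAN 1000, the 10.1.0.0/24 subnet, the virtual address 10.1.0.1, and the servers zone name are all assumptions and not part of the validated design:

    [edit interfaces]

    netadmin@srx5800-a# set xe-6/0/0 vlan-tagging

    netadmin@srx5800-a# set xe-6/0/0 unit 1000 vlan-id 1000

    netadmin@srx5800-a# set xe-6/0/0 unit 1000 family inet address 10.1.0.2/24 vrrp-group 0 virtual-address 10.1.0.1

    netadmin@srx5800-a# set xe-6/0/0 unit 1000 family inet address 10.1.0.2/24 vrrp-group 0 priority 200

    netadmin@srx5800-a# set xe-6/0/0 unit 1000 family inet address 10.1.0.2/24 vrrp-group 0 accept-data

    [edit security]

    netadmin@srx5800-a# set zones security-zone servers interfaces xe-6/0/0.1000

    SRX5800_B would carry the same VRRP group with a lower priority, so it takes over as the gateway for that VLAN only if SRX5800_A fails.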

Step 1. Define QF/Node aliases and the NNG


    [edit fabric]

    netadmin@qfabric# set aliases node-device ABCD1252 row21-rack1

    netadmin@qfabric# set aliases node-device ABCD1253 row21-rack2

    netadmin@qfabric# set aliases node-device ABCD1254 row21-rack3

    netadmin@qfabric# set aliases node-device ABCD1255 row21-rack4

    netadmin@qfabric# set resources node-group NW-NG-0 network-domain

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack2

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack3

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack4

  • 8/2/2019 8010083-en

    24/32

    24 Copyright 2011, Juniper Networks, Inc.

    IMPLEMENTATION GUIDE - D esigning a Layer 3 Data Center Network with the QFaric Architecture

Step 2. Define VLANs

    [edit vlans]

    netadmin@qfabric# set v500 vlan-id 500

    netadmin@qfabric# set v600 vlan-id 600

    netadmin@qfabric# set v1000 vlan-id 1000

    netadmin@qfabric# set v1001 vlan-id 1001

    netadmin@qfabric# set v1002 vlan-id 1002

netadmin@qfabric# set v1003 vlan-id 1003

    netadmin@qfabric# set v1004 vlan-id 1004

    netadmin@qfabric# set v1005 vlan-id 1005

Step 3. Map VLANs to interfaces

set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching port-mode trunk

    set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching vlan members v1000

    set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching vlan members v1002

    set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching vlan members v1004

    set interfaces row21-rack3:xe-0/0/21 unit 0 family ethernet-switching vlan members v500

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching port-mode trunk

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching vlan members v1001

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching vlan members v1003

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching vlan members v1005

    set interfaces row21-rack4:xe-0/0/21 unit 0 family ethernet-switching vlan members v600

Step 4. Bind the RVI interfaces to the VLANs

[edit]

    netadmin@qfabric# set vlans v500 l3-interface vlan.500

    netadmin@qfabric# set vlans v600 l3-interface vlan.600

    netadmin@qfabric# set vlans v1000 l3-interface vlan.1000

    netadmin@qfabric# set vlans v1001 l3-interface vlan.1001

    netadmin@qfabric# set vlans v1002 l3-interface vlan.1002

    netadmin@qfabric# set vlans v1003 l3-interface vlan.1003

    netadmin@qfabric# set vlans v1004 l3-interface vlan.1004

    netadmin@qfabric# set vlans v1005 l3-interface vlan.1005

Step 5. Configure RVIs for VLANs 500 and 600

    [edit interfaces]

netadmin@qfabric# set vlan unit 500 family inet address 10.84.100.1/24

    netadmin@qfabric# set vlan unit 600 family inet address 10.84.101.1/24

  • 8/2/2019 8010083-en

    25/32

    Copyright 2011, Juniper Networks, Inc. 25

    IMPLEMENTATION GUIDE - D esigning a Layer 3 Data Center Network with the QFaric Architectur

Step 6. Create virtual router instances and include the RVIs

Configuring VR10

    netadmin@qfabric# set routing-instances VR-TEN instance-type virtual-router

    netadmin@qfabric# set routing-instances VR-TEN interface vlan.1001

    netadmin@qfabric# set routing-instances VR-TEN interface vlan.1003

    netadmin@qfabric# set routing-instances VR-TEN interface vlan.1005

netadmin@qfabric# set routing-instances VR-TEN protocols ospf area 0.0.0.0 interface all

Configuring VR20

    netadmin@qfabric# set routing-instances VR-TWENTY instance-type virtual-router

    netadmin@qfabric# set routing-instances VR-TWENTY interface vlan.1000

    netadmin@qfabric# set routing-instances VR-TWENTY interface vlan.1002

    netadmin@qfabric# set routing-instances VR-TWENTY interface vlan.1004

netadmin@qfabric# set routing-instances VR-TWENTY protocols ospf area 0.0.0.0 interface all

Configuring the Core VR

netadmin@qfabric# set routing-instances core instance-type virtual-router

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack1:xe-0/0/10.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack1:xe-0/0/11.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack2:xe-0/0/10.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack2:xe-0/0/11.0

    netadmin@qfabric# set routing-instances core interface vlan.500

    netadmin@qfabric# set routing-instances core interface vlan.600

    netadmin@qfabric# set routing-instances core protocols ospf area 0.0.0.0 interface all
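
    The following operational commands can help verify the result (no output is shown here; the instance names match the configuration above):

    netadmin@qfabric> show ospf neighbor instance core

    netadmin@qfabric> show route table core.inet.0

    netadmin@qfabric> show route table VR-TEN.inet.0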

Use Case 5: Connecting a One-Armed SRX Series as Active/Backup with QFabric Architecture

The SRX Series can also be deployed in an active/backup manner. Again, the configuration is simple with QFabric technology because it uses the same approach as the EX Series switches. Users simply need to create VLANs for terminating server connections, create an RVI (VLAN 100 in this case, as shown in Figure 11) for the uplink connection, and put the RVI into L3 routing. The SRX Series devices are configured as the primary security gateway for their respective VLANs, so servers just need to send packets to the VRRP address of the SRX Series in each VLAN.

  • 8/2/2019 8010083-en

    26/32

    26 Copyright 2011, Juniper Networks, Inc.

    IMPLEMENTATION GUIDE - D esigning a Layer 3 Data Center Network with the QFaric Architecture

Figure 11: One-armed SRX Series active/backup deployment with QFabric architecture

    (Figure 11 shows SRX5800_A and SRX5800_B connected one-armed to the QFabric system; one trunk carries VLANs 1001, 1003, and 1005, the other carries VLANs 1000, 1002, and 1004, and VLAN 100 provides the uplink toward the WAN edge.)

    Step 1. Define QF/Node aliases and the NNG

    [edit fabric]

    netadmin@qfabric# set aliases node-device ABCD1252 row21-rack1

    netadmin@qfabric# set aliases node-device ABCD1253 row21-rack2

    netadmin@qfabric# set aliases node-device ABCD1254 row21-rack3

    netadmin@qfabric# set aliases node-device ABCD1255 row21-rack4

    netadmin@qfabric# set resources node-group NW-NG-0 network-domain

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack2

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack3

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack4

Step 2. Define VLANs

    [edit vlans]

    netadmin@qfabric# set v100 vlan-id 100

    netadmin@qfabric# set v1000 vlan-id 1000

    netadmin@qfabric# set v1001 vlan-id 1001

    netadmin@qfabric# set v1002 vlan-id 1002

    netadmin@qfabric# set v1003 vlan-id 1003

    netadmin@qfabric# set v1004 vlan-id 1004

    netadmin@qfabric# set v1005 vlan-id 1005


Step 3. Map VLANs to interfaces

set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching port-mode trunk

    set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching vlan members v1000

    set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching vlan members v1002

    set interfaces row21-rack3:xe-0/0/20 unit 0 family ethernet-switching vlan members v1004

    set interfaces row21-rack3:xe-0/0/21 unit 0 family ethernet-switching vlan members v100

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching port-mode trunk

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching vlan members v1001

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching vlan members v1003

    set interfaces row21-rack4:xe-0/0/20 unit 0 family ethernet-switching vlan members v1005

    set interfaces row21-rack4:xe-0/0/21 unit 0 family ethernet-switching vlan members v100

Step 4. Bind the RVI interface to the VLAN

[edit]

    netadmin@qfabric# set vlans v100 l3-interface vlan.100

Step 5. Configure the RVI for VLAN 100

    [edit interfaces]

netadmin@qfabric# set vlan unit 100 family inet address 10.84.100.1/24

Step 6. Enable OSPF and include the uplink interfaces and RVI in area 0

    [edit]

netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack1:xe-0/0/10.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack1:xe-0/0/11.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack2:xe-0/0/10.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack2:xe-0/0/11.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface vlan.100


Use Case 6: Connecting a One-Armed SRX Series Gateway to QFabric Architecture (VRF-Based Steering Mode)

If a customer would like to create a security zone per VRF and apply security policies to inter-VRF traffic, the QFabric system needs to act as the first-hop router, and the SRX Series is used for services delivery only. With this model, it is important to note that the QFabric solution routes a significantly higher volume of traffic that doesn't need services, which must be taken into consideration to avoid capacity or scaling problems. For example, Figure 12 shows that intra-VRF traffic (vlan.1001) won't hit the SRX Series, while inter-VRF traffic (between vlan.1000 and vlan.1004) will.

Figure 12: Applying security policy to inter-VRF routing on QFabric architecture

VR-ZONE-A contains VLANs 1000, 1001, and 1002. VR-ZONE-B includes VLANs 1003, 1004, and 1005. Each VR has a default route entry pointing to the SRX Series. The configuration is described below. Note that it is still necessary to have a separate VR instance to connect to the WAN edge, which is configured as the Core VR. (Only the VR portion is covered here.)

Configuring VR-ZONE-A


netadmin@qfabric# set routing-instances VR-ZONE-A instance-type virtual-router

    netadmin@qfabric# set routing-instances VR-ZONE-A interface vlan.1000

    netadmin@qfabric# set routing-instances VR-ZONE-A interface vlan.1001

    netadmin@qfabric# set routing-instances VR-ZONE-A interface vlan.1002

    netadmin@qfabric# set routing-instances VR-ZONE-A routing-options static route 0.0.0.0/0 next-hop x.x.x.x [VRRP address of each RVI on SRX]

    netadmin@qfabric# set routing-instances VR-ZONE-A protocols ospf area 0.0.0.0 interface all


Configuring VR-ZONE-B

netadmin@qfabric# set routing-instances VR-ZONE-B instance-type virtual-router

    netadmin@qfabric# set routing-instances VR-ZONE-B interface vlan.1003

    netadmin@qfabric# set routing-instances VR-ZONE-B interface vlan.1004

    netadmin@qfabric# set routing-instances VR-ZONE-B interface vlan.1005

    netadmin@qfabric# set routing-instances VR-ZONE-B routing-options static route 0.0.0.0/0 next-hop x.x.x.x [VRRP address of each RVI on SRX]

    netadmin@qfabric# set routing-instances VR-ZONE-B protocols ospf area 0.0.0.0 interface all

Configuring the Core VR

netadmin@qfabric# set routing-instances core instance-type virtual-router

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack1:xe-0/0/10.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack1:xe-0/0/11.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack2:xe-0/0/10.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface row21-rack2:xe-0/0/11.0

    netadmin@qfabric# set routing-instances core interface vlan.500

    netadmin@qfabric# set routing-instances core interface vlan.600

    netadmin@qfabric# set routing-instances core protocols ospf area 0.0.0.0 interface all
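
    On the SRX Series side, this model implies one security zone per VRF, with policies controlling only the inter-VRF traffic that is steered through the firewall. The lines below are a hypothetical sketch rather than part of the validated design; the reth0 subinterfaces, the ZONE-A and ZONE-B zone names, and the permit-inter-vrf policy name are assumptions:

    [edit security]

    netadmin@srx5800-a# set zones security-zone ZONE-A interfaces reth0.1000

    netadmin@srx5800-a# set zones security-zone ZONE-B interfaces reth0.1004

    netadmin@srx5800-a# set policies from-zone ZONE-A to-zone ZONE-B policy permit-inter-vrf match source-address any destination-address any application any

    netadmin@srx5800-a# set policies from-zone ZONE-A to-zone ZONE-B policy permit-inter-vrf then permit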

Use Case 7: QFabric Architecture Back-to-Back Extension with L3 LAG

Currently, there is a 150 meter distance limitation between a QFabric Node and the QFabric Interconnect due to the QSFP+ specification. However, there is a way to increase this distance through the use of a back-to-back extension. In Figure 13, QFabric 1 can reach QFabric 2, which is up to 300 meters away, using SFP+ optics and 640 Gbps of bandwidth. The solution consists of eight QFabric Nodes in a network node group, allowing it to form eight LAGs between the remote NNGs. Currently, the QFabric architecture supports eight-way equal-cost multipath (ECMP). The following configuration example covers only the L3 LAG extension portion shown in Figure 13; please review the previous use cases and the L2 design guide for other configuration options.

Figure 13: Back-to-back extension with LAG

(Figure 13 shows QFabric_1 and QFabric_2, each with its own QFabric Director and fabric, connected back to back by eight 8x10GbE LAGs between their network node groups; each Node also has 160 Gbps toward its own fabric.)


Step 1. Define QF/Node aliases and the NNG

    [edit fabric]

    netadmin@qfabric# set aliases node-device ABCD1252 row21-rack1

    netadmin@qfabric# set aliases node-device ABCD1253 row21-rack2

    netadmin@qfabric# set aliases node-device ABCD1254 row21-rack3

    netadmin@qfabric# set aliases node-device ABCD1255 row21-rack4

    netadmin@qfabric# set aliases node-device ABCD1256 row21-rack5

    netadmin@qfabric# set aliases node-device ABCD1257 row21-rack6

    netadmin@qfabric# set aliases node-device ABCD1258 row21-rack7

    netadmin@qfabric# set aliases node-device ABCD1259 row21-rack8

    netadmin@qfabric# set resources node-group NW-NG-0 network-domain

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack1

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack2

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack3

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack4

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack5

netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack6

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack7

    netadmin@qfabric# set resources node-group NW-NG-0 node-device row21-rack8

Step 2. LAG configuration on the NNG connecting to QFabric 2

[edit]

    netadmin@qfabric# set chassis node-group NW-NG-0 aggregated-devices ethernet device-count 24

    [edit interfaces]

    netadmin@qfabric# set interface-range LAG-ae0 member row21-rack1:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae0 ether-options 802.3ad ae0

    netadmin@qfabric# set interface-range LAG-ae1 member row21-rack2:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae1 ether-options 802.3ad ae1

    netadmin@qfabric# set interface-range LAG-ae2 member row21-rack3:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae2 ether-options 802.3ad ae2

    netadmin@qfabric# set interface-range LAG-ae3 member row21-rack4:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae3 ether-options 802.3ad ae3

    netadmin@qfabric# set interface-range LAG-ae4 member row21-rack5:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae4 ether-options 802.3ad ae4

    netadmin@qfabric# set interface-range LAG-ae5 member row21-rack6:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae5 ether-options 802.3ad ae5

    netadmin@qfabric# set interface-range LAG-ae6 member row21-rack7:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae6 ether-options 802.3ad ae6

    netadmin@qfabric# set interface-range LAG-ae7 member row21-rack8:xe-0/0/[0-7]

    netadmin@qfabric# set interface-range LAG-ae7 ether-options 802.3ad ae7


netadmin@qfabric# set NW-NG-0:ae0 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae1 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae2 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae3 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae4 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae5 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae6 aggregated-ether-options lacp active

    netadmin@qfabric# set NW-NG-0:ae7 aggregated-ether-options lacp active

Step 3. Add IP addresses to the LAG interfaces

    [edit interfaces]

netadmin@qfabric# set NW-NG-0:ae0.0 family inet address 192.168.0.1/24

    netadmin@qfabric# set NW-NG-0:ae1.0 family inet address 192.168.1.1/24

    netadmin@qfabric# set NW-NG-0:ae2.0 family inet address 192.168.2.1/24

    netadmin@qfabric# set NW-NG-0:ae3.0 family inet address 192.168.3.1/24

    netadmin@qfabric# set NW-NG-0:ae4.0 family inet address 192.168.4.1/24

    netadmin@qfabric# set NW-NG-0:ae5.0 family inet address 192.168.5.1/24

    netadmin@qfabric# set NW-NG-0:ae6.0 family inet address 192.168.6.1/24

    netadmin@qfabric# set NW-NG-0:ae7.0 family inet address 192.168.7.1/24
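
    The example above stops at the LAG and addressing configuration. If OSPF is to run across the eight back-to-back LAGs and take advantage of the eight-way ECMP mentioned earlier, a configuration along the following lines can be added; this is a minimal sketch that assumes area 0 (to match the earlier use cases) and a load-balancing policy named PFE-LB:

    [edit]

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae0.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae1.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae2.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae3.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae4.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae5.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae6.0

    netadmin@qfabric# set protocols ospf area 0.0.0.0 interface NW-NG-0:ae7.0

    netadmin@qfabric# set policy-options policy-statement PFE-LB then load-balance per-packet

    netadmin@qfabric# set routing-options forwarding-table export PFE-LB

    Exporting the load-balance per-packet policy to the forwarding table (which performs per-flow hashing in practice) allows all eight equal-cost OSPF paths toward the remote QFabric system to be installed and used.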

    Summary

The exponential data center demands exponential power, flexibility, and control, along with exponential reductions in energy consumption and TCO. The QFabric architecture provides just such a flexible solution for deploying a fabric across the data center, enabling unique network designs that fundamentally simplify the network while maintaining any-to-any connectivity, reducing the number of managed devices and connections, and centralizing data center management.

    By following this design and implementation guide, a Layer 3 QFabric architecture can be successfully deployed. The designs suggested in this document help establish complete data center solutions by integrating MX Series, SRX Series, and Juniper Networks Virtual Gateway products in a way that not only solves the increasing problems of scale and data center economics, but also has the potential to enable dramatic new levels of computing for years to come.

About Juniper Networks

Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon, and systems that transform the experience and economics of networking. The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.


Copyright 2011 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

EMEA Headquarters

    Juniper Networks Ireland

    Airside Business Park

    Swords, County Dublin, Ireland

    Phone: 35.31.8903.600

    EMEA Sales: 00800.4586.4737

    Fax: 35.31.8903.601

APAC Headquarters

    Juniper Networks (Hong Kong)

    26/F, Cityplaza One

    1111 King's Road

    Taikoo Shing, Hong Kong

    Phone: 852.2332.3636

    Fax: 852.2574.7803

Corporate and Sales Headquarters

    Juniper Networks, Inc.

    1194 North Mathilda Avenue

    Sunnyvale, CA 94089 USA

    Phone: 888.JUNIPER (888.586.4737)

    or 408.745.2000

    Fax: 408.745.2100

    www.juniper.net

To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.