WHITE PAPER
Copyright © 2011, Juniper Networks, Inc.
FCoE CONVERGENCE AT THE ACCESS LAYER WITH JUNIPER NETWORKS QFX3500 SWITCH
First Top-of-Rack Switch Built to Solve All the Challenges Posed by Access-Layer Convergence
Table of Contents

Executive Summary
Introduction
Access-Layer Convergence Modes
Option 1: FCoE Transit Switch (DCB Switch with FIP Snooping)
FCoE Servers with CNA
Option 2: FCoE-FC Gateway (Using NPIV Proxy)
Option 3: FCoE-FC Switch (Full FCF) (Not Recommended)
Deployment Models Available Today
Rack-Mount Servers and Top-of-Rack FCoE-FC Gateway
Blade Servers with Pass-Through Modules and Top-of-Rack FCoE-FC Gateway
Blade Servers with Embedded DCB Switch and Top-of-Rack FCoE-FC Gateway
Blade Servers with Embedded FCoE-FC Gateway
Servers Connected Through FCoE Transit Switch to an FCoE-Enabled Fibre Channel SAN Fabric
The Standards That Allow for Server I/O and Access-Layer Convergence
Enhancements to Ethernet for Converged Data Center Networks—DCB
Enhancements to Fibre Channel for Converged Data Center Networks—FCoE
Future Direction for FCoE
A Brief Note on iSCSI
Conclusion
About Juniper Networks

Table of Figures

Figure 1: The phases of convergence, from separate networks, to access-layer convergence, to the fully converged network
Figure 2: Operation of an FCoE transit switch vs. an FCoE-FC gateway
Figure 3: Operation of an FCoE transit switch
Figure 4: FCoE servers with CNA
Figure 5: Rack-mount servers and top-of-rack FCoE-FC gateway
Figure 6: Blade servers with pass-through modules and top-of-rack FCoE-FC gateway
Figure 7: Blade servers with embedded DCB switch and top-of-rack FCoE-FC gateway
Figure 8: Servers connected through an FCoE transit switch to an FCoE-enabled FC SAN fabric
Figure 9: PFC, ETS, and QCN
Executive Summary
In 2011, customers will finally be able to invest in convergence-enabling equipment and begin reaping the benefits of convergence in their data centers. With the first wave of standards now complete—both the IEEE Data Center Bridging (DCB) enhancements to Ethernet and the InterNational Committee for Information Technology Standards (INCITS) T11 FC-BB-5 standard for Fibre Channel over Ethernet (FCoE)—enterprises can benefit from server- and access-layer I/O convergence while continuing to leverage their investment in their existing aggregation, core LAN, and Fibre Channel (FC) backbones.
So why the focus on server and access-layer I/O convergence? Simply put, the industry recognizes that the first wave of standards does not meet the needs of full convergence, and so it is working on a second wave of standards—including FC-BB-6—as well as various forms of fabric technology to better address the challenges of full convergence. The new standards are designed to provide lower cost convergence strategies for the smaller enterprise, and to address the scaling issues that come about from convergence in general as well as increased data center scale. As a result, 2011 is the year to focus on the benefits to be gained from converging the access layer while laying a foundation for the future.
Juniper Networks® QFX3500 Switch is the first top-of-rack switch built to solve all of the challenges posed by access-layer convergence. It works for both rack-mount and blade servers, and for organizations with combined or separate LAN and storage area network (SAN) teams. It is also the first product to leverage a new generation of ASIC technologies. It offers 1.28 terabits per second (Tbps) of bandwidth implemented with a single ultra-low latency ASIC and soft-programmable ports capable of Gigabit Ethernet (GbE), 10GbE, 40GbE, and 2/4/8 Gbps FC, supported through small form-factor pluggable transceiver (SFP+) GbE copper, 10GbE copper direct attach cable (DAC) and optical, and quad small form-factor pluggable (QSFP) dense optical connectivity.
Figure 1: The phases of convergence, from separate networks, to access-layer convergence, to the fully converged network.
Introduction
The network is the critical enabler of all services delivered from the data center. A simple, streamlined, and scalable data center network fabric can deliver greater efficiency and productivity, as well as lower operating costs. Such a network also allows the data center to support much higher levels of business agility and not become a bottleneck that hinders a company from releasing new products or services.
To allow businesses to make sound investment decisions, this white paper will look at the following areas to fully clarify the
most interesting options for convergence in 2011:
1. Review the different types of convergence-capable products that are available on the market based upon the current
standards and consider the capabilities of those products
2. Consider the deployment scenarios for those products
3. Look forward to some of the new product and solution capabilities expected over the next couple of years
Access-Layer Convergence Modes
When buying a convergence platform, it is possible to deploy products based on three very different modes of operation.
Products on the market today may be capable of one or more of these modes depending on hardware and software
configuration and license enablement.
• FCoE transit switch—DCB switch with FCoE Initialization Protocol (FIP) snooping
• FCoE-FC gateway—using N_Port ID virtualization (NPIV) proxy
• FCoE-FC switch—full Fibre Channel Forwarder (FCF) capability
In principle, these systems can be used in multiple places within a deployment. However, for the purpose of this document
and based on the most likely deployments in 2011, only the server access-layer convergence model will be covered.
Figure 2: Operation of an FCoE transit switch vs. an FCoE-FC gateway
Option 1: FCoE Transit Switch (DCB Switch with FIP Snooping)
In this model, the SAN team enables their backbone SAN fabric for FCoE, while the network team deploys a top-of-rack DCB switch with FIP snooping. Servers are deployed with Converged Network Adapters (CNAs), and blade servers are deployed with pass-through modules or embedded DCB switches. These are connected to the top-of-rack switch, which then has Ethernet connectivity to the LAN aggregation layer and Ethernet connectivity to the FCoE ports of the SAN backbone.
A common question at this point is whether a DCB switch with no Fibre Channel stack can indeed be a viable part of
a converged deployment and, in particular, whether such a switch gives not just the necessary security but also the
performance and manageability required in a storage network deployment.
Since this is, at one level, just a Layer 2 switch, this solution ensures that the switch in each server rack is not consuming an FC domain ID. Fibre Channel networks have a scale restriction that limits them to just a couple of tens of switches. As convergence and 10GbE force a move towards top-of-rack switches, any solution deployed must ensure that convergence does not cause an FC SAN scaling problem.
Figure 3: Operation of an FCoE transit switch

FCoE Servers with CNA
A rich implementation of an FCoE transit switch will provide strong management and monitoring of the traffic separation, allowing the SAN team to monitor FCoE traffic throughput. Specifically, a fully manageable DCB switch will allow the user to monitor traffic on a per user priority and per priority group basis, and not just per port.
FIP snooping as defined in the FCoE standard provides perimeter protection, ensuring that the presence of an Ethernet layer in no way impacts existing SAN security. The SAN backbone can be simply FCoE-enabled with either FCoE blades within chassis-based systems or FCoE-FC gateways connected to the edge of the SAN backbone. In addition, the traditional Fibre Channel Security Protocols (FC-SP) mechanisms work seamlessly through FCoE, allowing CNA-to-FCF authentication to be used through the DCB switch.
Perhaps less obviously, FIP snooping also means that the switch has a very clear view of each and every FCoE session running through it, both in terms of the path, which is derived from the source and destination media access control (MAC) addresses of the virtual Fibre Channel ports, and in terms of the actual status of the virtual FC connection, which is monitored by snooping the FIP keepalives.
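The session view that FIP snooping provides can be pictured as a small lookup table keyed on the virtual port MAC addresses, with entries refreshed by keepalives. The following is a simplified sketch, not switch firmware; the class, method names, and the timeout policy are illustrative assumptions:

```python
import time

# Hypothetical, simplified model of a FIP snooping session table.
# A real switch learns these entries from snooped FIP frames and
# enforces them as hardware ACLs.

FKA_ADV_PERIOD = 8.0  # assumed FIP keepalive advertisement period, seconds

class FipSnoopingTable:
    def __init__(self, missed_keepalives=2.5):
        self.timeout = FKA_ADV_PERIOD * missed_keepalives
        self.sessions = {}  # (vn_port_mac, vf_port_mac) -> last keepalive time

    def flogi_accept(self, vn_port_mac, vf_port_mac):
        """A snooped fabric login accept installs an entry pinning the
        VN_Port to its VF_Port and starts keepalive monitoring."""
        self.sessions[(vn_port_mac, vf_port_mac)] = time.monotonic()

    def keepalive(self, vn_port_mac, vf_port_mac):
        """Refresh the session timer when a FIP keepalive is snooped."""
        key = (vn_port_mac, vf_port_mac)
        if key in self.sessions:
            self.sessions[key] = time.monotonic()

    def expire_stale(self):
        """Drop sessions whose keepalives have stopped; the switch would
        also tear down the corresponding ACL entries."""
        now = time.monotonic()
        stale = [k for k, t in self.sessions.items() if now - t > self.timeout]
        for k in stale:
            del self.sessions[k]
        return stale

    def permits(self, vn_port_mac, vf_port_mac):
        """Forward FCoE frames only for known, live sessions."""
        return (vn_port_mac, vf_port_mac) in self.sessions
```

The point of the model is that perimeter security and session visibility fall out of the same table: the entry that blocks spoofed FCoE frames is also the entry that tells the operator which virtual links are alive.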
Just as with any Ethernet deployment, the switch can use a link aggregation group (LAG) to balance the Ethernet packets (including FCoE) across multiple links. As with any FC switch, this load balancing can include the OxID (Fibre Channel exchange ID) in order to carry out the Fibre Channel best practice of exchange-based load balancing. Finally, the FCoE protocol includes FCoE load-balancing capabilities to ensure that the FCoE servers are evenly and appropriately distributed across the multiple FCoE FC fabric connections.
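Exchange-based load balancing can be sketched as a hash over the frame's addresses plus the OxID, so distinct exchanges spread across LAG members while every frame of one exchange stays on one link (preserving order). The hash function and field selection below are illustrative assumptions, not a specific switch's algorithm:

```python
import zlib

def lag_member(src_mac: bytes, dst_mac: bytes, oxid: int, num_links: int) -> int:
    """Pick a LAG member link for an FCoE frame.

    Including the OxID in the key spreads distinct exchanges between
    the same CNA/FCF pair across links, while keeping each exchange
    in order on a single link.
    """
    key = src_mac + dst_mac + oxid.to_bytes(2, "big")
    return zlib.crc32(key) % num_links

# All frames of one exchange map to the same link; a different OxID
# between the same endpoints may hash to a different link.
src = b"\x0e\xfc\x00\x00\x01\x02"
dst = b"\x0e\xfc\x00\x00\x0a\x0b"
assert lag_member(src, dst, 0x1234, 4) == lag_member(src, dst, 0x1234, 4)
```

Without the OxID in the key, all traffic between one CNA and one FCF MAC pair would pin to a single link, wasting the aggregate bandwidth.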
FCoE transit switches have several advantages:

• Low-cost top-of-rack DCB switch
• Rich monitoring of FCoE traffic at top of rack (QFX3500 switch)
• FCoE enablement of SAN backbone (FCoE blades or FCoE-FC gateway) managed by the SAN team for clean management separation
• Load balancing carried out between CNAs and FCoE ports of the SAN fabric as well as point-to-point throughout the Ethernet infrastructure
• Comprehensive security maintained through FIP snooping and FC-SP
• No heterogeneous support issues, as top of rack is L2 connectivity only
Figure 4: FCoE servers with CNA
Option 2: FCoE-FC Gateway (Using NPIV Proxy)
In this model, the SAN and Ethernet teams agree jointly to deploy an FCoE-FC top-of-rack gateway. From a cabling perspective, the deployment is identical to option 1, with the most visible difference being that the cable between the top of rack and the SAN backbone is now carrying native Fibre Channel traffic rather than FCoE traffic.

As with option 1, this solution ensures that the switch in each server rack is not consuming an FC domain ID. In this case, however, unlike option 1, a much richer level of Fibre Channel functionality has been enabled within the switch. The FCoE-FC gateway uses NPIV technology so that it presents to the servers as an FCoE-enabled Fibre Channel switch, and presents to the SAN backbone as a group of FC servers. It then simply proxies sessions from one domain to the other, with intelligent load-balancing and automated failover capability across the Fibre Channel links to the fabric.
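The proxying behavior described above amounts to a mapping table: each server-side login is pinned to one of the gateway's N_Port uplinks, balanced by load and re-homed on failure. This is a toy model under those assumptions; the class and its policy (least-loaded placement, re-login on failure) are illustrative, not the QFX3500's actual algorithm:

```python
# Toy model of an FCoE-FC gateway's NPIV proxy mapping. Each server
# VN_Port login is proxied onto one of the gateway's FC uplinks
# (its N_Ports toward the fabric).

class NpivGateway:
    def __init__(self, fc_uplinks):
        self.uplinks = list(fc_uplinks)          # N_Ports toward the fabric
        self.load = {u: 0 for u in self.uplinks}  # sessions per uplink
        self.sessions = {}                        # VN_Port MAC -> uplink

    def server_login(self, vn_port_mac):
        """Proxy a server login: pin the session to the least-loaded
        FC uplink (session-based load balancing)."""
        uplink = min(self.uplinks, key=self.load.__getitem__)
        self.load[uplink] += 1
        self.sessions[vn_port_mac] = uplink
        return uplink

    def uplink_failed(self, failed):
        """Automated failover: re-home the sessions that were pinned
        to a failed uplink onto the surviving uplinks."""
        self.uplinks.remove(failed)
        del self.load[failed]
        moved = [m for m, u in self.sessions.items() if u == failed]
        for m in moved:
            self.server_login(m)  # re-login on a surviving uplink
        return moved
```

Because the fabric sees only ordinary N_Port logins, the gateway consumes no FC domain ID and raises no heterogeneous-interoperability questions.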
FCoE-FC gateways have several advantages:

• Clean separation of management through role-based access control (QFX3500 switch)
• No need for FCoE enablement of the SAN backbone
• Fine-grained FCoE session-based load balancing (at the virtual machine level for NPIV-enabled hypervisors—QFX3500 switch) and full Ethernet LAG with exchange-based load balancing on the Ethernet-facing connectivity
• No heterogeneous support issues, as the FCoE-FC gateway presents to the SAN fabric as a Fibre Channel-enabled server (N_Port to F_Port)
• Available post deployment as a license upgrade and fungible port reconfiguration with no additional hardware (QFX3500 switch)
• Support for an upstream DCB switch such as an embedded switch in a blade server shelf (QFX3500 switch), as well as direct CNA connectivity or connectivity via blade server pass-through modules
Option 3: FCoE-FC Switch (Full FCF) (Not Recommended)
For deployments of any size, there is no value to local switching, as any rack is either pure server or pure storage. In addition, although the SAN standards limit deployments to 239 switches, the practical supported limits are typically within the 16 to 32 range (in reality, most deployments are kept well below these limits). As such, this option has limited value in production data centers.
For very small configurations where a single switch needs to connect to both servers and storage, Juniper believes that Internet Small Computer System Interface (iSCSI) is the best approach in 2011, while the FC-BB-6 VN2VN model (see the "Future Direction for FCoE" section later in this white paper) will be the preferred FCoE end-to-end model in 2012.
Deployment Models Available Today
As previously noted, this paper focuses on deployments that apply for server access-layer convergence. As such, it is assumed that this access layer is in turn connecting both to some form of Ethernet aggregation/core layer on one side and a Fibre Channel backbone on the other. The term "Fibre Channel backbone" implies a traditional FC SAN of some form, to which the FC disk and tape, as well as most likely existing FC servers, are attached.
By leveraging either an FCoE transit switch or an FCoE-FC gateway, whether separately or together, there are a number
of deployment options for supporting both rack-mount servers and blade servers. Each approach has its merits, and
organizations may want to use different approaches, depending on their requirements.
In terms of physical deployment in most data centers, the Ethernet aggregation and core, the FC backbone, and the FC disk
and tape are likely to be colocated in some centralized location within the data center with the server racks. From a cabling
perspective, this means that the same physical cable infrastructure can easily support any of the deployment models
discussed below.
Rack-Mount Servers and Top-of-Rack FCoE-FC Gateway
This deployment model is perhaps the most recognized and best understood. The QFX3500 switch fully supports this model and, unlike other products, the QFX3500 enables this mode through a single license that allows up to 12 of its 48 SFP+ ports to be configured for 2/4/8 Gbps FC instead of 10GbE.
Figure 5: Rack-mount servers and top-of-rack FCoE-FC gateway
Blade Servers with Pass-Through Modules and Top-of-Rack FCoE-FC Gateway
This model is similar to the previous rack-mount servers and top-of-rack FCoE-FC gateway model. The challenge with this model is the complex cabling that accompanies pass-through modules. Using pass-through has the benefit of removing an entire layer from the network topology, thereby simplifying the data center, ensuring a single network operating system at all layers, and allowing the edge of the network to leverage the richer functionality available with the feature-rich ASICs used at top of rack. The use of modern pass-through modules and well-constructed cabling solutions provides all the cable simplicity benefits of an embedded blade switch with none of the limitations.
Figure 6: Blade servers with pass-through modules and top-of-rack FCoE-FC gateway
Blade Servers with Embedded DCB Switch and Top-of-Rack FCoE-FC Gateway
To support this deployment model, it is necessary to ensure that both the CNAs and the FCoE-FC gateway have particularly
feature-rich implementations of the full FC-BB-5 standard in order to support many-to-many L2 visibility for fan-in load
balancing and high availability.
The QFX3500 switch is the first fully FC-BB-5-enabled gateway capable of easily supporting upstream DCB switches, including third-party embedded blade shelf switches. Juniper strongly recommends using such switches only if they have implemented FIP snooping for perimeter protection, and they have fully standards-based, feature-rich DCB implementations.

When deploying a DCB switch in between the servers and the gateway, an Ethernet LAG is formed between the two devices, providing optimum packet distribution. In the case of the QFX3500 switch, the Fibre Channel OxID is included in the LAG hash, ensuring exchange-based load balancing across the link. Additionally, for enhanced scaling, the ports of the QFX3500 can be configured in a trusted mode where it is known that there is an upstream DCB switch with FIP snooping.
Increasingly, however, this option is seen as undesirable, as it adds an additional network tier and makes it hard to
standardize the network access layer in a multivendor server environment.
Figure 7: Blade servers with embedded DCB switch and top-of-rack FCoE-FC gateway
Blade Servers with Embedded FCoE-FC Gateway
Typically, embedded switches have a limited power and heat budget, so the simpler the module the better. There is also an issue with limited space for port connections. With a gateway, some of these ports must be Ethernet and some must be Fibre Channel, further restricting the available bandwidth in both cases. In addition, such modules are not commonly available for all blade server families, making the deployment of a standard and consistent infrastructure challenging. Overall, these issues make this an undesirable use case.
Servers Connected Through FCoE Transit Switch to an FCoE-Enabled Fibre Channel SAN Fabric
There is an interesting case for using FCoE transit switches as the access layer connecting both to Ethernet aggregation and to an FCoE-enabled Fibre Channel SAN fabric. An FCoE-FC gateway has to be actively managed and monitored by both the SAN and LAN teams—a considerable challenge for some organizations. An FCoE transit switch is not active at the FCoE layer of the protocol stack, so there is nothing for the SAN team to actively manage. Therefore, while the SAN team would still need monitoring capabilities, there is no active overlap of management, and this minimizes the possibility of configuration mistakes by different groups.
There are various ways to enable the FC SAN fabric. One model is to include some FCoE-enabled switches within the SAN fabric; this can be accomplished by adding an FCoE blade to one of the chassis-based SAN directors. As with the previous use cases, the FCoE ports deployed on the SAN fabric must support multiple virtual fabric ports per physical port for this deployment to be viable. Another option is to use the QFX3500 switch configured as an FCoE-FC gateway, which is connected locally to a pure FC SAN fabric and administered by the SAN team.
For larger customers, where the merging of LAN and SAN network teams is unlikely to happen for several years, this provides a very clean and simple converged deployment model.
Figure 8: Servers connected through an FCoE transit switch to an FCoE-enabled FC SAN fabric
The Standards That Allow for Server I/O and Access-Layer Convergence
Enhancements to Ethernet for Converged Data Center Networks—DCB
Ethernet, originally developed to handle traffic using a best-effort delivery approach, has mechanisms to support lossless traffic through 802.3X Pause, but these are rarely deployed. When used in a converged network, Pause frames can lead to cross-traffic blocking and congestion. Ethernet also has mechanisms to support fine-grained queuing (user priorities), but again, these are rarely deployed within the data center. The next logical step for Ethernet will be to leverage these capabilities and enhance existing standards to meet the needs of convergence and virtualization, propelling Ethernet into the forefront as the preeminent infrastructure for LANs, SANs, and high-performance computing (HPC) clusters.
These enhancements benefit Ethernet I/O convergence (remembering that most servers have multiple 1GbE network interface cards not for bandwidth but to support multiple network services), and existing Ethernet- and IP-based storage protocols such as network-attached storage (NAS) and iSCSI. These enhancements also provide the appropriate platform for supporting FCoE. In the early days when these standards were being developed, and before they moved under the auspices of the IEEE, the term Converged Enhanced Ethernet (CEE) was used to identify them.
DCB—a set of IEEE standards. Ethernet needed a variety of enhancements to support I/O, network convergence, and server virtualization. Server virtualization is covered in other Juniper white papers, even though it is part of the DCB protocol set. With respect to I/O and network convergence, the development of new standards began with the following existing standards:

1. User Priority for Class of Service—802.1p—which already allows identification of eight separate lanes of traffic (used as-is)
2. Ethernet Flow Control (Pause, symmetric, and/or asymmetric flow control)—802.3X—which is leveraged for priority flow control (PFC)
3. MAC Control Frame for PFC—802.3bd—to allow 802.3X to apply to individual user priorities (modified)
A number of new standards that leverage these components have been developed and have either been formally approved or are in the final stages of the approval process. These include:

1. PFC—IEEE 802.1Qbb—which applies traditional 802.3X Pause to individual priorities instead of the whole port
2. Enhanced Transmission Selection (ETS)—IEEE 802.1Qaz—which defines a grouping of priorities and bandwidth allocation to those groups
3. Quantized Congestion Notification (QCN)—IEEE 802.1Qau—which is a cross-network, as opposed to point-to-point, backpressure mechanism
4. Data Center Bridging Exchange Protocol (DCBX)—part of the ETS standard—for DCB auto-negotiation
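The ETS idea in the list above can be made concrete with a small scheduler sketch: each priority group is guaranteed a weighted share of port bandwidth, and bandwidth left unused by one group is redistributed to the others. This is an illustrative model of the behavior, not the 802.1Qaz algorithm text; the group names and numbers are assumptions:

```python
# Illustrative sketch (not switch firmware) of ETS-style scheduling:
# each priority group gets a guaranteed weighted share of port
# bandwidth, and bandwidth unused by one group is redistributed.

def ets_allocate(port_bw_gbps, group_weights, offered_gbps):
    """group_weights: {group: weight percent}; offered_gbps: {group: demand}.
    Returns the realized bandwidth per group."""
    alloc = {g: 0.0 for g in group_weights}
    remaining = dict(offered_gbps)
    spare = port_bw_gbps
    while spare > 1e-9 and any(r > 1e-9 for r in remaining.values()):
        # Share the spare bandwidth among still-hungry groups by weight.
        hungry = [g for g, r in remaining.items() if r > 1e-9]
        total_w = sum(group_weights[g] for g in hungry)
        next_spare = 0.0
        for g in hungry:
            share = spare * group_weights[g] / total_w
            used = min(share, remaining[g])
            alloc[g] += used
            remaining[g] -= used
            next_spare += share - used  # unused guarantee goes back in the pool
        spare = next_spare
    return alloc

# Example: a 10GbE port, three groups weighted 50/30/20, with the
# storage group offering only 1 Gbps of traffic.
print(ets_allocate(10, {"lan": 50, "san": 30, "hpc": 20},
                   {"lan": 9, "san": 1, "hpc": 9}))
```

The key property to notice is work conservation: the lightly loaded group still gets everything it asks for, while its unused guarantee is split between the busy groups in proportion to their weights.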
The final versions of the standards specify minimum requirements for compliance, detail the maximum external requirements, and also describe in some detail the options for implementing internal behavior and the downsides of some lower cost but standards-compliant ways of implementing DCB. It is important to note that these standards are separate from the efforts to solve Layer 2 multipathing, which is not technically necessary to make convergence work. Also, neither these standards nor those around L2 multipathing address a number of other challenges that arise when networks are converged and flattened.
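The practical difference between legacy 802.3X Pause and PFC, central to the standards above, can be shown in a few lines: Pause stops the whole port, while PFC pauses only the congested priority, so storage backpressure does not block unrelated LAN traffic. A deliberately minimal sketch (priority numbers are illustrative):

```python
# Toy illustration of 802.3X link Pause vs. per-priority PFC.

def frames_forwarded(frames, paused_priorities):
    """frames: list of (priority, payload) tuples queued for a port.
    Returns the frames that may be transmitted now, given the set of
    priorities currently paused by the receiver."""
    return [f for f in frames if f[0] not in paused_priorities]

traffic = [(0, "lan"), (3, "fcoe"), (0, "lan"), (3, "fcoe")]

# Legacy Pause: congestion on storage stops the entire port.
assert frames_forwarded(traffic, {0, 1, 2, 3, 4, 5, 6, 7}) == []

# PFC: only the FCoE priority (3 here) is paused; LAN traffic flows on.
assert frames_forwarded(traffic, {3}) == [(0, "lan"), (0, "lan")]
```

This is exactly the cross-traffic blocking problem noted earlier: with port-level Pause, one congested class drags every other class down with it.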
Figure9:PFCETSandQCN
TX Queue 0
TX Queue 0
TX Queue 1
TX Queue 2
Class Group 11 2 3
2 6 5
2 4 3
2 5 5
2 3 3
1 2 2
Class Group 2
Ph
ysic
al P
ort
– P
FC
Ph
ysic
al P
ort
– E
TS
Ph
ysic
al P
ort
– P
FC
Class Group 3
T1 T2
O�ered Tra�cT3 T1 T2
Realized Tra�cT3
TX Queue 3
TX Queue 4
TX Queue 5
TX Queue 6
TX Queue 7
RX Bu�er 0
TX Queue 1
RX Bu�er 1
TX Queue 2
RX Bu�er 2
TX Queue 3
RX Bu�er 3
TX Queue 4
RX Bu�er 4
TX Queue 5
RX Bu�er 5
TX Queue 6
RX Bu�er 6
TX Queue 7
RX Bu�er 7
PFCON
RX Bu�er 0
TX Queue 0
TX Queue 1
RX Bu�er 1
TX Queue 2
RX Bu�er 2
TX Queue 3
RX Bu�er 3
TX Queue 4
RX Bu�er 4
TX Queue 5
RX Bu�er 5
TX Queue 6
RX Bu�er 6
TX Queue 7
RX Bu�er 7
Keeps sending
pause
DROP
STOP
PFCON
PFCON
PFCON
PFCON
PFCOFF
PFCOFF
PFCOFF
PFCON
PFCON
PFCON
PFCON
PFCON
PFCOFF
PFCOFF
PFCOFF
Enhancements to Fibre Channel for Converged Data Center Networks—FCoE
FCoE—the protocol developed within T11. The FCoE protocol was developed by the T11 Technical Committee—a subgroup of the International Committee for Information Technology Standards (INCITS)—as part of the Fibre Channel Backbone 5 (FC-BB-5) project. The standard was passed over to INCITS for public comment and final ratification in 2009, and has since been formally ratified. In 2009, T11 started development work on Fibre Channel Backbone 6 (FC-BB-6), which is intended to address a number of issues not covered in the first standard, and to develop a number of new deployment scenarios.
FCoE was designed to allow organizations to move to Ethernet-based storage while, at least in theory, minimizing the cost of
change. To the storage world, FCoE is, in many ways, just FC with a new physical media type; many of the tools and services
remain the same. To the Ethernet world, FCoE is just another upper level protocol riding over Ethernet.
The FC-BB-5 standard clearly defines all of the details involved in mapping FC through an Ethernet layer, whether directly or through simplified L2 connectivity. It lays out the responsibilities of the FCoE-enabled endpoints and FC fabrics, as well as those of the Ethernet layer. Finally, it clearly states the additional security mechanisms that are recommended to maintain the level of security that a physically separate SAN traditionally provides. Overall, apart from the scale-up and scale-down aspects, FC-BB-5 defines everything needed to build and support the products and solutions discussed earlier.
While the development of FCoE as an industry standard will bring the deployment of unified data center infrastructures closer to reality, FCoE by itself is not enough to complete the necessary convergence. Many additional enhancements to Ethernet, and changes to the way networking products are designed and deployed, are required to make it a viable, useful, and pragmatic implementation. Many, though not all, of the additional enhancements are provided by the standards developed through the IEEE DCB committee. In theory, the combination of the DCB and FCoE standards allows for full network convergence. In reality, they only solve the problem for relatively small-scale data centers. For larger deployments, the challenge is addressed by using these protocols purely for server- and access-layer I/O convergence, through FCoE transit switches (DCB switches with FIP snooping) and FCoE-FC gateways (using N_Port ID virtualization to eliminate SAN scaling and heterogeneous support issues).
Juniper Networks EX4500 Ethernet switch and QFX3500 switch both support an FCoE transit switch mode. The QFX3500
also supports FCoE-FC gateway mode. These products are industry firsts in many ways:
1. The EX4500 and QFX3500 are fully standards-based with rich implementations from both a DCB and FC-BB-5
perspective.
2. The EX4500 and QFX3500 are purpose-built FCoE transit switches.
3. QFX3500 is a purpose-built FCoE-FC gateway which includes fungible combined Ethernet/Fibre Channel ports.
4. QFX3500 features a single Packet Forwarding Engine (PFE) design.
5. The EX4500 and QFX3500 switches both include feature-rich L3 capabilities.
6. QFX3500 supports low latency with cut-through switching.
Future Direction for FCoE
There are two key initiatives underway within FC-BB-6, which will prove critical to the adoption of FCoE for small and large
businesses alike.
For smaller businesses, a new FCoE mode has been developed, allowing for a fully functional FCoE deployment without
the need for either the traditional FC services stack or FC L3 forwarding. Instead, the FCoE end devices directly discover and
attach to each other through a pure L2 Ethernet infrastructure. This can be as simple as a DCB-enabled Ethernet switch,
with the addition of FIP snooping for security. This makes FCoE simpler than either iSCSI or NAS, since it no longer needs a complex Fibre Channel (or FCoE) switch, and because the FCoE endpoints have proper discovery mechanisms. This mode of operation is commonly referred to as VN_Node to VN_Node, or VN2VN. It can be used by itself for small to medium-scale
FCoE deployments, or in conjunction with the existing FCoE models for larger deployments to allow them to benefit from
local L2 connectivity.
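The VN2VN idea, endpoints that claim locally unique N_Port IDs and discover each other directly over L2 with no FCF or FC services stack, can be modeled with a toy Python sketch. This is a deliberately simplified illustration (the class names and the in-memory "fabric" standing in for the L2 segment are invented); FC-BB-6 specifies the actual FIP probe/claim exchanges and multicast discovery.

```python
import random

class Fabric:
    """Stand-in for a DCB-enabled L2 Ethernet segment."""
    def __init__(self):
        self.claimed = set()   # N_Port IDs already claimed on this segment
        self.nodes = []        # attached VN2VN endpoints

class Vn2VnNode:
    """Toy VN2VN endpoint: claims a locally unique N_Port ID, then
    announces itself to peers over the shared L2 segment."""

    def __init__(self, name: str, fabric: Fabric):
        self.name, self.fabric = name, fabric
        self.peers = {}  # discovered peers: port_id -> name
        # Probe until an unclaimed ID is found (FC-BB-6 uses FIP probes/claims)
        while True:
            candidate = random.randint(1, 0xFFFF)
            if candidate not in fabric.claimed:
                fabric.claimed.add(candidate)
                self.port_id = candidate
                break
        fabric.nodes.append(self)

    def announce(self):
        # Multicast beacon: every other node learns (port_id -> node name)
        for node in self.fabric.nodes:
            if node is not self:
                node.peers[self.port_id] = self.name
```

The essential property the sketch shows is that no central login server assigns addresses: uniqueness is negotiated among the endpoints themselves, which is what removes the FC services stack from small deployments.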
For larger businesses, a set of approaches is being investigated to remove the practical FC scaling restrictions that currently
limit deployment sizes. As this work continues, it is hoped that the standards will evolve not only to solve some of these
scaling limitations, but also to more fully address many of the other challenges that arise as a result of blending L2 switching,
L3 FC forwarding, and FC services.
Juniper fully understands these challenges, which are similar to the challenges of blending L2 Ethernet, L3 IP forwarding, and
higher level network services for routing. As part of Juniper’s 3-2-1 data center architecture, we have already demonstrated
many of these approaches with Juniper Networks EX Series Ethernet Switches, MX Series 3D Universal Edge Routers, SRX Series Services Gateways, and Juniper Networks Junos® Space.
A Brief Note on iSCSI
Although not the subject of this white paper, it is important to note that the implementation of DCB and products—such as
the QFX3500 and Juniper Networks QFabric™ family of products, along with the latest generation of CNAs and storage—
provides many benefits to iSCSI for those deployments where the FC-BB-5 standards prove too limiting.
This is of particular interest given that most CNAs and many storage subsystems can now be deployed, through different licensing, as either FCoE or iSCSI, insulating the end user from the outcome of the protocol debate.
Conclusion
Juniper Networks® QFX3500 Switch is the first top-of-rack switch built to solve all of the challenges posed by access-layer convergence. It is the first fully FC-BB-5-enabled gateway capable of easily supporting upstream DCB switches, including third-party embedded blade shelf switches. It works for both rack-mount and blade servers, and for organizations with combined or separate LAN and storage area network (SAN) teams. It is also the first product to leverage a new generation of powerful ASICs.
Industry firsts in many ways, the Juniper Networks EX4500 Ethernet Switch and QFX3500 Switch both support an FCoE transit switch mode, and the QFX3500 also supports FCoE-FC gateway mode. Both are fully standards-based, with rich implementations from a DCB and FC-BB-5 perspective and feature-rich L3 capabilities. The QFX3500 Switch is a purpose-built FCoE-FC gateway that includes fungible combined Ethernet/FC ports, a single PFE design, and low-latency cut-through switching.
There are a number of very practical server I/O access-layer convergence topologies that can be used as steps along the path to full network convergence. During 2011 and 2012, developments such as LAN on motherboard (LOM), QSFP, 40GbE, and the FCoE Direct Discovery Direct Attach model will further bring Ethernet economics to FCoE convergence efforts.
About Juniper Networks
Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers,
Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking.
The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.