Advanced Storage Area Network Design
Edward Mazurek
Technical Lead Data Center Storage Networking
@TheRealEdMaz
Agenda (BRKSAN-2883)
• Introduction
• Design Principles
• Storage Fabric Design Considerations
• Data Center SAN Topologies
• Intelligent SAN Services
• Q&A
© 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public
An Era of Massive Data Growth: Creating New Business Imperatives for IT
IDC April 2014: The Digital Universe of Opportunities: Rich Data and Increasing Value of Internet of Things
10X Increase in Data Produced (From 4.4T GB to 44T GB)
32B IoT Devices (Will be Connected to Internet)
85% of Data for Which Enterprises Will Have Liability and Responsibility
40% of Data Will Be “Touched” by Cloud
By 2020
Evolution of Storage Networking….
Block and/or File Arrays
Enterprise Apps (OLTP, VDI, etc.), Big Data, Scale-Out NAS, Cloud Storage (Object)
Multi-Protocol (FC, FICON, FCIP, FCoE, NAS, iSCSI, HTTP)
Performance (16G FC, 10GE, 40GE, 100GE)
Scale (Tens of Thousands P/V Devices, Billions of Objects)
Operational Simplicity (Automation, Self-Service Provisioning)
Enterprise Flash Drives = More IO
• Drive performance hasn’t changed since 2003 (15K drives)
• Supports new application performance requirements
• Price/performance making SSD more affordable
• Solid state drives dramatically increase IOPS that a given array can support
• Increased IO directly translates to increased throughput
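The bullet above ties IOPS directly to throughput. A minimal sketch of that relationship (the 8 KB IO size matches the chart's workload; the IOPS figure is illustrative, not a measured value):

```python
def throughput_mb_s(iops, io_size_kb):
    """Bandwidth implied by an IOPS figure at a fixed IO size."""
    return iops * io_size_kb / 1024  # KB/s -> MB/s

# e.g. a flash drive sustaining 40,000 IOPS of 8 KB reads
print(throughput_mb_s(40_000, 8))  # 312.5 MB/s
```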
Significantly More IO/s per Drive at Much Lower Response Time

[Chart: Response time (msec) vs. IOPs for a 100% random read miss, 8 KB workload, one drive per DA processor (8 processors); series: SATA drives (8 drives), 15K rpm drives (8 drives), Enterprise Flash drives (8 drives). Flash sustains far higher IOPs at far lower response time.]
• VSANs
• Zoning and Smart Zoning
• 16G and Forward Error Correction
• N-Port Virtualization
• Trunking and Port-channeling
• MDS Internal CRC handling
• Device-alias
• SAN Security
• FC and/or FCoE
Design Principles
VSANs
• A Virtual SAN (VSAN) Provides a Method to Allocate Ports within a Physical Fabric and Create Virtual Fabrics
• Analogous to VLANs in Ethernet
• Virtual fabrics created from larger cost-effective redundant physical fabric
• Reduces wasted ports of a SAN island approach
• Fabric events are isolated per VSAN, which gives further isolation for high availability
• FC features can be configured on a per-VSAN basis
• Standardized by the ANSI T11 committee; now part of the Fibre Channel standards as Virtual Fabrics
Introduced in 2002
Per Port Allocation
VSAN
• Assign ports to VSANs
• Logically separate fabrics
• Hardware enforced
• Prevents fabric disruptions
• RSCN sent within fabric only
• Each fabric service (zone server, name server, login server, etc.) operates independently in each VSAN
• Each VSAN is configured and managed independently
vsan database
  vsan 2 interface fc1/1
  vsan 2 interface fc1/2
  vsan 4 interface fc1/8
  vsan 4 interface fc1/9

phx2-9513# show fspf vsan 43
FSPF routing for VSAN 43
FSPF routing administration status is enabled
FSPF routing operational status is UP
It is an intra-domain router
Autonomous region is 0
MinLsArrival = 1000 msec , MinLsInterval = 2000 msec
Local Domain is 0xe6(230)
Number of LSRs = 3, Total Checksum = 0x00012848

phx2-9513# show zoneset active vsan 43
zoneset name UCS-Fabric-B vsan 43
  zone name UCS-B-VMware-Netapp vsan 43
[Figure: Zoneset 1 spans two fabrics. VSAN 2 contains Zones A, B, C with Host1, Host2 and Disk1-Disk4; VSAN 3 contains Zones A, B with Host3, Host4 and Disk5, Disk6.]
Zoning & VSANs
1. Assign physical ports to VSANs
2. Configure zones within each VSAN
• A zone consists of multiple zone members
3. Assign zones to a zoneset
• Each VSAN has its own zoneset
4. Activate the zoneset in the VSAN
• Members in a zone can access each other; members in different zones cannot access each other
• Devices can belong to more than one zone
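The access rule above (members communicate only if they share a zone; a device may sit in several zones) can be sketched as a simple membership check. The host/disk names mirror the figure and are illustrative:

```python
def can_communicate(a, b, zones):
    """Two members may talk iff at least one zone contains both."""
    return any(a in z and b in z for z in zones)

# Zones from the VSAN 2 example: Host1 belongs to two zones
zones = [
    {"Host1", "Disk1"},          # Zone A
    {"Host1", "Disk2", "Disk3"}, # Zone B
    {"Host2", "Disk4"},          # Zone C
]
print(can_communicate("Host1", "Disk2", zones))  # True
print(can_communicate("Host2", "Disk1", zones))  # False
```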
Zoning examples
• Non-zoned devices are members of the default zone
• A physical fabric can have a maximum of 16,000 zones (in a fabric of MDS 9700 switches only)
• Attributes can include pWWN, FC alias, FCID, FWWN, Switch Interface fc x/y, Symbolic node name, Device alias
zone name AS01_NetApp vsan 42
  member pwwn 20:03:00:25:b5:0a:00:06
  member pwwn 50:0a:09:84:9d:53:43:54

device-alias name AS01 pwwn 20:03:00:25:b5:0a:00:06
device-alias name NTAP pwwn 50:0a:09:84:9d:53:43:54

zone name AS01_NetApp vsan 42
  member device-alias AS01
  member device-alias NTAP
The Trouble with Sizable Zoning: All Zone Members Are Created Equal
Standard zoning model just has “members”
Any member can talk to any other member
Recommendation: 1-1 zoning
Each pair consumes an ACL entry in TCAM
Result: n*(n-1) entries

[Chart: Number of ACL entries (0-10,000) vs. number of zone members (0-100); ACL consumption grows quadratically with zone size.]
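The quadratic growth behind the chart can be checked directly: every ordered pair of members in a zone consumes one ACL entry, so n members cost n*(n-1) entries.

```python
def acl_entries(members):
    """TCAM ACL entries consumed by one zone with the given member count:
    one entry per ordered (source, destination) pair."""
    return members * (members - 1)

print(acl_entries(10))   # 90
print(acl_entries(100))  # 9900 -- near the 10,000 mark on the chart
```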
Smart Zoning

• Feature added in NX-OS 5.2(6)
• Allows storage admins to create larger zones while still keeping the premise of single initiator & single target
• Dramatic reduction in SAN administrative time for zoning
• Utility to convert an existing zone or zoneset to Smart Zoning

Example: 8 initiators (I) x 4 targets (T)

Today – 1:1 Zoning
  Operation          Zones   Cmds   ACLs
  Create zone(s)     32      96     64
  Add an initiator   +4      +12    +8
  Add a target       +8      +24    +16

Today – Many-to-Many
  Operation          Zones   Cmds   ACLs
  Create zone(s)     1       13     132
  Add an initiator   -       +1     +24
  Add a target       -       +1     +24

Smart Zoning
  Operation          Zones   Cmds   ACLs
  Create zone(s)     1       13     64
  Add an initiator   -       +1     +8
  Add a target       -       +1     +16
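The ACL columns in the tables above follow from two formulas: initiator-target pairs cost two entries each (one per direction), while a plain many-to-many zone pays for every ordered pair of members. A quick check against the 8-initiator, 4-target example:

```python
def acls_initiator_target(initiators, targets):
    """1:1 zoning and Smart Zoning both program only I-T pairs, both directions."""
    return 2 * initiators * targets

def acls_many_to_many(initiators, targets):
    """A single flat zone programs every ordered pair of members."""
    n = initiators + targets
    return n * (n - 1)

print(acls_initiator_target(8, 4))  # 64  (1:1 zoning and Smart Zoning)
print(acls_many_to_many(8, 4))      # 132 (many-to-many)
```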
How to enable Smart Zoning
[Screens: new zone | existing zone]
Zoning Best Practices
• no zone default-zone permit
• All devices must be explicitly zoned
• zone mode enhanced
• Acquires lock on all switches while zoning changes are underway
• Enables full zoneset distribution
• zone smart-zoning enable
• Allows for more efficient zoning
• zone confirm-commit
• Causes zoning changes to be displayed during zone commit
• zoneset overwrite-control – New in NX-OS 6.2(13)
• Prevents a different zoneset than the currently activated zoneset from being inadvertently activated
[Figure: Zoneset 1 with zones across VSAN 2 (Host1, Host2, Disk1-Disk4) and VSAN 3 (Host3, Host4, Disk5, Disk6); IVR lets selected devices cross the VSAN boundary.]
IVR - Inter-VSAN Routing
• Enables devices in different VSANs to communicate
• Allows selective routing between specific members of two or more VSANs
• Traffic flow between selective devices
• Resource sharing, i.e., tape libraries and disks
• IVR Zoneset
• A collection of IVR zones that must be activated to be operational
Forward Error Correction - FEC
• Allows for the correction of some errors in frames
• Almost zero latency penalty
• Can prevent SCSI timeouts and aborts
• Applies to MDS 9700 FC and MDS 9396S
• Applies to 16G fixed-speed FC ISLs only (switchport speed 16000)
• Configured via: switchport fec tts
• No reason not to use it!
9710-2# show interface fc1/8
fc1/8 is trunking
…
Port mode is TE
Port vsan is 1
Speed is 16 Gbps
Rate mode is dedicated
Transmit B2B Credit is 500
Receive B2B Credit is 500
B2B State Change Number is 14
Receive data field Size is 2112
Beacon is turned off
admin fec state is up
oper fec state is up
Trunk vsans (admin allowed and active) (1-2,20,237)
Trunking & Port Channels
Up to 16 links can be combined into a PortChannel
increasing the aggregate bandwidth by distributing traffic
granularly among all functional links in the channel
Load balances across multiple links and maintains optimum
bandwidth utilization. Load balancing is based on the
source ID, destination ID, and exchange ID
If one link fails, traffic previously carried on this link is
switched to the remaining links. To the upper protocol, the
link is still there, although the bandwidth is diminished. The
routing tables are not affected by link failure
Single-link ISL or PortChannel ISL can be configured to
become EISL – (TE_Port)
Traffic engineering with pruning VSANs on/off the trunk
Efficient use of ISL bandwidth
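The load-balancing rule above (source ID, destination ID, exchange ID) can be sketched as a hash over those three fields, so all frames of one exchange stay on one member link and in order. Python's `hash` is a stand-in here; the real MDS hash is hardware-specific:

```python
def choose_link(sid, did, oxid, n_links):
    """Pick a port-channel member for a frame. Hashing (SID, DID, OXID)
    keeps every frame of an exchange on the same link (in-order delivery)
    while spreading distinct exchanges across all members."""
    return hash((sid, did, oxid)) % n_links

link = choose_link(0x020001, 0xAC0600, 0x1234, 8)
assert link == choose_link(0x020001, 0xAC0600, 0x1234, 8)  # stable per exchange
assert 0 <= link < 8
```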
[Figures: trunking, with a single TE_Port-to-TE_Port link carrying VSANs 1-3; and a PortChannel bundling multiple E/TE ports between switches.]
N-Port Virtualization
• N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a “switch” to act like a server/HBA performing multiple fabric logins through a single physical link
• Physical servers connect to the NPV switch and login to the upstream NPIV core switch
• No local switching is done on an FC switch in NPV mode
• FC edge switch in NPV mode does not take up a domain ID
• Helps to alleviate domain ID exhaustion in large fabrics
Scaling Fabrics with Stability
[Figure: Server1-Server3 (N_Port IDs 1-3) attach to F-Ports on a blade-server NPV switch, whose NP-Port logs in to an F_Port on the FC NPIV core switch.]
phx2-9513 (config)# feature npiv
Comparison Between NPIV and NPV
NPIV (N-Port ID Virtualization)
• Used by HBAs and FC switches
• Enables multiple logins on a single interface
• Allows the SAN to control and monitor virtual machines (VMs)
• Used for VMware, MS Virtual Server and Linux Xen applications

NPV (N-Port Virtualizer)
• Used by FC switches (MDS 9124, 9148, 9148S, etc.), FCoE switches (Nexus 5K), blade switches and Cisco UCS Fabric Interconnects (UCS 6100, 6200, 6300)
• Aggregates multiple physical/logical logins to the core switch
• Addresses the explosion of the number of FC switches
• Used for server consolidation applications
• Called "End Host Mode" in UCS
NPV Uplink Selection
NPV supports automatic selection of NP uplinks. When a server interface is brought up, the NP uplink
interface with the minimum load is selected from the available NP uplinks in the same VSAN as the
server interface.
When a new NP uplink interface becomes operational, the existing load is not redistributed automatically
to include the newly available uplink. Server interfaces that become operational after the NP uplink can
select the new NP uplink.
Manual method with NPV Traffic-Maps associates one or more NP uplink interfaces with a server
interface.
Note: with parallel NPV uplinks, each server's traffic is pinned to one uplink. With SAN port-channels and NPV, actual traffic is load-balanced across the members.
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_0_1a/CLIConfigurationGuide/npv.html#wp1534672
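The selection rule described above (least-loaded operational NP uplink in the server's VSAN) can be sketched as follows; the field names and login counts are illustrative, not an NX-OS data structure:

```python
def select_uplink(uplinks, server_vsan):
    """Pick the least-loaded operational NP uplink in the server's VSAN.
    Returns None if no eligible uplink exists (server login would fail)."""
    candidates = [u for u in uplinks
                  if u["vsan"] == server_vsan and u["up"]]
    if not candidates:
        return None
    return min(candidates, key=lambda u: u["logins"])["name"]

uplinks = [
    {"name": "fc1/1", "vsan": 10, "up": True, "logins": 5},
    {"name": "fc1/2", "vsan": 10, "up": True, "logins": 2},
    {"name": "fc1/3", "vsan": 20, "up": True, "logins": 0},
]
print(select_uplink(uplinks, 10))  # fc1/2 -- fewest logins in VSAN 10
```

Note that, matching the slide, this decision is made once at server login; an uplink that comes up later attracts only new logins unless disruptive load balancing is enabled.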
NPV Uplink Selection – UCS Example
• NPV uplink selection can be automatic or manual
• With UCS autoselection, the vHBAs will be uniformly assigned to the available uplinks depending on the number of logins on each uplink
[Figure: blade server vHBAs spread across NP-Port uplinks from a Cisco Nexus 5548 NPV switch (ports FC1/1-FC1/6) to F_Ports on the FC NPIV core switch.]
Uplink Port Failure
• Failure of an uplink moves pinned hosts from the failed port to the remaining up port(s)
• Path selection is the same as when new hosts join the NPV switch and a pathing decision is made
[Figure: one NP uplink from the Cisco Nexus 5548 NPV switch goes down; the 2 devices pinned to it re-login through the remaining uplinks to the FC NPIV core switch.]
Uplink Port Recovery
• No automatic redistribution of hosts to recovered NP port
[Figure: the failed NP uplink comes back up; previously moved hosts remain on their current uplinks.]
New F-Port Attached Host
• New host entering fabric is automatically pinned to recovered NP_Port
• Previously pinned hosts are still not automatically redistributed
[Figure: a newly attached host is automatically pinned to the recovered NP uplink on the Cisco Nexus 5548 NPV switch.]
Auto-Load-Balance
npv_switch(config)# npv auto-load-balance disruptive
This is Disruptive!
Disruptive load balance works independent of automatic selection of interfaces and a configured traffic map of external
interfaces. This feature forces reinitialization of the server interfaces to achieve load balance when this feature is
enabled and whenever a new external interface comes up. To avoid flapping the server interfaces too often, enable this
feature once and then disable it whenever the needed load balance is achieved.
If disruptive load balance is not enabled, you need to manually flap the server interface to move some of the load to a
new external interface.
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/6_2/configuration/guides/interfaces/nx-os/cli_interfaces/npv.html#pgfId-1072790
F-Port Port Channel and F-Port Trunking: Enhanced Blade Switch Resiliency

F-Port Port Channel w/ NPV
• Bundle multiple ports into one logical link
• Any port, any module
• High availability (HA)
  • Failures of a cable, port, or line card are transparent to blade servers
• Traffic management
  • Higher aggregate bandwidth
  • Hardware-based load balancing

F-Port Trunking w/ NPV
• Partition the F-Port to carry traffic for multiple VSANs
• Extend VSAN benefits to blade servers
  • Separate management domains
  • Separate fault isolation domains
  • Differentiated services: QoS, security
[Figures: a blade system (Blade 1-N) connected to a core director and storage via an F-Port port channel (N-Ports to F-Ports), and via F-Port trunking carrying VSANs 1-3 over a single F-Port.]
FLOGI – Before Port Channel
phx2-5548-3# show flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
--------------------------------------------------------------------------------
fc2/9 12 0x020000 20:41:00:0d:ec:fd:9e:00 20:0c:00:0d:ec:fd:9e:01
fc2/9 12 0x020001 20:02:00:25:b5:0b:00:02 20:02:00:25:b5:00:00:02
fc2/9 12 0x020002 20:02:00:25:b5:0b:00:04 20:02:00:25:b5:00:00:04
fc2/9 12 0x020003 20:02:00:25:b5:0b:00:01 20:02:00:25:b5:00:00:01
fc2/10 12 0x020020 20:42:00:0d:ec:fd:9e:00 20:0c:00:0d:ec:fd:9e:01
fc2/10 12 0x020021 20:02:00:25:b5:0b:00:03 20:02:00:25:b5:00:00:03
fc2/10 12 0x020022 20:02:00:25:b5:0b:00:00 20:02:00:25:b5:00:00:00
Total number of flogi = 7
phx2-5548-3#
[Topology: Fabric Interconnect (D2) ports fc2/1-2 uplinked to 5548 ports fc2/9-10.]
FLOGI – After Port Channel
phx2-5548-3# show flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
--------------------------------------------------------------------------------
San-po3 12 0x020040 24:0c:00:0d:ec:fd:9e:00 20:0c:00:0d:ec:fd:9e:01
San-po3 12 0x020001 20:02:00:25:b5:0b:00:02 20:02:00:25:b5:00:00:02
San-po3 12 0x020002 20:02:00:25:b5:0b:00:04 20:02:00:25:b5:00:00:04
San-po3 12 0x020003 20:02:00:25:b5:0b:00:01 20:02:00:25:b5:00:00:01
San-po3 12 0x020021 20:02:00:25:b5:0b:00:03 20:02:00:25:b5:00:00:03
San-po3 12 0x020022 20:02:00:25:b5:0b:00:00 20:02:00:25:b5:00:00:00
Total number of flogi = 6
phx2-5548-3#
[Topology: the same Fabric Interconnect (D2) to 5548 links, now bundled as san-po-3.]
Port-channel design considerations
• Name port-channels the same on both sides
• Common port allocation in both fabrics
• ISL speeds should be >= edge device speeds
• Maximum 16 members per port-channel allowed
• Multiple port-channels to same adjacent switch should be equal cost
• Make ISLs members of VSAN 1 and trunk the other VSANs
• Check TCAM usage on NPIV core switch:
• show system internal acl tcam-usage
• show system internal acltcam-soc tcam-usage
All types of switches
Port-channel design considerations
• Split port-channel members across multiple line cards
• When possible use same port on each LC:
• Ex. fc1/5, fc2/5, fc3/5, fc4/5, etc.
• If multiple members per linecard distribute across port-groups
• show port-resources module x
Director class
Port-channel design considerations
• Ensure enough credits for distance
• Can “rob” buffers from other ports in port-group that are “out-of-service”
• Split PC members across different FWD engines to distribute ACLTCAM
• For F port-channels to NPV switches (like UCS FIs)
  • Each device's zoning ACLTCAM programming is repeated on each port-channel member
• For E port-channels (or plain ISLs) using IVR
  • Each translated host/target session consumes ACLTCAM on each member
• Using the fwd-engine tables that follow, e.g. on a 9148S a 6-member port-channel could be split across the 3 fwd engines as:
  • fc1/1, fc1/2, fc1/17, fc1/18, fc1/33 and fc1/34
• Split large F port-channels into two separate port-channels each with half members
• Consider MDS 9396S for larger scale deployments
Fabric switches
Port-channel design considerations
• Check TCAM usage after major zoning operations
MDS9148s-1# show system internal acltcam-soc tcam-usage
TCAM Entries:
=============
                Region1    Region2    Region3       Region4    Region5    Region6
Mod Fwd Dir     TOP SYS    SECURITY   ZONING        BOTTOM     FCC DIS    FCC ENA
    Eng         Use/Total  Use/Total  Use/Total     Use/Total  Use/Total  Use/Total
--- --- ------  ---------  ---------  ------------  ---------  ---------  ---------
1   1   INPUT   19/407     1/407      98/2852 *     4/407      0/0        0/0
1   1   OUTPUT  0/25       0/25       0/140         0/25       0/12       1/25
1   2   INPUT   19/407     1/407      0/2852 *      4/407      0/0        0/0
1   2   OUTPUT  0/25       0/25       0/140         0/25       0/12       1/25
1   3   INPUT   19/407     1/407      0/2852 *      4/407      0/0        0/0
1   3   OUTPUT  0/25       0/25       0/140         0/25       0/12       1/25
---------------------------------------------------
Fabric switches
Note: the Zoning region is the most likely to be exceeded.
F port-channel design considerations
Ports are allocated to fwd-engines according to the following table:

Switch/Module  Fwd Engines  Port Range(s)                    Fwd-Eng  Zoning Region Entries  Bottom Region Entries
MDS 9148       3            fc1/25-36 & fc1/45-48            1        2852                   407
                            fc1/5-12 & fc1/37-44             2        2852                   407
                            fc1/1-4 & fc1/13-24              3        2852                   407
MDS 9250i      4            fc1/5-12 & eth1/1-8              1        2852                   407
                            fc1/1-4 & fc1/13-20 & fc1/37-40  2        2852                   407
                            fc1/21-36                        3        2852                   407
                            ips1/1-2                         4        2852                   407
MDS 9148S      3            fc1/1-16                         1        2852                   407
                            fc1/17-32                        2        2852                   407
                            fc1/33-48                        3        2852                   407
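The MDS 9148S row of the table (16 consecutive ports per engine) can be expressed as a small mapping function, useful when scripting a port-channel member layout that spreads ACLTCAM load; this is a sketch of the table, not an NX-OS API:

```python
def fwd_engine_9148s(port):
    """MDS 9148S: ports 1-16 -> engine 1, 17-32 -> 2, 33-48 -> 3 (per table)."""
    if not 1 <= port <= 48:
        raise ValueError("MDS 9148S has ports 1-48")
    return (port - 1) // 16 + 1

# The 6-member split suggested earlier lands two members on each engine:
members = [1, 2, 17, 18, 33, 34]
print([fwd_engine_9148s(p) for p in members])  # [1, 1, 2, 2, 3, 3]
```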
F port-channel design considerations
Switch/Module  Fwd Engines  Port Range(s)  Fwd-Eng  Zoning Region Entries  Bottom Region Entries
MDS 9396S      12           1-8            0        49136                  19664
                            9-16           1        49136                  19664
                            17-24          2        49136                  19664
                            25-32          3        49136                  19664
                            33-40          4        49136                  19664
                            41-48          5        49136                  19664
                            49-56          6        49136                  19664
                            57-64          7        49136                  19664
                            65-72          8        49136                  19664
                            73-80          9        49136                  19664
                            81-88          10       49136                  19664
                            89-96          11       49136                  19664
F port-channel design considerations
Switch/Module   Fwd Engines  Port Range(s)  Fwd-Eng  Zoning Region Entries  Bottom Region Entries
DS-X9248-48K9   1            1-48           0        27168                  2680
DS-X9248-96K9   2            1-24           0        27168                  2680
                             25-48          1        27168                  2680
DS-X9224-96K9   2            1-12           0        27168                  2680
                             13-24          1        27168                  2680
DS-X9232-256K9  4            1-8            0        49136                  19664
                             9-16           1        49136                  19664
                             17-24          2        49136                  19664
                             25-32          3        49136                  19664
DS-X9248-256K9  4            1-12           0        49136                  19664
                             13-24          1        49136                  19664
                             25-36          2        49136                  19664
                             37-48          3        49136                  19664
F port-channel design considerations
Switch/Module   Fwd Engines  Port Range(s)  Fwd-Eng  Zoning Region Entries  Bottom Region Entries
DS-X9448-768K9  6            1-8            0        49136                  19664
                             9-16           1        49136                  19664
                             17-24          2        49136                  19664
                             25-32          3        49136                  19664
                             33-40          4        49136                  19664
                             41-48          5        49136                  19664
MDS Internal CRC handling
• New feature in 6.2(13) to handle frames internally corrupted due to bad HW
• Frames that arrive already corrupted are dropped at the ingress port; those frames are not covered by this feature
• In rare cases frames can get corrupted internally due to bad hardware, and are then dropped
• This condition is sometimes difficult to detect
• The new feature detects the condition and isolates the failing hardware
• There are 5 possible stages where frames can get corrupted
Internal CRC handling
Stages of Internal CRC Detection and Isolation
The five possible stages at which internal CRC errors may occur in a switch:
1. Ingress buffer of a module
2. Ingress crossbar of a module
3. Crossbar of a fabric module
4. Egress crossbar of a module
5. Egress buffer of a module
Internal CRC handling
• The modules that support this functionality are:
• Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
• Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet Switching Module
• Cisco MDS 9700 Fabric Module 1
• Cisco MDS 9700 Supervisors
• Enabled via the following configuration command:
• hardware fabric crc threshold 1-100
• When detected failing module is powered down
• New in NX-OS 6.2(13)
Device-alias
• device-alias (DA) is a way of naming pWWNs
• DAs are distributed on a fabric basis via CFS
• device-alias database is independent of VSANs
• If a device is moved from one VSAN to another no DA changes are needed
• device-alias can run in two modes:
• Basic – device-alias names can be used but PWWNs are substituted in config
• Enhanced – device-alias names exist in configuration natively – Allows rename without zoneset re-activations
• device-aliases are used in zoning, IVR zoning and port-security
• copy running-config startup-config fabric after making changes!
Device-alias
• device-alias confirm-commit
• Displays the changes and prompts for confirmation
MDS9710-2(config)# device-alias confirm-commit enable
MDS9710-2(config)# device-alias database
MDS9710-2(config-device-alias-db)# device-alias name edm pwwn 1000000011111111
MDS9710-2(config-device-alias-db)# device-alias commit
The following device-alias changes are about to be committed
+ device-alias name edm pwwn 10:00:00:00:11:11:11:11
Do you want to continue? (y/n) [n]
Device-alias
• Note: To prevent problems the same device-alias is only allowed once per commit.
• Example:
MDS9148s-1(config)# device-alias database
MDS9148s-1(config-device-alias-db)# device-alias name test pwwn 1122334455667788
MDS9148s-1(config-device-alias-db)# device-alias rename test test1
Command rejected. Device-alias reused in current session :test
Please use 'show device-alias session rejected' to display the rejected set of commands and for the device-alias best-practices recommendation.
SAN Design Security Challenges
SAN design security is often overlooked as an area of concern
• Application integrity and security is addressed, but not back-end storage network carrying actual data
• SAN extension solutions now push SANs outside datacenter boundaries
Not all compromises are intentional
• Accidental breaches can still have the same consequences
SAN design security is only one part of complete data center solution
• Host access security—one-time passwords, auditing, VPNs
• Storage security—data-at-rest encryption, LUN security
[Figure: threats across the SAN and LAN: unauthorized connections (internal), application tampering (Trojans, etc.), privilege escalation/unintended privilege, external DoS or other intrusion, data tampering, theft.]
SAN Security
Secure management access
• Role-based access control – CLI, SNMP, Web
Secure management protocols
• SSH, SFTP, and SNMPv3
Secure switch control protocols
• TrustSec
• FC-SP (DH-CHAP)
AAA – RADIUS, TACACS+ and LDAP
• User, switch and iSCSI host authentication
Fabric Binding
• Prevents unauthorized switches from joining the fabric
Port-security
• Ensures only approved devices log in to the fabric
FC CT Management Security
• Ensures only approved devices send FC CT commands
[Figure: device/SAN management security via SSH, SFTP, SNMPv3 and user roles; SAN protocol security (FC-SP); VSANs provide secure isolation of shared physical storage; hardware-based zoning via port and WWN; a RADIUS, TACACS+ or LDAP server for authentication; port-security for iSCSI-attached servers.]
FC CT Management security New in NX-OS 6.2(9)
Slow Drain
• Devices can impart slowness in a fabric
• MDS has advanced features that will identify and mitigate slow draining devices
• BRKSAN-3446 SAN Congestion! Understanding, Troubleshooting, Mitigating in a Cisco Fabric
Slow Drain Device Detection and Congestion Avoidance white paper (2013):
http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps12970/white_paper_c11-729444.pdf
FC and/or FCoE ?
• FC and FCoE are widely available on all Nexus and MDS switches
• FC and FCoE freely interoperate
• FCoE is Operationally Identical
• Need to understand data throughput for various speeds
• FCoE is newer and has less diagnostics
• Distance must be considered for both FC and FCoE
FC and FCoE are widely available on both Nexus and MDS switches
• MDS supports 2/4/8/16G FC and 10/40G FCoE
• MDS does not support consolidated I/O
• Nexus 2000/5000/6000 supports 2/4/8/16G FC and 10/40G FCoE
• Supports consolidated I/O
• Nexus 7000 supports 10/40G FCoE
• Nexus 9000 ToR (Top of Rack) supports FCoE in NPV mode
FC and FCoE freely interoperate
[Figure: a mixed fabric of MDS and N7K switches, with Nexus 2/5/6K at the edge, interconnected over both FC and FCoE links and attaching an FC disk array, an FCoE disk array and an FC tape system.]
FCoE is Operationally Identical
• Supports both FC and FCoE
• FCoE is treated exactly the same as FC
• After zoning, devices perform registration and then perform discovery
phx2-9513# show fcns database vsan 42
VSAN 42:--------------------------------------------------------------------------FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE--------------------------------------------------------------------------0xac0600 N 50:0a:09:83:8d:53:43:54 (NetApp) scsi-fcp:target0xac0700 N 50:0a:09:84:9d:53:43:54 (NetApp) scsi-fcp:target0xac0c00 N 20:41:54:7f:ee:07:9c:00 (Cisco) npv0xac1800 N 10:00:00:00:c9:6e:b7:f0 scsi-fcp:init fc-gs0xef0000 N 20:01:a0:36:9f:0d:eb:25 scsi-fcp:init fc-gs
Which
are
FCoE
hosts?
The Story of Interface Speeds
• Comparing speeds is more complex than just the "apparent" speed
• Data throughput is based on both the interface clocking (how fast the interface transmits) and how efficiently the interface transmits (how much encoding overhead)

Protocol   Clocking (Gbps)  Encoding  Data Rate (Gbps)  Data Rate (MB/s)
8G FC      8.500            8b/10b    6.8               850
10G FC     10.51875         64b/66b   10.2              1,275
10G FCoE   10.3125          64b/66b   10.0              1,250
16G FC     14.025           64b/66b   13.6              1,700
32G FC     28.050           64b/66b   27.2              3,400
40G FCoE   41.250           64b/66b   40.0              5,000
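The data-rate column follows from the clocking rate times the encoding efficiency; a quick sketch reproducing two rows of the table:

```python
def data_rate_gbps(clock_gbps, enc_data_bits, enc_total_bits):
    """Usable data rate after line-encoding overhead
    (8b/10b carries 8 data bits per 10 line bits; 64b/66b carries 64 per 66)."""
    return clock_gbps * enc_data_bits / enc_total_bits

print(round(data_rate_gbps(8.5, 8, 10), 1))      # 6.8  -- 8G FC
print(round(data_rate_gbps(14.025, 64, 66), 1))  # 13.6 -- 16G FC
```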
FCoE is newer and has less diagnostics
• Nexus switches, especially the Nexus 5000/6000, are best suited for smaller deployments
• MDS FC has the most robust:
  • Capabilities
    • Director class – dual supervisors
    • FC-SP/TrustSec
    • FEC
    • Smart Zoning
  • Diagnostics
    • Port-monitor
    • ISL diagnostics
    • SFP detailed diagnostics
  • Troubleshooting
    • Slowport-monitor
    • TxWait
    • SNMP OIDs for slow drain
SAN Extension – FC over Long Distance: BB_Credits and Distance

A full (2112-byte) FC frame occupies roughly:
• ~1 km of fiber @ 2 Gbps
• ~0.5 km @ 4 Gbps
• ~0.25 km @ 8 Gbps
• ~0.125 km @ 16 Gbps

phx2-9513(config)# feature fcrxbbcredit extended
phx2-9513(config)# interface fc1/1
phx2-9513(config-if)# switchport fcrxbbcredit extended 1000

phx2-9513# show interface fc1/1
fc1/1 is up
…..
Transmit B2B Credit is 128
Receive B2B Credit is 1000

• BB_Credits are used to ensure enough FC frames are in flight
• As distance increases, the number of available BB_Credits needs to increase as well
• Insufficient BB_Credits will throttle performance – no data is transmitted until R_RDY is returned
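The frame-length figures above reduce to a rule of thumb of roughly half a buffer credit per km per Gbps for full-size frames; this sketch is a planning estimate, not a substitute for the platform's credit configuration guidance:

```python
import math

def min_bb_credits(distance_km, speed_gbps):
    """Rough minimum BB_Credits to keep a long-haul FC ISL full:
    ~0.5 credit per km per Gbps, assuming full-size (2112-byte) frames.
    Smaller frames need proportionally more credits."""
    return math.ceil(distance_km * speed_gbps / 2)

print(min_bb_credits(1, 16))    # 8   -- the "8 credits per km @ 16G" figure
print(min_bb_credits(100, 8))   # 400 -- a 100 km 8G link
```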
Distance must be considered for both FC and FCoE
• Both FC and FCoE are distance sensitive
• For FC - if insufficient credits then link will go idle
• 8 credits per KM @ 16Gbps
• MDS Fabric switches and Nexus have lower amounts of B2B credits
[Figure: with 5 Tx credits, five frames are sent and remaining credits reach 0 at t0; the link then sits idle until an R_RDY returns a credit at t1.]
SAN Extension – FCoE over Long Distance: FCoE Flow Control

For long-distance FCoE, the receiving switch's ingress buffer must be large enough to absorb all packets in flight from the time the Pause frame is sent to the time the Pause frame is received.
• A 10GE, 50 km link can hold ~300 frames
• That means 600+ frames could be in flight or transmitted between the receiver detecting buffer congestion and sending a Pause frame, and the sender receiving the Pause frame and stopping transmission

[Figure: frames accumulate in the ingress latency buffer; a Pause is sent when the buffer crosses the pause threshold. Latency buffer tuning is platform specific.]
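The buffer-sizing argument above can be sketched numerically. The propagation delay (~5 µs/km in fiber) and the full-size FCoE frame assumption are mine; average frame sizes on a real link are smaller, which pushes the in-flight count higher:

```python
def frames_in_flight(distance_km, rate_gbps, frame_bytes=2240):
    """Round-trip frames the ingress buffer must absorb after sending Pause.
    Assumes ~5 us/km propagation in fiber and full-size FCoE frames."""
    rtt_s = 2 * distance_km * 5e-6
    bits_in_flight = rate_gbps * 1e9 * rtt_s
    return int(bits_in_flight / (frame_bytes * 8))

# 10GE over 50 km: hundreds of full-size frames in flight round trip
print(frames_in_flight(50, 10))
```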
Distance must be considered for both FC and FCoEFCoE PFC Pause
[Figure: PFC pause sequence. At time t0 the receiver's Xoff threshold is reached and a Pause frame is sent; at time t1 the sender receives the Pause frame; at time t2 traffic is paused. Frames 9 and 10, transmitted between t0 and t2, overflow the undersized ingress buffer and are dropped.]
FCoE buffers must be large enough to hold the frames that can be transmitted in one round-trip time (RTT)
FCoE drops can occur otherwise
Verify the supported distance for each FCoE device
BRKSAN-2883 60
SAN designs are traditionally robust: dual fabrics, because data loss is not tolerated
Must manage ratios
• Fan in/out
• ISL oversubscription
• Virtualized storage IO streams (NPIV attached devices, server RDM, LPARs, etc.)
• Queue depth
Latency
• Initiator to target
• Slow drain
• Performance under load: does my fabric perform the same?
Application independence
• Consistent fabric performance regardless of changes to the SCSI profile:
• Number of frames
• Frame size
• Speed or throughput
The Importance of “Architecture”
BRKSAN-2883 62
SAN Major Design Factors
Port density
• How many now, how many later?
• Topology to accommodate port requirements
Network performance
• What is acceptable? Unavoidable?
Traffic management
• Preferential routing or resource allocation
Fault isolation
• Consolidation while maintaining isolation
Management
• Secure, simplified management
[Figure: four design factors annotated on a core-edge topology: (1) high-performance crossbar, (2) QoS, congestion control, and reduced FSPF routes, (3) failure of one device has no impact on others, (4) large port count directors.]
BRKSAN-2883 63
Scalability—Port Density
Topology Requirements
Considerations
• Number of ports for end devices
• How many ports are needed now?
• What is the expected life of the SAN?
• How many will be needed in the future?
• Hierarchical SAN design
Best Practice
• Design to cater for future requirements
• Doesn’t imply “build it all now,” but means “cater for it” and avoids costly retrofits tomorrow
Large Port Count
Directors
BRKSAN-2883 64
Scalability—Port Density – MDS Switch selection
• MDS 9148S – 48 ports 16G FC
• MDS 9250i – 40 ports 16G FC + 8 port 10G FCoE + 2 FCIP ports
• MDS 9396S – 96 ports 16G FC
• MDS 9706 – Up to 192 ports 16G FC and/or 10G FCoE and/or 40G FCoE
• MDS 9710 – Up to 384 ports 16G FC and/or 10G FCoE and/or 40G FCoE
• MDS 9718 – Up to 768 ports 16G FC and/or 10G FCoE and/or 40G FCoE
• All MDS 97xx chassis are 32G ready!
• All 16G MDS platforms are full line rate
BRKSAN-2883 65
Scalability—Port Density – Nexus Switch selection
• Nexus 55xx – Up to 96 ports 10G FCoE and/or 8G FC ports
• Nexus 5672UP – Up to 48 10G FCoE and/or 16 8G FC ports
• Nexus 5672UP-16G – Up to 48 10G FCoE and/or 24 16G FC ports
• Nexus 5624Q – 12 ports 40G or 48 ports 10G FCoE
• Nexus 5648Q – 24 ports 40G or 96 ports 10G FCoE
• Nexus 5696Q – Up to 32 ports 100G / 96 ports 40G / 384 ports 10G FCoE or 60 8G FC
• Nexus 56128P – Up to 96 10G FCoE and/or 48 8G FC ports
• All Nexus platforms are full line rate
BRKSAN-2883 66
Traffic Management
Do different apps/servers have different performance requirements?
• Should bandwidth be reserved for specific applications?
• Is preferential treatment/QoS necessary?
Given two alternate paths for traffic between data centers, should traffic use one path in preference to the other?
• Preferential routes
QoS, Congestion
Control, Reduce FSPF
Routes
BRKSAN-2883 67
Port channels help reduce oversubscription while maintaining HA requirements.
Host oversubscription: the largest variance is observed at this level; DB servers run close to line rate while others are highly oversubscribed.
16Gb line cards are non-oversubscribed.
Network Performance
Oversubscription Design Considerations
All SAN Designs Have Some Degree of Oversubscription
• Without oversubscription, SANs would be too costly
• Oversubscription is introduced at multiple points
• Switches are rarely the bottleneck in SAN implementations
• Device capabilities (peak and sustained) must be considered along with network oversubscription
• Must consider oversubscription during a network failure event
Disk Oversubscription
Disks do not sustain wire-rate I/O with 'realistic' I/O mixtures
Vendors may recommend host-to-disk fan-out ratios from 6:1 to as high as 20:1
Highly application dependent
ISL Oversubscription
Two-tier design ratio less than fan-out ratio
Tape Oversubscription
Need to sustain close to maximum data rate
LTO-6 Native Transfer Rate ~ 160 MBps
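The fan-out and ISL ratios above reduce to simple bandwidth arithmetic. A hypothetical helper of my own (the port counts and speeds are illustrative, not recommendations):

```python
def oversubscription(edge_ports: int, edge_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of offered edge bandwidth to available uplink bandwidth."""
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)

# 48 hosts at 16G sharing 4 x 16G ISLs -> 12:1 host-to-ISL oversubscription
print(f"{oversubscription(48, 16, 4, 16):.0f}:1")
# 20 hosts at 8G fanning in to a single 16G storage port -> 10:1 fan-out
print(f"{oversubscription(20, 8, 1, 16):.0f}:1")
```

Remember to rerun the math for the failure case: losing half the ISLs doubles the ratio.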
BRKSAN-2883 68
Fault Isolation
Consolidation of Storage
• Single Fabric = Increased Storage Utilization + Reduced Administration Overhead
Major Drawback
• Faults Are No Longer Isolated
• Technologies such as VSANs enable consolidation and scalability while maintaining security and stability
• VSANs constrain fault impacts
• Faults in one virtual fabric (VSAN) are contained and do not impact other virtual fabrics
Physical SAN Islands Are Virtualized onto Common
SAN Infrastructure
Fabric#1
Fabric#3
Fabric#2
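A minimal MDS NX-OS sketch of carving the VSANs described above (VSAN numbers, names, and interface IDs are hypothetical):

```
vsan database
  vsan 10 name FABRIC_1        ! former physical island #1
  vsan 20 name FABRIC_2        ! former physical island #2
  vsan 10 interface fc1/1      ! assign device-facing ports per island
  vsan 20 interface fc1/2
```

Each VSAN runs its own zoning, name server, and FSPF instances, which is what contains a fault to one virtual fabric.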
BRKSAN-2883 69
Structured Cabling
• Pricing advantage for manufactured cabling systems
• Removes guessing game of how many strands to pull per cabinet
• Growth at 6 or 12 LC ports per cassette
• Fiber-only cable plant designs possible
Supporting new EoR & ToR designs
BRKSAN-2883 72
Core-Edge
• Traditional SAN design for growing SANs
• High density directors in core and fabric switches, directors or blade switches on edge
• Predictable performance
• Scalable growth up to core and ISL capacity
• Evolves to support EoR & ToR
• MDS 9718 as core
Highly Scalable Network Design
Blade Server / End of Row / Top of Rack
BRKSAN-2883 74
Large Edge-Core-Edge / End-of-Row Design
Large Edge/Core/Edge (1,920/2,160 End Device Ports per Fabric)
• Traditional Edge-Core-Edge design Is ideal for very large centralized services and consistent host-disk performance regardless of location
• Full line rate ports, no fabric oversubscription
• 8Gb or 16Gb hosts and targets
• Services consolidated in the core
• Easy expansion
“A” Fabric Shown, Repeat for “B” Fabric
240 Storage ports at 16Gb
(optional 480 @ 8Gb without changing bandwidth ratios)
240 ISLs from storage edge to core @ 16Gb
240 ISLs from host edge to core @ 16Gb
1,680 hosts @ 8Gb or 16Gb
                             Per Fabric            Total
Ports Deployed               3,456                 6,912
Used Ports                   2,880 @ 16Gb          5,760 @ 16Gb
                             3,120 @ 8Gb           6,240 @ 8Gb
Storage Ports                240 @ 16Gb, or        480 @ 16Gb, or
                             480 @ 8Gb             960 @ 8Gb
Host Ports                   1,680                 3,360
ISL Ports                    960                   1,920
Host ISL Oversubscription    7:1 @ 16Gb
End-to-End Oversubscription  7:1 @ 16Gb storage, 7:1 @ 8Gb storage
MDS 9710
BRKSAN-2883 75
Very Large Edge-Core/End-of-Row Design
Very Large Edge/Core/Edge (4,608 End Device Ports per Fabric)
• Traditional Core-Edge design Is ideal for very large centralized services and consistent host-disk performance regardless of location
• Full line rate ports, no fabric oversubscription
• 16Gb hosts and targets
• Services consolidated in the core
• Easy expansion
“A” Fabric Shown, Repeat for “B” Fabric
576 (288 per switch) storage ports at 16Gb
768 (48 per switch) ISLs from host edge to core @ 16Gb
4,032 (252 per switch) hosts @ 8Gb or 16Gb
                             Per Fabric       Total
Ports Available              6,912            13,824
Used Ports                   6,144 @ 16Gb     12,288 @ 16Gb
Storage Ports                576 @ 16Gb       1,152 @ 16Gb
Host Ports                   4,032            8,064
ISL Ports                    1,536            3,072
Host ISL Oversubscription    7:1 @ 16Gb
End-to-End Oversubscription  7:1 @ 16Gb storage
MDS 9710
MDS 9718
BRKSAN-2883 76
SAN Top of Rack – MDS 9148S
Rack
                             Per Fabric       Total
Ports Deployed               2,688            5,376
Used Ports                   2,672            5,344
Storage Ports                176 @ 16Gb       352 @ 16Gb
Host Ports                   2,112            4,224
ISL Ports                    192              384
Host ISL Oversubscription    12:1 @ 16Gb
End-to-End Oversubscription  12:1 @ 16Gb hosts
4 ISLs from each edge to core @ 16Gb
352 storage ports at 16Gb
2,112 hosts @ 16Gb
48 racks, 44 dual-attached servers per rack
A B
SAN Top of Rack (2,288 End Device Ports per Fabric)
• Ideal for centralized services while reducing cabling requirements
• Consistent host/target performance regardless of location in rack
• 8Gb hosts & 16Gb targets
• Easy edge expansion until maxed out
• Massive cabling infrastructure avoided as compared to EoR designs
MDS 9710
MDS 9148S
Both “A” and “B” Fabric Shown
BRKSAN-2883 77
2 x 9148S per rack
Top-of-Rack Design - Blade Centers
Rack
Ports Deployed               1,920
Used Ports                   192 @ 16Gb, 1,056 @ 8Gb
Storage Ports                192 @ 16Gb, or 192 @ 8Gb
Host Ports                   2,304
Host ISL Oversubscription    4:1 @ 8Gb
End-to-End Oversubscription  6:1 @ 16Gb storage, 12:1 @ 8Gb storage
8 ISLs from each edge to core @ 8Gb
96 storage ports at 16Gb
960 hosts @ 8Gb
12 racks, 72 chassis
96 dual-attached blade servers per rack
SAN Top of Rack – Blade Centers (1,920 Usable Ports per Fabric)
• Ideal for centralized services
• Consistent host/target performance regardless of location in blade enclosure or rack
• 8Gb hosts & 16Gb targets
• Need to manage more SAN Edge switches/Blade Switches
• NPV attachment reduces fabric complexity
• Assumes little east-west SAN traffic
• Add blade server ISLs to reduce fabric oversubscription
A B
MDS 9710
Blade Center
Both “A” and “B” Fabric Shown
BRKSAN-2883 78
Medium Scale Dual Fabric
Collapsed Core Design
Ports Deployed               768
Used Ports                   768 @ 16Gb
Storage Ports                96 @ 16Gb
Host Ports                   672 @ 16Gb
Host ISL Oversubscription    N/A
End-to-End Oversubscription  7:1 @ 16Gb
96 Storage ports at 16Gb
672 hosts @ 16Gb
“A” Fabric Shown, Repeat for “B” Fabric
Medium Scale Dual Fabric
(768 End Device Ports per Fabric)
• Ideal for centralized services
• Consistent host/target performance regardless of location
• 8Gb or 16Gb hosts & targets (if they exist)
• Relatively easy edge expansion to Core/Edge
• EoR design
• Supports blade centers connectivity
MDS 9710
BRKSAN-2883 79
POD SAN Design
6 ISLs from each edge to core @ 16Gb
36-48 Storage ports at 16Gb
252 hosts @ 16Gb or 288 hosts @ 10Gb
6 Racks, 252 chassis
42 Dual-attached servers per rack
POD SAN Design - Ideal for centralized services
• Consistent host/target performance regardless of location in blade enclosure or rack
• 10/16Gb hosts & 16Gb targets
• Need to manage more SAN Edge switches/Blade Switches
• NPV attachment reduces fabric complexity
• Add blade server ISLs to reduce fabric oversubscription
A B
6 Racks, 288 blades
48 Dual-attached blade servers per rack
8 ISLs from each edge to core @ 8Gb
UCS FI 6248UP
MDS 9396SMDS 9396S
MDS 9148S
BRKSAN-2883 80
                             Per Fabric       Total
Ports Deployed               384              768
Used Ports                   336              672
Storage Ports                36-48 @ 16Gb     72-96 @ 16Gb
Host Ports                   252              504
ISL Ports                    36               72
Host ISL Oversubscription    7:1 @ 16Gb
End-to-End Oversubscription  7:1 @ 16Gb hosts
FI 6332-16UP, FI 6332 UCS SAN Design
FI 6332-16UP use case: UCS B-Series (B200, B260, B460 with IOM 2304) and UCS C-Series (C220, C240, C460) servers attach to the FI 6332-16UP at 40G; the FI uplinks at 40G to Nexus 7K/9K for Ethernet and at 16G FC to an MDS 9700 fronting the storage array.
FI 6332 use case: the same server attachment at 40G, but the FI 6332 uplinks to the MDS 9700 at 40G FCoE instead of native 16G FC.
BRKSAN-2883 81
Enhancing SAN Design with Services
Extend Fabrics
• FCIP
• Extended Buffer to Buffer credits
• Encrypt the pipe
SAN Services extend the effective distance for remote applications
• SAN IO acceleration
• Write acceleration
• Tape acceleration
Enhance array replication requirements
Reduces WAN-induced latency
Improves application performance over distance
Data Migration
SAN Extension with FCIP
Fabric is aware of all data frames from initiator to target
Data Migration withDMM IO Acceleration
BRKSAN-2883 83
SAN Extension with FCIP
• Encapsulation of Fibre Channel frames into IP packets, tunneled through an existing TCP/IP network infrastructure to connect geographically distant SAN islands
• Write Acceleration to improve throughput and latency
• Hardware-based compression
• Hardware-based IPSec encryption
Fibre Channel over IP
FCIP Tunnel TE Port
Array to Array Replication
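For orientation, a minimal FCIP tunnel sketch on MDS NX-OS with the features above enabled; the addresses and profile numbers are hypothetical, and exact options (e.g., IPsec) vary by platform and NX-OS release:

```
feature fcip
fcip profile 10
  ip address 10.1.1.1              ! local IP storage interface (hypothetical)
interface fcip1
  use-profile 10
  peer-info ipaddr 10.2.2.2        ! remote FCIP endpoint (hypothetical)
  write-accelerator                ! FCIP Write Acceleration
  ip-compression auto              ! hardware-based compression
  no shutdown
```

The resulting fcip interface joins the fabric as an ISL (E/TE port) once both endpoints are up.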
BRKSAN-2883 84
FC Redirect - How IOA Works
MAN/WAN
IOA= I/O Accelerator
IOA
IOA
IOA
IOA
Replication Starts
Initiator to target
Flow redirected to IOA Engine
Flow accelerated and sent towards normal routing path
Initiator Target
Virtual Initiator Virtual Target
Initiator Target
BRKSAN-2883 85
Data Acceleration
• Accelerate SCSI I/O
Over both Fibre Channel (FC) and Fibre Channel over IP (FCIP) links
For both Write Acceleration (WA) and Tape Acceleration (TA)
• I/O Acceleration Node platforms: MSM-18/4, SSN-16, MDS-9222i, MDS-9250i
• Uses FC Redirect
A fabric service to accelerate I/O between SAN devices
BRKSAN-2883 86
IOA FCIP Tape Backup
Highly resilient– Clustering of IOA engines allows for load balancing and failover
Improved Scalability- Scale without increasing management overhead
Significant reutilization of existing infrastructure- All chassis and common equipment re-utilized
Flat VSAN topology- Simple capacity and availability planning
Large Health Insurance Firm
MDS IOA Results
FCIP: 92% throughput increase
BRKSAN-2883 87
Data Mobility
• Migrates data between storage arrays for
• Technology refreshes
• Workload balancing
• Storage consolidation
• DMM offers
• Online migration of heterogeneous arrays
• Simultaneous migration of multiple LUNs
• Unequal size LUN migration
• Rate adjusted migration
• Verification of migrated data
• Dual fabric support
• CLI and wizard-based management with Cisco Fabric Manager
• Not metered by the number of terabytes migrated or the number of arrays
• Requires no SAN reconfiguration or rewiring
• Uses FC Redirect
Data Mobility Manager
Application I/O
Old Array
Data Migration
New Array
Application Servers
BRKSAN-2883 88
• 8-channel WDM using 20 nm spacing
• Colored CWDM SFPs used in FC switch
• Optical multiplexing done in OADM
• Passive device
SAN Extension - CWDM
Coarse Wavelength Division Multiplexing
[Figure: four optical transmitters (TX) in the FC switch use colored CWDM SFPs; a passive OADM multiplexes their wavelengths onto an optical fiber pair for transmission to the matching optical receivers (RX) at the far end.]
BRKSAN-2883 89
• DWDM systems use optical devices to combine the output of several optical transmitters
• Higher density technology compared with CWDM, <1nm spacing
SAN Extension - DWDMDense Wavelength Division Multiplexing
[Figure: optical transmitters (TX) feed DWDM devices that multiplex their output onto an optical fiber pair; optical splitter protection provides a redundant path to the optical receivers (RX).]
BRKSAN-2883 90
Dense vs Coarse (DWDM vs CWDM)
                      DWDM                 CWDM
Application           Long Haul            Metro
Amplifiers            Typically EDFAs      Almost Never
# Channels            Up to 80             Up to 8
Channel Spacing       0.4 nm               20 nm
Distance              Up to 3,000 km       Up to 80 km
Spectrum              1530 nm to 1560 nm   1270 nm to 1610 nm
Filter Technology     Intelligent          Passive
ONS / MDS
Array
DWDM CWDM
Site 1 Site 2 Site 1 Site 2
BRKSAN-2883 91
Summary
Drivers in DC are forcing change
• 10G convergence & server virtualization
• It's not just about FCP anymore. FCoE, NFS, iSCSI are being adopted
Proper SAN design is holistic in the approach
• Performance, Scale, Management attributes all play critical roles
• Not all security issues are external
• Fault isolation goes beyond SAN A/B separation
• Consider performance under load
• Design for SAN services
Many design options
• Optimized for performance
• Some for management
• Others for cable plant optimization
BRKSAN-2883 92
Additional Relevant Sessions
• BRKARC-1222 - Cisco MDS/Nexus SAN Portfolio: Next phase of Storage Networking
• Wednesday 8AM
• BRKSAN-3446 - SAN Congestion! Understanding, Troubleshooting, Mitigating in a Cisco Fabric
• Wednesday 4PM
• BRKSAN-3101 - Troubleshooting Cisco MDS 9000 Fibre Channel Fabrics
• Thursday 8AM
• BRKCCIE-3351 - Storage Networking for CCIE Data Center Candidates
• Thursday 8AM
Storage Networking – Cisco Live Las Vegas 2016
BRKSAN-2883 93
Call to Action
• Visit the World of Solutions for:
• Multiprotocol Storage Networking booth
• See the MDS 9718, Nexus 5672UP, 2348UPQ, and MDS 40G FCoE blade
• Data Center Switching Whisper Suite
• Strategy & Roadmap (Product portfolio includes: Cisco Nexus 2K, 5K, 6K, 7K, and MDS products)
• Technical Solution Clinics
• Meet the Engineer
• Available Tuesday and Thursday
BRKSAN-2883 94
Complete Your Online Session Evaluation
Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or from the Session Catalog on CiscoLive.com/us.
BRKSAN-2883 95
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Lunch & Learn
• Meet the Engineer 1:1 meetings
• Related sessions
BRKSAN-2883 96
Please join us for the Service Provider Innovation Talk featuring:
Yvette Kanouff | Senior Vice President and General Manager, SP Business
Joe Cozzolino | Senior Vice President, Cisco Services
Thursday, July 14th, 2016
11:30 am - 12:30 pm in the Oceanside A room
What to expect from this innovation talk
• Insights on market trends and forecasts
• Preview of key technologies and capabilities
• Innovative demonstrations of the latest and greatest products
• Better understanding of how Cisco can help you succeed
Register to attend the session live now or
watch the broadcast on cisco.com