Nimble Technical Sales Professional Accreditation
Nimble Storage Array Introduction, Installation, and Maintenance
Checking Your Enrollment
1. Log in to http://university.nimblestorage.com/
2. Click My Account
3. Verify today's course is listed and then click Go
4. Ensure your status shows Enrolled with an X next to it (don't click the X)
5. If your screen looks different, ask your instructor for instructions
Classroom Network
SSID:
Password:
Introductions
Name
Company
Position
Data storage background
What do you hope to get out of the course?
Topics
In this course, the following subjects will be discussed:
Section 1: CS-Series Array Introduction
Section 2: Scale-to-Fit
Section 3: CASL Architecture
Section 4: Networking and Cabling
Section 5: Initial Installation
Section 6: Array Administration
Section 7: Working with Volumes
Section 8: Connecting to Hosts
Section 9: Snapshots
Section 10: Replication
Section 11: Data Protection and DR
Section 12: Maintenance and Troubleshooting
Section 13: Support
Section 1: CS-Series Array Introduction
Raw versus Usable versus Effective Capacity
Raw Capacity -> Usable Capacity: subtract capacity for RAID-6 parity, spares, and system reserves.
Usable Capacity -> Effective Capacity: add storage capacity gained from inline compression (typically 30% to 75%).
Example: Raw: 24 TB; Usable: 17 TB; Effective: 33 TB (assuming 50% compression).
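As a quick illustration of the arithmetic above, here is a minimal Python sketch (the 50% savings figure is the example's assumption, and the slide's 33 TB reflects its own rounding, not this exact formula):

    raw_tb = 24.0
    usable_tb = 17.0                  # after subtracting RAID-6 parity, spares, and reserves
    compression_savings = 0.50        # 50% savings: data occupies half its original space
    effective_tb = usable_tb / (1 - compression_savings)
    print(round(effective_tb))        # 34; the slide rounds this to 33 TB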
Nimble Storage CS210 At a Glance
Model: CS210
CPU: 1
DDR3 Memory: 12 GB
Ethernet Ports: 4x 1GbE (no 10GbE option)
Cache SSD: 2x 80 GB or 2x 160 GB
Data HDD: 8x 1 TB (8 TB raw)
Effective Capacity (0x / 2x compression): 4 TB / 9 TB
Capacity Expansion: add up to 1 additional shelf
Scaling Performance: supports scaling cache (X2 and X4)
Nimble Storage CS220 at a Glance
Model: CS220 / CS220G
CPU: 1
DDR3 Memory: 12 GB
Ethernet Ports: CS220: 6x 1GbE; CS220G: 2x 1GbE + 2x 10GbE
Cache SSD: 4x 80 GB (320 GB total)
Data HDD: 12x 1 TB (12 TB raw)
Effective Capacity (0x / 2x compression): 8 TB / 16 TB
Capacity Expansion: add up to 3 additional shelves
Scaling Performance: scale compute and cache (x2, x4, or x8)
Nimble Storage CS240 at a Glance
Model: CS240 / CS240G
CPU: 1
DDR3 Memory: 12 GB
Ethernet Ports: CS240: 6x 1GbE; CS240G: 2x 1GbE + 2x 10GbE
Cache SSD: 4x 160 GB (640 GB total)
Data HDD: 12x 2 TB (24 TB raw)
Effective Capacity (0x / 2x compression): 17 TB / 33 TB
Capacity Expansion: add up to 3 additional shelves
Scaling Performance: scale compute and cache (x2, x4)
Nimble Storage CS260 at a Glance
Model: CS260 / CS260G
CPU: 1
DDR3 Memory: 12 GB
Ethernet Ports: CS260: 6x 1GbE; CS260G: 2x 1GbE + 2x 10GbE
Cache SSD: 4x 300 GB (1.2 TB total)
Data HDD: 12x 3 TB (36 TB raw)
Effective Capacity (0x / 2x compression): 25 TB / 50 TB
Capacity Expansion: add up to 3 additional shelves
Scaling Performance: scale compute and cache (x2, x4)
Nimble Storage CS420 at a Glance
Model: CS420(1) / CS440 / CS460
CPU: 2
DDR3 Memory: 24 GB
Ethernet Ports: 6x 1GbE; CS460: 2x 1GbE + 2x 10GbE
Cache SSD: 4x 160 GB or 4x 300 GB (640 GB to 1.2 TB total)
Data HDD (raw) and Effective Capacity (0x / 2x compression):
CS420: 12 TB raw; 8 TB / 16 TB
CS440: 24 TB raw; 17 TB / 33 TB
CS460: 36 TB raw; 25 TB / 50 TB
Capacity Expansion: add up to 3 additional shelves
Scaling Performance: cache (x2, x4, or x8*) *only the CS420 supports the x8 option
(1) Sold only with the X2, X4, or X8 options
Hardware Tour - Front
3U
Hardware Tour Controller Unit Front
CS-Series 210: drive bays 1-16 across the front hold HDDs, blanks, and SSD option slots*; power and drive-status LEDs are also on the front.
*The default configuration uses two SSD slots and two blanks.
Hardware Tour Controller Unit Front
CS220 and higher: drive bays 1-16 across the front hold HDDs and SSDs.
Disks
16 hot-swappable drive bays populated with:
8 or 12 SATA (with SAS interposers) or SAS disks
2 or 4 solid-state drives (SSDs)
When replacing a drive, ensure you replace it with a drive of the appropriate type!
Nimble ES-Series External Storage Shelf
Connect one shelf per CS210, or up to three shelves per CS220 and higher models
Scale storage capacity non-disruptively
Uses 4-lane 6Gb SAS connectivity from controller to shelf
Supports redundant data paths from controller to shelves
Each shelf is its own RAID group, with spares assigned per shelf

Model: ES1-H25 / ES1-H45 / ES1-H65
Individual disk drive size: 1 TB / 2 TB / 3 TB
Raw Capacity: 15 TB / 30 TB / 45 TB
Effective Capacity (w/ 0x-2x compression): 11-22 TB / 23-45 TB / 34-68 TB
Flash: 160 GB / 300 GB / 600 GB
Connectivity: 2x 6Gb SAS per IO module
IO Modules: dual hot-swappable SAS controllers
Hardware Tour Front
Expansion shelf: drive bays 1-16 across the front hold HDDs and SSDs.
Hardware Tour - Back
Hardware Components
Power supplies: 2x 100-240 V, 50-60 Hz, 4-10 A
Power requirement: 500 watts
Controllers
Work in an Active/Standby configuration
Hot swappable
Support non-disruptive Nimble OS upgrades
Review all messages regarding a controller failure to identify the proper controller. Any of the following events can indicate that a controller has failed:
LEDs indicate that no activity is occurring on the controller that was active
NVRAM LEDs are dark
The heartbeat LED is dark
An event appears in the Events list
Receipt of an alert email from the array
Controllers Nimble OS Upgrade
One-click, zero-downtime Nimble OS upgrades
Before you begin:
Check your current version
Obtain the most recent version
Check system health
Nimble OS Upgrade Process
1. Load the new firmware to the standby controller
2. Reboot the standby controller to run the new revision
3. Load the new firmware to the other controller
4. Reboot the active controller to activate the new revision; this causes a failover, and the standby becomes active
Section 2: Scale to Fit
Nimble Scaling for Mainstream Applications
Figure: performance (vertical axis) versus capacity (horizontal axis), with both scaling as nodes are added.
Nimble Scaling for Mainstream Applications
Figure: mainstream applications plotted by performance and capacity needs, ranging from backup/DR and archival ("cheap and deep") through SharePoint, Exchange, SQL Server, and Oracle, up to VDI and real-time analytics.
Scale Capacity by Adding Disk Shelves
Add capacity non-disruptively by adding external disk shelves
A disk shelf contains high-capacity HDDs and an SSD
Add multiple disk shelves per Nimble Storage array: up to three shelves (only one shelf for the CS210)
Mix and match different capacity shelves
Figure: an array with sufficient performance that needs more capacity adds a shelf.
Once expansion shelves have been added, they cannot be removed.
Scale Capacity Cabling
4-lane 6Gb SAS cable, 1 to 3 meters in length
Do not connect SAS cables to an expansion shelf until after the array has been upgraded to Nimble OS 1.4
Adding a Shelf
1. Check that you have the proper Nimble OS version
2. Cable the new expansion shelf and power it on
3. The new shelf is discovered by the control head; newly discovered shelves are shown as Available in the GUI/CLI
4. Using the GUI or CLI, activate the new shelf
Adding a Shelf
Shelf states: Discovering -> Available -> (Activate) -> In Use
Is there data on the disks? No: the shelf is shown as Available and can be activated. Yes: the shelf is shown as Foreign and requires Force Activate. A shelf with hardware problems is shown as Faulty.
Scale Capacity Storage Pool
The storage pool grows when an expansion shelf is activated, and the segment layer is updated with the new ending-block data
The segment layer provides a map between Nimble file system addresses and disk locations
The map is dynamically created for each incoming write request; essentially, the segment layer works as a traffic cop, directing writes to the proper disks
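To make the traffic-cop idea concrete, here is a toy Python sketch of a segment map; the names and structure are illustrative assumptions, not Nimble's implementation:

    segment_map = {}                                   # fs address -> (shelf, offset)
    next_free = {"controller_shelf": 0, "expansion_shelf": 0}

    def place_write(fs_address, shelf):
        # Direct the incoming write to the next free slot on the chosen shelf
        offset = next_free[shelf]
        next_free[shelf] += 1
        segment_map[fs_address] = (shelf, offset)
        return segment_map[fs_address]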
Expanding Existing System
If the controller shelf is at 50% capacity when an expansion shelf is added, new writes fill the expansion shelf until its capacity utilization matches the controller shelf; capacity is then balanced between the two.
Managing Internal Storage and Expansion Shelves
Power On/Off Order
On: power the expansion shelves first, then the controller shelf
Off: power off the controller shelf first, then the expansion shelves
Scale Compute The 400 Series Controllers
Provide additional processing power and memory: two CPUs, each with 6 cores and 12 GB of DRAM
Scale performance
Replace the existing controllers; a CPU is not installed into the current controllers
Controller Upgrade CS200 >> CS400
Halt the standby controller, remove the labeled cables, and remove the controller. Insert the new 400-series controller and cable it using the labels and pictures. Make sure it is healthy and in standby, then fail over so it becomes active. Repeat the steps for the other controller.
Nimble Array Scaling Compute
There is no CS420 array part number; only CS420-{X2, X4, X8}
Upgrading a CS220 to a CS420-X2, -X4, or -X8: cache is upgraded when scaling compute on the CS220 array
Upgrading compute on the CS240 and CS260: compute can be upgraded without upgrading cache
Controller Upgrade
Before you start:
Ensure you have Nimble OS version 1.4 or later
Ensure one controller is in active mode and the other is in standby
Note which controller is active and which is in standby
Note your current controller shelf model
Note your current SSD size
Controller Upgrade CS200 >> CS400
1. Halt (shut down) the standby controller
   Pre-2.0 releases, use the CLI: halt --array --controller
   Post-2.0 releases, use the GUI: go to Administration >> Array
2. Disconnect the cables
3. Remove the controller
4. Insert the replacement controller
5. Connect all cables
6. Verify the controller powers up and is in standby mode
7. Perform a failover to the new controller: in the GUI, go to Manage >> Array and click the Failover button
8. Repeat steps 1-7 for the other controller
9. Verify the model number has changed from the 200 series to the 400 series
Scale Cache X2, X4, and X8
Provides additional cache and scales performance
There are three variations:
-X2: two times the standard cache size
-X4: four times the standard cache size
-X8: eight times the standard cache size
Only for use with the CS220 and CS420 arrays
Only supported in firmware 1.4.8 and up, and 2.0.5 and up
Nimble Storage Cache at a Glance
Flash capacity by model (SSD count x size, with totals in parentheses; capacities in gigabytes (GB)):
Base: CS210 2x80 (160); CS220 4x80 (320); CS240 4x160 (640); CS260 4x300 (1,200); CS420* ---; CS440 4x160 (640); CS460 4x300 (1,200)
-X2: CS210 4x80 (320); CS220 4x160 (640); CS240 4x300 (1,200); CS260 4x600 (2,400); CS420 4x160 (640); CS440 4x300 (1,200); CS460 4x600 (2,400)
-X4: CS210 4x160 (640); CS220 4x300 (1,200); CS240 4x600 (2,400); CS260 ---; CS420 4x300 (1,200); CS440 4x600 (2,400); CS460 ---
-X8: CS210 ---; CS220 4x600 (2,400); CS240 ---; CS260 ---; CS420 4x600 (2,400); CS440 ---; CS460 ---
*Note: there is no CS420 part number, only CS420-x2/x4/x8
Scale Cache Upgrade
1. Remove the bezel
2. Starting from the left, remove the first SSD
3. Wait until the red LED under the slot lights up
4. Install the new, larger SSD into the slot
5. Wait until the red LED turns off
6. Repeat steps 2-5 for the remaining SSDs
7. Verify that the model number of the controller shelf and the capacity of the SSDs have changed to x2, x4, or x8
8. Replace the bezel
Scale Out
Utilize multiple arrays as a single storage entity
Scales bandwidth, CPUs, memory, and capacity
Provides high performance with high capacity
Single management IP
Simplify Storage Management
Manage the scale-out cluster from a single console:
Add and remove storage arrays
Get status and view performance and capacity reports
Create and manage storage pools and volumes
Manage host connectivity to the scale-out cluster
Automatic MPIO configuration and path management
Discover/add data IPs through a single connection
Scale Out Pool
Scale Out Understanding Groups
Nimble Connection Manager or PSP plug-in for VMware
Automatic MPIO configuration and path management eliminates manual connection setup to individual arrays
Section 3: Cache Accelerated Sequential Layout (CASL)
Choices we need to make when choosing storage
Rank in order of importance: Performance, Capacity, Cost, Reliability
Data Layout: Write-in-place file system (EMC, EQL)
Pros: simple to implement, long history; good sequential read performance without cache
Cons: poor random write performance; slow, high-overhead compression
Data Layout: Hole filling (WAFL, ZFS)
Pros: good random write performance until the disk fills up; more efficient redirect-on-write snapshots
Cons: performance degrades over time; slow, high-overhead compression
WAFL: Write Anywhere File Layout. ZFS: copy-on-write transactional model.
Data Layout: Always write full stripes (CASL)
Pros: good AND consistent write performance; very efficient snapshots; fast inline compression; efficient flash utilization and long flash life
Ground-up design relies on flash
Enables a variable block size
Uses a sweeping process to ensure full-stripe write space
Sweeping
Data blocks are indexed as they are written
Over time, the deletion of snapshots and data leaves stale data blocks
Sweeping removes stale blocks and forms new stripe writes from the remaining active blocks
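A toy Python sketch of the sweeping idea (illustrative only; the block and stripe structures are assumptions, not CASL internals):

    def sweep(stripes, live_block_ids, stripe_size):
        # Keep only blocks still referenced by the index; stale blocks are dropped
        survivors = [b for stripe in stripes for b in stripe if b in live_block_ids]
        # Re-form the remaining active blocks into new full-stripe writes
        return [survivors[i:i + stripe_size] for i in range(0, len(survivors), stripe_size)]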
Building a New Array
How would you design a storage solution around SSDs?
As a bolt-on flash tier? No flash optimization: SSDs are grouped using RAID; more expensive SSDs are required to obtain the high endurance needed; the performance increase is seen only on the flash tier
As a bolt-on read cache? No flash optimization: SSDs are grouped using RAID to form a read-cache LUN; SLC SSDs are required to obtain the high endurance needed; no improvement to write performance
Solid State Drives - Tale of the Tape
Attribute: SLC / MLC
Density: 16 Mbit / 32 Mbit or 64 Mbit
Read Speed: 100 ns / 120 ns or 150 ns
Block Size: 64 KByte / 128 KByte
Endurance: 100,000 cycles / 10,000 cycles
Operating Temp: Industrial / Commercial

MLC advantages: high density, low cost per bit
SLC advantages: endurance, operating temperature range, low power consumption, write/erase speeds, write/erase endurance
Source: Super Talent, "SLC vs. MLC: An Analysis of Flash Memory"
The Nimble Way Purpose Built CASL
Flash is highly optimized: writes are matched to the erase block size, which minimizes write amplification
Erase block size: data is written to flash in small units, but it is erased a block at a time. Thus, if one bit changes, the entire block must be read, the cells erased, and the remaining data written back along with the change.
Figure: a flash block spans many cells; all of its bits must be erased together.
Discussion: Disk Storage
What type of RAID should be supported?
Do we use multiple RAID groups or a single storage pool?
The Nimble Way Purpose Built CASL
Fine-grained movement of data (4 KB, in real time)
Utilizes cost-effective MLC flash without RAID
Provides a high level of write acceleration with the write-optimized layout on flash AND disk
Mendel Rosenblum (co-creator of the log-structured file system)
Inline Compression (Data Path: Write)
Universal compression: variable-size blocks enable fast inline compression, saving 30-75%. Elimination of the read-modify-write penalty allows compression of all applications.
Write operation:
1. The write is received by the active controller's NVRAM (1 GB)
2. The write is mirrored to the partner controller's NVRAM
3. The write is acknowledged
4. The write is shadow-copied to DRAM
5. The system uses Lempel-Ziv 4 for inline compression (a modified LZ compression in pre-1.4 software releases); variable-block based, it compresses all data into stripes
What you need to know about Lempel-Ziv 4
LZ4 is a fast, lossless compression algorithm
Provides compression speeds of 300 MB/s per CPU core
Provides a fast decoder with speeds up to and beyond 1 GB/s per CPU core; it can reach RAM speed limits on multi-core systems
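To see LZ4's behavior first-hand, here is a minimal demonstration using the third-party lz4 Python package (pip install lz4); the package choice is an assumption for illustration and is not part of the Nimble stack:

    import lz4.frame

    data = b"highly compressible payload " * 1000
    compressed = lz4.frame.compress(data)
    assert lz4.frame.decompress(compressed) == data   # lossless round trip
    print(len(data), "->", len(compressed), "bytes")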
Application Compression with Nimble
Figure: compression ratios by application, taken from InfoSight, February 2013.
Inline Compression (Data Path: Write), continued
Write operation:
1. The write is received by the active controller's NVRAM
2. The write is mirrored to the partner controller's NVRAM
3. The write is acknowledged
4. The write is shadow-copied to DRAM
5. The system uses a modified Lempel-Ziv for inline compression; variable-block based, it compresses all data into stripes
4.5 MB stripes: many I/Os (variable-size compressed blocks, e.g., 1K-21K) are sent together as one stripe, reducing IOPS between the controller and disks
High-Capacity Disk Storage
Data path: write. Write-optimized layout: random writes are always organized into large sequential stripes.
All data is written sequentially in full RAID stripes to disk; because of compression and the stripe write, there are fewer write operations.
A large stripe is written to disk in one operation: ~250x faster than a write-in-place layout.
The use of low-cost, high-density HDDs, coupled with compression, lowers costs substantially.
Large Adaptive Flash Cache (Smart Caching)
MLC flash: converting random writes to sequential writes minimizes write amplification, allowing the use of MLC SSDs
No RAID overhead: using flash as a read cache avoids the overhead of RAID protection
Compression: data on flash is compressed, saving space
Metadata in cache accelerates all reads
Cache-worthy data goes to the flash cache; all data goes to high-capacity disk storage
Large Adaptive Flash Cache (Data Path: Read)
Accelerated reads: all random writes and any hot data are written to the flash cache
Hot data is served from flash, which responds rapidly to changes
Reads from flash are ~50x faster than disk (200 us vs. 10 ms)
Data Path: Reads
Read operation:
1. Read from NVRAM
2. If not found, check DRAM
3. If not found, read from the flash cache; if found, validate the checksum, uncompress, and return the data
4. If not found, read from disk; if found, validate the checksum, uncompress, and return the data
5. And, if the data is cache-worthy, write it to the flash cache
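A compact Python sketch of this tiered read path (purely illustrative; the helper names are assumptions, not Nimble code):

    def validate_and_uncompress(block):
        # Placeholder for per-block checksum validation plus decompression
        return block

    def read_block(addr, nvram, dram, flash_cache, disk, is_cache_worthy):
        for tier in (nvram, dram):              # steps 1-2: fastest tiers first
            if addr in tier:
                return tier[addr]
        if addr in flash_cache:                 # step 3: flash cache
            return validate_and_uncompress(flash_cache[addr])
        block = disk[addr]                      # step 4: disk
        data = validate_and_uncompress(block)
        if is_cache_worthy(addr):               # step 5: populate the flash cache
            flash_cache[addr] = block
        return data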
How does the read operation compare to others?
How might inline compression vs. full-stripe compression affect the read?
How do you think a changed block is handled?
Compression Performance Comparison for a Changed Block
Fixed-block architecture (other array manufacturers): 8 blocks are grouped and compressed, and the group is placed into N fixed-size slots. When a block is updated with new data, the entire group must be read and uncompressed, then the new group compressed and re-written.
CASL variable blocks (Nimble Storage): individual blocks are compressed and coalesced into a stripe; an updated data block is simply compressed and coalesced into a new stripe.
Cost of the fixed-block architecture relative to CASL:
1. An additional M blocks read from disk
2. Additional CPU cycles for decompression and recompression of all N blocks
3. An additional M-1 blocks written to disk
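As a toy illustration of that cost list (the function names and returned counters are assumptions for illustration only):

    def fixed_block_update_cost(n_blocks_in_group, m_compressed_blocks):
        # Read the whole group, recompress all N blocks, rewrite the whole group
        return {"disk_reads": m_compressed_blocks,
                "blocks_recompressed": n_blocks_in_group,
                "disk_writes": m_compressed_blocks}

    def casl_update_cost():
        # Compress only the changed block and coalesce it into the next stripe write
        return {"disk_reads": 0, "blocks_recompressed": 1, "disk_writes": 1}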
Ideal for Exchange
Gain Performance: mailboxes per disk (published and verified Microsoft Exchange ESRP benchmark results)
EqualLogic: 312 (32 disks for 10,000 mailboxes)
EMC: 294 (34 disks for 10,000 mailboxes)
NetApp: 187 (64 disks for 12,000 mailboxes)
Compellent: 139 (72 disks for 10,000 mailboxes)
Nimble: 3,333 (12 disks for 40,000 mailboxes), a 10-24x advantage
Save and Protect (actual results across all Nimble customers deploying Exchange 2010): 1.8x compression; 48% of customers retaining snapshots for more than 1 month
"We started out deploying SQL workloads primarily on the Nimble array. Very quickly we realized we had enough performance headroom to consolidate our very demanding Exchange 2010 deployment on the same array." - Ron Kanter, IT Director, Berkeley Research Group
"With Nimble, we were able to run 3 snapshot backups a day and replicate offsite twice daily. Exchange users notice no performance degradation. Backups take minutes, not hours. Snapshot backups require very little space and are recoverable and mountable locally and remotely. A mailbox or Exchange system can be recovered in literally minutes. Best of all, we can regularly test our procedures for Disaster Recovery." - Lucas Clara, IT Director, Foster Pepper LLC
Data Security
Data on disk (with no dirty cache, ever):
RAID-6 tolerates 2 simultaneous disk failures
Checksum per block (data and index): checked on every read and by a background scrubber; a mismatch triggers RAID-based reconstruction of the stripe
Self-description per block (LUN, offset, generation): detects misdirected reads and writes
Data on flash:
Checksum per block (data and index): checked on every read; a mismatch causes removal from cache
Data in NVRAM:
Mirrored to the peer NVRAM; on a dual failure data is lost, but consistency is preserved to the last N minutes
Summary
Intelligent data optimization: sweeping; inline data compression for primary storage optimization
The combination of SSDs and high-capacity disks in one device
Instant, integrated backups
Summary
3 unique elements of Nimble CASL technology:
Fully integrated flash (unlike bolt-on offerings): a ground-up data layout for flash AND disk to maximize the flash benefit
A fully sequentialized write layout on disk and flash: a dramatic price/performance advantage WITH inline compression
Highly efficient snapshots (space AND performance)
Section 4: Nimble Array Networking and Cabling
Understanding IPs
The array management IP address
The target discovery IP address
Best Practice: either of these IP addresses can be used for data, but this is not desirable; the specific target IP addresses of the interface pairs should be used instead
The data IP addresses
The two controller diagnostic IP addresses
Networking Terminology
Interface pairs: Controller A eth1 & Controller B eth1
IP addresses float between Controller A and Controller B
Set iSCSI Timeout in Windows
Set the LinkDownTime to 60 seconds. The NWT can set the timeout values for you, or set them manually (see the Microsoft iSCSI guide at http://technet.microsoft.com/en-us/library/dd904411(WS.10).aspx):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\instance-number\Parameters
MaxRequestHoldTime set to 60 seconds (0x3C)
LinkDownTime set to 60 seconds (0x3C)
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk
TimeOutValue set to 60 seconds (0x3C)
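For scripted deployments, here is a hedged sketch of the simplest of these edits using Python's standard winreg module (Windows only, run elevated; the Disk key is shown because the iSCSI class-key instance number varies per system):

    import winreg

    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Services\Disk",
                         0, winreg.KEY_SET_VALUE)
    winreg.SetValueEx(key, "TimeOutValue", 0, winreg.REG_DWORD, 0x3C)  # 60 seconds
    winreg.CloseKey(key)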
Other OS Timeout Value Changes
Changing iSCSI timeouts on VMware: none needed
Changing iSCSI timeouts on Linux (iscsid.conf), for Linux guests attaching iSCSI volumes:
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 60
MPIO
Set up MPIO on your server and make sure it's active; review the "Using MPIO" section of the User Guide
This requires a reboot of the server to make both the registry edit and MPIO active, so do it ahead of time to avoid delaying the installation
Nimble OS 2.0 and later: the Nimble Connection Manager sets up the optimum number of iSCSI sessions and finds the best data connection to use under MPIO; it includes a Nimble DSM that claims and aggregates data paths for Nimble array volumes
Network Best Practices
Best Practice: Do not use Spanning Tree Protocol (STP)
Details: Do not use STP on switch ports that connect to iSCSI initiators or the Nimble storage array network interfaces.
Best Practice: Configure flow control on each switch port
Details: Configure flow control on each switch port that handles iSCSI connections. If your application server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable flow control on the NICs to obtain the performance benefit.
Network Best Practices
Best Practice: Disable unicast storm control
Details: Disable unicast storm control on each switch that handles iSCSI traffic. However, the use of broadcast and multicast storm control is encouraged.
Best Practice: Use jumbo frames when applicable
Details: Configure jumbo frames on each switch that handles iSCSI traffic. If your server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable jumbo frames on the NICs to obtain the performance benefit (or reduce CPU overhead) and ensure consistent behavior. Do not enable jumbo frames on switches unless jumbo frames are also configured on the NICs.
VMware Settings
Review the Nimble VMware Integration Guide
Configure Round Robin, ESX 4.1 only (4.0 will be different). To set the default to Round Robin for all new Nimble volumes, type the following, all on one line:
esxcli nmp satp addrule --psp VMW_PSP_RR --satp VMW_SATP_ALUA --vendor Nimble
Configure Round Robin, ESXi 5 only. To set the default to Round Robin for all new Nimble volumes, type the following, all on one line:
esxcli storage nmp satp rule add --psp=VMW_PSP_RR --satp=VMW_SATP_ALUA --vendor=Nimble
On ESXi 5.1, use the GUI to set Round Robin
Push out the vCenter plugin: with Nimble OS version 1.4 and higher, use Administration >> Plugins
vmwplugin --register --username arg --password arg --server arg
Cisco UCS and iSNS
Cisco UCS support:
Formal Cisco UCS certification program; Nimble will be listed
Boot-from-SAN is now officially supported in 1.4
Supported adapters: Palo only (no other mezzanine adapters)
Supported UCS version: UCS Manager v2.0(3); Cisco UCS firmware version 2.02r (full version string 5.0(3)N2(2.02r))
Supported OSs: VMware ESX 4.1u1, ESX 5.0u1, Windows 2008 R2, RHEL 6.2, SUSE 11 Update 1
iSCSI iSNS support:
The protocol used for interaction between iSNS servers and iSNS clients
Facilitates automated discovery, management, and configuration of iSCSI devices on a TCP/IP network
Primary driver: Microsoft HCL certification requires it
Managed via the Nimble CLI only
Cabling Multi-switch Connectivity
Figure: host eth1 and eth2 connect to separate switches; each controller's eth5 and eth6 are split across the two switches, with active and standby links.
Even ports go to one switch; odd ports go to the opposite switch.
What is wrong with this configuration?
If a switch fails, the controllers cannot perform a proper failover, since the sibling interfaces do not have connectivity.
Figure: both of each controller's data links (eth5 and eth6) connect to the same switch.
Section 5: Installation
First Steps
End users: log in to InfoSight at https://infosight.nimblestorage.com
First Steps
Once you have logged into InfoSight, download the following:
Latest Release Notes
Latest User Guides
Latest CLI Reference Guide
Nimble Windows Toolkit
VMware Integration Toolkit (if applicable)
Related Best Practice Guides
Pre-Install Checklist
Complete the checklist and review it in advance of the on-site visit
Send it to Nimble Support for review
Create a physical topology with the customer and validate it against best practices
Perform an on-site visit prior to the installation
Pre-Install Checklist
Pre-Installation Checklist
Collects all necessary data needed to perform an installation
Organized in the same order that you will be entering the data
Can be left with the customer
Before You Begin
Important: The computer used to initially configure the array must be on the same physical subnet as the Nimble array, or have direct (non-routed) access to it.
Ensure Adobe Flash Player is installed
Prerequisites
Before launching the NWT:
Set a static IP: set your IP address to the same subnet that your array management IP address will be on
Have your array controllers A & B correctly cabled to your switch fabric per the previous drawings
Complete all your switch configuration for flow control, jumbo frames, spanning tree, unicast storm control, etc.
Install the Nimble Windows Toolkit (NWT) on the laptop or server you're installing with
Nimble Windows Toolkit Installation
The Nimble Windows Toolkit (NWT) includes Nimble Protection Manager (NPM)
Nimble Windows Toolkit Installation
Installer needs to modify a few iSCSI timeout values. Installer will update them only if they are smaller than recommended values. Do you want installer to update the values? Click Yes to update and continue. Click No to continue without updating the values.
NWT Nimble Array Setup Manager
1) Start the Nimble Array Setup Manager
2) Select the array to install and click Next
NWT Nimble Array Setup Manager
3) Enter the array name; make it useful, such as row and rack number
4) Set your management IP address, subnet mask, and default gateway
5) Enter and confirm your array password
6) Click Finish
NWT Nimble Array Setup Manager
7) Click Close. Your default browser window will open and be directed to the management IP; if it does not, open a browser and point it to your management IP address. You should get the login screen after a few seconds.
Nimble Install Nimble Array Setup Manager
NWT should take you straight to the Nimble Array Setup Manager screens. If not, you may see a connection screen: enter the management IP and, if prompted, click the selection to continue.
Nimble Install Nimble Array Setup Manager
Log in with the password you just set
Nimble Install Nimble Array Setup Manager
Set the physical IP addresses
Set the iSCSI discovery IP
Typical CS240 Configuration
Controller A: Diagnostic IP 1 (associated with any physical port)
Controller B: Diagnostic IP 2 (associated with any physical port)
Array management IP address and target IP address: floating, shared by the controllers
Separate management & replication ports and data ports
Nimble Install Nimble Array Setup Manager
Set the Domain and DNS servers
Nimble Install Nimble Array Setup Manager
Set Time Zone and NTP server
Nimble Install Nimble Array Setup Manager
Set the From address
Set the Send-To address
Check "Send a copy to Nimble Storage"
Set the SMTP server
Ensure AutoSupport is enabled
If using an HTTP proxy, check the proxy option
Nimble Install Nimble Array Setup Manager
Your Nimble Storage array is ready to use. Before you start using your array, there are a couple of things you should do to ensure smooth operations.
Add the management IP address and the controller support addresses you provided to your mail server's relay list.
You will also need to open the following firewall ports:
SSH: 2222 to hogan.nimblestorage.com (secure tunnel)
HTTPS: 443 to nsdiag.nimblestorage.com (software downloads, autosupport, heartbeat)
Nimble Install - WEB
Nimble Install Post Initial Setup
Open the AutoSupport screen: Administration >> Alerts & Monitoring >> AutoSupport/HTTP Proxy
Nimble Install Post Initial Setup
1. Check "Send AutoSupport data to Nimble Storage"
2. Click Test AutoSupport Settings; you should receive an email with a case for the test
3. Click Send AutoSupport
Post-Install Checklist
Verify an AutoSupport email was received; don't leave the site without performing this step!
Ensure you have updated firmware
Ensure you perform a failover of the controllers
Check VMware paths (to be discussed in a later section)
Incoming ports to be aware of
SSH: local port 22, array management IP
HTTP: local port 80, array IP; HTTP (port 80) communication is redirected to HTTPS
GUI (HTTPS): local port 443, array management IP
iSCSI: local port 3260, discovery and data IPs; needed for data access
SNMP: local port 4290, SNMP daemon
GUI charts, NPM: local port 4210, array management IP
Control: local port 4211, array management IP
Replication (data): local port 4213, array management IP
Outgoing ports to be aware of
External NTP: port 123, NTP server IP (UDP)
External DNS: port 53, DNS server IP (UDP and TCP)
SMTP: usually port 25, mail/SMTP server IP; needed for email alerts
SNMP: port 162; needed for traps
SSH/SSHD: port 22, support.nimblestorage.com; needed for manual SCP of diagnostic information
Section 6: Array Administration
GUI Interface
https://{array's management IP address}
GUI Tour
Capacity, Performance, and Events views
Hardware Icons
GUI Tour Hardware Icons
Disk drive is healthy
Disk drive is designated as a spare
SSD is healthy
Disk is failed or missing
Rebuilding disk
Foreign disk
Empty slot
A fan is faulty
A hardware event has occurred
GUI Tour Volume Icons
Volume is online
Volume is offline
Volume is offline due to a fault
Volume replica
Volume collection
Volume is running out of space
GUI Tour Replication Task Icons
A replication task is scheduled
A replication task is pending
A replication task is in progress
A replication task failed
GUI Navigation
Links
Side menus
Pull-down menus
Performance Monitoring: Monitor >> Performance
Performance Monitoring: Interfaces
Space Usage Graphs
Command-Line Interface At-a-Glance
CLI access:
admin (same password as the GUI), for customer/SE use
Everything is available in the CLI
ssh (PuTTY) to the management or support IP addresses
Serial access using the dongle: 115200, 8 bits, no parity, 1 stop bit, null-modem cable. Never leave home without it!
Commands:
All commands follow a similar form: --list; --info; --edit; --help; man
vol, ip, subnet, route, nic, volcoll, stats; help (to see them all)
Refer to the Nimble CS-Series Command Reference Guide
MIB II
MIB II support: customers use SNMP to view their Nimble array with existing management software (e.g., SolarWinds, Nagios, Cacti, MG-SOFT MIB Browser)
MIB II is the second version of the MIB, mandatory for every device that supports SNMP
Support for SNMP v1 and v2, but not v3
Section 7: Working with Volumes
Volumes Overview
Figure: the RAID-6 storage pool is the physical storage resource; the volume is the logical storage resource.
Thin Provisioning
Figure: a volume draws on the RAID-6 storage pool; space from the pool is consumed as data is written.
Volume Reserves
Figure: a volume reserve within the RAID-6 storage pool. A reservation reserves a guaranteed minimum amount of physical space from the pool for a volume.
Volume Quotas
A quota sets the amount of a volume that can be consumed before an alert is sent and writes are disallowed.
Figure: the volume quota sits above the volume reserve within the pool.
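A toy Python model tying reserves and quotas together (illustrative only; the class and its thresholds are assumptions, not Nimble's accounting):

    class Volume:
        def __init__(self, size_gb, reserve_gb=0, quota_gb=None):
            self.size = size_gb
            self.used = 0
            self.reserve = reserve_gb          # guaranteed minimum space from the pool
            self.quota = quota_gb or size_gb   # writes are disallowed beyond this

        def write(self, gb):
            if self.used + gb > self.quota:
                raise IOError("quota exceeded: alert sent, writes disallowed")
            self.used += gb                    # pool space is consumed as data is written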
Performance Policy
Select a pre-defined policy or create a custom policy
Custom policies:
Provide a name based on the application
The block size should be <= the application block size
Compression on/off
Caching on/off
Note: the block size cannot be changed on a volume without data migration
Access Control (Initiator Groups)
Access control determines which hosts have access to a volume
Best Practice: always limit access to a host initiator group
Allow multiple-initiator access for use with clusters, not MPIO
Initiator Groups
A set of host initiators (IQNs) that are allowed to access a specified volume
Can be created at volume creation or as a separate task: Manage >> Initiator Groups
An IQN can only be assigned to one initiator group
Initiator Groups
The initiator name is the real IQN, not an arbitrary name (case sensitive)
IP-based access control is seldom used
Multiple initiators are used for ESX and MSCS
Initiator groups are applied to: volumes, volumes+snapshots, or snapshots only
Volume Collection
A grouping of volumes that share snapshot/replication schedules
All volumes in a group will be snapped and replicated as a group
Best practice: create a volume collection for each application (for example, an Oracle database and its log files)
Ensure you do not create overlapping schedules
Volume Collection >> App Synchronization
The application flushes/quiesces I/O while we take a snapshot, then unfreezes
VMFS-consistent snapshots
SQL-consistent snapshots
Exchange-consistent snapshots
SQL/Exchange use the Microsoft VSS framework and require NPM on the application host (more later)
Protection Template
Protection templates are sets of defined schedules and retention limits
Created apart from a volume: Manage >> Protection >> Protection Templates
Viewing Volume and Replica Usage
Creating a Volume
Demonstration and Lab
Section 8: Connecting to Hosts
Connecting the host
iSCSI portal: a target's IP address and TCP port number pair (default 3260)
Discovery: the process of an initiator asking a target portal for a list of its targets and then making those available for configuration
iSCSI IQN
iSCSI Qualified Name
iqn.2007-11.com.nimblestorage:training-vob104e23787e0f74.00000002.736e4164
1. Type: iqn or IEEE EUI-64 (eui)
2. Date: the year and month the naming authority's domain name was registered
3. Naming authority: the domain name for this target
4. String: after the colon, anything the naming authority wants to include
Connecting to Windows Hosts
Windows iSCSI Target/Initiator
1. Open your iSCSI initiator management tool and select the Discovery tab.
2. Click Discover Portal and add the management IP address of the array into the field. Click OK. The IP address now appears in the list of targets.
3. Tab to Targets and click Refresh (in the Discovered Targets area).
4. Select the volume to connect and click Connect or Log In. If there are no discovered entries in the list, type the IP address or host name into the Target field to discover them. Do not select the control target. You may also need to enter a CHAP secret or other access record information.
Windows iSCSI Target/Initiator
5. On the dialog that is launched for connection, click the Advanced button to specify physical port connections as described in Understanding IP addressing on page 29.
6. Select the adapter to connect with (usually Microsoft iSCSI adapter) and the target portal IP address to use for the connection, then click OK.
7. Leave the volume selected for Add this connection to the list of Favorite targets if you want the system to automatically try to reconnect if the connection fails, and select Enable Multipath if the connection should use MPIO, then click OK.
8. Click OK to close the Initiator Properties dialog.
9. Move to the Disk Management area of your operating system to configure and map the volume: select Control Panel > Administrative Tools, then Computer Management > Storage > Disk Management.
10. Right-click and initialize the new disk (volume). Important: use the quick format option when initializing a volume on Windows.
Connecting to VMware Hosts
Integration Guides
VMware Integration Toolkit Guide
Hyper-V Integration Guide
Nimble Storage Best Practices for Hyper-V
VMware Networking Best Practices
Do not use NIC teaming on the iSCSI network (a VMware and Nimble recommendation)
Use 1:1 VMkernel-to-physical-port mapping; even better, 1:1:1 VMkernel to vSwitch to physical port (no additional steps to turn off NIC teaming in this case)
VMkernel ports must be bound to the iSCSI initiator: CLI command only in ESX 4.1, and in the GUI for ESX 5:
esxcli swiscsi nic add -n -d
VMware Networking Best Practices
VMware multipathing must be Round Robin; ensure you hit the Change button when setting it
Jumbo frames (if used) must be set everywhere: array, VMware, and switches
Every volume on the array should be in an initiator group
With ESX, you must use multi-initiator groups with the Nimble volumes; if you fail to do this and you have to restart your ESX hosts, one of your hosts will become unusable due to lack of access to its VMDKs
VMware Networking Setup
Only work with one volume and one ESX host at a time. Given a vSwitch with two physical adapters, vmnic1 and vmnic2, configure them for iSCSI use:
1. Select the ESX host and click the Configuration tab
2. Click Networking in the navigation pane
3. Use the Add button to create two VMkernel ports, enable them for iSCSI and vMotion, and name them iSCSI0 and iSCSI1
4. Disable NIC teaming
5. Enable the iSCSI SW initiator if not already done
6. Add the VMkernel ports to iSCSI, using the CLI command with ESX 4.1 or the vSphere GUI with ESX 5
Verify the number of expected paths for each volume:
Expected paths = (ESX hosts x physical ports per host x array data ports) / (count of subnets x switches per subnet)
In the array, navigate to: Monitor >> Connections (1 row for each volume-initiator combination)
Note: 2 switches on the same VLAN/subnet trunked together count as 1 switch; 2 switches on the same VLAN/subnet NOT trunked count as 2 switches
Verify the number of expected paths for each volume, per host:
Expected paths = (ESX hosts (always 1) x physical ports per host x array data ports) / (count of subnets x switches per subnet)
In vSphere, navigate to: select host >> Configuration >> Storage Adapters >> iSCSI Software Adapter >> click Rescan
Note: 2 switches on the same VLAN/subnet trunked together count as 1 switch; 2 switches on the same VLAN/subnet NOT trunked count as 2 switches
How many...
Physical ports per host? 2
Array data ports? 4
ESX hosts connected? 2
Switches per subnet? 1
Number of subnets? 1
Expected paths = (2 x 2 x 4) / (1 x 1) = 16
Figure: two ESX hosts (two NICs each), two trunked switches on one subnet, and controllers A and B (Eth 1-4 each), serving 1 volume.
How many...
Physical ports per host? 2
Array data ports? 4
ESX hosts connected? 2
Switches per subnet? 1
Number of subnets? 2
Expected paths = (2 x 2 x 4) / (2 x 1) = 16 / 2 = 8
Figure: the same topology, with the ports split across Subnet 1 and Subnet 2.
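The same formula as a quick Python helper (the function name is mine, for illustration):

    def expected_paths(hosts, ports_per_host, array_data_ports, subnets, switches_per_subnet):
        return (hosts * ports_per_host * array_data_ports) // (subnets * switches_per_subnet)

    print(expected_paths(2, 2, 4, 1, 1))   # 16, the single-subnet example
    print(expected_paths(2, 2, 4, 2, 1))   # 8, the two-subnet example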
What if...
You lost a NIC or a link, or misconfigured an IP? Where could you look to discover which paths are missing? The two easiest points to check are the switch's view of the links and the array's view of the links.
What would your path count be in the iSCSI software adapter screen? How many paths should there be, and how many paths are lost due to the failure?
For the two-subnet topology above: 8 expected - 2 lost = 6 paths.
Figure: the two-subnet topology with one failed host link.
What if...
You lost Eth 2, or misconfigured its IP: what would your path count be in the iSCSI software adapter screen? How many paths should there be, and how many paths are lost due to the failure?
For the two-subnet topology above: 8 expected - 2 lost = 6 paths.
Figure: the two-subnet topology with array port Eth 2 failed.
VMware Networking Best Practices
Additional troubleshooting:
Verify physical connectivity (draw a picture); you may want to use switch commands to print connected MAC addresses and compare them with the MAC addresses of the array ports (nic --list)
Verify VLANs/subnets are correct on all ports
Verify links are up and IPs are correct on the array: in the GUI, navigate to Manage >> Array, or use ip --list
Clear all appropriate iSCSI static connections in VMware before all rescans
VMware Networking Best Practices
Work on only one system at a time, and check the following before moving to another:
Check the source/destination IP addresses of all connections on the array: GUI: Monitor >> Connections; CLI: vol --info
Check paths in the VMware Storage Adapters screen (iSCSI SW initiator): right-click the device and select Manage Paths
Force a failover and check that you still have the correct number of connections. As root on the active controller: ctrlr -list displays the active controller; then reboot -controller A or B, whichever is the active controller from above
When presented with a performance number, do you know how it was achieved?
Performance Metrics
Latency: measured in milliseconds (ms), typically 0.1-10 ms
Measured with random, small I/Os (e.g., 4 KB) at queue depth (QD) = 1
Figure: latency rises with I/O size and with queue depth, approaching a minimum-latency floor.
Performance Metrics
Measuring random I/O: I/Os per second (IOPS), typically 1K-100K IOPS
Measured with small I/Os (e.g., 4 KB) at high QD (e.g., 16)
Figure: random IOPS rise with queue depth toward a maximum-IOPS ceiling.
Per-disk estimate: IOPS = 1 / (average latency in seconds + average read/write seek time in seconds)
Performance Metrics
Sequential throughput: MBytes per second (MBps), typically 100-1000 MBps
Measured with large I/Os (e.g., 256 KB) at high QD (e.g., 16)
Figure: sequential throughput rises with queue depth toward a maximum-MBps ceiling.
MB/s = (IOPS x KB per I/O) / 1024, or simply MB/s = IOPS x I/O block size
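That identity as a two-line Python check (the 17,000 IOPS figure comes from the table below; the 1,600 IOPS figure is back-derived from its 400 MB/s entry):

    def mbps(iops, io_size_kb):
        return iops * io_size_kb / 1024.0

    print(mbps(17000, 4))    # ~66 MB/s of 4K random writes
    print(mbps(1600, 256))   # 400 MB/s of 256K sequential I/O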
Performance
IOmeter results, Nimble OS 1.4.7.0 (Nimble CS2xxG / Nimble CS4xxG):
4K random write IOPS: 17,000 / 45,000
4K random read IOPS: 17,800 / 74,000
4K random read/write IOPS: 16,800 / 59,000
256K sequential write throughput (MB/s): 400 / 740
256K sequential read throughput (MB/s): 900 / 1,100
Random IOPS measurement method: IOPS = throughput / request size; request size 4 KB; request order random; request queue depth (parallelism) 32; volume size 100 GB; volume block size 4 KB; jumbo frames enabled
Sequential throughput measurement method: request size 256 KB; request order sequential; request queue depth (parallelism) 16; volume size 100 GB; volume block size 32 KB; jumbo frames enabled
Sequential throughput is limited to 400-600 MB/s on 1GigE models (depending on the number of assigned iSCSI data ports)
Section 9: Snapshots
Snapshots
Figure legend: new data (non-snapped) vs. snapped data.
Discussion: What are snapshots & how do they work?
What is a COW (copy-on-write) snapshot?
COW Snapshots
Figure: copy-on-write snapshots within a snapshot reserve. Legend: new data (non-snapped), snapped data, changed block.
Discussion: What are snapshots & how do they work?
What is a ROW (redirect-on-write) snapshot?
ROW Snapshots
Figure: redirect-on-write snapshots. Legend: new data + changed blocks, snapped data.
File and Snapshot Lifecycle, 09:00
State of data at 09:00: a 4-block file (blocks A, B, C, D) is created.
File and Snapshot Lifecycle, 10:00
State of data at 10:00: Snap10, the 10:00 snapshot, points at the file's four blocks (A, B, C, D).
File and Snapshot Lifecycle, 10:20
State of data at 10:20: block B is changed. The original state can be recovered by rolling back to the snapshot taken at 10:00.
File and Snapshot Lifecycle, 11:00
State of data at 11:00: the next snapshot taken (Snap11) captures the change made to block B.
File and Snapshot Lifecycle, 14:01
Hourly snapshots continue (Snap12, Snap13, Snap14), capturing further changes, including a new block E. If block D is deleted at 14:01, it can still be recovered using the snapshot taken at 14:00.
File and Snapshot Lifecycle, 17:00
Snapshots continue at 15:00 (E updated), 16:00 (E updated a second time), and 17:00. Note that the snapshot pointers never change; they are immutable. They disappear when the snapshot reaches the end of its retention cycle, at which point the snapshot is deleted along with its pointers.
8 actual blocks stored; 39 apparent blocks stored; 8 full backups.
File and Snapshot Lifecycle, 17:00 (restore)
Restoring is easy. If the deletion of block D signalled the start of a sequence of unwanted events, then a rollback to the snapshot taken at 14:00 is required.
File and Snapshot Lifecycle (after the restore)
By restoring just the pointers from the 14:00 snapshot to the active file (or filesystem, or LUN), the state of the file (or filesystem, or LUN) at 14:00 can be restored almost instantly, without having to move any data.
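A toy Python model of these pointer-based snapshots (illustrative only):

    blocks = {1: "A", 2: "B", 3: "C", 4: "D"}   # shared block store: the data itself never moves
    active = [1, 2, 3, 4]                       # the live file is just a list of block pointers
    snap_14 = tuple(active)                     # a snapshot is an immutable copy of the pointers

    active.remove(4)                            # "delete" block D from the live file
    active = list(snap_14)                      # restore: copy the pointers back, no data moved
    print([blocks[p] for p in active])          # ['A', 'B', 'C', 'D'], recovered almost instantly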
Snapshot Status
Snapshots
Figure: a snapped volume with a snapshot reserve and a snapshot quota.
Snapshot reserve: an accounting for a set amount of space that is guaranteed to be available for snapshots.
Snapshot quota: an accounting for the total amount of space snapshots can consume.
Both are RARELY USED.
Zero Copy Clone
Snapshots are ROW: snapped data is held as a single dataset, and new writes are directed to available space in the storage pool.
Zero-copy clone: allows a volume to be created for online use based on a snapshot. Any changed data is handled like a ROW snapshot, and the clone occupies no additional space until new data is written or changed; initially it shares the snapshot's pointers.
Best in Class Space Efficiency
Figure: capacity breakdown with no snapshots, all vendors. Usable (physical) capacity as a percent of raw: Nimble 75% (25% lost to parity, spares, and overheads); NetApp 44% (56% overheads); EqualLogic 32% (68% overheads). Effective capacity as a percent of raw: Nimble with compression 150%; NetApp with dedupe 80%; EqualLogic 68%.
Best in Class Space Efficiency
Figure: capacity breakdown with snapshots, all vendors. Once snapshot space is set aside, usable capacity drops for every vendor; effective capacity as a percent of raw becomes: Nimble with compression 144%; NetApp with dedupe 74%; EqualLogic 7.5%.
Section 10: Replication
Replication Overview
What is replication and how does it work?
Introduction
Replication creates copies of volumes on a separate Nimble array, primarily for the purpose of off-site backup and disaster recovery.
Asynchronous, triggered by snapshots
Topologies supported: 1:1, N:1, bi-directional
Transfers compressed snapshot deltas
A replica volume can be brought online instantaneously
Controlled by two processes: management (scheduling) and data transfer
Replication Overview
Figure: replication partners. The upstream (source) array replicates snapshots over the management network to replicas on the downstream (destination) array.
One-to-One Replication
Figure: over the network, a single volume assigned to the Hourly schedule replicates to a replica of that volume, and multiple volumes assigned to the Daily schedule replicate to replicas of those volumes.
NETWORK
2012 Nimble Storage. Proprietary and confidential. Do not distribute.
Site A
Site B
Rev 4
Many-to-One Replication
Figure: volumes at Site A (assigned to SQL and Outlook schedules) and volumes at Site C (assigned to Hourly and datastore1 schedules) all replicate to replicas at Site B.
How Replication Works: The Basics
1. Create a replication partnership
2. Define a replication schedule
3. At the first replication, the entire volume is copied to the replica partner
4. Subsequent replicas contain only the changes that have occurred
Volume Ownership
Volume collections, schedules, and volumes have a notion of ownership
On the downstream array, replicated objects are owned by the upstream array and cannot be directly modified
Software Components
Partner: identifies a Nimble array that will be replicated to and/or from
Snapshot schedule: an attribute of a volume collection; details when to snapshot and replicate, and to which partner (one or more of these per volume collection)
Throttle: provides the ability to limit replication transmit bandwidth
Partner
Identifies a Nimble array that can replicate to and/or from this array. Must be created on both the upstream and downstream arrays.
Attributes:
- Name: must match the partner array's name
- Hostname: must match the partner array's management IP address
- Secret: shared secret between partners (not currently enforced)
- Connected: communications successfully established; the management process re-affirms once per minute, and the Test function performs this check on demand
- Synchronized: configuration successfully replicated; updated as needed and every 4 hours
Partner (cont'd)
- Pause/Resume: terminates all in-progress replications, inbound or outbound, to/from this partner, and does not allow new ones to start until Resume; persists across restarts
- Test (button in GUI): performs a basic connectivity test
  - Management process: controller A to B and B to A
  - Data transfer process: controller A to B and B to A
- Throttles: limit transmit bandwidth to this partner
  - Scheduling parameters include days, at time, and until time
  - Mutually exclusive with array throttles (a system can contain array-wide throttles or partner-wide throttles, but not both)
Replication Partner Notes
- Replication happens over the management IP
- You can have many replication partners
- You can pause replication by partner, but NOT by volume collection or schedule
Volume Collection Schedules
- Groups related volumes into a set that is snapshotted and replicated as a unit
- Contains one or more snapshot schedules that specify:
  - When to take snapshots
  - The to/from replication partner
  - Which snapshots to replicate (--replicate_every)
  - How many snapshots to retain locally
  - How many snapshots to retain on the replica
  - Alert threshold
- Created on the upstream array, automatically replicated to the downstream
Volume Collection (cont'd)
- Replicated as configuration data, along with all snapshot schedules that define a downstream partner
- Sent to the downstream partner as changes are made (transformed on the downstream, i.e., Replicate To becomes Replicate From)
- Volumes are created in an offline state downstream as needed
- Clones are created downstream only if the parent snapshot exists
- The partner is considered synchronized only if all relevant configuration has been successfully replicated (volcolls, schedules, volume creation)
Replication Schedules
- Replication is configured using volume collection schedule attributes
- Different schedules in the same collection must replicate to the same partner
- Calculate your change rate against your available bandwidth: can you get it all done? (A worked example follows.)
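A minimal feasibility check, with assumed numbers (the change rate, compression ratio, and link speed are illustrative, not sizing guidance):

    # Will one day of changes replicate within one day? Illustrative only.
    change_gb_per_day = 200    # assumed daily change rate on the volcoll
    compression = 0.5          # snapshot deltas are sent compressed
    link_mbps = 100            # bandwidth available for replication

    bits_to_send = change_gb_per_day * (1 - compression) * 1024**3 * 8
    hours = bits_to_send / (link_mbps * 1e6) / 3600
    print(f"~{hours:.1f} hours to replicate one day of change")
    # If this exceeds the schedule interval, replication falls behind the RPO.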
Snapshot Collection
- Creation of a replicable snapshot collection (snapcoll) triggers its replication to start, i.e., replicate every # of snapshots (see the sketch below)
  - The counter starts at the creation of the schedule and does not reset
- Snapcolls must replicate in the order they are created; replication is deferred if the volume collection is busy replicating a prior snapcoll
- Replication will not proceed unless the partner is synchronized
- A replicable snapcoll cannot be removed by the user unless replication to the partner is paused
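The replicate-every behavior reduces to a counter that never resets; a sketch with an assumed interval of 3:

    # Every Nth snapcoll under a schedule is marked replicable; the count
    # starts when the schedule is created and never resets. N = 3 here.
    replicate_every = 3
    for snap_number in range(1, 10):                  # snapcolls 1..9
        mark = "replicate" if snap_number % replicate_every == 0 else "local only"
        print(f"snapcoll {snap_number}: {mark}")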
Snapshot Collection (cont'd)
Replication status values:
- Completed: replication to the partner is complete
- Pending: replication to the partner has not yet started (pending completion of a prior snapcoll)
- In-progress: replication is underway
- N/A: a non-replicable snapcoll on the upstream; the downstream always shows this status
Also reported: start time, completion time, and bytes transferred
Replication QOS - Bandwidth Limit
- Supports multiple QOS policies
- Applies to a partner
- A global QOS can be defined for all partners, under Manage > Replication Partner
Replication Sequence
A replication episode runs in parallel for each volume in the volcoll:
1. Identify a common snapshot on the downstream, traversing volume parentage (i.e., for efficient clone support)
2. The management process checks that replication is not paused and that the volumes/volume collections are owned by the upstream array
3. The data transfer process begins to transfer data
4. The management process awaits confirmation from the data transfer process that replication is complete
5. The snapshot collection is created on the downstream after all volumes have completed data transfer
Concurrency: the data transfer process is limited to 3 streams; the management process periodically retries if resources are unavailable (see the sketch below)
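The three-stream limit behaves like a counting semaphore on the data transfer process; a sketch (volume names and timings are stand-ins, not Nimble internals):

    # At most 3 concurrent transfer streams; extra volumes wait their turn.
    import threading
    import time

    streams = threading.Semaphore(3)

    def transfer(volume):
        with streams:             # blocks until one of the 3 streams frees up
            print(f"start {volume}")
            time.sleep(0.1)       # stand-in for the actual data transfer
            print(f"done  {volume}")

    threads = [threading.Thread(target=transfer, args=(f"vol{i}",)) for i in range(6)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()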
Replication Concepts
(Diagram: the upstream and downstream arrays each hold the 10AM and 11AM snapshots; at 11:30AM the downstream is promoted and becomes the temporary upstream.)
Replication Concepts - Promote
(Diagram: after the Promote at 11:30AM, the original downstream array acts as the temporary upstream.)
When you promote a downstream replication partner, the system:
1. Suspends the replication relationship associated with the volume collection.
2. Gives ownership of the volumes to the downstream array.
3. Creates a second (local) instance of the volume collection and assumes ownership.
4. Clears Replicate From.
5. Brings the most recently replicated snapshots online as volumes. The contents of the newly available volumes are then consistent with the last replicated snapshots.
6. Begins taking snapshots per the defined schedules.
Only use Promote if the upstream array is no longer available.
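Reduced to state changes, promote looks roughly like the sketch below (the attribute names are illustrative, not the Nimble object model):

    # Sketch of promote on the downstream array, mirroring the six steps above.
    def promote(volcoll, local_array):
        volcoll["replication"] = "suspended"    # 1. suspend the relationship
        volcoll["owner"] = local_array          # 2-3. local instance takes ownership
        volcoll["replicate_from"] = None        # 4. clear Replicate From
        for vol in volcoll["volumes"]:
            vol["online"] = True                # 5. last replicated snaps -> volumes
        volcoll["local_snapshots"] = True       # 6. resume scheduled snapshots

    vc = {"volumes": [{"name": "vol1", "online": False}],
          "replicate_from": "array-A", "owner": "array-A",
          "replication": "active", "local_snapshots": False}
    promote(vc, "array-B")
    print(vc)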
Replication Concepts - Reconfigure Role
(Diagram: at 12PM the original upstream array is reconfigured as the temporary downstream, while the promoted array continues taking snapshots as the temporary upstream.)
Replication Concepts - Handover
(Diagram: a Handover from the temporary upstream returns the volume collection to the original upstream.)
When you perform a handover, the system:
1. Takes all associated volumes offline.
2. Takes a snapshot of all associated volumes.
3. Replicates these snapshots to the downstream replication partner.
4. Transfers ownership of the volume collection to the partner.
5. Brings the newly replicated volumes online.
6. Reverses the replication roles/direction.
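The ordering matters: the final snapshot is replicated before ownership and roles flip, so nothing is lost. A sketch of that ordering (print statements stand in for the real operations):

    # Sketch of the handover ordering; each print stands in for a real step.
    def handover(volumes, upstream, downstream):
        print(f"offline {volumes} at {upstream}")              # 1. volumes offline
        print(f"snapshot {volumes}")                           # 2. final snapshot
        print(f"replicate final snapshots to {downstream}")    # 3. final sync
        print(f"transfer volcoll ownership to {downstream}")   # 4. hand over
        print(f"online replicated volumes at {downstream}")    # 5. bring online
        return downstream, upstream                            # 6. roles reversed

    new_upstream, new_downstream = handover(["vol1"], "array-B", "array-A")
    print(f"upstream is now {new_upstream}, downstream is now {new_downstream}")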
Replication Concepts - Reverse Roles
(Diagram: after the handover, the arrays return to their original upstream/downstream roles; an automatic snapshot is taken before the restore.)
Demote
Demote converts non-replica objects to replica objects (a lossy failback). It:
- Takes the volumes offline
- Clears Replicate To and sets Replicate From (reverses the roles)
- Uses the greater of the local/replica snapcoll retention for the replica retention
- Gives ownership of the volcoll objects to the specified partner
- Stops taking local snapshots
Lossy because volume data is ultimately restored to that of the upstream partner; data on the downstream is not replicated across to the upstream.
Replication Concepts - Demote
(Diagram: a Demote on the temporary upstream reverses the roles back, discarding local changes in favor of the upstream partner's data.)
Debugging
- Use partner info on the upstream to determine connectivity and configuration sync (may include details about what is preventing synchronization)
- Use volcoll info on the upstream to determine the state of an in-progress replication (may include details on state-machine progress or data transfer counts)
- Per-partner and per-volume replication stats are available (tx and rx byte counts)
Replication Status
You can use the stats command on the CLI to view throughput history.
Section 11: Data Protection & DR
Recovery Scenarios
- Recovery from local snapshots: a single volume or a volume collection; replacing an entire volume
- Testing a DR site without interrupting replication: use of clones
- Full disaster recovery
Recovery Scenarios - Recovery from Local Snapshots
1. Clone the snapshot (creates a first-class volume)
Recovery Scenarios - Recovery from Local Snapshots (cont'd)
2. Add/adjust ACLs on the volume (host initiators)
3. Mount the volume (could require a resignature)
4. Register the VM and either perform a cold migration, or start the VM and do a Storage vMotion
5. Unmount the cloned volume
6. Delete the cloned volume
(A scripted sketch of this flow follows.)
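Scripted end to end, the same flow looks like this; every function below is a placeholder for the corresponding GUI/CLI action, not a real Nimble API:

    # Placeholder objects standing in for array-side and host-side actions.
    class Array:
        def clone(self, volume, snapshot):
            print(f"clone {volume}@{snapshot} -> {volume}-clone")
            return f"{volume}-clone"
        def add_acl(self, vol, initiators):
            print(f"grant {initiators} access to {vol}")
        def delete(self, vol):
            print(f"delete {vol}")

    def recover(array, volume, snapshot, initiators):
        clone = array.clone(volume, snapshot)         # 1. clone the snapshot
        array.add_acl(clone, initiators)              # 2. add/adjust ACLs
        print(f"mount {clone} (may need resignature)")            # 3. mount
        print("register VM; cold migration or Storage vMotion")  # 4. migrate
        print(f"unmount {clone}")                     # 5. unmount the clone
        array.delete(clone)                           # 6. delete the clone

    recover(Array(), "vol1", "snap-10am", ["iqn.1998-01.com.vmware:esx1"])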
Recovery Scenarios
Restore to a previous snapshot:
1. Quiesce applications
2. Unmount the active volume(s) from the host(s)
3. Select the snapshot/snap-collection to restore
4. Click Restore
5. Mount the volume(s)
6. Start applications
Recovery Scenarios
Testing a DR site without interrupting replication:
1. Go to the downstream replica
2. Clone the snapshot (creates a first-class volume)
3. Add/adjust ACLs on the volume
4. Mount the volume
5. Interrogate/test the data and applications (via Windows, ESX, etc.)
6. Unmount the volume
7. Delete the cloned volume
Recovery Scenarios
Full disaster recovery (primary site is inaccessible) - failover to the DR site:
1. Promote the downstream volume collections at the DR site
2. Add/adjust ACLs on the volumes
3. Mount volumes to application servers (Windows/ESX)
4. Start the production environment at the DR site
Recovery Scenarios
Full disaster recovery - failback to the primary site:
1. Install a new array and configure it as a downstream partner
2. Allow replication of volumes while still running at the DR site
3. Gracefully shut down apps at the DR site
4. Perform a Handover to the primary site
5. Start the production environment at the primary site
Application-Integrated Data Protection
NPM Protection for Windows Applications
NPM (Nimble Protection Manager) leverages VSS (Volume Shadow-copy Service) for improved protection with fast snapshots and efficient capacity and bandwidth utilization.
How it works:
1. A protection schedule triggers the snapshot process.
2. NPM talks to the Microsoft VSS service.
3. VSS tells Exchange to quiesce its mail stores.
4. VSS tells NTFS to flush the buffer cache.
5. VSS tells the Nimble array to take a snapshot.
6. The Nimble array captures near-instant snapshots of all volumes in the collection.
7. Optional: NPM runs database verification on a predefined schedule to ensure consistency, and truncates logs.
8. NPM triggers WAN-efficient replication on a predefined schedule.
9. Optional: existing backup software mounts the snapshot for a weekly archive copy to tape.
10. When needed, snapshots provide fast restore capability.
(Diagram: NPM coordinates the VSS service, Exchange mail stores, NTFS, and the Nimble array; a backup server mounts snapshots for the archive copy to tape.)
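The essential property of steps 3-6 is the bracketing: quiesce and cache flush must surround the array snapshot for it to be application consistent. A sketch of that bracketing (placeholder prints, not the VSS API):

    # Quiesce/flush bracket the snapshot so it is application consistent.
    from contextlib import contextmanager

    @contextmanager
    def quiesced(app):
        print(f"VSS: {app} quiesced (writes held)")
        try:
            yield
        finally:
            print(f"VSS: {app} resumed")

    def protected_snapshot():
        with quiesced("Exchange mail stores"):
            print("VSS: NTFS flushes the buffer cache")
            print("array: near-instant snapshot of all volumes in the collection")
        print("NPM: optional verification and log truncation, then replication")

    protected_snapshot()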
VMware Synchronized Snapshots
The Nimble OS can take a VM application-consistent snapshot.
Define the vCenter host. At first use you will also need to provide:
- A username with administrator access
- The password for that administrator
VMware Synchronized Snapshots (cont'd)
Return to the details page of the volume collection and click Validate to ensure:
- The username and password are correct
- The user has the correct permissions
SRM with Nimble Storage Replication Adapter
VMware vCenter Site Recovery Manager (SRM):
- A host-based application that lets you set up disaster recovery plans for the VMware environment before you need them.
- A vCenter plug-in, so disaster recovery tasks are managed inside the same GUI tool as your other VM management tasks.
- When used with the Nimble Storage Replication Adapter, lets you create and test a Nimble array-based DR recovery plan without impacting your production environment.
- In DR scenarios, your Nimble CS-Series arrays keep your data protected and replicated for immediate availability from the DR replication partner.
- Requires VMware vCenter Site Recovery Manager 4.1 or later.
VMware SRM + Nimble Replication - Efficient DR Automation
(Diagram: Site A (primary) and Site B (recovery) each run VMware vSphere servers, vCenter Server, and Site Recovery Manager; the Nimble arrays handle storage replication between the sites.)
Nimble arrays support SRM v4.1 and v5.0. Many new features in 5.0:
- Planned migration (vs. unplanned); use case: disaster avoidance, datacenter relocation
- Re-protection; use case: after a successful failover, reverse the roles of the active/replica sites
- Failback; use case: disaster recovery testing with live environments, where genuine migrations return to their initial site
- Disaster recovery event: an initial attempt is made to shut down the protection group's VMs and establish a final synchronization between sites
- Scalability (number of VMs, protection groups, etc.)
SRM - Planned Migration
Ensures an orderly and pretested transition from a protected site to a recovery site:
- Ensures systems are quiesced
- Ensures all data changes have been replicated
- Halts the workflow if an error occurs, allowing you to evaluate and fix the error
- Starts the virtual machines at the recovery site
- Systems will be application consistent
SRM - Reprotection
- Reverses replication to ensure continued protection
- For use after a recovery plan or planned migration
- Selecting Reprotect establishes synchronization and attempts to replicate data back to the primary site
(Diagram: the same two-site SRM topology, with storage replication now flowing from Site B (recovery) back to Site A (primary).)
SRM - Failback
- Runs the same workflow used to migrate the environment to the recovery site
- Executes only if reprotection has successfully completed
- Failback ensures the following:
  - All virtual machines that were initially migrated to the recovery site will be moved back to the primary site.
  - Environments that require disaster recovery testing with live environments and genuine migrations can be returned to their initial site.
  - Simplified recovery processes enable a return to standard operations after a failure.
  - Failover can be done in case of disaster or planned migration.
SRM - Disaster Recovery Event
When a disaster event occurs, SRM will:
- Attempt to shut down the protection group's VMs
- Attempt to establish a final synchronization between sites
This is designed to ensure that VMs are static and quiescent before the recovery plan runs. If the protected site is not available, the recovery plan runs to completion even if errors are encountered.
vStorage APIs for Array Integration (VAAI)
VAAI is a feature in the Nimble OS and supports:
- Zero Blocks/Write Same primitive: a hardware-acceleration feature for the fundamental zeroing operation
- Hardware Assisted Locking
- SCSI Unmap
VAAI - Write Same
VMware I/O operations such as VM creation, cloning, backup, snapshots, and vMotion require that data be zeroed via a process called block zeroing. The Write Same API speeds up this process: rather than the host sending repetitive zero-write commands to the array, it sends a single command describing a large block of zeros, and the array executes the repeated writes itself.
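The saving is in command count. A quick arithmetic sketch (the region and I/O sizes are assumptions for illustration):

    # How many host writes does zeroing a region take without the primitive?
    region_gb = 40       # e.g., an eager-zeroed virtual disk (assumed size)
    io_size_kb = 64      # assumed per-command write size without VAAI

    writes_without_vaai = region_gb * 1024 ** 2 // io_size_kb
    print(f"without VAAI: {writes_without_vaai:,} zero-writes from the host")
    print("with WRITE SAME: 1 command; the array repeats the pattern internally")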
VAAI Support - Hardware Assisted Locking
Hardware Assisted Locking (ATS):
- Enabled if the array supports it
- Allows an ESX server to offload lock management to the storage hardware, avoiding locking the entire VMFS file system
Without ATS, a number of VMFS operations cause the file system to become temporarily locked for exclusive write use by one ESX node. These include:
- Creating a new VM or template
- Powering a VM on or off
- Creating or deleting a file or snapshot
- Moving VMs via vMotion
VAAI Support - SCSI Unmap
SCSI Unmap (space reclamation): vSphere 5.0 introduced the VAAI Thin Provisioning Block Space Reclamation primitive (UNMAP), designed to efficiently reclaim deleted space to meet continuing storage needs. In ESX 5.1 it is enabled by default.
- Before SCSI Unmap: the host deletes data, but the array doesn't understand that the data is no longer relevant; the data remains on the array, consuming space.
- With SCSI Unmap: the host deletes data, the array understands and releases the space, and the space is reclaimed and usable again.
KB from VMware regarding SCSI Unmap: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976
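In accounting terms, UNMAP is what lets the array's used-space counter follow the file system's. A minimal sketch:

    # Space accounting on a thin LUN with and without UNMAP.
    class ThinLun:
        def __init__(self):
            self.used_gb = 0
        def write(self, gb):
            self.used_gb += gb     # host writes allocate space on the array
        def unmap(self, gb):
            self.used_gb -= gb     # UNMAP tells the array to release space

    lun = ThinLun()
    lun.write(100)                              # host writes 100 GB
    # ...host then deletes 60 GB of files...
    print(f"without UNMAP the array still holds {lun.used_gb} GB")   # 100
    lun.unmap(60)                               # with UNMAP, space is released
    print(f"with UNMAP the array holds {lun.used_gb} GB")            # 40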
Nimble vCenter Plugin
The Nimble vCenter Plugin works with vCenter to:
- Clone datastores and snapshots
- Resize datastores
- Edit protection schedules
- Take snapshots and set them online/offline
- Restore from a snapshot
- Delete snapshots
The vCenter plugin requires ESX 4.1 or later.
Registering the Plugin
Register from the CLI (the angle-bracket values are placeholders for your environment):
vmwplugin --register --username <vcenter-username> --password <password> --server <vcenter-server>
Review the VMware Integration Guide 1.2 - Using Nimble's vCenter Plugin for details on using this plugin.
Note: the plugin is not supported for multiple datastores located on one LUN, for one datastore spanning multiple LUNs, or if the LUN is located on a non-Nimble array.
Section 12: Support
Proactive Wellness
(Diagram: each customer site sends 5-minute heartbeats with comprehensive telemetry data to Nimble Support, which performs real-time analysis of over 150,000 heartbeats per day across all customer sites.)
Proactive Wellness (cont'd)
(Diagram: the same heartbeat flow feeds proactive wellness and automated case creation at Nimble Support.)
Examples of proactive wellness checks:
- Replication conformance to RPO
- Alerts for unprotected volumes
- MPIO misconfiguration warnings
- Opportunities to free up space
- Connectivity and health checks before software upgrades