Scale-Out Storage
Sanel Samardzic, Sr. Systems Engineer - MIT
VMAX AFA and XtremIO
VMAX All Flash
Engineered for density, latency, and simplicity:
• 15.4TB SSDs for the highest IOPS/TB/floor-tile density
• 6M+ IOPS, <0.5ms latency, 150GB/s bandwidth (performance numbers based on 8 V-Bricks, 8K random read hits)
• Appliance-like packaging, software included
• Simple, simple, simple: one tier, any skew, no HDDs
The VMAX All Flash Family
Software Package Highlights
• F software: SnapVX, compression
• FX software: above + SRDF, D@RE, eNAS, ViPR Suite, PowerPath/VE, and more…

Model specifications:
• VMAX 250F: 1M IOPS (8K RRH), 1PBe capacity, 64 FC/iSCSI ports, 1 to 2 V-Bricks
• VMAX 950F: 6.7M IOPS (8K RRH), 4PBe capacity, 192 FC/iSCSI or 256 FICON ports, 1 to 8 V-Bricks
VMAX All Flash 950F/FX Configuration Details
• V-Bricks in single increments
– Redundant dual-director engine design
– 72 Broadwell CPU cores @ 2.3GHz (+ Turbo)
• Up to 3 I/O module pairs per V-Brick
– Each 4x 16Gb FC, 16Gb FICON, or 10Gb iSCSI
– eNAS 10Gb IP
• 2 DAEs per V-Brick
– 240 x 2.5" flash drives per V-Brick
• RAID 5 (7+1) or RAID 6 (14+2)
VMAX All Flash 250F/FX Configuration Details
• V-Bricks in single increments
– Redundant dual-director engine design
– 48 Broadwell CPU cores @ 2.2GHz
• Up to 4 I/O module pairs per V-Brick
– Each 4x 16Gb FC or 10Gb iSCSI (no mainframe/FICON support)
– eNAS 10Gb IP
• 2 DAEs per V-Brick (12Gb SAS)
– 50 x 2.5" flash drives per V-Brick
• RAID 5 (3+1, 7+1) or RAID 6 (6+2) (parity overheads for both models' layouts are sketched below)
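The usable fraction each of these layouts implies, for both the 250F and 950F options above, is simple data-to-total arithmetic. A minimal sketch, ignoring the spares and system overhead that published usable-capacity figures account for:

```python
# Usable fraction of raw capacity per RAID layout: data / (data + parity).
# Simple arithmetic only; not Dell EMC-published efficiency figures.
layouts = {
    "RAID 5 (3+1)":  (3, 1),
    "RAID 5 (7+1)":  (7, 1),
    "RAID 6 (6+2)":  (6, 2),
    "RAID 6 (14+2)": (14, 2),
}

for name, (data, parity) in layouts.items():
    print(f"{name}: {data / (data + parity):.1%} usable")
# RAID 5 (3+1): 75.0%   RAID 5 (7+1): 87.5%
# RAID 6 (6+2): 75.0%   RAID 6 (14+2): 87.5%
```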
VMAX All Flash 250F/FX Sample Configuration
• Dual V-Brick system
– 96 cores, 4TB cache
– 80 active drives (+2 spares)
– Up to 64 host ports
– Up to 4 eNAS data movers
• 80 x 7.7TB flash drives (RAID 5, 7+1)
– ~500TB writable
– ~1PB effective (2:1 compression)
– ~1.3PB host-visible capacity (thin/allocated)
• <5 kVA, <600 lb. (without rack)
The capacity math behind these figures is sketched below.
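A rough reconstruction of that capacity math, assuming the 80 active 7.7TB drives in RAID 5 (7+1) and the slide's 2:1 compression ratio. Spares and system overhead are ignored, so the result lands slightly above the quoted ~500TB:

```python
# Sample-configuration capacity arithmetic (assumptions noted above).
drives, drive_tb = 80, 7.7
raid_fraction = 7 / 8                        # RAID 5 (7+1): 7 data, 1 parity
writable_tb = drives * drive_tb * raid_fraction
effective_tb = writable_tb * 2.0             # 2:1 inline compression

print(f"writable:  ~{writable_tb:.0f} TB")   # ~539 TB (slide quotes ~500TB)
print(f"effective: ~{effective_tb:.0f} TB")  # ~1078 TB, i.e. ~1PBe
```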
VMAX All-Flash Technology Refresh Example (800TB usable)
Replacing a 9-bay VMAX 20K system with a single-bay VMAX All Flash system delivers:
• 10X more performance
• 40% lower TCO
• 87% less energy
• 92% smaller footprint
• 98% fewer drive replacements
VMAX All Flash Enterprise Data Services
• Extensive ecosystem: eNAS (block & file); VMware VAAI & VVols; Microsoft Hyper-V & ODX offload; broadest OS, server/DB, and cluster support
• Proven protection: local replication at scale; multi-site remote replication; SRDF/Metro active/active; ProtectPoint app recovery
• Management & automation: embedded Unisphere; Unisphere 360 (up to 200 arrays); AppSync app & DB integration; ViPR storage automation
• Massive consolidation: scale up and scale out; 64 ports and 64,000 LUNs; priority I/O control (QoS); CloudArray object/cloud integration
VMAX All-Flash Online Code Updates: True NDU (unique in the industry)
• <10-second array OS upgrade
• No component downtime
– No rolling-outage upgrade
– No failover/failback processes involved
– No switching of LUN ownership or trespass required
• Ports never drop light
– Servers never see a logout/login (no fabric RSCN)
• Online downgrades work the same way
• A historical capability going back many generations
The cost of downtime:
• $1.8 million per day
• $45,000 per hour
• $750 per minute

"Thank you VMAX for giving me back my weekends."
"VMAX NDU is the gold standard for upgrades."
"Nobody knows it's happening – it just works."
Remote Replication Gold Standard – SRDF
• Synchronous (SRDF/S): zero data loss, scalable consistency; distances up to 100km
• Asynchronous (SRDF/A): high performance at extended (unlimited) distance; multi-cycle mode; remote link resiliency
• Metro (SRDF/Metro): active/active with automated failover/failback and witness-based arbitration; distances up to 100km
• Across modes: non-disruptive migrations; 2-site, 3-site, and 4-site replication topologies; simple, with <2 minutes to configure
[Diagrams: numbered multi-site topologies for each mode]
VMAX Non-Disruptive Migration (NDM): Migrations Simplified
• Three simple steps: Create, Cutover, Commit
• Customer-usable and free of charge
• Application-level migrations, including large-scale migrations
• VMAX to VMAX All Flash, with a broad host support matrix
• Maintains existing replication (snapshots & SRDF)
[Diagram: a host (single or cluster) running multipathing software is connected to both the source VMAX (5876) and the target VMAX All Flash (5977); SRDF technology moves data between the arrays at metro distances]
TimeFinder SnapVX
• Increased agility: up to 256 snaps per source; up to 1,024 linked targets per source
• Ease of use: user-defined names/versions; create group snaps in one click; automatic expiration
• Reduced impact: target-less snapshots
[Diagram: a production volume with a chain of snapshots and a linked target]
A conceptual sketch of target-less, pointer-based snapshots follows.
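The "target-less" and "reduced impact" claims come from pointer-based, redirect-on-write snapshots. Below is a generic conceptual model of that technique with illustrative class and method names; it is not SnapVX's actual on-array implementation:

```python
# Conceptual model of target-less, pointer-based snapshots.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)        # logical address -> data block
        self.snapshots = {}

    def snap(self, name):
        # A snapshot is just a copy of the pointer map: no data is moved
        # and no target volume is required ("target-less").
        self.snapshots[name] = dict(self.blocks)

    def write(self, addr, data):
        # New writes land on new blocks (redirect-on-write); snapshot
        # pointers still reference the old data, so snaps stay stable.
        self.blocks[addr] = data

    def link(self, name):
        # Linking presents a snapshot as a host-addressable target.
        return Volume(self.snapshots[name])

prod = Volume({0: "A", 1: "B"})
prod.snap("daily-1")
prod.write(0, "A2")                       # production moves on
target = prod.link("daily-1")             # linked target sees old data
print(target.blocks[0], prod.blocks[0])   # A A2
```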
ProtectPoint: Storage-Integrated Protection
Dramatically faster backup and recovery:
• 20x faster backup
• 10x faster recovery
• Eliminates application impact
• Reduces cost and complexity
VMAX All-Flash Storage Efficiency: Reduces TCO
• Inline compression (block and file): 2:1*
• With snaps, thin provisioning, and zero-space reclaim: 4:1 overall storage efficiency
* Compression rates vary depending on customer applications and environments. A 2:1 compression ratio is expected for typical OLTP workloads.
Introducing XtremIO X2
Why XtremIO?
• Consistent performance: inline, all-the-time data services with no performance impact
• App-integrated copies: rich application integration; no-compromise copy services
• Unmatched efficiency: maximize efficiency with deduplication and compression
• Flash-optimized, with multi-dimensional scalability

What's new in X2:
• New multi-dimensionally scalable hardware
• Software-driven performance and efficiency improvements
• iCDM use-case enhancements
• New simple HTML5 UI
• New metadata-aware native replication (mid-2018)
XtremIO In-Line, All-The-Time Data Services
• Thin provisioning: all volumes are thin, optimized for data saving
• Deduplication: inline; each block is written once; no post-processing
• XtremIO Virtual Copies: super-efficient, in-memory metadata copies
• Compression: inline; only compressed blocks are written; no post-processing
• D@RE: always-on encryption with no performance impact
• XtremIO Data Protection (XDP): a single "RAID model" with double parity and 89% usable capacity (see the sketch below)
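The usable figure follows from double-parity arithmetic. A minimal sketch, where the 23+2 stripe width is an assumption based on XtremIO's commonly cited ~8% parity overhead; the slide's 89% presumably also nets out spare and metadata reserves:

```python
# Usable fraction of a k+2 double-parity stripe is k / (k + 2).
k = 23  # assumed stripe width; not stated on the slide
print(f"{k}/{k + 2} = {k / (k + 2):.0%} usable before reserves")  # 92%
```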
XtremIO Scale-Out Capabilities
• Scale up an X-Brick to 138TB
• Scale out a cluster to 8 X-Bricks / 1.1 PBu
• Effective-capacity figures assume 6:1 data reduction
[Diagram: X-Bricks, each with two active controllers and rows of flash SSDs, scaling out to an 8-X-Brick cluster]
Up to 5.5PB of effective capacity (the capacity arithmetic is sketched below)
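The cluster arithmetic, as a minimal sketch. Note that 1.1 PBu yielding 5.5PBe implies roughly a 5:1 multiplier on usable capacity, a bit below the quoted 6:1 reduction assumption, presumably because of system reserves:

```python
# Cluster capacity from the per-X-Brick figures above.
xbricks, tb_per_xbrick = 8, 138
usable_pb = xbricks * tb_per_xbrick / 1000       # ~1.1 PBu
print(f"usable:    ~{usable_pb:.2f} PBu")
print(f"effective: ~{usable_pb * 5:.1f} PBe")    # ~5.5 PBe at ~5:1
```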
Increased Storage Efficiency
For a 100TB environment: 25TB stored on XtremIO at 4:1 reduction vs. 20TB on X2 at 5:1 reduction. Since 5:1 is a 25% higher reduction ratio than 4:1, X2 delivers 25% better data reduction on average.
PCIe NV-RAM Unit (replaces the BBU in X1)
• Used for data and metadata vaulting
• Increased reliability
• Reduced complexity and cabling
• Reduces overall cluster rack units (RUs)
• Leverages a super-capacitor
• Allows odd-numbered X-Brick configurations
XtremIO Virtual Copies Are Busy!
Across the entire install base, XtremIO Virtual Copies handle ~40% of the I/O workload:
• Total read I/Os: 41% to snapshots, 59% to volumes
• Total write I/Os: 40% to snapshots, 60% to volumes
XtremIO Metadata-Aware Native Replication
• Easy operation: uses XtremIO in-memory snapshots; wizard-based
• Best protection: full operational disaster recovery; RPO as low as 30 seconds; immediate RTO; up to 1,000 recovery points; "fan-in" configurations
• Superior performance: preserves XtremIO's high performance; efficient metadata-aware, compression-aware replication
XtremIO native replication delivers 75% data reduction with RPOs as low as 30 seconds. Between the primary site and the DR site, only the delta between consecutive snapshots, DELTA(S1, S2), is examined: deduplicated and net-new blocks are transferred, while blocks already present at the target need just pointer updates. A minimal sketch of this follows.
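The sketch below models snapshots as address-to-fingerprint maps and the target as the set of fingerprints it already stores. It illustrates the general technique, not XtremIO's actual wire protocol; every name here is hypothetical:

```python
# Metadata-aware snapshot-delta replication, conceptually: ship only
# blocks that changed since the last cycle AND are not already at the
# target (by content fingerprint); everything else is a pointer update.

def replicate_delta(s1, s2, target_fingerprints):
    """s1, s2: {address: fingerprint} maps for consecutive snapshots."""
    to_send, pointer_updates = [], []
    for addr, fp in s2.items():
        if s1.get(addr) == fp:
            continue                      # unchanged since last cycle
        if fp in target_fingerprints:
            pointer_updates.append(addr)  # dedup hit: metadata only
        else:
            to_send.append(addr)          # net-new block: transfer it
    return to_send, pointer_updates

s1 = {0: "aa", 1: "bb"}
s2 = {0: "aa", 1: "cc", 2: "bb"}          # addr 1 rewritten, addr 2 added
send, ptrs = replicate_delta(s1, s2, target_fingerprints={"aa", "bb"})
print(send, ptrs)                         # [1] [2]
```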
Why XtremIO X2?
• Improved performance: 80% better application latency; 2X better copy operations; 25% faster boot times; up to 40% more VDI users
• Improved efficiency: 25% better data reduction; 2X the number of XtremIO Virtual Copies; 4X better rack density
• TCO savings per X-Brick: 1/3rd lower $/GB; 33% lower $/desktop; up to 4,000 virtual desktops hosted per X-Brick
NVMe
Emergent Non-Volatile Media Impact: Addresses Memory/Storage Latency and Capacity Gaps
[Chart: access latency from ~1ns to ~10ms, with relative capacity (not to scale) and cost falling as latency rises. Processor SRAM and DRAM use memory-access semantics; emergent NV media (the emergent memory domain) fills the capacity/latency gap; high-speed storage (NAND flash: SLC, MLC, TLC, QLC) and low-speed storage (HDD) use I/O block-access semantics]
NVM Express and I/O Latency
Source: Storage Technologies Group, Intel. Comparisons between memory technologies based on in-market product specifications and internal Intel specifications.
[Chart: total I/O latency broken into drive latency, controller latency (i.e., SAS HBA), and software latency for HDD + SAS, NAND + SAS, NAND + NVMe, and SCM + NVMe]
• NVMe drives down connection latency
• NAND technology offers ~100x latency reduction versus HDD
• Storage-Class Memory technology offers ~10x latency reduction versus NAND
Slide credit: Intel and NVM Express
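As a back-of-envelope check of that latency ladder, using representative order-of-magnitude device latencies (assumed values, not from the slide):

```python
# Illustrative media latencies implied by the ~100x and ~10x claims.
hdd_s = 10e-3            # assume ~10 ms for a spinning disk
nand_s = hdd_s / 100     # ~100x lower than HDD -> ~100 us
scm_s = nand_s / 10      # ~10x lower than NAND -> ~10 us
for name, t in [("HDD", hdd_s), ("NAND", nand_s), ("SCM", scm_s)]:
    print(f"{name}: ~{t * 1e6:.0f} us")
```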
Dell EMC Storage Technology Evolution
Dell EMC is working closely with NVMe and Storage-Class Memory suppliers and will be a leader in integrating, optimizing, and delivering next generation flash solutions
• 1988 – SCSI + HDD: industry's first intelligent cached disk array, combining cache and commodity HDDs
• 2008 – SAS + SLC: industry's first enterprise array to support SSD flash and automated tiering
• Next – NVMe + NAND: leadership for enterprise arrays delivering NVMe-connected SSDs
• Future – NVMe + SCM: leadership for enterprise arrays delivering NVMe-connected SCM
Q&A
Thank you