8/10/2019 GT Bay Area DGS 2014 Presentation Optimizing Storage Environments - B Scott
Optimizing Your Storage in Virtualized Environments
Copyright 2014 Fusion-io, Inc. All rights reserved.
Blake Scott
I/O Blender Alters Performance Reality

[Diagram: virtual machines running Exchange, SQL Server, and SharePoint each issue distinct I/O patterns (64K random and sequential read/write, 8K random read/write, 64K sequential writes), but VMware vSphere, aka "The Randomizer", blends them into a single random stream at the shared array.]
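The blender effect above can be reproduced in a few lines: interleave several purely sequential per-VM request streams, as a hypervisor sharing one datastore effectively does, and measure how random the merged stream looks to the array. A toy sketch (block addresses and stream sizes are invented for illustration):

```python
def blend(streams):
    """Round-robin interleave per-VM request streams, roughly what a
    hypervisor sharing one datastore does to the array's view of I/O."""
    result = []
    iters = [iter(s) for s in streams]
    while iters:
        for it in list(iters):
            try:
                result.append(next(it))
            except StopIteration:
                iters.remove(it)
    return result

def random_fraction(lbas):
    """Fraction of requests NOT contiguous with the previous one."""
    jumps = sum(1 for a, b in zip(lbas, lbas[1:]) if b != a + 1)
    return jumps / (len(lbas) - 1)

# Three VMs, each issuing a purely sequential stream of block addresses
vm_streams = [list(range(base, base + 100)) for base in (0, 10_000, 20_000)]

per_vm = random_fraction(vm_streams[0])       # 0.0: sequential inside the VM
blended = random_fraction(blend(vm_streams))  # 1.0: fully random at the array
```

Each guest sees itself doing sequential I/O, yet the array sees no two consecutive requests on adjacent blocks, which is exactly why HDD-backed datastores fall apart under consolidation.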
Disks Are Not The Answer

- Mechanical limitations: more time spent seeking than reading/writing
- Rotational latencies: there's no fix
- HDDs are virtualization kryptonite
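The mechanical limits above translate directly into an IOPS ceiling: every random I/O pays an average seek plus, on average, half a rotation before any data moves. A back-of-the-envelope calculation (the 3.5 ms seek figure is an assumed value for a 15K RPM enterprise drive, not from the slides):

```python
def hdd_random_iops(avg_seek_ms, rpm):
    """Upper bound on random IOPS for one disk: each random I/O pays
    an average seek plus half a rotation before data transfer starts."""
    rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# Illustrative figures for a 15K RPM enterprise drive (assumed values)
iops = hdd_random_iops(avg_seek_ms=3.5, rpm=15_000)  # on the order of 180 IOPS
```

A couple hundred random IOPS per spindle against tens of thousands demanded by a blended VM workload is the gap the rest of the deck is about.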
Storage Technology Choices

- Traditional SAN: shared capacity enables VM HA
- Software Defined Storage: leverages physical storage infrastructure; optimizes storage for specific workloads
- All-Flash Array: delivers high IOPS; higher $/GB than traditional arrays
Understanding the Storage Challenge

- VMs share storage resources, which creates contention
- VMs have unique storage requirements: IOPS, throughput, latency, capacity, availability

[Diagram: a shared array rated at 15,000 IOPS, 4 TB, 5 ms latency serving many VMs]
Joys of Noisy Neighbors

- VMs can consume more storage resources than expected
- Increases storage resource contention
- Usually has a negative impact on other VMs

[Diagram: the same 15,000 IOPS, 4 TB, 5 ms latency array, with one VM crowding out the rest]
Storage Quality of Service

- Many storage systems enable QoS maximums to limit IOPS or throughput
- Can be set on VMs, volumes, or groups of volumes

[Diagram: QoS maximums of 1K, 2K, 3K, and 4K IOPS applied to four volumes]
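QoS maximums of this kind are typically implemented as a per-volume token-bucket rate limiter. A minimal sketch (an illustrative model, not the actual implementation of any array) that exhibits the throttling behavior the next slide complains about, rejecting I/O once the configured ceiling is hit even when the array has headroom:

```python
class IopsCap:
    """Token-bucket limiter of the kind behind per-volume QoS maximums:
    tokens refill at `limit_iops` per second, each admitted I/O spends one."""

    def __init__(self, limit_iops):
        self.rate = limit_iops
        self.tokens = 0.0   # start empty; seed with limit_iops to permit a burst
        self.last = 0.0

    def admit(self, now):
        # Refill proportionally to elapsed time, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False        # throttled, even if the array itself is idle

cap = IopsCap(limit_iops=1000)
# Offer 2,000 I/Os spread evenly over one second; roughly half are admitted
admitted = sum(cap.admit(now=i / 2000) for i in range(2000))
```

The cap enforces the ceiling regardless of what the array could actually deliver at that moment, which is precisely the objection raised next.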
But QoS Maximums Don't Work

- Challenging to predict the amount of IOPS or throughput that VMs require
- Maximums can throttle VMs even when storage resources are available to meet their demands
- Lack the business intelligence to meet end-user needs

[Diagram: the same 1K-4K IOPS maximums, showing performance consumed versus performance needed but not available]
Considerations for Meeting Your Storage Goals

- Architecture matters
- Priorities drive performance
- Not all data needs flash
Architecture Matters: Why Choose PCIe Flash?

- SAS/SATA flash: faster than HDD; max 2.0 TB capacity; consumes 1 drive bay
- PCIe flash: faster than SSD; max 5.2 TB capacity; consumes 0 drive bays

PCIe flash integration outperforms the legacy architecture:
- 4x more read IOPS
- 10x more write IOPS
- 5x more throughput
- 4x faster response (latency)
- 80x more endurance
Architecture Matters: Datapath Extends the Value of Flash

Legacy architecture:
- All writes go to disk drives
- Wasteful capacity utilization
- Controller bottleneck

Flash-first datapath:
- All writes go to flash
- Maximizes disk capacity utilization
- Intelligence moves data between tiers (high priority, low priority, all data)
Architecture Matters: Quality of Service Policies Ensure Predictability

Legacy architecture:
- Cannot manage performance
- Equal access to a small % of flash
- Contention leads to inconsistency

With QoS policies:
- Policy-managed performance
- Ensures minimums are met
- Adjustable in real time

[Diagram: under equal access, a performance spike impacts mission-critical, business-critical, and non-critical workloads alike; under QoS policies they remain unchanged]
Quality of Service Policies Ensure Predictability (continued)

Legacy architecture during an unplanned event:
- Impact is shared equally
- Inconsistency is amplified
- Expectations are not met

With QoS policies during an unplanned event:
- Prescribed resource distribution
- Critical application performance is sustained
- Maintains business priorities
- Real-time datapath decisions for optimal performance

[Diagram: an unplanned event impacts all tiers equally under the legacy architecture; with QoS, mission-critical stays unchanged and business-critical sees only slight change]
Architecture Matters: Prioritized Active Caching for Critical Data

Legacy architecture:
- Low-priority data (cold blocks, snapshots, remote copies, clones, RAID parity) consumes valuable performance
- Pin-to-flash becomes less efficient
- Wastes the all-flash array investment

Prioritized active caching:
- You choose what data gets flash
- Keeps the active data-set (critical application, hot data) in flash
- Cold, low-priority data stays on disk
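The "intelligence moves data between tiers" idea reduces to ranking blocks by access heat and filling flash from the hottest down, with any admin-pinned data placed first. A toy placement pass (frequency-based heat and the block names are invented for illustration, not the vendor's actual algorithm):

```python
from collections import Counter

def place_blocks(access_log, flash_capacity_blocks, pinned=()):
    """Toy data-placement pass: admin-pinned blocks go to flash first,
    then the hottest blocks by access count until flash is full;
    everything else stays on disk."""
    heat = Counter(access_log)
    flash = set(pinned)
    for block, _count in heat.most_common():
        if len(flash) >= flash_capacity_blocks:
            break
        flash.add(block)
    return flash

# Invented access log: two hot database blocks, two cold ones
log = ["db1"] * 50 + ["db2"] * 30 + ["backup"] * 2 + ["logs"] * 1
flash = place_blocks(log, flash_capacity_blocks=2)
# hot database blocks land on flash; cold backup/log data stays on disk
```

A real tiering engine would work on extents, decay old heat over time, and demote as well as promote, but the placement decision has this shape.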
fusionio.com | At the speed of NOW.

Thank You
Support Details

- Software updates included
- Hardware upgrades do not affect support cost
- Proactive phone-home monitoring
- Single support contract
- White-glove storage engineer support
- Lowest support TCO in the industry
- Support contract for the base unit covers future hardware upgrades while under contract

Offerings:
- 7 day x 24 hour phone | onsite parts
- 7 day x 24 hour phone | NBD parts
- 5 day x 9 hour phone support
n5 Series Technical Specifications

Flash Capacity:
- n5-200: 2.0 TB (base) / 7.2 TB (max)
- n5-300: 2.6 TB (base) / 7.8 TB (max)
- n5-500: 5.2 TB (base) / 10.4 TB (max)
- n5-1000: 10.4 TB (base) / 15.6 TB (max)

Disk Capacity (RAID 6):
- n5-200: 32 TB raw (base) / 22 TB usable; 128 TB raw (max) / 88 TB usable
- n5-300, n5-500, n5-1000: 64 TB raw (base) / 44 TB usable; 256 TB raw (max) / 176 TB usable

Performance Rating:
- n5-200: 150,000 IOPS*, 2.0 GB/sec throughput**
- n5-300: 200,000 IOPS*, 2.4 GB/sec throughput**
- n5-500: 225,000 IOPS*, 2.7 GB/sec throughput**
- n5-1000: 250,000 IOPS*, 3.0 GB/sec throughput**

RAM: 96 GB / 192 GB (by model)

Included Features: Quality of Service, Service Levels, Dynamic Data Placement, Data Protection (Snapshot and Replication)

Storage Processors: Dual Active-Active

Network Interfaces: Data: (4) 1/10GbE SFP+, iSCSI / Management: (4) 1GbE RJ45, http, https

Hardware Availability: Redundant storage processors, fans, power supplies, and network connections; dual-port SAS drives
Flash First Architecture and Data Path

- PCIe-attached flash removes bottlenecks: up to 40x faster
- PCIe maximizes performance and capacity
- Flash is used for all writes, and for read caching of high-priority data

[Diagram: a conventional hybrid routes I/O through a CPU and SAS controller to SSDs; the Fusion ioControl Hybrid integrates PCIe-attached flash directly]
Performance Benefits

Internal testing results versus other hybrids:
- Run 3x more virtual machines (VMs per system, 64K block size)
- Boot 4.5x more concurrent desktops (concurrent desktop boots, read workloads)
Architecture Matters: Host-Based Read Cache for Low Latency

Legacy architecture (server-side flash alone):
- Network becomes the bottleneck
- Data stranded on servers
- Flash underutilized

With host-based read cache software:
- All writes are accelerated by flash
- Reads are cached at the host and the array
- Eliminates the server as a single point of failure
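A host-side read cache of the kind described is essentially a read-through, write-through cache: reads populate local flash on a miss, while writes always land on the shared array so no data is stranded on the server. A minimal sketch (a dict stands in for the shared array; this is an illustrative model, not the vendor's actual caching software):

```python
class HostReadCache:
    """Read-through cache in front of shared array storage: reads are
    served locally when possible; writes go straight to the array so
    the server never holds the only copy (no stranded data, no SPOF)."""

    def __init__(self, array):
        self.array = array          # dict standing in for the shared array
        self.cache = {}
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:
            self.hits += 1
            return self.cache[lba]
        self.misses += 1
        data = self.array[lba]
        self.cache[lba] = data      # populate local flash on a miss
        return data

    def write(self, lba, data):
        self.array[lba] = data      # write-through: array stays authoritative
        self.cache[lba] = data

array = {1: "a", 2: "b"}
c = HostReadCache(array)
c.read(1); c.read(1); c.write(3, "c"); c.read(3)
# second read of LBA 1 and the read of freshly written LBA 3 are cache hits
```

Because the array always holds the current copy, losing the host flash costs only warm cache contents, never data, which is what eliminates the server as a single point of failure.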
Manage Flash with Storage Quality of Service

- Prioritize workloads based on business requirements
- Set minimum performance targets to maintain SLAs
- Minimums allow VMs to outperform their target if resources are available
- Guarantee mission-critical application performance and resources

Storage QoS Policy Minimum Performance Levels:
- Mission Critical: 50,000 IOPS, 500 MB/s, 10 ms
- Business Critical High: 20,000 IOPS, 250 MB/s, 20 ms
- Business Critical Low: 10,000 IOPS, 100 MB/s, 40 ms
- Non-Critical High: 5,000 IOPS, 50 MB/s, 100 ms
- Non-Critical Low: 1,000 IOPS, 25 MB/s, 250 ms
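Minimum-based QoS can be sketched as a two-pass allocation: satisfy every policy's floor first, then share leftover capacity among workloads that still want more. That second pass is how minimums let VMs outperform their targets when resources are free. A toy model (the capacity and demand numbers are invented, the policy names mirror the tiers above; assumes total capacity at least covers the configured minimums):

```python
def allocate_iops(demands, minimums, capacity):
    """Sketch of minimum-guarantee scheduling: meet each policy's floor,
    then distribute spare capacity in proportion to unmet demand."""
    # Pass 1: grant each workload its demand, capped at its guaranteed floor.
    alloc = {vm: min(demands[vm], minimums[vm]) for vm in demands}
    spare = capacity - sum(alloc.values())
    # Pass 2: split spare capacity among workloads still wanting more.
    unmet = {vm: demands[vm] - alloc[vm] for vm in demands if demands[vm] > alloc[vm]}
    total_unmet = sum(unmet.values())
    for vm, gap in unmet.items():
        alloc[vm] += spare * gap / total_unmet
    return alloc

# Illustrative policies and demands (numbers assumed, not from the deck)
mins = {"mission": 50_000, "biz_hi": 20_000, "non_crit": 1_000}
demand = {"mission": 60_000, "biz_hi": 20_000, "non_crit": 40_000}
alloc = allocate_iops(demand, mins, capacity=100_000)
# mission-critical gets its full 50K floor before non-critical shares the spare
```

Contrast with the maximums sketch earlier: here nothing is ever throttled while capacity sits idle; floors only bind when the system is actually contended.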
ioControl: Manage VDI Boot Storms

[Chart: during a VDI boot storm, performance levels are maintained and the performance impact is controlled]
Zero Footprint Performance Scalability

ioControl Hybrid:
- Affordable flash performance
- Shared flash for reads and writes
- High capacity for maximum consolidation

Server cache:
- Some data requires even lower read latency
- Scale performance to the server/host
- Remove storage network latency
ESG Lab Validation Report: Optimizing SQL Server Clusters with End-to-End Flash Hybrid Storage

"ioControl and ioControl SPX provide organizations with the benefits of flash mixed with traditional disk in a cost-effective, end-to-end hybrid solution at a price point that is hard to beat."
- Enterprise Strategy Group
ioControl SPX Advantage: 5x More VMs

Virtual machine workload at comparable pricing (within 20%) and identical footprint:
- Conventional Hybrid (NVM + SSD + Disk): baseline
- ioControl n5-100 (PCIe + Disk): 2.7x more IOPS vs. conventional hybrid
- ioControl n5-SPX (Server Cache + PCIe + Disk): 4.6x more IOPS vs. conventional hybrid

For details, go to:
http://www.fusionio.com/blog/ioControlSPX-performance-testing/
http://get.fusionio.com/iocontrol-tolly-white-paper
End-to-End Scalability Options

- Max host acceleration: lowest latency with read cache; off-load hybrid performance (server cache + ioControl n5)
- Max shared performance: add two more flash devices; double shared performance
- Max capacity: up to 3 additional disk shelves; 192 TB maximum capacity
End-to-End Deployment Methods

- Flash appliances, shared performance: fully redundant; low latency for many apps; broadest flash utilization
- Server flash (PX600 and SX300), max acceleration: host acceleration; lowest latency; reliability
- Server software, acceleration: improve shared performance; reduce latency; max interoperability
More Information

- Whitepapers
- Case studies
- Best practice guides
- Recorded presentations
- Blog

http://fusionio.com/iocontrol
fusionio.com | At the speed of NOW.

Thank You