Storage Virtualization Seminar Presentation
Sponsored By:
The management of storage devices is often tedious and time consuming. And as the fragile economy continues to impact everyone this year, we are all going to be required to do more with less. Storage virtualization promises to ease the headaches of increasingly complex storage systems and, ideally, will allow us as IT pros to keep doing our jobs effectively and efficiently with fewer resources.
This presentation highlights how virtualization has progressed from vaporware to a working concept that storage managers are using to minimize the number of machines they manage, centralize data, and change the economics of storage. Marc Staimer will also look at how storage virtualization makes heterogeneous storage compatible, eases data migration, and enables consolidation.
Storage Virtualization School
Presented By:
• Marc Staimer, President & CDS
• Dragon Slayer Consulting
• [email protected]
• 503-579-3763
Dragon Slayer Consulting Intro
Marc Staimer – President & CDS
• 11+ years in storage, SANs, software, networking, & servers
• Consults for vendors (> 100)
• Consults for end users (> 400)
• Analysis at trade shows
• Articles for TechTarget websites & magazines
• Blog
• 29+ years industry experience
August 2009 – Storage Virtualization School
The Socratic Test of Three
Seminar Agenda
Part 1
• The what, where, who, when, why, & how of storage virtualization
Part 2
• Virtual storage in a virtual server world
Part 3
• Storage as a dynamic online “on-demand” resource
What I assume you know
• SAN versus File Storage
• Storage versus IP Networking
• Scalability issues
• Storage service issues
• Storage management issues
Part 1
The What, Where, Who, When, Why, & How of Storage Virtualization
Part 1 Part 2 Part 3
Old Man & The Toad
Part 1 Agenda
What
Where
Who
When
Why
How
What is Storage Virtualization?
Abstract the storage image
• From the storage
• Different kinds of storage: SAN, NAS, Unified
• Different kinds of storage image abstraction: Virtualize, Cluster – Grid, Cloud, & variations
SNIA Storage Virtualization Definition
• The act of abstracting, hiding, or isolating the internal function of a storage (sub)system or service from applications, compute servers, or general network resources for the purpose of enabling application- and network-independent management of storage or data.
• The application of virtualization to storage services or devices for the purpose of aggregating, hiding complexity, or adding new capabilities to lower-level storage resources. Storage can be virtualized simultaneously in multiple layers of a system, for instance to create HSM-like systems.
Virtualized Storage Image Abstraction
Abstracting the image means masking storage services from applications:
• Provisioning
• Increasing storage & additions
• Filer mounting
• Data migration – between storage targets or storage tiers
• Data protection
• Change management
Polls: Storage Virtualization Market
Virtualization is neither new nor strange
• Per ESG, based on 2008 polls
52% have already implemented storage virtualization
48% plan to implement
IDG 2008 Virtualization Poll (Collected Q4 2007)
Who Took the Survey (464 respondents)
IT Decision Maker
IT Architect
Other
Developer
IDG: Where Are You Investing Now?
Current Virtualization Investments
• Currently investing (overall): 96%
• Server virtualization: 86%
• Desktop virtualization: 47%
• Storage virtualization: 43%
• Enterprise data center virtualization: 25%
• Application virtualization: 23%
• File virtualization: 15%
• Application grids: 11%
• IO virtualization: 9%
• No current virtualization investment: 4%
IDG: Where Investing Thru 2010
Future Virtualization Investments
• Planning to invest (overall): 97%
• Server virtualization: 81%
• Desktop virtualization: 62%
• Storage virtualization: 53%
• Enterprise data center virtualization: 42%
• Application virtualization: 38%
• File virtualization: 31%
• IO virtualization: 18%
• Application grids: 15%
• Don't know: 3%
• No planned virtualization investments: 0.4%
Dragon Slayer Consulting Poll (265 respondents)
Current or Planned Virtualization
• Have (or will have) implemented virtualization: 92%
• Server virtualization: 78%
• Desktop virtualization: 56%
• Storage virtualization: 48%
• Enterprise data center virtualization: 45%
• Application virtualization: 30%
• File virtualization: 22%
• IO virtualization: 14%
• Application grids: 12%
• No current virtualization: 11%
• No planned virtualization: 8%
Where Storage Virtualization Occurs
Everywhere
• Odds are your storage is already virtualized to a degree
Operating systems
Applications
Volume managers
Hypervisors
Storage arrays – RAID
NAS
Appliances
Switches
Even SSDs
Volume Management
Server-based storage virtualization
• Abstracts block storage (LUNs, HDDs) into virtual “volumes”
• Common to modern OSes – built in
Windows Logical Disk Manager, Linux LVM/EVMS, AIX LVM, HP-UX LVM, Solaris Solstice, Veritas Storage Foundation
Mostly used for flexibility
• Resize volumes
• Protect data (RAID)
• Add capacity (concatenate or expand stripe or RAID)
• Mirror, snapshot, replicate
• Migrate data
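The core trick behind every volume manager above is the same: a logical volume is a remapping table from a linear address space onto extents scattered across physical disks. A minimal sketch (names such as `PhysicalDisk` and the 4 MB extent size are illustrative, not any real LVM's API):

```python
# Minimal sketch of server-based volume management: a logical volume
# concatenates extents from several physical disks. The class and disk
# names here are invented for illustration only.

EXTENT_MB = 4  # allocation unit, like an LVM physical extent

class PhysicalDisk:
    def __init__(self, name, size_mb):
        self.name = name
        self.extents = size_mb // EXTENT_MB  # usable extents
        self.used = 0

class LogicalVolume:
    """Maps a linear logical address space onto (disk, extent) pairs."""
    def __init__(self, name):
        self.name = name
        self.map = []  # logical extent index -> (disk name, physical extent)

    def extend(self, disk, n_extents):
        # Concatenate n extents from `disk` onto the end of the volume
        # -- this is the "add capacity online" flexibility listed above.
        if disk.used + n_extents > disk.extents:
            raise ValueError("not enough free extents on " + disk.name)
        for i in range(n_extents):
            self.map.append((disk.name, disk.used + i))
        disk.used += n_extents

    def size_mb(self):
        return len(self.map) * EXTENT_MB

    def locate(self, logical_mb):
        # Translate a logical offset (in MB) to its physical home.
        return self.map[logical_mb // EXTENT_MB]

d1 = PhysicalDisk("sda", 100)
d2 = PhysicalDisk("sdb", 100)
lv = LogicalVolume("lv_data")
lv.extend(d1, 10)   # first 40 MB live on sda
lv.extend(d2, 5)    # next 20 MB live on sdb -- capacity added from a 2nd disk
```

Resizing, mirroring, and migration all reduce to edits of this mapping table while the logical address space the application sees stays fixed.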
Logical Volume Managers (LVM)
Platform | Volume Manager | Notes
AIX | Logical Volume Manager | OSF LVM, no RAID 5, no copy-on-write snapshots
HP-UX 9.0+ | HP Logical Volume Manager | OSF LVM, no RAID 5
FreeBSD | Vinum Volume Manager | No copy-on-write snapshots
Linux 2.2+ | Logical Volume Manager and Enterprise Volume Management System | Based on OSF LVM, no RAID 5
Solaris | Solaris Volume Manager (was Solstice DiskSuite) | Limited allocation options, no copy-on-write snapshots
AIX, HP-UX, Linux, Solaris, Windows | Symantec Veritas Volume Manager (VxVM), Storage Foundation | Full-featured multi-platform volume manager
Windows 2000+ | Logical Disk Manager | Co-developed with Veritas, limited allocation options, copy-on-write snapshots introduced in Server 2003
Solaris, BSD, Mac OS X 10.6+ | ZFS | Combined file system and volume manager
ZFS: Sun’s Super File System
a.k.a. the “Zettabyte file system”
• Combined file system, LVM, & disk/partition manager
• Open source (CDDL) project managed by Sun
• Replaces UFS (Sun), HFS+ (Apple OS X Snow Leopard Server)
• Extensible, full-featured storage pools
Across systems & disks; optimized for SSDs
• File systems contained in “zpools” on “vdevs”
With striping & optional RAID-Z/Z2
• 128-bit addresses mean theoretical near-infinite capacity
• Copy-on-write with checksums for snapshots, clones, & authentication
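The copy-on-write snapshots mentioned above can be sketched in a few lines: a snapshot freezes the current block map without copying any data, and only a later rewrite allocates a new block. This is a toy model of the idea only; real ZFS tracks block pointers in a checksummed tree:

```python
# Toy copy-on-write sketch: a snapshot shares blocks with the live view
# until a block is rewritten. Class and variable names are invented for
# illustration; this is not the ZFS on-disk format.

class CowVolume:
    def __init__(self):
        self.blocks = {}      # block number -> data (live view)
        self.snapshots = []   # each snapshot is a frozen block map

    def write(self, blkno, data):
        self.blocks[blkno] = data   # live view always sees new data

    def snapshot(self):
        # Cheap: freeze the current map; no block data is copied.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

    def read(self, blkno, snap=None):
        view = self.blocks if snap is None else self.snapshots[snap]
        return view.get(blkno)

vol = CowVolume()
vol.write(0, "v1")
s0 = vol.snapshot()
vol.write(0, "v2")   # copy-on-write: snapshot s0 still sees "v1"
```

Because nothing is copied at snapshot time, snapshots are near-instant and cost only the space of blocks later rewritten.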
ZFS Limitations
Adding or removing vdevs is hard/impossible
• Especially removing
Stacked RAID is currently not possible
There is no clustering
• Until Sun adds Lustre
IO Path Management Software: Virtualizing the SAN Pathing
Virtualizes the server–storage connection
• Failover
• Load balancing strategies
Numerous choices
• Veritas DMP (cross-platform, w/ Storage Foundation)
• EMC PowerPath (supports EMC, HDS, IBM, HP)
• IBM SDD (free for IBM)
• HDS HDLM
• Microsoft MPIO (Windows, supports iSCSI & most FC)
• VMware Failover Paths
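The failover and load-balancing behavior these products provide can be sketched as a path table plus a selection policy. A minimal round-robin-with-failover sketch (the `PathManager` class and path names are invented for illustration, not any vendor's API):

```python
# Sketch of IO path management: round-robin load balancing across the
# healthy paths to a LUN, with automatic failover when a path dies.

import itertools

class PathManager:
    def __init__(self, paths):
        self.state = {p: "up" for p in paths}
        self._rr = itertools.cycle(paths)   # round-robin policy

    def mark_failed(self, path):
        self.state[path] = "down"

    def next_path(self):
        # Skip failed paths; raise only if every path to the LUN is gone.
        for _ in range(len(self.state)):
            p = next(self._rr)
            if self.state[p] == "up":
                return p
        raise IOError("all paths down")

pm = PathManager(["hba0->SP-A", "hba1->SP-B"])
first, second = pm.next_path(), pm.next_path()   # alternates across paths
pm.mark_failed("hba0->SP-A")
survivor = pm.next_path()                        # transparent failover
```

Real products layer LUN-trespass handling, path health probing, and per-array policies on top of this same core loop.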
[Diagram: application IO requests pass through IO path management software & HBAs to storage processors SP-A/SP-B over the interconnect topology]
SAN Storage Virtualization
An abstraction layer
• Between hosts & physical storage
• That provides a single management point for multiple block-level storage devices in a SAN
• And presents a set of virtual volumes for hosts to use
What does SAN Storage Virtualization Do?
Aggregates storage assets into 1 image
• Manages, provisions, protects, etc.
• Transforms “n” systems into a slice-&-dice monolith
Homogeneously or heterogeneously
Virtual LUNs
• Mapped to physical LUNs
• Can be larger than physical – up to an exabyte
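The virtual-to-physical LUN mapping is the heart of the technique: the virtual LUN presents one contiguous address space while its chunks live on physical LUNs from different arrays. A minimal sketch (the `VirtualLUN` class, chunk size, and array/LUN names are made up for illustration):

```python
# Sketch of SAN storage virtualization's core mapping: a table that
# translates virtual block addresses to (physical LUN, physical block).

CHUNK = 1024  # blocks per mapping chunk (illustrative granularity)

class VirtualLUN:
    def __init__(self):
        self.table = []  # chunk index -> (physical LUN, physical chunk)

    def add_chunks(self, plun, n):
        # Extend the virtual LUN with n chunks from one physical LUN --
        # chunks from different arrays can be mixed freely.
        base = sum(1 for (p, _) in self.table if p == plun)
        for i in range(n):
            self.table.append((plun, base + i))

    def translate(self, vblock):
        # Virtual block address -> (physical LUN, physical block address)
        plun, pchunk = self.table[vblock // CHUNK]
        return plun, pchunk * CHUNK + vblock % CHUNK

vlun = VirtualLUN()
vlun.add_chunks("array1:lun5", 2)   # first 2048 virtual blocks on array1
vlun.add_chunks("array2:lun9", 2)   # next 2048 virtual blocks on array2
```

Heterogeneous pooling, online migration, and oversized virtual LUNs all fall out of being able to edit this table without the host noticing.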
SAN Tends to Be a Popular Virtualization Location
Usually requires less configuration & mgmt
• As compared to server based
• And it potentially works with all servers & storage
Resides in the storage fabric
• Appliance, storage controller, switch, & hybrid
• Control & data path combined or split
Shared vs. Split Path
Shared path intercepts traffic; split path redirects it
[Diagram: in shared path, the control path & data path both run through the virtualization device (“Where’s my data?” “It’s over here!”); in split path, the control path answers “It’s over there!” and data flows directly]
Pros & Cons of Shared Path
Pros
Simpler
• Implementation
• Operations
• Management
• Ease of use
Cons
Scalability limitations
• Units/nodes clustered
• Performance / unit or node
• Capacity / unit or node
Performance hits
• Additional latency
Pros & Cons of Split Path
Pros
Scalability
• Limited only by fabric
Performance
• Limited only by fabric
Flexibility
Cons
Complexity
• Install, ops, mgmt
SPAID
• Limited by intelligent switch BW
• Adds latency
Similar to shared path appliance
Split Path Notes
Split Path Combinations
• Switch – SPAID, a.k.a. split path architecture for independent data streams
Requires processing blades
• Server software
Mgmt on appliance in fabric
Virtualization agent/driver on server/virtual server
SPAID Notes
Switch Centric
• Metadata & LUN map on switch
• Software & processing done on switch
• Each session is directed by switch to destination
Scalability constrained by switch processing & latency
Server Centric LVM Notes
LVM
• Virtualization SW, pathing SW, and/or LVM on server
• Mapping & processing performed on each server
• Each session is server controlled
• Overall management is difficult
Each server has its own management
Split Path Hybrids
Split Path Combinations
• Proprietary virtualization SW
A.k.a. agent and/or pathing SW
• Metadata appliance or intelligent appliance with SW
Metadata mapped from appliance
Management of software from appliance
Replication, data protection, etc. controlled from appliance
Advanced Hybrid
Fabric Centric Storage
• Leverages std LVMs
Symantec Storage Foundation, OS LVMs, etc.
• Leverages filer (NAS head) storage virtualizers
• Provides proprietary virtualization SW for devices w/o software
• Puts mgmt & advanced services on appliance
Snapshots, replication, mirroring, etc.
SAN Virtualization Products
Product | Architecture | Location | Multi-vendor | Repl. | Notes
BlueArc Titan/Mercury | Shared Path | Controller | Yes | Yes | Clusters up to 8 in GNS (HDS OEMs)
DataCore SANSymphony/Melody | Shared Path | Generic x86 appliance | Yes | Yes | Supports FC storage; runs as virtual appliance on VMware
Bycast StorageGrid | Shared Path | Grid of x86 appliances | Yes | Yes | Geographically distributed cloud tech.
EMC Invista | Split Path | x86 appliance + SPAID | Yes | No | Primarily utilized for online data migration (CSCO & BRCD I-Switches)
EMC Unified NX-NS | Shared Path | Controller | No | No | Up to 8-node cluster, built-in file dedupe, auto tiering, thin provisioning
FalconStor NSS | Shared Path | Generic x86 appliance | Yes | Yes | Optional post-processing dedupe engine; runs as a VMware virtual appliance
HDS USP V / VM Tagmastore | Combination Shared/Split Path | Controller | Yes | Yes | Combination enterprise controller & virtualization engine
HP XP 2xxxx | Combination Shared/Split Path | Controller | Yes | Yes | OEM'ed Hitachi with additional HP software & services
IBM SVC | Shared Path | x86 purpose-built appliance | Yes | Yes | Supports most FC storage; large caches; IBM hardware
Incipient iNSP | Split Path | FC switch – SPAID | Yes | No | No caching; supports Cisco FC blades
LSI StoreAge SVM | Split Path | Combo x86 appliance + host SW or intelligent switch | Yes | Yes | No caching; split-path FC with low-cost N_Port switch; resold by HP as SVSP
NetApp vFiler | Shared Path | Controller | Yes | Yes | Active-active cluster, built-in file dedupe, also sold by IBM
RELDATA 9240I | Shared Path | x86 purpose-built storage controller/appliance | Yes | Yes | NAS/iSCSI that virtualizes internal SAS & external FC & SAS storage
Seanodes Exanodes | Shared Path | VMware virtual appliance | Yes | No | Works w/internal & DAS storage; converts into iSCSI SAN
XIOtech Emprise 7000 | Shared Path | Controller | No | Yes | Works only w/ISE (Intelligent Storage Element)
XIOtech ISE Age | Split Path Hybrid | Generic x86 appliance | Yes | Yes | Requires XIOtech server virtualization agents or Symantec Storage Foundation
SAN Virtualization Issues
Side effects
• Spreading a storage pool across more RAID sets &/or systems
Increases performance, reduces storage management
It also increases the probability of data loss
• Probability of any 1 system going down is low
• Probability that any 1 of many will fail increases rapidly
P(at least one fails) = 1 – (1 – P1)(1 – P2)(1 – P3) ≈ P1 + P2 + P3 for small P
P = probability of a specific RAID group failing
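For independent RAID-group failures, the chance that at least one of n groups fails is 1 − ∏(1 − Pi), which grows quickly with n. A short worked example (the function name is ours, for illustration):

```python
# The pooling side effect, quantified: failure probability of a pool
# spread over many independent RAID groups vs. a single group.

def p_any_failure(probs):
    """P(at least one failure) for independent failure probabilities."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)   # all groups must survive simultaneously
    return 1.0 - survive

one  = p_any_failure([0.01])        # a single RAID group: 1%
pool = p_any_failure([0.01] * 10)   # pool striped across 10 groups
# pool ~= 9.6% -- nearly 10x the single-group risk
```

This is why the mitigations on the next slide (self-healing elements, RAIDing the RAID) matter: they shrink each Pi so the product stays close to 1 even for large pools.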
Ways to Mitigate Increased Data Failure Probabilities
1. Self-healing storage elements
2. Redundant array of intelligent RAID (RAIR)
• RAIDing the RAID
3. More scalable individual storage systems within pool
Each intelligent storage element is self-healing, reducing the probability of an actual disk failure or RAID set failure to an extremely rare event
Putting the intelligent storage elements into a RAIR reduces that probability even further.
Virtual Network Attached Storage (NAS)
NAS lends itself to virtualization
• IP network connectivity and host processing possibilities
Lots of file servers? Virtualize
• Global namespace across all NAS & servers
• Share excess capacity
• Transparently migrate data (easier than redirecting users)
• Reduce number of mount points
• Tier files on large “shares” with a variety of data
• Create multiple virtual file servers
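The global namespace idea above reduces to a mapping layer: clients see one virtual file tree while the namespace decides which physical filer actually holds each share, so data can migrate without clients remounting. A minimal sketch (the `GlobalNamespace` class, filer names, and paths are invented for illustration):

```python
# Sketch of a global namespace (GNS): virtual paths resolve to physical
# filer locations via longest-prefix match, so migration is transparent.

class GlobalNamespace:
    def __init__(self):
        self.mounts = {}  # virtual prefix -> physical filer location

    def link(self, virtual_prefix, physical):
        self.mounts[virtual_prefix] = physical

    def resolve(self, virtual_path):
        # Longest-prefix match so nested links override parent links.
        best = max((p for p in self.mounts if virtual_path.startswith(p)),
                   key=len, default=None)
        if best is None:
            raise KeyError(virtual_path)
        return self.mounts[best] + virtual_path[len(best):]

    def migrate(self, virtual_prefix, new_physical):
        # Data moves to another filer; clients keep the same virtual path.
        self.mounts[virtual_prefix] = new_physical

gns = GlobalNamespace()
gns.link("/corp/eng", "filer1:/vol/eng")
gns.link("/corp/hr",  "filer2:/vol/hr")
before = gns.resolve("/corp/eng/build.log")
gns.migrate("/corp/eng", "filer3:/vol/eng")   # transparent data migration
after = gns.resolve("/corp/eng/build.log")
```

Capacity sharing and tiering work the same way: only the mount table changes, never the client-visible path.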
NAS Virtualization Products
Product | Architecture | Location | Notes
AutoVirt Move/Clone/Map | Split Path | Windows servers | Server 2003 R2, .NET. Primarily a mapping, data migration, & clone tool
BlueArc (also sold as HDS HNAS) | Shared Path | Clustered NAS | Clustered integrated NAS with global namespace
EMC Rainfinity | Shared Path | Appliance or host SW | DFS mgmt. Primarily a data migration tool
Exanet ExaStore | Shared Path | Clustered NAS | Clustered integrated NAS with global namespace
F5 Acopia | Shared Path | Switch – Appliance | Split-path architecture, non-DFS
Microsoft DFS | Split Path | Host SW | Windows/SMB only; Server 2008, 2003 R2+ enhanced management
NetApp vFiler | Shared Path | Active-Active Clustered NAS | Clustered NAS “head” with global namespace
ONStor (LSI) GNS | Shared Path | Clustered NAS & DFS | Combines clustered NAS with DFS into a single global namespace
File Virtualization Issues (a.k.a. Global Name Space – GNS, or Global File System – GFS)
Appliance (x86 or switch) market success has come in fits & starts
• Requires a commitment to the appliance versus the NAS for data protection (snapshots, replication, etc.)
• Another system to manage – often perceived as a point solution
• Utilized primarily for non-disruptive data migration
GNS migrating into NAS systems as a feature
• BlueArc, NetApp, ONStor (LSI), etc.
• NAS GNS only works w/ that same vendor’s NAS
Embedded Virtualization has Transformed Storage Systems
Common in storage array controllers
• Arrays create large RAID sets & carve out virtual LUNs for use by servers
• Controller clusters (and grids) redirect activity based on workload & availability
• Snapshots/mirrors & replication are common features
Newer Gen of Arrays – Usually Clustered
Include virtualization-derived features
• Automated ILM
• Thin provisioning
• Data migration
• De-duplication
• Self-configuring and/or self-tuning storage
Virtual Storage Appliances (VSAs)
2 types
• Distributed
• Proxy
Distributed VSAs
Converts VM DAS into a SAN
• All or some VM DAS in a virtual storage SAN pool
• Data available even if node(s) fail
• Runs as a VM guest; can also be a target for VMs on other physical machines
• Low cost vs. NAS or SAN
Proxy Virtual SANs
Aggregates VM DAS & SAN into a storage pool
• All or some
• Runs as a VM guest; iSCSI target for internal & external VMs, plus other physical machines
• Low cost vs. NAS or SAN
VSA Products
Product | Architecture | Server Virtualization Supported | Notes
DataCore Software SANSymphony | Proxy | VMware ESX & Microsoft Hyper-V | Same as std SANSymphony that runs on an appliance; limited to 2TB
FalconStor NSS VSA | Proxy | VMware ESX | Same as IPStor that runs on an x86 appliance
HP LeftHand Networks VSA | Distributed | VMware ESX | Clusters up to ~100 nodes. Designed for ROBO
Seanodes Exanodes VMware edition | Distributed | VMware ESX | Protects data up to 16 node or volume failures; no replication
StorMagic SvSAN | Proxy | VMware ESX (Microsoft Hyper-V coming) | 2TB license is free
VSA Issues
Distributed VSAs
• Require a bit more memory & CPU cycles / server
Proxy VSAs
• Require quite a bit more memory & CPU cycles on the target proxy virtual SAN servers
• Should have limited add’l VM guests
License limits
• Some are limited to 2TB
Where to use
• Primarily small environments and/or ROBOs
Simplicity
• VSAs are just about as easy as NAS
• Utilize standard Ethernet technologies
Virtualized IO
Takes a very high BW pipe (10G or more)
• Makes it appear as multiple unit & protocol types
FC SAN, TCP/IP network, iSCSI SAN
• Breaks it out at the switch to different networks & targets
Problem it solves – fabric sprawl
3 types
• Infiniband (IBA), Converged Enhanced Ethernet (CEE), MRIOV
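The carve-up described above can be sketched as bandwidth accounting: one fat physical pipe is partitioned into virtual adapters of different protocol types, each drawing from the shared budget. A minimal sketch (the `VirtualizedPipe` class, adapter names, and bandwidth figures are invented for illustration):

```python
# Sketch of IO virtualization: one high-bandwidth link presented to the
# server as several virtual adapters of different protocol types.

class VirtualizedPipe:
    def __init__(self, gbps):
        self.capacity = gbps
        self.vadapters = []   # (name, protocol, gbps)

    def allocated(self):
        return sum(g for (_, _, g) in self.vadapters)

    def carve(self, name, protocol, gbps):
        # Refuse to hand out more bandwidth than the pipe physically has.
        if self.allocated() + gbps > self.capacity:
            raise ValueError("pipe oversubscribed")
        self.vadapters.append((name, protocol, gbps))

pipe = VirtualizedPipe(40)           # e.g. a single 40G IBA link
pipe.carve("vnic0", "TCP/IP", 10)    # looks like an Ethernet NIC
pipe.carve("vhba0", "FC", 8)         # looks like an FC HBA
pipe.carve("vtgt0", "iSCSI", 4)      # looks like an iSCSI initiator
```

The switch-side gateway then breaks these virtual adapters back out to their native networks, which is how one cable replaces a rack's worth of NICs and HBAs.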
[Diagram: a single virtualized pipe carries TCP/IP Ethernet plus FC or iSCSI SAN traffic]
Virtualized IO – 10 to 40G IBA
[Diagram: standard IO vs. IBA virtualized IO]
Infiniband IO Virtualization Definitions
HCA – Host Channel Adapter: server adapter card
TCA – Target Channel Adapter: storage or gateway adapter card
Shared IO Gateway – same as IO virtualization: IBA-to-IP, FC, iSCSI gateway
RDMA – Remote Direct Memory Access: lowest-latency memory-to-memory transfers
iSCSI – IP SCSI: SCSI mapped to TCP/IP
HPCC – High Performance Compute Clusters: large nodal clusters
IBA Director – large port-count, five-9s switch: 288- to 864-port switches
Virtualized IO – 10G CEE
[Diagram: standard IO vs. CEE virtualized IO]
Ethernet IO Virtualization Definitions
FCoE – Fibre Channel over Ethernet: FC frames encapsulated in Ethernet packets – lightweight frame maps
iSCSI – IP SCSI: SCSI mapped to TCP/IP
iWARP – RDMA on Ethernet: required for HPC clusters
CNA – Converged Network Adapter: concurrent FCoE, iSCSI, iWARP, & TCP/IP on a 10GbE NIC
10G TOE – 10G TCP Offload Engine: provides TCP offload for a 10G adapter (split-stack & full-stack offloads)
CEE – Converged Enhanced Ethernet: ANSI standard for lossless, low-latency Ethernet
DCE – Data Center Ethernet: Cisco’s brand name for CEE
How IBA Compares to 10GbE
Metric | InfiniBand | 10GbE | Notes
Max pt bandwidth | 120Gb/s | 10Gb/s | Faster is better
E2E latency | 1 to 1.2us | 10 to 50us | Lower is better
Switch latency | 50 to 150ns | 500ns to 10us | Lower is better
RDMA | Built in | Voltaire only | Important for clustering
Multipath | Yes | Voltaire only | Important for storage
Lossless fabric | Yes | Voltaire only |
Power/port | 5W | 15-135W | Lower is better
Largest enterprise switch | 288–864 ports @ 20–40Gbps | 288 x 10Gbps | More is better
Price/Gbps | $30 to 50 | $150-700 | Lower is better
Multi-Root IO Virtualization
Moving IO outside the box
MRIOV – Value Prop
Simpler
• Fewer storage & network ports
• Fewer storage & network switches
Shared IO – higher utilization
• 75% IO cost reduction
• 60% IO power reduction
• 35% rackspace reduction
Scalable IO bandwidth
• On demand 1-40Gbps
• No add’l cost
• Reduced IO adapters & cables
Enhanced functionality
• Shared memory
• IPC & PCIe speeds up to 40Gbps
Reduced OpEx
• Simplified mgmt
• Server, OS, app, network, & switch transparent
• HA
• Changes w/o physical touch
Reduced CapEx
• Smaller, denser servers
• Fewer components
• Fewer failures
• Fewer opportunities for human error
How MRIOV Compares
Traditional inefficiencies
• Significant unused IO capacity
• Inflexible, rigid server adapters
• Wasted space, cooling, power
MRIOV advantages
• Flexible, IO capacity on demand
• Highest IO utilization, lowest TCO
• Standardized, open technologies
Comparison w/Other IOV Solutions
Baseline config: 2 racks, 32 servers, Ethernet & FC, DAS
Metric | Today's Solution (No IO Virtualization) | MRIOV Solution (PCI Express) | InfiniBand Solutions | FCoE Solutions
Utilization | Very low ~15% | Very high ~80%+ | Low | OK
Reliability | Neutral | High | Best | Neutral
IO perf | 10Gb | 80Gb | 20Gb | 10Gb
TCO | High | Low | High | High
Mgmt | Poor | Best | Best | OK
Racks | 2 | 1 | 2 | 2
IO power | 3000 W | 700 W | 2000 W | 2200 W
IO cost | $196K (High) | $37K (Low) | $156K (High) | $180K (High)
IO components | 270 | 58 | 160 | 160
IO Virtualization Products
Vendor | Technology | Product types | Notes
Aprius | MRIOV | Rack switch, blade switch, silicon | PCIe switching w/server software
Brocade | CEE | CNA & CEE top-of-rack switch | Strong FC focus & install base; acquisition of Foundry provides equivalent Ethernet expertise
Cisco | IBA & CEE | HCA, IBA directors, switches, gateways, CEE top-of-rack switch | Ethernet leader w/strong products in FC & IBA as well. Invented CEE.
Emulex | CEE | CNA | 1 of the 2 FC HBA leaders making a strong CNA play (same driver interface)
Mellanox | IBA & CEE | HCA, TCA, IBA directors, switches, gateways, CNA | Dominant IBA silicon leader attempting to leverage position into CEE w/CNAs
QLogic | IBA & CEE | HCA, TCA, IBA directors, switches, gateways, CNA | FC HBA leader w/strong positions in IBA silicon, HCAs, TCAs, directors, switches, & GWs. Invented "Shared IO".
Virtensys | MRIOV | Rack switch, blade switch, silicon | PCIe switching w/o server software
Voltaire | IBA | HCA, IBA directors, switches, gateways | IBA leader in switches, directors, gateways, software, & HCAs
Xsigo | IBA | HCA, gateways | Positions as pure "Virtualized IO". Doesn't mention IBA technology.
IO Virtualization Issues
IBA
• Is primarily utilized for HPCC
• Not a large install base in the enterprise today
• Few storage systems with native IBA interfaces
LSI (SUN, SGI, IBM) & DDN
• However, it is proven & it works
CEE
• Technology is early & somewhat immature
• A bit pricey (aimed at early adopters)
• Requires new NICs & switches to be effective
MRIOV
• In-rack and blade system only
• Primarily OEM tech (e.g. must be supplied by server vendor)
Yes, Even SSDs are Virtualized
Virtualizes single- or multi-cell flash
• Algorithms manage writes
Load balances writes across the cell or cells
Makes sure SSDs have MTBFs similar to HDDs
Reduces probability of flash cell write failure
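The write load balancing above is wear leveling: the drive's translation layer remaps each logical write to the least-worn physical cell so no cell burns out early. A toy sketch of the idea (the `FlashTranslationLayer` class is invented for illustration; real FTLs also handle garbage collection and bad-block mapping):

```python
# Sketch of SSD write virtualization: logical block writes are steered to
# the least-worn physical cell, spreading erase/write wear evenly.

class FlashTranslationLayer:
    def __init__(self, ncells):
        self.wear = [0] * ncells   # write count per physical cell
        self.map = {}              # logical block -> physical cell

    def write(self, lba):
        # Wear leveling: always pick the least-worn cell for the new data.
        cell = min(range(len(self.wear)), key=lambda c: self.wear[c])
        self.wear[cell] += 1
        self.map[lba] = cell
        return cell

ftl = FlashTranslationLayer(4)
for _ in range(8):
    ftl.write(0)   # hammer a single logical block
# Wear still spreads evenly across all 4 cells instead of burning one out.
```

Without this remapping, 8 rewrites of one hot block would put all 8 erase cycles on one cell; with it, each cell absorbs exactly 2.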
SSD Tier-0 Storage Types
Enterprise storage
• Storage system optimized
Looks like an HDD & fits in a rack
Simple technology
Easy to implement – low risk
Performance constrained by storage system back end
Memory appliance or server adapter
• Application acceleration focused
Connects via PCIe (highest perf) (IBA, FC, 10GbE soon)
Where Storage Virtualization Occurs
Everywhere
• Operating systems
• Applications
• Volume managers
• Hypervisors
• Storage arrays – RAID
• NAS
• Appliances
• Switches
• Even SSDs
Audience Response
Questions?
Part 2
Virtual storage in a virtual server world
Part 1 Part 2 Part 3
Top 10 Signs You’re a Storage Admin
1. ~90% of your peers & boss don’t have a clue what you really do
2. Being sick is defined as can't walk or you're in the hospital
3. Your relatives & family describe your job as "computer geek"
4. All real work gets started after 5pm, weekends, or holidays
5. Vacation is something you always do…next year
6. You sit in a cubicle smaller than your bedroom closet
7. Your resume is on a USB drive around your neck
8. You’re so risk averse, you wear a belt, suspenders, & coveralls
9. It's dark when you go to or leave work regardless of the time of yr.
10. You've sat at same desk for 4 yrs & worked for 3 different companies
Part 2 Agenda
Level setting
• For virtual servers
Virtual server issues
• Real world problems
Best practices
• To solve
Benefits of the Hypervisor Revolution
Increased app availability
Reduced server hardware w/ consolidation
Reduced infrastructure
• Storage network
• IP network
• Power
• Cooling
• Battery backup
Simplified DR
Virtualized Servers
Adv. features require networked storage
• SAN or NAS
Virtualized server advanced functionality
• VMware: DRS, VMotion, Storage VMotion, VDI, SRM, SW-FT, VDR, storage API, thin provisioning
• Microsoft Hyper-V: Live Migration
• Virtual Iron: Live (Migrate, Capacity, Maint, Recovery, Convert, Snap)
• Citrix XenServer: XenMotion, Global Resource Pooling
Source Gartner: 2008 Enterprise Virtual Server Market Share
• VMware: 87%
• Citrix: 5%
• Microsoft: 3%
• Virtual Iron: 3%
• SUN: 1%
• Oracle: 1%
Distributed Resource Optimization
Distributed Resource Scheduler
• Dynamic resource pool balancing, allocating on pre-defined rules
Value
• Aligns IT resources w/ business priorities
• Operationally simple
• Increases sysadmin productivity
• Add hardware dynamically
• Avoids over-provisioning to peak load
• Automates hardware maintenance
Dynamic and intelligent allocation of hardware resources to ensure optimal alignment between business and IT
HOT! – VMotion
Online – increasing data availability
• No scheduled downtime
• Continuous service availability
• Complete transaction integrity
• Storage network support
iSCSI SAN, FC w/ NPIV, & NAS (specifically NFS)
Virtual Desktop Infrastructure
Increased
• Desktop availability, flexibility (not tied to desktop hardware), & security
Decreased
• Management & costs
Site Recovery Manager
Faster, more automated DR
Integrated with storage
Utilizes lower-cost DR storage & drives
[Diagram: protected primary site on FC storage replicating over FCP/iSCSI to a recovery/DR site on SATA storage]
ESG Poll: Does Server Virtualization Improve Storage Utilization?
Since being implemented, what impact has server virtualization had on your organization’s overall volume of storage capacity?
• Net decrease of > 20%: 1%
• Net decrease of 11% – 20%: 4%
• Net decrease of 1% – 10%: 2%
• No change: 39%
• Net increase of 1% – 10%: 15%
• Net increase of 11% – 20%: 21%
• Net increase of > 20%: 18%
ESG Server & Storage Virtualization Poll
Has your org deployed a storage virtualization solution in conjunction with its virtual server environment?
• Yes: 24%
• No, but plan to implement within next 12 mos: 18%
• No, but plan to implement within next 24 mos: 15%
• No, and no plans to implement: 36%
• Don't know: 7%
Why Use Virtual Storage For Virtual Servers?
Reasons most often cited: improved
• Mobility of virtual machines
Load balancing between physical servers
• DR & BC
• Availability
• Physical server upgradability w/o app disruptions
• Operational recovery of virtual machine images
Server Virtualization Market Trends / ESG
Server virtualization driving storage system re-evaluation
• 66% of enterprises (>1000 employees) and 81% of small-to-midsized businesses (<1000 employees) expect to purchase a new storage system for their virtualized servers in the next 24 months.*
IP SAN emerged as preferred server virtualization storage
• 52% of organizations deploying virtualization plan to use iSCSI (NAS 36%, FC 27%).*
• Clustered storage architecture advantages with server virtualization
Efficient storage utilization
Optimized performance
Simpler management and flexibility
More cost-effective HA/DR
* ESG, “The Impact of Server Virtualization on Storage”
What About Server Virtualization Based DR?
DR is a prime beneficiary of server virtualization
• Fewer remote machines idling
• No need for identical equipment
• Quicker recovery (RTO) through preparation & automation
Who’s doing it?
• 26% are replicating server images; an additional 39% plan to (ESG 2008)
• Half have never used replication before (ESG 2008)
Based on DSC polling
• 67% say app availability is why they implement server virtualization
Justification is based on server consolidation & app uptime
Server Virtualization = SAN and/or NAS
Server virtualization transformed the data center
• And storage requirements
VMware is the #1 driver of SAN adoption today!
• 60% of virtual server storage is on SAN or NAS (ESG 2008)
• 86% have implemented some server virtualization (ESG 2008)
Enabled & demanded centralization
• And sharing of storage on arrays like never before!
Types of Networked Storage
NAS – Network Attached Storage
• A.k.a. file-based storage
NFS – Network File System
CIFS – Common Internet File System
• To a lesser extent, AFP – Apple Filing Protocol
SAN – Storage Area Network
• A.k.a. block-based storage
Fibre Channel (FC)
iSCSI
Infiniband (IBA)
NAS & Server Virtualization
NAS works very well w/ hypervisors & adv. features
• NFS – VMware, XenServer, KVM, Virtual Iron
• CIFS – Microsoft Hyper-V
• Common file system visible to all virtual guests
Incredibly simple
• Turn it on, mount it, & you’re done
App performance is generally modest
• Typically less than SAN storage
There are exceptions where it is close to equivalent
• BlueArc & to a much lesser extent NetApp & Exanet
Why Hypervisor Vendors Don’t Usually Recommend NAS
Performance, performance, performance
• The exception again being BlueArc, and to a lesser extent NetApp & Exanet
[Chart: typical NAS vs. SAN performance]
Virtual Servers & NAS
Many virtual apps are fine w/ NAS performance
Virtual guests can boot from NFS (VMware VMDKs)
• NFS is built into the ESX hypervisor
Virtual Server Issues with NAS
Most NAS systems don’t scale well
• Capacity, file system size, max files, & especially performance
• Scaling typically means more systems
More systems increase complexity exponentially
• Eliminates the NAS simplicity advantage
Exceptions include BlueArc, Exanet, & NetApp (GNS or GFS)
• NAS currently does not work w/ Storage VMotion & SRM
Good news! – NFS will soon work with VMotion & SRM (EOY)
SANs & Server Virtualization
SANs work very well w/ hypervisors & advanced features
Mixed bag on complexity
• FC is very complex, requiring special knowledge & skills
• IBA is also complex & requires special knowledge & skills
• iSCSI uses Ethernet like NAS & is almost as easy
App performance is very fast
• iSCSI is fast
• FC is a bit faster
• IBA is fastest (albeit few choices)
Regardless of SAN Type• SANs do not overcome storage system scalability limits
August 2009 Storage Virtualization School 85
August 2009 86Storage Virtualization School
Hypervisor Vendors Recommendations
Recommended in this order• iSCSI, FC, IBA
iSCSI Rationale
• iSCSI is almost as fast as FC
Uses std Ethernet NICs, switches, cables & TCP/IP
• iSCSI is almost as easy as NAS• iSCSI is far less expensive than FC or IBA
Even less expensive than most brand name NAS
August 2009 87Storage Virtualization School
iSCSI SAN & Server Virtualization
Should have a dedicated fabric
• Not shared with other IP traffic
Performance mgmt
• VLANs help
• QoS prioritization helps
• Proper 1G & 10G utilization helps
• 1G does not require any hardware offload
• 10G may, depending on performance expectations
700 MB/s without offload
1 GB/s with offload
10G excellent for aggregation of VMs
August 2009 88Storage Virtualization School
FC SAN & Server Virtualization
FC SANs require NPIV • N_Port ID Virtualization
Otherwise there is an HBA per guest
Or all guests share the same WWN
All physical servers must be in the same FC zone
• Enables guests to still see their storage when they move
• Critical for live migrations & business continuity
FCoE will have similar rules to FC• Difference is that it runs on 10GbE• Not a routed protocol, still layer 2 switching• Requires “Smart” (a.k.a. expensive) 10G FCoE switch
August 2009 89Storage Virtualization School
FC SAN & Server Virtualization
FC SANs are manually intensive
• Implementation, ops, change mgmt, mgmt
Software to ease the burden
• Akorri
• NetApp-Onaro
• SAN Pulse
• TekTools
• Virtual Instruments
FC SANs generally require dual fabrics
• 2x the cost
• A necessity for change management & HA
FC 8G is 4G & 2G backwards compatible
• Same interfaces as 10GbE & IBA
Server Virtualization has Storage Ramifications
Dramatically increased I/O (storage) demands
Patchwork of support, few standards
• “VMware mode” on storage arrays
• Virtual HBA/N_Port ID Virtualization (NPIV)
• Everyone is qualifying everyone and jockeying for position
Can be “detrimental” to storage utilization
Problematic to traditional BU, replication, reporting
August 2009 90Storage Virtualization School
August 2009 91Storage Virtualization School
Virtualized Server Storage Issues
Boils down to 4 things to manage
• Performance
• Complexity
• Troubleshooting
• & of course “Cost”
VMware Storage Option - Shared Block Storage
Shared storage – common/workstation approach
• Stores VMDK image in VMFS datastores
• DAS or FC/iSCSI SAN
• Hyper-V VHD is similar
Why?
• Traditional, familiar, common (~90%)
• Prime features (Storage VMotion, etc.)
• Multipathing, load balancing, failover*
But…
• Overhead of two storage stacks (5-8%)
• Harder to leverage storage features
• Often shares storage LUN and queue
• Difficult storage management
August 2009 Storage Virtualization School 92
DAS or SAN
VMDKs
VMFS
VMware Storage Option - Shared NFS Storage
Shared storage on NFS – skip VMFS & use NAS
• NFS is the datastore
Simple – no SAN
• Multiple queues
• Flexible (on-the-fly changes)
• Simple snap and replicate*
• Enables full VMotion
• Use fixed LACP for trunking
But…
• Doesn't work w/ SRM & Storage VMotion
• CPU load questions
• Default limited to 8 NFS datastores
• NAS file limitations
• Multi-VMDK snap consistency
August 2009 Storage Virtualization School 93
NFS NAS
VMDKs
VMware Storage Options-Raw Device Mapping (RDM)
Guest VMs access storage directly over iSCSI or FC
• VMs can even boot from raw devices
• Hyper-V pass-through LUN is similar
Great
• Per-server queues for performance
• Easier measurement
• The only method for clustering
But…
• Tricky VMotion and DRS
• No Storage VMotion
• More management overhead
• Limited to 256 LUNs per data center
August 2009 Storage Virtualization School 94
SAN
Mapping File
I/O
Physical vs. Virtual RDM
Virtual Compatibility Mode
• Appears same as VMDK on VMFS
• Retains file locking for clustering
• Allows VM snapshots, clones, VMotion
• Retains same characteristics if storage is moved
Physical Compatibility Mode
• Appears as LUN on a “hard” host
• Allows V-to-P clustering, VMware locking
• No VM snapshots, VCB, VMotion
• All characteristics & SCSI commands (except “Report LUN”) are passed through – required for some SAN management software
August 2009 Storage Virtualization School 95
Which VMware Storage Method Performs Best?
Mixed Random I/O CPU Cost Per I/O
Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc., 2008
August 2009 96Storage Virtualization School
Server Virtualization Storage Protocol Breakout per IDC: 2007
August 2009 Storage Virtualization School 97
• FC SAN – 47%
• DAS – 24%
• NAS – 22%
• iSCSI SAN – 7%
Which Storage Protocol Performs Best?
Throughput by I/O Size CPU Cost Per I/O
Source: “Comparison of Storage Protocol Performance”, VMware Inc., 2008
August 2009 98Storage Virtualization School
Perplexing Server Virtualization Storage Performance Problems
App performance drop-off
• When moving from physical to virtual servers
• Often causing fruitless guest migrations
• Lots of admin frustration looking for root cause
August 2009 99Storage Virtualization School
The Issue is often…
Too Much Oversubscription
Generally, oversubscription is a very good thing
August 2009 100Storage Virtualization School
Where Oversubscription Occurs
Within the:
• Hypervisor
• LUN
• Disk drives
• SAN fabric
• Target storage ports
Too much creates a positive feedback loop
• Problems feed on themselves
August 2009 101Storage Virtualization School
Hypervisor Oversubscription
Hypervisors are designed for oversubscription
• But too much of a good thing…
Means IO & resource bottlenecks
• Figuring out the problem's root cause is difficult at best
August 2009 Storage Virtualization School 102
[Diagram: hypervisor on x86 architecture hosting multiple App/OS guest VMs]
LUN Oversubscription
Combines disks into storage pools
• Each storage pool is carved up by the hypervisor into virtual storage pools
Then assigned to the individual VM guests
Each VM guest contends for the same storage pool
• Storage systems can't distinguish between guests
Contention decreases traditional storage performance
[Diagram: virtual storage pools carved from traditional SAN storage]
August 2009 103Storage Virtualization School
HDD Oversubscription – Especially SATA
Slower SATA drives don’t handle contention well
• Nominal buffers or queues = higher response times
FC/SAS: queue depth of 256 to 512; 15,000 / 10,000 / 7,200 RPM
SATA: queue depth of 0 to 32 (usually 0); 7,200 RPM
August 2009 104Storage Virtualization School
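A back-of-envelope sketch (Python) of why shallow drive queues hurt under contention. The service times and the simple wait-behind model are illustrative assumptions, not measured values:

```python
def response_time_ms(requests_ahead, service_ms):
    # naive model: one request serviced at a time, so each new arrival
    # waits behind everything already queued at the drive
    return (requests_ahead + 1) * service_ms

# assumed average random-IO service times (illustrative only)
SATA_7200_MS = 13.0   # slower seek + rotational latency
SAS_15K_MS = 5.5

# 31 IOs already queued up by contending VM guests
print(response_time_ms(31, SATA_7200_MS))  # 416.0 ms
print(response_time_ms(31, SAS_15K_MS))    # 176.0 ms
```

The same backlog that a 15K SAS drive clears tolerably turns into near half-second response times on 7,200 RPM SATA.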
SAN Fabric Oversubscription
SAN storage is typically oversubscribed
• 8:1 (server initiators to target storage ports) or more
Network blocking can dramatically reduce performance
Full storage buffer queues also reduce performance
August 2009 Storage Virtualization School 105
Failure to Adjust for Virtual Server Oversubscription Can be Disastrous
SAN or storage target ports block IO
• Causing SCSI timeouts
SCSI drivers are notoriously impatient
Apps crash
Physical oversubscription 8:1
Virtual oversubscription 160:1
• Based on an avg of 20 guests per physical server
August 2009 106Storage Virtualization School
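The jump from 8:1 to 160:1 is simple multiplication; a quick sketch:

```python
def effective_ratio(physical_fanin, guests_per_host):
    # each guest behaves as its own initiator, so virtualization
    # multiplies the physical initiator-to-storage-port fan-in
    return physical_fanin * guests_per_host

# 8 physical servers per storage target port, avg 20 guests per server
print(effective_ratio(8, 20))  # 160
```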
Too Much Oversubscription App Pain Points
Operationally
App SCSI timeouts• Lots of unscheduled downtime
Slow app performance• Reducing productivity
Difficult to diagnose causality• Increased downtime
• Increased user frustration
• Increased lost productivity
Economically
Too much • Admin time chasing tail
• Scheduled downtime
• Unscheduled downtime
Lost revenue & productivity
August 2009 Storage Virtualization School 107
Too Much Oversubscription Work-Arounds
Assign 1:1 physical LUNs to virtual LUNs
• Easiest with iSCSI storage
Run hypervisor RDM storage
• Manually assign storage LUNs to each guest
Limit or eliminate use of SATA
Reduce SAN oversubscription ratios
• Upgrade SAN
8G FC w/ NPIV
10G iSCSI
Use NAS
• Eliminates storage oversubscription
August 2009 Storage Virtualization School 108
Issues w/Work-Arounds
RDM means limited advanced features
• Discouraged by hypervisor vendors
Reduced or eliminate SATA drives• Increases costs
Although fat SAS drives are cost effective alternative
NAS may cause some app performance issues• Oversubscription gains potentially wiped out by performance
The key is to look at ecosystem holistically• Limit overall oversubscription on the whole
August 2009 Storage Virtualization School 109
Better Alternative Can be Virtualized Storage
Virtualized SAN &/or NAS (GNS or GFS) Storage
Can mitigate or eliminate oversubscription issues by
• Spreading volumes and files Across multiple systems, spindles, RAID groups
• Increasing IO & throughput By aggregating & virtualizing more HDDs, systems, ports, BW
August 2009 Storage Virtualization School 110
Of Course, There is the Greater Probability of Data Failure Issue (Previously Discussed)
Probability of 1 system going down is low
• Probability any 1 of many will fail increases rapidly
1 – (1 – P1)(1 – P2)(1 – P3) for independent groups (≈ P1 + P2 + P3 when each P is small)
• P = probability of a specific RAID group failing
Ways to mitigate increased data failure probabilities• Self-healing storage elements• RAIR• More scalable individual storage systems (SAN or NAS)
August 2009 Storage Virtualization School 111
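The "any one of many" effect can be computed directly: the chance that at least one of N independent RAID groups fails is 1 minus the chance that none do. The 1% per-group probability below is an illustrative assumption:

```python
def p_any_failure(probs):
    # P(at least one of N independent RAID groups fails)
    # = 1 - P(none fail) = 1 - product of (1 - Pi)
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# assumed 1% failure probability per RAID group over some period
print(round(p_any_failure([0.01] * 3), 4))   # 0.0297 - 3 groups
print(round(p_any_failure([0.01] * 24), 4))  # 0.2143 - 24 groups
```

Spreading data across more groups nearly triples exposure at 3 groups and passes 21% at 24, which is why self-healing storage and more scalable individual systems matter.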
Server Virtualization Lack of End-to-End Visibility Pain
Can't pierce the firewall of the server virtualization layer
• Networked storage mgmt only sees the storage side
• Virtual server mgmt only sees the guest side
• And they do not correlate automatically
Difficult to pierce firewall of virtualized storage too
August 2009 Storage Virtualization School 112
No Perfect Solutions
But some pretty good all around ones
• Akorri – Balance Point
• EMC – IT Management Solutions
SMARTS® ADM & Family, IT Compliance Mgr, Control Center®
• TekTools – Profiler for VM
• Virtual Instruments – Virtual Wisdom
Some focused ones
• VMware – Veeam Monitor
SAN Optimization – Data Migration Tools
• NetApp – Onaro
• SAN Pulse – SANlogics
August 2009 113Storage Virtualization School
Server Virtualization DP Pain & Issues
Local
Wide Area
Granularity
August 2009 Storage Virtualization School 114
Level-Setting Definitions
HA protects against local hardware failures
DR protects against site failures
Business continuity means
• No business interruptions for data failures or disasters
Data protection software protects against
• Software failures
• Human error
• Malware
Granularity determines
• Amount of data that can be lost – RPO
• Amount of time it takes to recover – RTO
August 2009 Storage Virtualization School 115
HA Requires Redundant Systems
100% redundancy can be a tad expensive
• Upfront & ongoing
• For just protecting against hardware faults
August 2009 116Storage Virtualization School
Virtual Server DR Tends to Work Better w/ SAN or Virtual SAN Storage
If hardware hosting VMs fails
• VMs can easily be restarted
Boot from SAN
• On a different physical server
August 2009 117Storage Virtualization School
DR with Shared Storage on SAN
Virtual guest images live on the SAN Storage
Each VM guest is then pointed @
• Appropriate storage image & restarted
Essentially RTO is zero
• Or near instantaneous
All guests & data are protected
• Available through the SAN
August 2009 Storage Virtualization School 118
High Cost of Networked Storage HA
Requires duplicated network storage for HA
• 2x network storage hardware costs
• 2x network storage software costs
• More than 2x operational costs
More HA systems mean much higher costs
August 2009 119Storage Virtualization School
Virtualized Storage Can Mitigate Costs
Virtualized storage (SAN or NAS)
• Fewer system images to manage
• Fewer software licenses
• Even capacity based licenses are less costly
Higher scalability means lower costs
August 2009 120Storage Virtualization School
Wide Area DR
Requires VM Storage to mirror over WAN to remote recovery site
[Diagram: primary site mirroring VM storage to recovery site via FC-over-WAN gateways or native TCP/IP]
August 2009 121Storage Virtualization School
Storage Virtualization (SV) Can Mitigate Costs
Not all SV can WAN replicate; the ones that do mean
• Centralized control, fewer points of contact for WAN replication
• Less admin, less bandwidth contention
• Better performance
• Lower software license costs
August 2009 122Storage Virtualization School
Server Virtualization WAN DR Issues
FC over IP gateways are expensive
• Cisco & Brocade (QLogic less so)
• Effective Data Throughput
Greatly reduced by packet loss & distance
Limited packet loss mitigation & WAN opt. has little impact
NAS NFS usually has performance issues
• Native TCP/IP replication effective data throughput reduced
By packet loss & latency
• Duplicate storage, infrastructure, licenses, maint, etc.
August 2009 123Storage Virtualization School
Reducing Wide Area Issues
Replication using native TCP/IP or iSCSI
• Allows TCP optimizers to be utilized
• Vendors who offer this type of storage replication include
BlueArc, Compellent, DELL/EQL, EMC, Exanet, Fujitsu, HDS (HNAS), LHN, NetApp, RELDATA
• Some network storage systems have TCP optimizers built in
Fujitsu Eternus 8000
[Diagram: sites linked over TCP/IP with TCP optimizers at each end]
August 2009 124Storage Virtualization School
Wide Area DR Technology
Hypervisor, Storage, OS, or Application based
• Mirroring – Sync and/or Async
• Snapshot Replication – Async
• CDP
August 2009 125Storage Virtualization School
Typical HA-DR VM Storage & Issues
Mirroring
Hypervisor snapshot replication
CDP typically does not work well over the WAN
Traditional Backup
August 2009 126Storage Virtualization School
Mirroring – Sync, Async, Semi-Sync
Sync replicates on write
• Requires remote acknowledgement before the local write is released
• RPO and RTO are fine grain
Async releases local writes before remote acknowledged• RPO and RTO are medium to fine grain
Semi-sync replicates snaps or incremental snaps async• RPO and RTO are medium to fine grain
August 2009 Storage Virtualization School 127
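The latency trade-off between the modes can be sketched in a few lines (the millisecond figures are assumed for illustration):

```python
LOCAL_WRITE_MS = 0.5   # assumed local array write latency
WAN_RTT_MS = 40.0      # assumed round trip to the recovery site

def sync_write_ms():
    # app ack is held until the remote site acknowledges the write
    return LOCAL_WRITE_MS + WAN_RTT_MS

def async_write_ms(replication_queue):
    # app ack right after the local write; the block replicates in the
    # background, which is why the remote vault can lag (and lose writes)
    replication_queue.append("dirty-block")
    return LOCAL_WRITE_MS

backlog = []
print(sync_write_ms())           # 40.5 ms per write, fine-grain RPO
print(async_write_ms(backlog))   # 0.5 ms per write, RPO = queue backlog
```

Sync pays the WAN round trip on every write (hence the ~100 circuit-mile limit); async keeps writes fast but leaves a backlog that may be inconsistent at the remote site.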
[Diagram: local or remote mirroring from primary site to recovery site via FC-over-WAN gateways or native TCP/IP]
Mirroring Shortcomings
Sync
• Cannot prevent the rolling disaster – disasters are synchronized
• Expensive & performance limited to ~100 circuit miles
Async• Remote data vaults can be inconsistent and non-recoverable
Semi-sync• Snapshots are typically not crash consistent
August 2009 Storage Virtualization School 128
Snapshot – Simpler HA-DR Alternative?
No agents on servers or applications
• Simple to use
• Medium to fine granularity RPO & RTO
• Snapshots sent to other site, potentially bi-directional
• Snap restores = mount the data, point, & you're done
Remote snapshot can be promoted to a production volume
• Fast – virtually instantaneous with no BU windows
• Centrally administered w/ storage
• In limited cases – deduped
August 2009 Storage Virtualization School 129
Storage Virtualization Can Again Help
SV that is integrated with BU software provides
• Centralized control, fewer points of contact for WAN replication
• Less admin, less bandwidth contention
• Better performance
• Lower software license costs
August 2009 130Storage Virtualization School
Snapshot Imperfections
Snaps are typically not structured-data crash consistent
• Requires either VSS integration for Windows
• Or "agents" for the structured apps requiring crash consistency
A hybrid approach – requires integration with the BU SW console
• Agents used to quiesce DBMS, providing write consistency
• BU software tells storage to take the snapshot
There are severe limits on the number of snaps per system
• And snapshots will typically reduce capacity
High cost w/Capacity based licensing
• Dual licenses for sending AND receiving systems
• Storage system tends to be higher cost
August 2009 Storage Virtualization School 131
Issues w/ Multi-Vendor App Aware Approach
Multiple products to manage…separately
BU SW not aware of replicated snapshots
• Can't see them or recover from them
One exception is CommVault Simpana 8
Requires Agents for crash consistent apps
• Or an agent for VSS on Microsoft
August 2009 Storage Virtualization School 132
1st: The Insidious Problem w/ Agents
Agents are software w/ admin privileges
• A.k.a. plug-ins, lite agents, client software
Role is to collect data & send it to a backup or media server
• Complete files and ongoing incremental changes
Separate agents are typical per OS, database, ERP, & email app
• As well as for BU, CDP, & archiving per app
• Can be more than one agent per server
OS agent, database agent, email agent, etc.
• When agents deduplicate and/or encrypt at the source
They are even more resource intensive
August 2009 Storage Virtualization School 133
Why Admins Despise Agents
E.g., Operational Headaches
Agents compromise security
Agents are very difficult to admin & manage
• Especially as servers & apps proliferate
Agents misappropriate server assets
• Particularly acute with virtual servers
Agents escalate CapEx & OpEx
August 2009 Storage Virtualization School 134
Agents Compromise Security
A firewall port must be opened per agent
Agents have admin privileges
• Creates a backdoor access to everything on the server
• Hackers target agents – BU data must be important
Agents are listening on a port just waiting to be hacked
Hackers can try to hack dozens to thousands of servers
• Often without being detected
• The more clients/agents installed, the more attack points
• Lack of encryption in-flight puts transmitted data at risk
• Agent encryption wastes even more server resources A no win situation
August 2009 Storage Virtualization School 135
Agents Very Difficult to Admin & Manage
Installing agents can be maddeningly frustrating
• Requires an app disruptive system reboot to initialize
Upgrading agents is a manual process (high touch)
• Making it just as frustrating as installations1
Agent upgrades must be pushed out to each system
• Upgrades also require an app disruptive system reboot1
• OS & app agents are upgraded when SW is upgraded
Usually more than once a year
• OS as often as once a month
And when the OS or apps are upgraded
Or when the OS or apps have a major patch
August 2009 Storage Virtualization School 136
1Some BU software has an automated upgrade process; however, the reboots are still disruptive
Continued
Infrastructure complexity = increased failures
• More agent software parts = greater failure probability
Multi-vendor operations means lots of agent flavors• Platforms, operating systems, databases (all kinds), & email
Troubleshooting is complicated• Particularly aggravating when an agent stops working
No notification – difficult to detect & difficult to fix
• Larger infrastructures take longer to diagnose
Exponential complexity when ROBOs are added
No automatic discovery of new servers or devices• New Agents must be manually added
Agent management significantly drains IT resources
August 2009 Storage Virtualization School 137
Agents Misappropriate Server Assets
Agent software steals server resources
• Each agent utilizes 2% or more of a server's resources
Many DP systems require multiple agents (OS, app, & function)
• Most resource estimates are based on average utilization
Avg is calculated differently by each vendor
• Comes down to how often data is protected
• Per-scan server resources used times # of scans per day
• Divided by total available server resources per day
It’s a really big deal when the server is virtualized
• And each VM requires its own agent
• Suddenly, a lot of server resources are dedicated to agents
August 2009 Storage Virtualization School 138
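Using the slide's own rule of thumb (~2% of server resources per agent, an assumed average), the cost on a consolidated host multiplies quickly:

```python
def agent_overhead(vms, agents_per_vm, pct_per_agent=0.02):
    # fraction of the physical host's resources consumed by agents alone,
    # using the assumed ~2%-per-agent rule of thumb
    return vms * agents_per_vm * pct_per_agent

# 1 physical server, 2 agents: ~4% overhead - an annoyance
print(agent_overhead(1, 2))   # ~0.04
# 20 VM guests, 2 agents each: agents alone claim ~80% of the host
print(agent_overhead(20, 2))  # ~0.8
```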
Agents Escalate CapEx & OpEx
>2% of server CapEx & OpEx allocated to agents
• More when agents are required for multiple applications
• Virtual server allocation is multiplied by # of VMs
HW, SW, network, & infrastructure must be upsized • To accommodate agents while meeting app perf. requirements
Based on peak performance
• Meaning more HW, SW, networks, & infrastructure
More assets under management means higher OpEx
• People have productivity limitations – more personnel
• SW licensing based on capacity, CPUs, servers, etc. = higher $
• More HW = more power, cooling, rack space, floor space, etc.
August 2009 Storage Virtualization School 139
Agent Issues Exacerbated on VMs
Instead of 1 or 2 agents per physical server
• There are lots of agents per physical server
Wasting underutilized server resources is one thing
• It’s quite another when that server is oversubscribed
August 2009 Storage Virtualization School 140
[Diagram: ESX 3.5 or vSphere4 on x86 architecture hosting multiple App/OS guest VMs, each with its own agent]
Ultimately it Reduces Virtualization Value
Agents limit VMs / physical server
• Reduces effective consolidation benefits
• Decreases financial savings, payback, & ROI
VM backups will often contend for the IO
• Simultaneous backups have bandwidth constraints
• Backups must be manually scheduled serially
August 2009 Storage Virtualization School 141
Traditional or Legacy Backup & Restore
Backup to Tape, VTL, or Disk
RPO & RTO range from coarse to fine grain
• Some even provide CDP
August 2009 Storage Virtualization School 142
Typical Backup & Restore Failures
All of the agent issues in spades
• Multiple agents for different functions & applications
Not typically ROBO or WAN optimized
High failure rates on restores
• No automated restore testing or validation
Backup validation is not the same thing
• No time based versioning
• Multi-step restores
Data has to be restored from backup media
• To a media or backup server before it can be restored to the server
• Requires multiple steps & passes
Many lack built-in deduplication
Most do not have integrated archival
August 2009 Storage Virtualization School 143
CDP
Typically copies on writes
It differs from mirroring in 4 ways• Time stamps every write
• Can be transaction or event aware
• Allows rollback to any point in time, event, or transaction
• Prevents the rolling disaster
RPO & RTO is fine grain
August 2009 Storage Virtualization School 144
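A toy sketch of the mechanism (a hypothetical structure, not any vendor's implementation): every write is journaled with a timestamp, so recovery is a replay up to any chosen point in time:

```python
class CdpJournal:
    """Toy continuous-data-protection journal: copy-on-write with timestamps."""
    def __init__(self):
        self._log = []  # append-only list of (timestamp, block, data)

    def write(self, timestamp, block, data):
        self._log.append((timestamp, block, data))

    def rollback(self, point_in_time):
        # rebuild the block map from every write at or before the chosen
        # time - this is how CDP sidesteps the "rolling disaster" that
        # synchronized mirrors faithfully replicate
        state = {}
        for ts, block, data in self._log:
            if ts <= point_in_time:
                state[block] = data
        return state

j = CdpJournal()
j.write(1, 0, b"good")
j.write(2, 0, b"corrupted")  # e.g. malware scribbles the block at t=2
print(j.rollback(1))         # {0: b'good'} - roll back to just before
```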
CDP Fails to Measure Up
Primarily agent based as well w/agent problems
Most CDP is not really designed for ROBOs or WAN1
• Slow over WAN, e.g. not WAN optimized
• No deduplication
Not integrated in with backup in most cases
Many are OS & Application limited
• Primarily Windows and Exchange focused
August 2009 Storage Virtualization School 145
1Asigra is the exception
VMware’s Agentless Solutions
VMware Consolidated Backup – VCB
VMware Data Recovery – VDR
August 2009 Storage Virtualization School 146
VCB
Requires no VM or VMware agents
• Utilizes VMware VMDK snapshots
RPO & RTO is coarse grain
Mount snaps on Windows proxy server • Agent on proxy server
Proxy server backed up, sent to media server, then stored
Other Advantages
• Reduces LAN traffic & has BMR support
August 2009 Storage Virtualization School 147
[Diagram: VMDK snapshot mounted on proxy server; backup transmitted via BU server to media server]
Where VCB Comes up a bit Short
32 max concurrent VMs per proxy, w/ 5 as best practice
• Means more proxy servers or very slow BU & restores
Multi-step restore – Restore proxy server, restore VMs
DBMS, Email, ERP not BU crash consistent1
• E.g. doesn’t ensure all writes are complete, cache flushed etc.
Often complex scripting is required
(RPO & RTO) is coarse – VMDK only2
• Windows files are the exception
August 2009 Storage Virtualization School 148
1Windows VSS enabled structured apps are the exception
2CommVault Simpana 8 cracks open VMDKs
VDR
Part of vSphere4
• Requires no VM or VMware agents
• Utilizes VMware VMDK snapshots
RPO & RTO is coarse grain
• Works thru vCenter• Intuitive & simple• Built-in deduplication
August 2009 Storage Virtualization School 149
Where VDR Comes up a bit Short
100 max concurrent VMs
• Aimed at smaller VMware environments
Not file system aware – primarily VMDK
• With the exception of Windows
DBMS, Email, ERP not BU crash consistent
• E.g., doesn't flush cache, complete all writes, etc.
With the exception of Windows VSS
No replication
Software is required on every vSphere server
RPO & RTO is coarse except for Windows
VDR is good & pretty basic, aimed at SMB/SME
• Other vendors provide more capable, comparable offerings
Veeam, Vizioncore, PhD Technologies
August 2009 Storage Virtualization School 150
Other Ongoing HA-DR Virtualization Issues
Serious backup scalability limitations
No integration w/Online-Cloud backup or DR
Does not leverage VM Infrastructure or cloud
August 2009 Storage Virtualization School 151
Serious VM DP Scalability Issues
DP vaults rarely scale, w/ some exceptions
• Meaning more backup vaults
Which increases complexity exponentially
• Different servers & apps manually pointed at different vaults
Loses a lot of deduplication value
Far more time intensive
Greater opportunities for human error & backup failures
• No load balancing or on-demand allocation of resources
Requiring yet even more hardware
August 2009 Storage Virtualization School 152
Private-Public Cloud Integration a.k.a. Hybrid Cloud Integration
Local backup or archive vaults don’t replicate
• To offsite online cloud backup service providers
• Or DR providers
August 2009 Storage Virtualization School 153
Another More Complete Agentless Backup Solution
Asigra – Hybrid Cloud Backup
• No agents
• Physical or virtual appliance
• Complete protection
Operating systems
File systems
Structured data
VMs
August 2009 Storage Virtualization School 154
[Diagram: ESX 3.5 or vSphere4 on x86 architecture hosting multiple App/OS guest VMs, protected without agents]
Asigra Agentless VMware Backups
Agentless Hybrid Cloud Backup
• ESX 3i/3.5/3.0 & vSphere4 compatible
• Physical or virtual backup appliance
Only agentless VM-level backup product
Original & alternate VM restores
Time-based versioning, even agentless CDP
• File/app-level backup
• Agentless VMDK-level backup
Any storage (DAS/SAN/NAS)
VMDK restore as pure files
COS-less VMDK backup/restore
• Backs up entire VI setup
Global VI backup set creation
• Local & global dedupe
Block level & built-in
• VCB integration scripts
• Highly scalable vaults
• Autonomic healing w/ restore validation, 1-pass recoveries
• Private, public, & hybrid cloud integration
August 2009 Storage Virtualization School 155
NOTE: Licensed by deduped, compressed, stored TBs
Asigra Agentless Hybrid Cloud BU Limitations
No native backup to tape
Requires replacement of current backup software
Agentless skepticism
• Can't believe agentless backup is as good as agent based
(It is)
August 2009 Storage Virtualization School 156
VM Agentless Recommendations
For smaller environments (< 100 VMs)
• VDR
• Products from Veeam, Vizioncore, & PhD Technologies
• VCB & products that run on VCB
CommVault Simpana 8
Acronis
PHD Technologies Inc. esXpress
STORServer VCB
Symantec Backup Exec 12.5
Veeam Backup
Vizioncore vRanger Pro
August 2009 Storage Virtualization School 157
VM Agentless Recommendations
For medium to larger environments
• SME to Enterprise
Asigra Hybrid Cloud Backup
Combinations of snapshot & backup w/agents
Combinations of snapshot & Asigra agentless backup
Public cloud agentless service providers
August 2009 Storage Virtualization School 158
Some Storage Configuration Best Practices
Separate OS & app data
• OS volumes (C: or /) on a different VMFS or LUN from apps (D:, etc.)
• Heavy apps get their own VMFS or raw LUN(s)
Optimize storage by application
• Different tiers or RAID levels for OS, data, transaction logs
Automated tiering can help
• No more than one VMFS per LUN
• Fewer than 16 production ESX VMDKs per VMFS
Implement data reduction technologies
• Dedupe can have a huge impact on VMDKs created from a template
Big impact on VDI and on replicated or backup data
August 2009 Storage Virtualization School 159
Conclusions
Numerous Virtual Server storage Issues
There are ways to deal with them
• Some better than others
Storage Virtualization
• Is one of those better ways
• Reduced storage costs
• Reduced storage SW license costs
• Increased app availability
• Increased online flexibility
August 2009 160Storage Virtualization School
Audience Response
Questions?
Break sponsored by
Part 3
Storage as a dynamic online “on-demand” resource
Common Sense Weather StationBy Serpent River Weather Bureau
If the rock is wet…• It’s Raining
If the rock is swaying…• It’s Windy
If the rock is hot…• It’s Sunny
If the rock is cool…• It’s Overcast
If the rock is white…• It’s Snowing
If the rock is blue…• It’s Cold
If the rock is gone…• TORNADO!!!!!
August 2009 Storage Virtualization School 164
“Common sense is not so common”Voltaire
Part 3 Agenda
Storage as a “dynamic resource”
• Online
• Minimal disruptions
• On-demand
August 2009 Storage Virtualization School 165
Why Dealing w/Users is SO Frustrating
August 2009 166Storage Virtualization School
[Flowchart – "Your Logic": get an idea → perform P.O.C. → if the P.O.C. does not support the idea, it was a bad idea → otherwise create a process → use the process to improve service → discover side effects → if they can be mitigated, improve the process; if not, revolution]
August 2009 167Storage Virtualization School
[Flowchart – "User Logic": get an idea → push for implementation → ignore all contradicting evidence → blame IT when it fails → keep idea forever]
Read trade press
• Business Week
• Byte & Switch
• CIO
• Forbes
• Fortune
• InfoStor
• Storage Magazine
• SearchStorage.com
• VSM
Glowing tech review = idea
August 2009 168Storage Virtualization School
The 7x24x365 World
• New markets
• New business
• New revenues
• New profits
• This is a good thing
August 2009 Storage Virtualization School 169
IDC 2008 WW New Block-Level Virtualized Capacity Share by Segment, 2006–2012
August 2009 170Storage Virtualization School
IDC April 2008 Block-Level Virtualization Forecast Chart
[Chart: WW heterogeneous FC & iSCSI block-level virtualization revenue ($M) – 2007: $571; 2008: $902; 2009: $1,161; 2010: $1,368; 2011: $1,517; 2012: $1,626]
August 2009 171Storage Virtualization School
Using Storage Virtualization to Provide Storage as a Dynamic Online Resource
3 Key factors
• Linear or near linear scalability
• Economies of scale
• Make most tasks online & app non-disruptive
August 2009 Storage Virtualization School 172
Scaling
SV has potential for near linear scaling for
• Performance
• Capacity
• Managed files or file objects
And very large file systems
August 2009 Storage Virtualization School 173
Economies of Scale
Provides leverage
• Fewer storage software licenses
Even capacity based licensing is less
• Goes down per TB as size increases
• Starts all over again with new image
• Less data migration
• Less mgmt, disruptions, admin time & admins
August 2009 Storage Virtualization School 174
Makes Most Tasks Online & App Non-disruptive
More than you might think is possible
• Maintenance
• Moves & Changes
• Upgrades
• Data migration
• Provisioning
• Allocation
• Data Protection
August 2009 Storage Virtualization School 175
Online Data Migration
4 types of data migration
• Between arrays or filers
• Data consolidation
On a single resource
• From old-to-new
• Between tiers
August 2009 Storage Virtualization School 176
[Diagram: mail, DBMS, file, video, & web servers; data migration from external storage to pooled storage, between pools, & between tiers]
Online Data Protection
Snapshots
• Near CDP
Or CDP w/ instant rollback
• BU from writeable snaps
• Consistency groups
Replication – sync & async
• Masked by virtualization
Apps have continuous access
Instant automated recovery
August 2009 Storage Virtualization School 177
LAN/MAN/WAN Replication
• Synchronous or asynchronous
• Network optimized: deduped & compressed
• Utilizes Tier 2 storage – reduced costs

Snapshots
• Incremental or zero-space
• Non-disruptive
• Instant rollback
• Read-only, or read/write (clones)
• Utilize Tier 2 storage – reduced costs
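The "instant, near-zero-space" snapshot behavior above comes from copy-on-write bookkeeping: taking a snapshot copies nothing, and an old block is preserved only when it is first overwritten. A hypothetical minimal sketch (not any vendor's implementation; all names are illustrative):

```python
# Copy-on-write snapshot sketch: a snapshot starts empty (instant,
# zero space) and records a block's old contents only on first overwrite.
class Volume:
    def __init__(self, nblocks: int):
        self.blocks = [b""] * nblocks
        self.snapshots = []          # each: {block index: preserved old data}

    def snapshot(self) -> int:
        self.snapshots.append({})    # instant: no data is copied yet
        return len(self.snapshots) - 1

    def write(self, i: int, data: bytes):
        for snap in self.snapshots:
            snap.setdefault(i, self.blocks[i])  # preserve old block once
        self.blocks[i] = data

    def read_snapshot(self, snap_id: int, i: int) -> bytes:
        # Unchanged blocks are read from the live volume.
        return self.snapshots[snap_id].get(i, self.blocks[i])

v = Volume(4)
v.write(0, b"v1")
s = v.snapshot()
v.write(0, b"v2")
assert v.read_snapshot(s, 0) == b"v1"   # snapshot still sees the old data
assert v.blocks[0] == b"v2"             # live volume sees the new data
```

Instant rollback falls out of the same structure: restoring a snapshot only has to copy back the preserved blocks, not the whole volume.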
Online Provisioning
Thin Provisioning
• Apps think they have all the storage they need
Actually fulfilled based on policy
Dynamic LUN Expansion
• LUN Expands as needed
Both enable J.I.T.S.
• Just-in-time-storage
Even built-in dedupe
• Ideal for VM ISO files, VMDKs, & VDI
[Diagram: app thinks it has 1 PB – actually has 64 TB]
“On-Demand” SAN Storage Allocation
Performance and/or capacity resources
• As needed
• Policy driven: more drives, cache, pipes, QoS, etc.

Striping
• Across systems
• And/or drives
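Striping across drives is, at bottom, an address mapping: consecutive stripes of logical blocks rotate round-robin across drives so sequential I/O engages every spindle. An illustrative sketch (`stripe_location` and its parameters are assumptions for this example, not any product's API):

```python
# Map a logical block address to (drive, block-on-drive) for a simple
# round-robin striped array.
def stripe_location(lba: int, n_drives: int, stripe_blocks: int):
    stripe = lba // stripe_blocks        # which stripe the block lives in
    offset = lba % stripe_blocks         # position within that stripe
    drive = stripe % n_drives            # stripes rotate across drives
    block = (stripe // n_drives) * stripe_blocks + offset
    return drive, block

# 4-drive array, 8-block stripes: logical blocks 0-7 land on drive 0,
# 8-15 on drive 1, ... and 32-39 wrap back to drive 0.
assert stripe_location(0, 4, 8) == (0, 0)
assert stripe_location(8, 4, 8) == (1, 0)
assert stripe_location(32, 4, 8) == (0, 8)
```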
What is Deduplication? – Data Reduction
Deduplication removes duplicate data
3 different types
• File based
• Variable Block (a.k.a. storage objects)
• Application aware
File-Based Dedupe

Reduces duplicate files

Problem it solves
• Runaway storage growth, by eliminating duplicate identical files
Key is identical – this is a coarse-grained approach
• Primarily for secondary storage with backup data

Value proposition
• Reduced storage – somewhat better than HW compression
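The coarse-grained, whole-file approach can be sketched in a few lines: hash each file, store identical files once, and keep only references for duplicates. This is a toy in-memory model under assumed names (`FileDedupeStore` is hypothetical), not a product implementation:

```python
# File-level dedupe sketch: identical files collapse to one stored copy,
# keyed by a cryptographic hash of the whole file.
import hashlib

class FileDedupeStore:
    def __init__(self):
        self.blobs = {}    # sha256 digest -> file bytes (stored once)
        self.catalog = {}  # filename -> digest

    def put(self, name: str, data: bytes) -> bool:
        """Store a file; return True only if it was a new (unique) copy."""
        digest = hashlib.sha256(data).hexdigest()
        self.catalog[name] = digest
        if digest in self.blobs:
            return False           # duplicate: only a reference is kept
        self.blobs[digest] = data
        return True

    def get(self, name: str) -> bytes:
        return self.blobs[self.catalog[name]]

store = FileDedupeStore()
store.put("a.doc", b"quarterly report")
store.put("copy_of_a.doc", b"quarterly report")  # identical -> deduped
store.put("b.doc", b"different content")
# Three cataloged files are backed by two unique blobs.
```

The coarseness is visible here: change one byte of a file and its hash changes, so the whole file is stored again.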
Variable-Block-Based Dedupe – Most Common

[Diagram: from the backup software's view, a normal-looking FS or VTL. A data stream of segments A–J is cut into unique variable-size segments (4 KB–12 KB); redundant segments in the full and incremental backups are replaced with references, and only compressed unique segments are stored.]
Reduces duplicate storage objects across multiple files

Problem it solves – a medium- to fine-grained approach
• Runaway storage growth, by eliminating duplicate identical storage objects
• Primarily for secondary storage with backup data
• App, protocol, file pathname & block address independent

Value proposition
• Reduced storage – considerably better than hardware compression
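A minimal sketch of the idea, assuming a toy rolling hash to pick content-defined chunk boundaries (real products use stronger fingerprints and enforce the 4 KB–12 KB chunk bounds; everything here is illustrative). Because boundaries depend on content rather than fixed offsets, an insertion early in a stream shifts boundaries only locally instead of invalidating every later block:

```python
# Variable-block dedupe sketch: content-defined chunking + hash-keyed store.
import hashlib

def chunk(data: bytes, mask: int = 0x3F, window: int = 8) -> list[bytes]:
    """Cut data where a toy rolling hash matches a bit mask (~1-in-64 bytes)."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF        # toy rolling hash
        if i - start >= window and (h & mask) == mask:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedupe(streams: list[bytes]) -> tuple[int, int]:
    """Return (logical bytes seen, unique bytes actually stored)."""
    store, logical = {}, 0
    for s in streams:
        for c in chunk(s):
            logical += len(c)
            store.setdefault(hashlib.sha256(c).digest(), c)
    unique = sum(len(c) for c in store.values())
    return logical, unique

full = bytes(range(256)) * 40                # "first full backup"
full2 = full[:100] + b"edit" + full[100:]    # second full, small change
logical, unique = dedupe([full, full2])
assert unique < logical                      # most chunks are shared
```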
Application-Aware Dedupe

Designed for unstructured files
• Reads & decompresses files (MPEGs, JPEGs, Office, PDFs, etc.)
• Removes duplicate storage objects
• Optimizes remaining storage objects, then re-compresses them

Problem it solves – a medium- to fine-grained approach
• Runaway storage growth: eliminates duplicate storage objects
• Key is the ability to read & decompress files
• Excellent for both primary & secondary storage

Value proposition
• Reduced storage
• Best at primary storage & equal to block-based with secondary storage
Content-Aware Dedupe is a Bit Different

Extract → Correlate → Optimize
• Extract: delayer and decode files into fundamental storage objects
• Correlate: match storage objects within & across files; finds both exact and similar matches
• Optimize: applies file-aware optimizers to unique objects & re-stores them

[Diagram: before/after view of mixed TXT, PNG & JPG storage objects being reduced]
When to Use Dedupe

All 3 types
• VM ISO files, VDI, Citrix, VMDKs & backup

File dedupe
• Content-addressable storage (CAS)

Variable-block dedupe
• To a limited extent, primary storage
Beware performance hits

Application-aware dedupe
• Primary unstructured data
Digital multimedia files, MS Office files, PDFs, etc.
Comparison of Deduped Compressed Files

[Bar chart: relative stored sizes on a 0–100 scale – 100, 90, 85, 70 & 20]
Dedupe Ratio Demystification

Vendors represent the numbers in different ways
• Left column: the vendor-quoted dedupe ratio
• Right column: the actual amount the data is reduced

Dedupe Ratio    Actual Data Reduction
3:1             67%
4:1             75%
5:1             80%
10:1            90%
20:1            95%
30:1            97%
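The right column follows directly from the left via reduction = 1 − 1/ratio, which is why the percentages crowd together as ratios climb. A quick check (function name is ours, for illustration):

```python
# Convert a quoted dedupe ratio (N:1) into the percent of data eliminated.
def reduction_pct(ratio: float) -> int:
    return round((1 - 1 / ratio) * 100)

for r in (3, 4, 5, 10, 20, 30):
    print(f"{r}:1 -> {reduction_pct(r)}%")
# Note the diminishing returns: going from 10:1 to 30:1 only moves
# the actual reduction from 90% to 97%.
```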
Caveats About Dedupe

Used correctly, dedupe ratios are quite good
• Used incorrectly, they are marginal

Scalability is huge with deduplication
• The larger the repository, the greater the amount of deduplication
• It can also slow performance below acceptable parameters

Data integrity is incredibly important
• Remember, dedupe is eliminating multiple copies
• If the primary copy becomes corrupted…
Dedupe Products

Vendor | Dedupe Product Type | Dedupe Location/Architecture | Local &/or Global | Primary or Secondary | Notes
Asigra | Data protection SW | Source (LAN) & inline target (media server) | Local & global | Secondary | ROBO-optimized dedupe, w/site-to-site replication & archive
CommVault | Data protection SW | Inline target (media server) | Local & global | Secondary | Containerized dedupe; stays deduped to archive disk & tape
EMC Avamar | Backup SW | Source (on each server) | Local & global | Secondary | Agent-based deduplication
EMC NX/NS Celerra | NAS & unified virtual storage | Inline target | Local | Secondary & primary | Comes with Celerra's DART OS
EMC Data Domain | NAS or VTL disk storage | Inline target | Local | Secondary | Best-known storage-based dedupe – fast inline dedupe
EMC Disk Libraries | VTL disk storage | Inline &/or post-processing target | Local | Secondary | Resale of Quantum
ExaGrid | NAS | Inline target (gateway) | Local | Secondary | Best-known storage-based dedupe – fast inline dedupe
FalconStor | VTL, NAS & unified virtual storage | Post-processing target | Local & global | Secondary & primary | VTL leader; dedupe can be added on
IBM | Backup SW & VTL disk storage | Inline target (VTL storage) | Local & global | Secondary & primary | Diligent acquisition, one of the early dedupe VTLs
NEC | NAS grid virtual storage | Inline target | Local & global | Secondary | Most scalable NAS dedupe in capacity & perf because of grid
NetApp | NAS or VTL disk storage | Inline target | Local | Secondary & primary | Comes with ONTAP
Ocarina | Content-aware SW | Post-processing gateway | Local & global | Primary | Only application-aware dedupe
Quantum | NAS or VTL disk storage | Inline &/or post-processing target | Local | Secondary | Owner of most dedupe patents
Sepaton – HP | VTL disk storage | Post-processing target (VTL) | Local & global | Secondary | Largest enterprise dedupe systems; also OEM'ed by HP
What is Thin Provisioning?

Allows over-provisioning of storage
• Provisions more storage than physically available

Problem it solves
• App disruption when volumes are increased

Value proposition
• Allows lowest market pricing for HDDs (a.k.a. just-in-time storage)
Defers HDD acquisitions

[Diagram: up to 64 TB of capacity "promised" to the application; 20 GB of actual capacity used]
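Sparse files are a small-scale analogue of the same promise-versus-allocation gap, assuming a POSIX filesystem with sparse-file support (e.g. ext4 or XFS): the file's logical size is what was "promised," while the filesystem only allocates blocks that are actually written.

```python
# Thin provisioning in miniature: a sparse file reports a large logical
# size while only the written regions consume physical blocks.
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.seek(1 << 30)          # "promise" 1 GiB
        f.write(b"\0")           # logical size becomes 1 GiB + 1 byte
        f.seek(0)
        f.write(b"real data")    # only written blocks are allocated
    logical = os.path.getsize(path)
    physical = os.stat(path).st_blocks * 512   # bytes actually allocated
finally:
    os.remove(path)

print(f"promised {logical:,} bytes, allocated {physical:,}")
```

The thin-provisioning caveat applies here too: if every "promised" byte is eventually written, the physical allocation catches up to the logical size, and the over-commitment comes due.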
When to Use Thin Provisioning

Applications that demand a lot of storage
• But then only use it sparingly

As a way to reduce storage costs
• By deferring them

When NOT to use Thin Provisioning
• For a DBMS that actually exercises the assigned storage
Thin Provisioning Caveats

Make sure there are plenty of safeguards
• Policies that warn when physical capacity hits a % threshold
• General pools that can allocate physical storage to the thin-provisioned volume based on policies

Note: dynamic LUN expansion is almost as effective
• The key exception is NTFS partitions
Payback Typically ~2 Years

Generally, online storage virtualization services
• Reduce expensive duplicate target software
• Reduce expensive server software
• Reduce the amount of storage required
• Increase storage utilization
• Permit heterogeneous storage vendors (resources)
Lower TCO
• Reduce storage acquisition criteria to $/perf & RAS
• Reduce admin time & overtime
Payback Typically ~2 Years

Online data migration
• Optimizes storage tiers
• Increases competitiveness of additions

Single pane of glass
• Reduces mgmt touch points, reducing OpEx

Online provisioning leverages declining HDD costs
• Enables J.I.T.S. (just-in-time storage), reducing CapEx
The Socratic Test of Three
Questions?
• Marc Staimer, President & CDS
• Dragon Slayer Consulting• [email protected]
• 503-579-3763
» What Data Domain is Doing to Storage
This paper outlines the impact of data deduplication on data protection, and specifically how Data Domain has changed the status quo for data protection.
» Using Deduplication with Modern Mainframe Virtual Tape
Learn how mainframe users can switch from physical tape (or hybrid disk/tape) to a modern mainframe Virtual Tape solution with Data Domain deduplication storage.
Resources from our Sponsor
» IDC White Paper: The Economic Impact of File Virtualization
Get insight into which applications are driving storage capacity growth, and learn from interviews with senior IT executives who are reducing the cost and complexity of data management.
» How to Create a Smarter Storage Strategy
Learn the value of having insight into your file storage environment, including the types of files that are being created, who is creating them, how quickly they age, and the capacity they consume.
Resources from our Sponsor
» Right-sizing Your Storage Infrastructure with Tiered Storage (PC)
This podcast explains how tiered storage can drive efficiency in your IT operations.
» Maximize the Impact of Your Virtual Environment: Aligning Storage, Networks and Applications for Optimal Efficiency
This webcast will provide valuable insight that will help you ensure that your initial virtualization deployment is realizing its potential in efficiency, ease of data management and utilization.
Resources from our Sponsor
» High-Availability in Virtual Environments - Practical Application
Virtualizing your servers enhances server consolidation and provides easier server management. Discover a solution that works with VMware Infrastructure.
» VMware vSphere v4 Storage Best Practices with Xiotech
VMware vSphere v4 is powerful virtualization software, designed to reduce costs & improve IT control. Discover a high-value storage platform for this deployment.
Resources from our Sponsor
» Maximize Storage Efficiency
Maximize your storage efficiencies with agentless reporting. See storage usage from the array to the hosts down to the application.
» VMware Reporting
Get better visibility and control into your VMware environment with APTARE StorageConsole Virtualization Manager.
Resources from our Sponsor
» EMC IT's Virtualization Journey
Want to learn what EMC is doing with virtualization? Visit their blog to follow their virtualization journey, including challenges, best practices and future plans.
» White Paper: Forrester Analyst Report: Storage Choices for Virtual Server
For IT pros who will either upgrade or deploy a new storage environment to support server virtualization, this report provides key Forrester recommendations.
Resources from our Sponsor
» Pile On the Savings: Free Storage Virtualization Software from Hitachi Data Systems
Virtualize your multivendor storage, increase efficiency, and lower TCO with free software from Hitachi Data Systems.
» White Paper: The Economic Effects of Storage Virtualization
Virtualization improves organizations' purchasing power by reclaiming, utilizing and optimizing storage to create economically superior data centers. Read more…
Resources from our Sponsor