
Page 1: IBM Storage Virtualization - TERENA

© 2012 IBM Corporation

IBM Storage Virtualization Cloud enabling technology

Danijel Paulin, [email protected] Systems Architect, SEE IBM Croatia

9/27/2012

11th TF-Storage Meeting, 26-27 September 2012, Dubrovnik, Croatia

Page 2: IBM Storage Virtualization - TERENA


Agenda

Introduction

Virtualization – function and benefits

IBM Storage Virtualization

Virtualization Appliance SAN Volume Controller

Virtual Storage Platform Management

Integrated Infrastructure System – "Cloud Ready"

Summary


Page 3: IBM Storage Virtualization - TERENA


Virtualization: a new approach to designing IT infrastructures

Virtualization delivers greater storage efficiency & flexibility and workload systems tuning, and is the foundation for cloud: higher utilization, increased flexibility, better economics.

Smarter Computing is realized through an IT infrastructure that is designed for data, tuned to the task, and managed in the cloud. Building a cloud starts with virtualizing your IT environment.

Page 4: IBM Storage Virtualization - TERENA


The journey to the cloud begins with virtualization!

– Orchestrate Workflow: manage the process for approval of usage
– Provision & Secure: automate provisioning of resources
– Monitor & Manage: provide visibility of the performance of virtual machines
– Meter & Rate: track usage of resources
– Virtualize: server, storage & network devices to increase utilization

Page 5: IBM Storage Virtualization - TERENA


IBM Virtualization Offerings

– Server virtualization: System p, System i, System z LPARs, VMware ESX, IBM Smart Business Desktop Cloud. Virtually consolidate workloads on servers.

– File and file-system virtualization: Scale Out NAS (SoNAS), DFSMS, IBM General Parallel File System, N series. Virtually consolidate files into one namespace across servers.

– Storage virtualization: SAN Volume Controller (the Storage Hypervisor), ProtecTIER. Industry-leading storage virtualization solutions.

– Server and storage infrastructure management: data protection with Tivoli Storage Manager and TSM FastBack; advanced management of virtual environments with TPC, IBM Director VMControl, TADDM, ITM, TPM. Consolidated management of virtual and physical storage resources.

– IBM storage cloud solutions: Smart Business Storage Cloud (SoNAS), IBM SmartCloud Managed Backup. Virtualization and automation of storage capacity, data protection, and other storage services.

Page 6: IBM Storage Virtualization - TERENA


Virtualization – functions and benefits

– Sharing: one set of resources shared by many virtual resources. Examples: LPARs, VMs, virtual disks, VLANs. Benefits: resource utilization, workload management, agility, energy efficiency.

– Aggregation: many resources combined into one virtual resource. Examples: virtual disks, system pools. Benefits: management simplification, investment protection, scalability.

– Emulation: resources of type X presented as resources of type Y. Examples: architecture emulators, iSCSI, FCoE, virtual tape. Benefits: compatibility, software investment protection, interoperability, flexibility.

– Insulation: add, replace, or change physical resources without affecting the virtual resources. Examples: compatibility modes, CUoD, appliances. Benefits: agility, investment protection, hiding of complexity and change.

Page 7: IBM Storage Virtualization - TERENA


What is Storage Virtualization?

Technology that makes one set of resources look and feel like another set of resources: a logical representation of physical resources.

– Hides some of the complexity
– Adds or integrates new function with existing services
– Can be nested or applied to multiple layers of a system
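The "logical representation of physical resources" above is, at its core, a mapping from virtual extents to physical ones. A minimal sketch, using the 16 MiB extent size mentioned later in the deck; the class and LUN names are illustrative, not SVC internals:

```python
# Minimal sketch of block virtualization: a virtual volume is a mapping
# from virtual extents to extents on physical LUNs. Illustrative only.

EXTENT = 16 * 1024 * 1024  # 16 MiB extent size (hypothetical choice)

class VirtualVolume:
    def __init__(self, mapping):
        # mapping[i] = (physical_lun, physical_extent) for virtual extent i
        self.mapping = mapping

    def resolve(self, lba_bytes):
        """Translate a virtual byte offset to (lun, physical byte offset)."""
        vext, off = divmod(lba_bytes, EXTENT)
        lun, pext = self.mapping[vext]
        return lun, pext * EXTENT + off

# Two physical LUNs pooled behind one virtual volume:
vol = VirtualVolume({0: ("lunA", 0), 1: ("lunB", 0), 2: ("lunA", 1)})
print(vol.resolve(EXTENT + 4096))   # -> ('lunB', 4096)
```

Because only the mapping table changes when data moves, migrations and tiering can happen underneath the host without changing what the volume looks like.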

Page 8: IBM Storage Virtualization - TERENA


What distinguishes a Storage Cloud from Traditional IT?

1. Storage resources are virtualized from multiple arrays, vendors, and datacenters – pooled together and accessed anywhere. (as opposed to physical array-boundary limitations)

2. Storage services are standardized – selected from a storage service catalog. (as opposed to customized configuration)

3. Storage provisioning is self-service – administrators use automation to allocate capacity from the catalog. (as opposed to manual component-level provisioning)

4. Storage usage is paid per use – end users are aware of the impact of their consumption and service levels. (as opposed to paid from a central IT budget)

Page 9: IBM Storage Virtualization - TERENA


IBM Storage Virtualization

Page 10: IBM Storage Virtualization - TERENA


Today's SAN

SAN-attached disks look like local disks to the OS and applications.

Page 11: IBM Storage Virtualization - TERENA


Virtualization layer

SAN – with Virtualization

Virtual disks start as images of migrated non-virtual disks.

Later, modify striping, thin provisioning, etc.

SAN

Page 12: IBM Storage Virtualization - TERENA


Virtualization layer

Become truly flexible!

Virtual disks remain constant during physical infrastructure changes

SAN

Page 13: IBM Storage Virtualization - TERENA


Virtualization layer

Enable tiered storage!

Moving virtual disks between storage tiers requires no downtime.

SAN

Page 14: IBM Storage Virtualization - TERENA


Avoid planned downtime!

Upgrade

Virtualization layer upgrade or replacement with no downtime!

SAN

Page 15: IBM Storage Virtualization - TERENA


In-band Storage Virtualization – Benefits

– Isolation: flat interoperability matrix; non-disruptive migrations; no-cost multipathing
– Pooling: higher (pool) utilization; cross-pool striping for IOPS; thin provisioning frees GBs
– Performance (cache + SSD): performance increase; hot-spot elimination; adds SSD in front of old gear
– Mirroring: license economies; cross-vendor mirroring; favorable TCO

Page 16: IBM Storage Virtualization - TERENA


Migration into Storage Virtualization (and back!)

Virtualization layer SAN

ZONE

This works backwards too (no vendor lock-in)

Virtual disks start in transparent image mode, before being converted to fully striped mode.

Page 17: IBM Storage Virtualization - TERENA


Redundant SAN!

(Diagram: the virtualization layer attached to two redundant fabrics, SAN A and SAN B, with a 1:4 zone.)

Page 18: IBM Storage Virtualization - TERENA


Virtualization Appliance

SAN Volume Controller

Page 19: IBM Storage Virtualization - TERENA

(Diagram: a Virtual Server Infrastructure managed by VMControl, and a Virtual Storage Infrastructure built on the Storage Hypervisor (SAN Volume Controller), managed by Tivoli Storage Productivity Center and IBM Systems Director.)

• Virtual Storage Platform – SAN Volume Controller
– Common device driver: iSCSI or FC host attach
– Common capabilities: I/O caching and cross-site cache coherency; thin provisioning; Easy Tier automated tiering to solid-state disks; snapshot (FlashCopy); mirroring (synchronous and asynchronous)
– Data mobility: transparent data migration among arrays and across tiers; snapshot and mirroring across arrays and tiers

• Virtual Storage Platform Management – Tivoli Storage Productivity Center
– Manageability: integrated SAN-wide management with Tivoli Storage Productivity Center; integrated IBM server and storage management (Systems Director Storage Control)
– Replication: application-integrated FlashCopy; DR automation
– High availability: stretch-cluster HA

Page 20: IBM Storage Virtualization - TERENA


Virtualization Appliance: SAN Volume Controller

– Stand-alone product
– Clustered ×2…8 nodes
– SVC ships with the write cache mirrored in node pairs (I/O groups)
– Multi-use Fibre Channel in & out
– Linux boot, 100% IBM stack
– TCA: 1. hardware, 2. per-TB license (tiered), 3. per-TB mirroring license

Page 21: IBM Storage Virtualization - TERENA


6th generation…

– Continuous development
– Firmware is backwards compatible (64-bit firmware is not available for 32-bit hardware)
– Replace while online

SAN Volume Controller CG8 – Firmware v6.4

Models (initial releases):
– SVC 4F2 – 4 GB cache, 2 Gb SAN (Rel. 3 / 2006)
– SVC 8F2 – 8 GB cache, 2 Gb SAN (RoHS compliant)
– SVC 8F4 – 8 GB cache, 4 Gb SAN; 155,000 SPC-1™ IOPS
– SVC 8G4 – adds dual-core processor; 272,500 SPC-1™ IOPS
– SVC CF8 – 24 GB cache, quad-core; 380,483 SPC-1 IOPS (6-node)
– SVC CG8 – adds 10 GbE; approx. 640,000 SPC-1-like IOPS

Page 22: IBM Storage Virtualization - TERENA


SVC Model & Code Release History

1999 – Almaden Research group publishes ComPaSS clustering
2000 – SVC 'Lodestone' development begins using ComPaSS
2003 – SVC 1.1 – 4F2 hardware, 4 nodes
2004 – SVC 1.2 – 8-node support
2004 – SVC 2.1 – 8F2 hardware
2005 – SVC 3.1 – 8F4 hardware
2006 – SVC 4.1 – Global Mirror, MTFC
2007 – SVC 4.2 – 8G4 hardware, FlashCopy enhancements
2008 – SVC 4.3 – thin provisioning, VDisk Mirror, 8A4 hardware
2009 – SVC 5.1 – CF8 hardware, SSD support, 4-site
2010 – SVC 6.1 – V7000 hardware, RAID, Easy Tier
2011 – SVC 6.2/6.3 – V7000U, 10G iSCSI, extended-distance split cluster
2012 – SVC 6.4 – IBM Real-time Compression, FCoE, volume mobility…

Page 23: IBM Storage Virtualization - TERENA


SVC 2145-CG8 – Virtualization Appliance

– Based on the IBM System x3550 M3 server (1U), with an Intel® Xeon® 5600 (Westmere) 2.53 GHz quad-core processor
– 24 GB of cache; up to 192 GB of cache per SVC cluster
– Four 8 Gbps FC ports supporting short-wave & long-wave SFPs; up to 32 FC ports per SVC cluster, used for external storage and/or server attachment and/or Remote Copy/Mirroring
– Two 1 Gbps iSCSI ports; up to 16 GbE ports per SVC cluster
– Optional 1 to 4 solid-state drives; up to 32 SSDs per SVC cluster
– Optional two 10 Gbps iSCSI/FCoE ports
– New engines may be intermixed in pairs with other engines in SVC clusters; mixing engine types in a cluster results in the volume throughput characteristics of the engine type in that I/O group
– The cluster's non-disruptive upgrade capability may be used to replace older engines with new CG8 engines

Page 24: IBM Storage Virtualization - TERENA


IBM SAN Volume Controller Architecture

(Diagram: hosts use a consistent driver stack to access vDisks, here in striped mode, from a SAN Volume Controller cluster. SVC nodes, each with a UPS (not depicted), form I/O groups; array LUNs become managed disks, which are grouped into storage pools.)

Page 25: IBM Storage Virtualization - TERENA


SVC Cluster

IBM SAN Volume Controller – Topology

Page 26: IBM Storage Virtualization - TERENA


Virtual-Disk Types

– Image mode: pass-through; the virtual disk equals a physical LUN
– Sequential mode: the virtual disk is mapped sequentially to a portion of a managed disk
– Striped mode: the virtual disk is striped across multiple managed disks; the preferred mode

(Diagram: virtual disks A, B and C mapped onto managed-disk groups MDG1–MDG3.)
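The three modes differ only in how virtual extents are laid out on managed disks. A minimal sketch, with hypothetical extent lists standing in for real allocation:

```python
# Sketch of how the three virtual-disk modes map extents onto managed
# disks (MDisks). Illustrative only; real SVC allocation is richer.

def image_mode(mdisk, n_extents):
    # 1:1 pass-through: virtual disk == one physical LUN
    return [(mdisk, i) for i in range(n_extents)]

def sequential_mode(mdisk, start, n_extents):
    # contiguous range on a single managed disk
    return [(mdisk, start + i) for i in range(n_extents)]

def striped_mode(mdisks, n_extents):
    # round-robin across all managed disks in the pool (preferred mode)
    return [(mdisks[i % len(mdisks)], i // len(mdisks)) for i in range(n_extents)]

print(striped_mode(["md1", "md2", "md3"], 5))
# [('md1', 0), ('md2', 0), ('md3', 0), ('md1', 1), ('md2', 1)]
```

Striping spreads I/O across every managed disk in the pool, which is why the deck calls it the preferred mode; image mode exists mainly so existing LUNs can be brought under virtualization unchanged.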

Page 27: IBM Storage Virtualization - TERENA


IBM SAN Volume Controller I/O Stack

SVC software has a modular design – 100% “In-house” code path

Each function is implemented as an independent component

– Components bypassed if not in use for a given volume

Standard interface between components – Easy to add/remove components

Components exploit a rich set of libraries and frameworks

– Minimal Linux base OS to boot-strap and hand control to user space

– Custom memory management & thread scheduling

– Optimal I/O code path

– Clustered "support" processes like GUI, slpd, cimom, easy tier

(Diagram of the I/O stack: SCSI Frontend, Remote Copy, Cache, FlashCopy, Mirroring, Space Efficient, Virtualization, RAID, Easy Tier, SCSI Backend, down to drives or external SCSI; the core code path is annotated at 60 µs.)

Page 28: IBM Storage Virtualization - TERENA


IBM SAN Volume Controller Management Options

– SVC GUI: completely redesigned, browser based, extremely easy to learn and use, fast
– SVC CLI: ssh, scripting, complete command set
– Tivoli Productivity Center: TPC, TPC-R
– SMI-S 1.3: embedded CIMOM, VDS/VSS, Storage Control
– vCenter plug-in

Page 29: IBM Storage Virtualization - TERENA


SAN Volume Controller Features

Page 30: IBM Storage Virtualization - TERENA

SAN Volume Controller Features – summary

– Cache partitioning
– Embedded SMI-S agent
– Easy-to-use GUI, with built-in real-time performance monitoring
– E-mail, SNMP trap & syslog error-event logging
– Authentication service for Single Sign-On & LDAP
– Virtualise data without data loss
– Expand or shrink volumes online
– Thin-provisioned volumes: reclaim zero-write space; thick-to-thin, thin-to-thick & thin-to-thin migration
– Online volume migration
– Volume mirroring
– Easy Tier: automatic relocation of hot and cold extents
– FlashCopy point-in-time copy (optional): up to 256 targets per source; a FlashCopy target may be a Remote Copy source; full (with background copy = clone), partial (no background copy), space-efficient, incremental, cascaded, consistency groups, reverse
– Microsoft Virtual Disk Service & Volume Shadow Copy Service hardware provider
– Remote Copy (optional): synchronous & asynchronous remote replication with consistency groups
– VMware: Storage Replication Adapter for Site Recovery Manager; VAAI support & vCenter Server management plug-in

(Diagrams: Easy Tier hot-spot relocation between SSDs and HDDs; an SVC volume with two copies; volume migration from an MDisk source to an MDisk target; a cascaded FlashCopy chain with Vol0 as source and Vol1, Vol2, Vol3, Vol4 as targets, up to 256 per source; Metro or Global Mirror relationships from several SVC clusters to a consolidated DR site.)
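The cascaded FlashCopy mappings in the feature summary chain point-in-time copies so that a target can itself serve as the source of a further mapping. A toy sketch of that idea; the `Volume` class and `flash_copy` helper are illustrative, not SVC objects:

```python
# Sketch of cascaded FlashCopy mappings: each mapping gives the target a
# point-in-time image of its source, and targets can be sources of
# further mappings. Hypothetical names; real FlashCopy is copy-on-write.

class Volume:
    def __init__(self, name, data=b""):
        self.name, self.data = name, data

def flash_copy(source, target):
    # point-in-time copy: target becomes an instant image of the source
    target.data = source.data

vol0 = Volume("Vol0", b"production data")
vol1, vol2 = Volume("Vol1"), Volume("Vol2")

flash_copy(vol0, vol1)      # Map 1: Vol0 -> Vol1
flash_copy(vol1, vol2)      # Map 2: Vol1 -> Vol2 (cascaded)

vol0.data = b"new writes"   # later writes to Vol0 leave the copies intact
print(vol2.data)            # b'production data'
```

The real implementation only copies grains on demand, but the dependency structure (a chain of mappings preserving each point-in-time image) is the same.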

Page 31: IBM Storage Virtualization - TERENA

Volume Mirroring – back-end high availability & migration

– SVC stores two copies of a volume, keeps both copies in sync, reads from the primary copy, and writes to both copies.
– If the disk supporting one copy fails, SVC provides continuous data access by using the other copy; copies are automatically resynchronized after repair.
– Intended to protect critical data against failure of a disk system or disk array: a local high-availability function, not a disaster-recovery function.
– Copies can be split; either copy can continue as the production copy.
– Either or both copies may be thin-provisioned. Mirroring can convert a fully allocated volume to thin-provisioned (thick-to-thin migration) or a thin-provisioned volume to fully allocated (thin-to-thick migration).
– Mirrored volumes use twice the physical capacity of un-mirrored volumes; the base virtualisation licensed capacity must include the required physical capacity.
– The user can configure the timeout for each mirrored volume. With priority on redundancy, a write waits until it completes on both copies or finally times out: a performance impact, but active copies are always synchronized.


(Diagram: a write goes to both Copy 0 and Copy 1; reads are served from the primary copy.)
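The mirroring behaviour above (write to both copies, read from the primary, resynchronize after repair) can be sketched in a few lines. All names are hypothetical, not SVC code:

```python
# Sketch of Volume Mirroring: writes go to both in-sync copies, reads
# come from the primary, and a failed copy is resynchronised on repair.

class MirroredVolume:
    def __init__(self):
        self.copies = [{}, {}]        # copy 0 (primary) and copy 1
        self.online = [True, True]

    def write(self, block, data):
        for i, copy in enumerate(self.copies):
            if self.online[i]:
                copy[block] = data    # every online copy receives the write

    def read(self, block):
        src = 0 if self.online[0] else 1   # fall back if the primary failed
        return self.copies[src][block]

    def repair(self, i):
        # resynchronise the repaired copy from the surviving one
        self.copies[i] = dict(self.copies[1 - i])
        self.online[i] = True

v = MirroredVolume()
v.write(0, b"a")
v.online[1] = False      # copy 1's back-end disk fails
v.write(1, b"b")         # data stays accessible via copy 0
v.repair(1)              # copy 1 resynced after repair
print(v.read(1))         # b'b'
```

Note this models the "priority on redundancy" case where every write lands on all online copies before completing.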

Page 32: IBM Storage Virtualization - TERENA

IBM Easy Tier

What is Easy Tier? A function that dynamically redistributes active data across multiple storage tiers based on workload characteristics.

Automatic storage hierarchy:
– Hybrid storage pool with 2 tiers: solid-state drives & hard disk drives
– The I/O Monitor keeps an access history for each virtualisation extent (16 MiB to 2 GiB per extent) every 5 minutes
– The Data Placement Adviser analyses the history every 24 hours
– The Data Migration Planner invokes data migration: promote hot extents, demote inactive extents
– The goal is to reduce response time
– Users have automatic and semi-automatic extent-based placement and migration management

Why it matters:
– Solid-state storage has orders-of-magnitude better throughput and response time for random reads
– Full volume allocation to SSD benefits only a small number of volumes, portions of volumes, and use cases
– Dynamically moving the hottest extents to the highest-performance storage lets a small number of SSDs benefit the entire infrastructure
– Works with thin-provisioned volumes


(Diagram: hot spots transparently relocated between SSDs and HDDs for optimized performance and throughput.)
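The monitor/advise/migrate loop above reduces to: count I/Os per extent over a window, then keep the hottest extents on the SSD tier. A minimal sketch; the function name, counts, and capacity are illustrative, not Easy Tier internals:

```python
# Sketch of the Easy Tier decision step: given per-extent access counts
# from the I/O monitor, pick which extents belong on the SSD tier.

from collections import Counter

def plan_migrations(io_counts, ssd_capacity_extents):
    """Return the set of extents that should live on SSD."""
    hottest = Counter(io_counts).most_common(ssd_capacity_extents)
    return {extent for extent, _ in hottest}

# Hypothetical access history (I/Os per extent over the sampling window):
history = {"e0": 900, "e1": 12, "e2": 450, "e3": 3}
ssd_set = plan_migrations(history, ssd_capacity_extents=2)
print(sorted(ssd_set))   # ['e0', 'e2']
```

Extents in the returned set would be promoted to SSD and the rest demoted to HDD; the real adviser also damps the history over time so short bursts do not trigger migrations.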

Page 33: IBM Storage Virtualization - TERENA

Thin-provisioning

– Traditional ("fully allocated") virtual disks use physical disk capacity for the entire capacity of a virtual disk, even if it is not used.
– With thin provisioning, SVC allocates and uses physical disk capacity only when data is written.
– Available at no additional charge with the base virtualisation license.
– Supports all hosts supported with traditional volumes, and all advanced features (Easy Tier, FlashCopy, etc.).

Reclaiming unused disk space:
– When using Volume Mirroring to copy from a fully allocated volume to a thin-provisioned volume, SVC does not copy blocks that are all zeroes.
– When processing a write request, SVC detects whether all zeroes are being written and does not allocate disk space for such requests on thin-provisioned volumes; this helps avoid space-utilization concerns when formatting volumes.
– Done at grain level (32/64/128/256 KiB): if a grain contains all zeroes, it is not written.

Without thin provisioning, pre-allocated space is reserved whether the application uses it or not. With thin provisioning, applications can grow dynamically, consuming only the space they are actually using.
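The grain-level zero detection described above can be sketched as follows; the class, the 32 KiB grain choice, and the allocation model are illustrative assumptions, not SVC internals:

```python
# Sketch of grain-level zero detection for thin provisioning: physical
# space is consumed only when a non-zero grain is written.

GRAIN = 32 * 1024   # 32 KiB grain (the deck lists 32/64/128/256 KiB)

class ThinVolume:
    def __init__(self):
        self.grains = {}    # grain index -> backed data (allocated grains)

    def write(self, grain_idx, data):
        if data.count(0) == len(data):
            # all-zero grain: record nothing, no physical space consumed
            self.grains.pop(grain_idx, None)
            return
        self.grains[grain_idx] = data

    def allocated_bytes(self):
        return len(self.grains) * GRAIN

t = ThinVolume()
t.write(0, bytes(GRAIN))       # zeroes: nothing allocated
t.write(1, b"\x01" * GRAIN)    # real data: one grain allocated
print(t.allocated_bytes())     # 32768
```

The same check explains the mirroring behaviour on the slide: copying a fully allocated volume to a thin one skips all-zero blocks, so formatted-but-empty space never consumes physical capacity on the target.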

Page 34: IBM Storage Virtualization - TERENA


Copy Services

Page 35: IBM Storage Virtualization - TERENA


Business Continuity with SVC

Traditional SAN:
– Replication APIs differ by vendor
– The replication destination must be the same as the source
– Different multipath drivers for each array
– Lower-cost disks offer primitive, or no, replication services

SAN Volume Controller:
– A common SAN-wide replication API that does not change as storage hardware changes
– A common multipath driver for all arrays
– Replication targets can be on lower-cost disks, reducing the overall cost of exploiting replication services

(Diagram: vendor-specific replication (FlashCopy®, Metro/Global Mirror, TimeFinder, SRDF) between like arrays, e.g. IBM DS5000 to DS5000 and EMC CLARiiON to CLARiiON, versus SVC replicating across mixed arrays such as EMC CLARiiON, IBM DS5000, HP EVA, IBM Storwize V7000, and HDS AMS.)

Page 36: IBM Storage Virtualization - TERENA

Copy Services with SVC

– Volume Mirroring: volume mirroring "outside the box"; 2 close sites (<10 km). Warning: there is no consistency group.
– Global Mirror: consistent asynchronous mirror. Limited impact on write I/O response time; some data loss is possible. All write I/Os are sent to the remote site in the same order they were received on the source volumes. Only 1 source and 1 target volume. 2 remote sites (>300 km).
– Metro Mirror: synchronous mirror. Write I/O response time is doubled, plus distance latency; no data loss. 2 close sites (<300 km). Warning: production performance impact if inter-site links are unavailable, during microcode upgrades, etc.
– FlashCopy: point-in-time copy "outside the box"; 2 close sites (<10 km). Warning: this is not real-time replication.

Source and target can have different characteristics and be from different vendors. Source and target can be in the same cluster.

(Diagram: an SVC pair replicating Vol0 to Vol0' copies on managed and legacy storage.)

Page 37: IBM Storage Virtualization - TERENA


Multicluster Mirroring: "any-to-any" replication among up to 4 SVC cluster instances, spanning Datacenters 1–4.

Page 38: IBM Storage Virtualization - TERENA


SVC split cluster solution

Page 39: IBM Storage Virtualization - TERENA


SVC split cluster – symmetric disk mirroring

One storage system, two locations: SVC 1 node A and SVC 1 node B mirror LUN1 and LUN1' across sites, providing high availability and protection for virtual machines on VM hosts at both locations.

– Max. 100 km recommended; max. 300 km supported
– Appliance functionality, not software-based; no license required

Page 40: IBM Storage Virtualization - TERENA

SVC split cluster & VDM – Connectivity below 10 km using passive DWDM

– You should always have 2 SAN fabrics (A & B), and 2 switches per SAN fabric (one on each site). This diagram shows connectivity to a single fabric only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled.
– You should always connect each SVC node in a cluster to the same SAN switches. Best practice is to connect each SVC node to SAN fabric A switches 1 & 2 as well as SAN fabric B switches 1 & 2. Connecting all SVC nodes to switch 1 in SAN fabric A and to switch 2 in SAN fabric B is supported, but not recommended.
– To avoid fabric re-initialisation in case of link hiccups on the ISL, consider creating a virtual SAN fabric on each site and using inter-VSAN routing.

(Diagram: I/O group split across production rooms A and B, with room C holding the primary quorum; short-wave and long-wave links, an ISL between SAN A switch 1 and SAN A' switch 2, storage pools 1–3, and primary plus candidate quorum disks.)

Page 41: IBM Storage Virtualization - TERENA

SVC split cluster & VDM – Connectivity up to 300 km using active DWDM (Enhanced!)

– Dedicated ISLs/trunks carry SVC inter-node traffic on private SANs (Private SAN A / A'), separate from the public SANs (Public SAN A / A').
– A Brocade virtual fabric or a Cisco VSAN can be used to isolate public and private SANs.
– You should always have 2 SAN fabrics (A & B) with at least: 2 switches per SAN fabric (1 per site) when using Cisco VSANs or Brocade virtual fabrics to isolate private and public SANs; 4 switches per SAN fabric (2 per site) when private and public SANs are on physically dedicated switches.
– This diagram shows connectivity to fabric A only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled, including connections to the B switches.

(Diagram: I/O group split across production rooms A and B, with room C holding the primary quorum; storage pools 1 and 2 with candidate quorums, pool 3 with the primary quorum.)

Page 42: IBM Storage Virtualization - TERENA

HA / Disaster Recovery with SVC Split Cluster

2-site Split Cluster:
– Improve availability, load-balance, and deliver real-time remote data access by distributing applications and their data across multiple sites
– Seamless server/storage failover when used in conjunction with server or hypervisor clustering (such as VMware or PowerVM)
– Up to 300 km between sites (3x EMC VPLEX)

4-site Disaster Recovery:
– For combined high-availability and disaster-recovery needs, synchronously or asynchronously mirror data (Metro or Global Mirror) over long distances between two high-availability stretch clusters

(Diagrams: server clusters in data centers 1 and 2 failing over across an SVC stretched virtual volume, up to 300 km apart; two such stretched clusters linked for disaster recovery.)

Page 43: IBM Storage Virtualization - TERENA

SVC Split Cluster Considerations

The same code is used for all inter-node communication: clustering, write-cache mirroring, and Global Mirror & Metro Mirror.

Advantages:
– No manual intervention required
– Automatic and fast handling of storage failures
– Volumes mirrored in both locations
– Transparent for servers and host-based clusters
– A perfect fit in a virtualized environment (like VMware vMotion, AIX Live Partition Mobility)

Disadvantages:
– A mix between an HA and a DR solution, but not a true DR solution
– Non-trivial implementation – involve IBM Services


Page 44: IBM Storage Virtualization - TERENA


Storwize V7000 : mini SVC with disks

Page 45: IBM Storage Virtualization - TERENA


V7000 = the iPod of midrange storage

– Based on a "mini" SVC
– Delegated complexity: "auto optimizing"
– Easy Tier, SSD enabled
– Thin provisioning
– Non-IBM expansion
– Auto-migration

Page 46: IBM Storage Virtualization - TERENA


Compatibility

Page 47: IBM Storage Virtualization - TERENA

SVC 6.4 Supported Environments

Hosts (up to 1024 per cluster): Microsoft Windows and Hyper-V; VMware vSphere 4.1 and 5; IBM Power7 with IBM AIX and IBM i 6.1 (VIOS); Sun Solaris; HP-UX 11i, Tru64, OpenVMS; Linux (Intel/Power/zLinux) with RHEL and SUSE 11; Citrix XenServer; IBM BladeCenter; Novell NetWare; Apple Mac OS; SGI IRIX; IBM z/VSE.

Fabric: 8 Gbps SAN fabric; native iSCSI at 1 or 10 Gigabit.

Virtualized storage:
– IBM: DS3400, DS3500, DS4000, DS5020, DS3950, DS6000, DS8000, DS8800, Storwize V7000, XIV, N series, DCS9550, DCS9900, TS7650G, SSD
– HP: 3PAR, StorageWorks P9500, MA, EMA, MSA 2000, XP, EVA 6400 and 8400
– Hitachi: Virtual Storage Platform (VSP), Lightning, Thunder, TagmaStore, AMS 2100/2300/2500, WMS, USP, USP-V
– EMC: VNX, VMAX, CLARiiON CX4-960, Symmetrix
– Fujitsu Eternus: DX60, DX80, DX90, DX410, DX8100, DX8300, DX9700; 8000 models 2000 & 1200; 4000 models 600 & 400; 3000
– NetApp FAS; Sun StorageTek; NEC iStorage; Bull Storeway; Pillar Axiom; TMS RamSan-620; Compellent Series 20

SVC features shown: Continuous Copy (Metro/Global Mirror, Multiple Cluster Mirror); point-in-time copy (full volume, copy-on-write, 256 targets, incremental, cascaded, reverse, space-efficient, FlashCopy Manager); space-efficient virtual disks; Easy Tier; virtual disk mirroring; VAAI.

Page 48: IBM Storage Virtualization - TERENA


Virtual Storage Platform Management

Page 49: IBM Storage Virtualization - TERENA

(Diagram: a Virtual Server Infrastructure managed by VMControl, and a Virtual Storage Infrastructure built on the Storage Hypervisor (SAN Volume Controller), managed by Tivoli Storage Productivity Center and IBM Systems Director.)

• Virtual Storage Platform – SAN Volume Controller
– Common device driver: iSCSI or FC host attach
– Common capabilities: I/O caching and cross-site cache coherency; thin provisioning; Easy Tier automated tiering to solid-state disks; snapshot (FlashCopy); mirroring (synchronous and asynchronous)
– Data mobility: transparent data migration among arrays and across tiers; snapshot and mirroring across arrays and tiers

• Virtual Storage Platform Management – Tivoli Storage Productivity Center
– Manageability: integrated SAN-wide management with Tivoli Storage Productivity Center; integrated IBM server and storage management (Systems Director Storage Control)
– Replication: application-integrated FlashCopy; DR automation
– High availability: stretch-cluster HA

Page 50: IBM Storage Virtualization - TERENA


Tivoli Storage Productivity Center – TPC

What you need to manage:
– Servers: ESX servers; apps, DBs, file systems; volume managers; host bus adapters; virtual HBAs; multi-path drivers
– Storage networks: switches & directors; virtual devices
– Storage: multi-vendor storage; storage array provisioning; virtualization / volume mapping; block + NAS, VMFS; tape libraries
– Replication: FlashCopy, Metro Mirror, Metro Global Mirror

TPC can help. Start here – TPC 5.1:
– Single management console; heterogeneous storage; health monitoring; capacity management; provisioning; fabric management; FlashCopy support
– Storage system performance management; SAN fabric performance management; trend analysis
– DR & business continuity; applications & storage hypervisor (ESX, VIO); HyperSwap management

…and mature – IBM SmartCloud Virtual Storage Center adds all this and more:
– Advanced SAN planning and provisioning based on best practices
– Proactive configuration change management
– Performance optimization and tiering optimization
– Complete SAN fabric performance management
– Storage virtualization
– Application-aware FlashCopy management

Page 51: IBM Storage Virtualization - TERENA


TPC 5.1 Highlights


Fully integrated & Web-based GUI

– Based on Storwize/XIV success

TCR/Cognos-based Reporting & Analytics

Enhanced management for virtual environments

Integrated Installer

Simplified packaging

Page 52: IBM Storage Virtualization - TERENA


Enhanced management for virtual environments

Helps avoid double counting storage capacity in TPC reporting on VMware

Associates storage not only with individual VMs and Hypervisors but also with the clusters

VMotion awareness


(Diagram: Tivoli Storage Productivity Center monitoring virtual machines clustered across two hypervisor hosts that share SAN storage.)

Page 53: IBM Storage Virtualization - TERENA


Enhanced management for virtual environments: web-based GUI showing hypervisor-related storage (screenshot).

Page 54: IBM Storage Virtualization - TERENA


Integrated Infrastructure System – "Cloud Ready"

Page 55: IBM Storage Virtualization - TERENA


IBM PureSystems

55

Infrastructure & Cloud:
• Integrated infrastructure system
• Factory integration of compute, storage, networking, and management
• Broad support for x86 and POWER environments
• Cloud-ready infrastructure

Application & Cloud:
• Integrated application platform
• Factory integration of infrastructure + middleware (DB2, WebSphere)
• Application ready (Power or x86, with workload deployment capability)
• Cloud-ready application platform

Page 56: IBM Storage Virtualization - TERENA


Flexible and open choice in a fully integrated system

PureFlex System is integrated by design (Expert Integrated Systems): tightly integrated compute, storage, networking, software, management, and security, with virtualization underneath and applications and tools on top.

Page 57: IBM Storage Virtualization - TERENA


IBM PureSystems – What's Inside? An evolution in design, a revolution in experience (Expert Integrated Systems)

IBM Flex System:
– Chassis: 14 half-wide bays for nodes
– Compute nodes: Power 2S/4S*, x86 2S/4S
– Storage node: V7000; expansion inside or outside the chassis
– Management appliance
– Networking: 10/40 GbE, FCoE, InfiniBand; 8/16 Gb FC
– Expansion: PCIe, storage

IBM PureFlex System: pre-configured, pre-integrated infrastructure systems with compute, storage, networking, physical and virtual management, and entry cloud management, with integrated expertise.

IBM PureApplication System: pre-configured, pre-integrated platform systems with middleware, designed for transactional web applications and enabled for cloud, with integrated expertise.

Page 58: IBM Storage Virtualization - TERENA


Summary

Page 59: IBM Storage Virtualization - TERENA


Why consider Storage Virtualization?

1. A missing storage "hypervisor" for virtualized servers
2. Too much effort for physical migrations
3. Compatibility chaos (multipathing, HBA firmware…)
4. Need for transparent campus failover, like Unix LVM
5. Need for automatic hot-spot elimination ("Easy Tier")
6. Unhappy with storage performance

With SVC:
– Simplified administration, including copy services: one and the same process
– Online re-planning flexibility is greatly enhanced: "cloud ready"
– Storage effectiveness (ongoing optimization) can be maintained over time
– Move applications up one tier as required, or down one tier when stale
– Move from performance design "in hardware" to QoS policy management

Page 60: IBM Storage Virtualization - TERENA


Internet Resources

– Information Center: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
– SVC Support Matrix: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
– SVC / Storwize V7000 Documentation: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp

Page 61: IBM Storage Virtualization - TERENA


Thank you!

