Spectrum Scale / ESS Solution & Architecture
© 2016 IBM Corporation
Ash Mate, WW Senior Solutions Architect, [email protected]
IBM Spectrum Scale Backup & Archive
Solution & Architecture
Agenda
• What is IBM Spectrum Scale?
• Spectrum Scale Deployment Models
• Backup & Archive of Spectrum Scale
  – Solution Architecture
  – Benefits
  – Case Studies
• Backup & Archive to Spectrum Scale
  – Solution Architecture
  – Benefits
  – Case Studies
What is IBM Spectrum Scale?
NFS
Map Reduce Connector
OpenStack: Cinder, Swift, Glance
Powered by
POSIX
Spectrum Scale
Shared Nothing Cluster
Client workstations
Users and applications
Compute Farm
Single name space
Site A
Site B
Site C
Off Premise
Disk
Flash
Tape
Spectrum Scale is a scalable parallel file system that helps customers manage and optimize large amounts of data across multiple storage environments and data tiers
What is IBM Spectrum Scale? (contd)
Spectrum Scale Deployment Models
Single Cluster/Single Filesystem*
*or can be multiple filesystems if desired
ESS Storage
TCP/IP or InfiniBand Network
Other optional NSD servers
Application nodes
Storage
cNFS
cNFS
NFS, CIFS, Swift, S3 exports
NSD servers inside ESS
Heterogeneous storage
Storage
Scalable capacity & performance
Optimize heterogeneous resources
GPFS Protocol
All clients can access all data in parallel
All Network Shared Disk (NSD) servers export NSDs to all the clients in active-active mode
GPFS stripes files across NSD servers and NSDs in units of file-system block-size
NSD client communicates with all the servers
File-system load spread evenly across all the servers and NSDs. No Hot Spots
No single-server bottleneck
Can share access to data with NFS, SMB, S3 and Swift
Access via file & object protocols for clients without the GPFS client
Easy to scale while keeping the architecture balanced
Client does real-time parallel I/O to all the NSD servers and storage volumes/NSDs
Simple Cluster Model Overview
Backup & Archive of Spectrum Scale
• Spectrum Scale as the source
• Use Case: Back up or archive data stored on Spectrum Scale to other backup and archival products.
  – Built-in mirroring and snapshot capabilities (mmbackup)
  – Spectrum Protect (TSM)
  – Spectrum Archive (LTFS EE)
  – Enterprise Content Manager (ECM)
  – Third-party backup software
High-Level Architecture
Spectrum Scale NSD server
Spectrum Scale NSD clients
Spectrum Protect for Space Management
Spectrum Protect Backup Archive Client
Supported platforms: AIX™, xLinux, pLinux, zLinux, Windows®
Supported platforms: AIX™, xLinux, pLinux, zLinux, HP, Sol, Windows®
Supported platforms: AIX™, xLinux, zLinux (4Q15)
Spectrum Archive Enterprise Edition
Supported platform: xLinux
Spectrum Protect Server
Supported storage medium: LTFS-compatible tape library
Function:• Backup, Restore• Migration, Recall• SOBAR
Function:• Migration, Recall
Customer applications can run on Spectrum Scale NSD client or server nodes.
Supported platforms:
Customer application on NSD client: AIX™, xLinux, pLinux, zLinux, Windows®
Customer application on NSD server:
  Spectrum Protect: AIX™, xLinux, zLinux (4Q15)
  Spectrum Archive: xLinux
Supported storage medium: Disk, Optical, Tape Library, Object Storage
Concepts• Snapshot• mmbackup• Active Archiving/DMAPI
– TSM/HSM– LTFS
• Scale Out Backup And Restore (SOBAR)
• Capture file system content at a point in time
• Snapshots are read-only
• Uses Copy-on-Write - does not consume space unless data changes
• Intermediate online backup capability
  – Allows easy recovery from common problems such as accidental deletion of a file, and comparison with older versions of a file.
• Backup or mirror programs can use a snapshot to obtain a consistent copy of the file system
• Policy engine can be restricted to run on a snapshot instead of live/active file system
• Multiple snapshots of a fileset or file system are allowed (up to 256 snapshots)
  – File system level snapshots are called global snapshots
• NOTE: a snapshot is not a backup solution, since it does not make a copy of all data
Snapshot
• Creating a snapshot
  – Can be expensive on a busy system.

Command: mmcrsnapshot Device SnapshotName [-j Fileset]

# mmcrsnapshot d13 g_snap1
Writing dirty data to disk.
Quiescing all file system operations.
Writing dirty data to disk again.
Snapshot g_snap1 created with id 5.

Restore operation:
1) Use copy to restore a file.
2) mmrestorefs to restore a global or independent fileset-level snapshot.
  – NOTE: mmrestorefs of a global snapshot requires the file system to be unmounted.
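Because snapshots appear under the file system's read-only .snapshots directory (the GPFS default), restoring a single accidentally deleted file is just a copy. The sketch below simulates that layout in a temporary directory so the copy step can be shown outside a cluster; the file system name (d13), snapshot name (g_snap1), and file paths are all illustrative assumptions.

```shell
# Simulate a GPFS mount with a snapshot directory (illustration only;
# on a real cluster FS would be the file system mount point, e.g. /gpfs/d13).
FS=$(mktemp -d)/d13
SNAP=g_snap1
mkdir -p "$FS/.snapshots/$SNAP/projects" "$FS/projects"
echo "old contents" > "$FS/.snapshots/$SNAP/projects/report.txt"

# Restore the accidentally deleted file by copying it out of the
# read-only snapshot back into the live file system.
cp -p "$FS/.snapshots/$SNAP/projects/report.txt" "$FS/projects/report.txt"
cat "$FS/projects/report.txt"    # prints: old contents
```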
Snapshot (contd)
backup (mmbackup)
Function
• Massively parallel file system backup processing
• Spectrum Scale mmbackup creates local shadow of Spectrum Protect DB and uses policy engine to identify files for backup
• The Spectrum Protect backup archive client is used under the hood to back up files to the Spectrum Protect server
• Spectrum Protect restore (CLI or GUI) can be used to restore files
Challenges
• Usage of ACL & EA might lead to increased backup traffic
• If HSM is used, inline backup might lead to unexpected tape mounts
• Administrative operations on Spectrum Protect Server might not be observed from mmbackup (e.g. file space deletion)
• Limited handling of include rules using management class binding
Recommendations
• If HSM is used, use the option MIGREQUIRESBACKUP=YES
• Prevent rename of directories close to file system root
• Prevent ACL & EA changes if possible
restore (GUI or CLI)
Spectrum Scale Cluster | Spectrum Protect Server
Spectrum Protect backup archive client, typically installed on several cluster nodes
Spectrum Scale mmbackup tool coordinates processing
Backup Of Large Spectrum Scale File Systems
Backup cycle:
• After start, mmbackup evaluates the cluster environment and verifies product versions and settings
• Optionally, the Spectrum Protect server is queried for existing backup information; otherwise the existing shadow DB is used for processing
• The policy engine is used to generate a list of files currently eligible for backup activities
• The existing shadow DB and the scan result are compared to calculate file lists for the required backup activities
• Expire all files deleted from the file system since the last backup run
• Incrementally back up all files with changed metadata since the last backup run
• Selectively back up all files with changed data since the last backup run
• While backup activities are ongoing, update the shadow DB inline
• Analyse the backup results from all used cluster nodes and finish the backup cycle by selectively backing up the current shadow DB
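The expire/backup decision in the cycle above boils down to a set difference between the shadow DB and the new policy scan. Below is a minimal, self-contained sketch of that comparison step using sorted lists and comm — not IBM's implementation, and the file names are made up; the real mmbackup additionally compares metadata and data change times to choose between incremental and selective backup.

```shell
WORK=$(mktemp -d)
# State recorded at the last backup run (shadow DB) vs. current policy scan
printf '%s\n' /fs/a /fs/b /fs/c | sort > "$WORK/shadow.db"
printf '%s\n' /fs/b /fs/c /fs/d | sort > "$WORK/scan.list"

# In the shadow DB only -> deleted since last run -> expire on the server
comm -23 "$WORK/shadow.db" "$WORK/scan.list" > "$WORK/expire.list"
# In the scan only -> new since last run -> back up
comm -13 "$WORK/shadow.db" "$WORK/scan.list" > "$WORK/backup.list"

cat "$WORK/expire.list"    # prints: /fs/a
cat "$WORK/backup.list"    # prints: /fs/d
```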
initiate mmbackup
Evaluate environment
Optional: query Spectrum Protect
server
Perform file system scan
Calculate backup activities
Expire deleted files
Backup new and changed files
Analyse result and finish backup run
Backing up a Spectrum Scale File System
• IBM Spectrum Scale (GPFS) mmbackup is a utility that combines the powerful GPFS policy engine with Spectrum Protect (TSM) backup clients to back up a GPFS file system to a TSM server.
  – Spectrum Protect backup clients are used as data movers
  – The Spectrum Protect server serves as back-end storage
  – Policy engine for candidate selection and workload distribution
  – Fully integrated with Spectrum Protect functionality; restore is done by the Spectrum Protect backup client CLI
  – Maintains its own database – no need to communicate with the Spectrum Protect server (which is an expensive operation)
  – Multi-threaded, and multiple nodes can participate in a backup job
• Progressive incremental or "incremental forever"
• Expiration: Free up space on the Spectrum Protect server for deleted files
Command
• Backup operation:
  - Back up from an active file system or a snapshot (in the future, from a fileset)

mmbackup Device [-t {full|incremental}]
         [-N {Node[,Node...] | NodeFile | NodeClass}]
         [-g GlobalWorkDirectory] [-s LocalWorkDirectory]
         [-S SnapshotName] [-f] [-q] [-v] [-d]
         [-a IscanThreads] [-n DirThreadLevel]
         [-m ExecThreads | [[--expire-threads ExpireThreads] [--backup-threads BackupThreads]]]
         [-B MaxFiles | [[--max-backup-count MaxBackupCount] [--max-expire-count MaxExpireCount]]]
         [--max-backup-size MaxBackupSize] [--quote | --noquote]
         [--rebuild] [--tsm-servers TSMServer[,TSMServer...]]
         [--tsm-errorlog TSMErrorLogFile] [-L n] [-P PolicyFile]

• Restore operation:
  - There is no mmbackup restore command; restores are done with the Spectrum Protect command (dsmc restore)
  - Can restore a file/directory or the whole file system
GPFS / TSM Backup
GPFS / TSM Backup
GPFS
Spectrum Protect SERVER
LAN
Backup: data goes from GPFS to the TSM server
Restore: data goes from the TSM server to GPFS
Spectrum Scale Cluster
Users andapplications
Tape Library
disks
• Data goes to disks in the Spectrum Protect server to reduce the backup window time
• Data is then moved to tapes after the backup has completed
• Restore is done directly from tapes
• The admin executes commands on Spectrum Protect backup clients to restore data
• Backup is done during off-peak hours to minimize the impact
data
mmbackup
Restore command from TSM B/A clients
mmbackup running Incremental job
After backup completes
Backup and Restore overview
Tape Tier and Active Archiving
• IBM Spectrum Scale supports DMAPI (the Data Management API), which can be used by data management applications like Spectrum Protect & Spectrum Archive to provide a tape tier / active archiving for the Spectrum Scale file system
• Use DMAPI and callbacks with:
• Spectrum Protect (TSM/HSM):
  • Client-server model
  • Requires a separate TSM server
• Spectrum Archive (LTFS EE):
  • Integrated with the Spectrum Scale cluster
  • Does not require a separate server
  • Based on the Linear Tape File System (open source)
  • Allows export and import of tapes
GPFS / TSM-HSM
GPFS / TSM-HSM
GPFS
Spectrum Protect SERVER
LAN
Migration: data goes from GPFS to the TSM server
Recall: data goes from the TSM server to GPFS
Spectrum Scale Cluster
Users and applications
Tape Library
• Migration can be done directly to tape
• Data is recalled from tape
• Multiple TSM/HSM clients can move data to the TSM server
• Recall is triggered by file access
• Migrations and recalls are distributed across, and performed by, the TSM/HSM clients
write
Migration based on storage pool threshold
recalls caused by user accessing files
file operations i.e. read/write
migration due to low online storage
Spectrum Scale + Spectrum Protect
Tape Tier and Active Archiving (contd.)
Operations:
• Migration: Move data to tape
  – Uses GPFS policy to find candidates and migrate them
  – Threshold-based migration
    • Utilizes policy rules (mmchpolicy) and user exits to accomplish monitoring of the space

mmaddcallback CallbackIdentifier --command CommandPathname
    --event Event[,Event...] [--priority Value]
    [--async | --sync [--timeout Seconds] [--onerror Action]]
    [-N {Node[,Node...] | NodeFile | NodeClass}] [--parms ParameterString ...]

--event: lowDiskSpace, noDiskSpace

  – Examples are located at: /usr/lpp/mmfs/samples/ilm
  – Process:
    • Set up LTFS EE or TSM
    • Create a policy with a threshold; use the mmchpolicy command to install the new policy
    • Set up the callback
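The process above can be sketched as a pair of commands. The policy file path, script path, and callback name below are assumptions for illustration; these commands only run on a Spectrum Scale cluster.

```shell
# Install a policy containing a threshold-based MIGRATE rule
mmchpolicy d13 /var/mmfs/policies/migrate.pol

# Register a callback that fires when the file system runs low on space,
# passing the event and file system names to the migration script
mmaddcallback MIGCALLBACK --command /usr/local/bin/start_migration.sh \
    --event lowDiskSpace,noDiskSpace --async --parms "%eventName %fsName"
```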
Tape Tier and Active Archiving (contd.)
• Recall: Move data from tape to the file system
  – On-demand recall via access to a file
  – Recall of a file on command
  – Policy-based recall
• Reconciliation
  – Recover space on tapes for deleted files
LTFS EE with separate GPFS nodes; LTFS EE connects to tape via LTFS LE+
The tape library can have multiple pools (3 in the example above)
Multiple nodes can connect to the tape library – scalability for performance
No external offline storage server
Spectrum Scale + Spectrum Archive
Scale Out Backup and Restore (SOBAR) is a specialized mechanism for data protection against disaster, available only for IBM Spectrum Scale™ file systems that are managed by Spectrum Protect – Tivoli® Storage Manager (TSM) Hierarchical Storage Management (HSM).

Backup process:
• Back up the configuration data
• Pre-migrate files with HSM so there is a copy of the data in TSM

Restore process:
• In case of disaster, recreate the cluster and file system (using mmrestoreconfig)
• Use the image restore process to restore the inode space (directory structure and file stubs)
• Then use the normal HSM process to recall data (data is recalled on demand)
Scale Out Backup and Restore (SOBAR)
migration
Function
• Backup
  • Spectrum Protect HSM is used to premigrate files
  • The SOBAR toolset is used to generate a file system metadata image
  • The Spectrum Protect backup archive client is used to back up the image files
• Restore
  • The Spectrum Protect backup archive client is used to restore the image files
  • The SOBAR toolset is used to recreate the file system structure
  • Spectrum Protect HSM is used to pre-fetch files and allow direct access by applying transparent recall
Challenges
• All files to be included have to be premigrated or migrated
• Cluster configuration has to be backed up separately
Recommendations
• Frequently applied policy rules should ensure that newly created files are premigrated immediately
• Integrate SOBAR backup into your business process to prevent file changes shortly before image capturing
• Prepare pre-fetching importance list for recovery processing
recall (transparent and manual)
Spectrum Scale Cluster | Spectrum Protect Server
Spectrum Protect for Space Management client AND backup archive client, typically installed on several cluster nodes
Spectrum Scale SOBAR toolset used for processing
image backup
image restore
Disaster Recovery using SOBAR
Backup of configuration information
mmbackupconfig Device -o OutputFile
Restore configuration information
mmrestoreconfig Device -i InputFile [-I {yes | test}] [-Q {yes | no | only}] [-W NewDeviceName]
or
mmrestoreconfig Device -i InputFile --image-restore [-I {yes | test}] [-W NewDeviceName]
or
mmrestoreconfig Device -i InputFile -F QueryResultFile
or
mmrestoreconfig Device -i InputFile -I continue
SOBAR Commands
SOBAR Commands (contd)
Backup of the metadata space (inode space)
mmimgbackup Device [-g GlobalWorkDirectory] [-L n] [-N {Node[,Node...] | NodeFile | NodeClass}] [-S SnapshotName] [--image ImageSetName] [--notsm | --tsm] [--tsm-server servername] [POLICY-OPTIONS]
Restore filesystem metadata space.
mmimgrestore Device ImagePath [-g GlobalWorkDirectory] [-L n] [-N {Node[,Node...] | NodeFile | NodeClass}] [--image ImageSetName] [POLICY-OPTIONS]
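Putting the SOBAR commands together, a backup/restore cycle might look like the following sketch. The device name, image name, TSM server name, and paths are assumptions for illustration; these commands only run on a Spectrum Scale cluster with Spectrum Protect HSM configured.

```shell
# Backup: save the file system configuration, then the metadata image
mmbackupconfig d13 -o /backup/d13.config
mmimgbackup d13 --image d13_image --tsm --tsm-server tsm1

# Restore after a disaster: recreate the file system from the saved
# configuration, restore the inode space, then let HSM recall data on demand
mmrestoreconfig d13 -i /backup/d13.config --image-restore
mmimgrestore d13 /backup/images --image d13_image
```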
IBM Spectrum Scale and Spectrum Protect together
• Consolidate primary and backup storage for scalable performance
• Regional Hospital Network
  – High availability: No single point of failure
  – 4x more capacity and lower storage costs, compared to Data Domain
  – Faster backups and restores: IBM ESS, 40Gb/s network
  – Secure: Encryption for primary and backup data
Consolidate Enterprise Backup
• Global Large Enterprise
  – High availability: Fast failover
  – Reduced backup infrastructure costs by consolidating over 9 PB of data
  – Easier for backup administrators to manage storage
  – Supports mixed workloads: Virtual servers, SAP and other business data
Client Examples
Backup & Archive to Spectrum Scale
• Spectrum Scale as the destination
• Use Case: Spectrum Protect uses Spectrum Scale / ESS for storing data being backed up or archived
Meet Bob, IT Manager
• How do I store more data on a flat budget?
• How do I win the backup window race every night?
• How do I stop buying expensive storage appliances to meet the data growth?
• How can I make my Backup administrators more efficient?
Backup and Disaster Recovery was named the biggest challenge, even though it isn't central to top-line revenue generation or day-to-day customer satisfaction
Storage Environment Challenges (% rating 4 or 5 on a 1–5 scale):
• Backup and disaster recovery
• Scalability
• Tiering data
• Provisioning / flexibility
• Budget constraints
• Leveraging underutilized storage
• Meeting SLAs
• Managing heterogeneous OSes
• Software tools to manage storage
• De-duplication
Back-office departments and IT people are never applauded for ‘keeping the lights on’, but in Storage that’s just where a lot of time still goes
Scalability and tiering data point to the importance of having the data available and making the best use of it for the business.
Businesses operations and applications are dynamic. Despite years of progress in storage efficiency and non-stop data growth, Leveraging Underutilized Storage remains a top concern.
Meeting SLAs ranks surprisingly low, pushed down by other urgent issues.
Source: 2014 STG NDB Study, n = 1,206
Storage Operational Challenges
Market Facts
• IBM is a leader in the Enterprise Backup Software and Integrated Appliances Magic Quadrant (Published: 06/15/2015; Source: Gartner Magic Quadrant for Enterprise Backup Software and Integrated Appliances)
• By 2016, less than 30% of all big data is expected to be backed up (Published: 06/16/2014; Source: Gartner Magic Quadrant for Enterprise Backup Software and Integrated Appliances)
• By 2017, 70% of organizations are expected to have replaced their remote-office tape backup with a disk-based backup solution that incorporates replication, up from 30% today (Published: 06/16/2014; Source: Gartner)
• By 2018, the number of organizations abandoning tape for backup is expected to double, and archiving to tape should increase by 25% (Published: 06/16/2014; Source: Gartner Magic Quadrant for Enterprise Backup Software and Integrated Appliances)
Be selective about what you back up
Archive to low-cost storage
Backup to fast storage
IBM Spectrum Protect + IBM Spectrum Scale Solution
• Easier to grow as your data grows
• Lower cost of backup infrastructure
• Easier to use than the competition
200 TB 1 PB 10 PB
Virtually unlimited scaling
Add turnkey building blocks
High performance storage with parallel data access
Add storage with no impact to applications or users
Spectrum ProtectServers
Shared Spectrum ScaleFile system
We focused our innovation efforts on solving your problems
• Simplify
  – Manageability at scale with a common graphical user interface
– Storage provisioning is transparent to Spectrum Protect
• Reduce costs
  – Lower infrastructure costs to achieve backup window and recovery objectives
– IBM Spectrum Protect’s built-in enterprise-class data deduplication at no additional charge
– Lower admin efforts with simplified provisioning of storage for Spectrum Protect
– Higher storage utilization by leveraging a shared file system
– Build your infrastructure your way using low cost commodity storage
– Real time recovery for a longer retention period per dollar
• Improved availability through our high-performance shared file system
  – Scalable performance lets you finish your backups within their SLA windows
– Flexible restore helps to get business back in business during an event
– Automated failover for backup and restore options
– Highly available storage that can span datacenters
Spectrum Protect on Spectrum Scale - Overview
• Multiple Spectrum Protect (TSM) instances store DB and storage pools in a Spectrum Scale file system (GPFS)
  – Spectrum Scale provides a global name space for all Spectrum Protect instances
  – Instances share all file system resources
• Spectrum Protect instances run on cluster nodes accessing the file system and disk directly
• The Spectrum Scale file system balances the workload and capacity for all TSM instances on disk
• Provides standardized, scalable and easy to use storage infrastructure for the multiple instances
Spectrum Protect Clients
Spectrum Scale Storage
Storage Network
TCP/IP Network
TSM TSM TSM TSM
Spectrum Scale storage for Spectrum Protect
Spectrum Scale file system
Our solution offers fast, seamless and virtually limitless scaling
• Without Spectrum Scale
  – Each backup server has its own isolated file system
  – Each Protect server and its dedicated LUN are tightly coupled
  – Storage islands appear with underutilized capacity
  – Capacity and performance management is challenging
  – Scaling and performance may impact apps and users
• With Spectrum Scale
  – Scale capacity seamlessly and transparently to apps and users under the shared file system global namespace
  – Build your infrastructure using commodity storage, i.e. no vendor lock-in
  – Central administration of all storage
Backup clients
Spectrum Scale shared file systemSpectrum Protectinstances
Storage
IBM Spectrum Protect on IBM Spectrum Scale Architecture
TSM Server
GPFS Storage
Application
TSM clientFiles
Application
TSM clientDB
Application
TSM clientMail
Application
TSM clientERP
Tape
GPFS Server
TSM Server
GPFS Server
GPFS file systems
All Spectrum Protect (TSM) servers store DB and storage pools in Spectrum Scale (GPFS) file systems
– The file system for databases provides low latency
– The file system for storage pools provides high sequential performance
Spectrum Scale can do both
Running multiple TSM instances on one GPFS cluster provides a standardized, scalable and easy-to-use storage infrastructure for the TSM backup environment
The GPFS cluster provides a single file system and on-demand resource sharing for all TSM instances
DB STG
Value proposition for Spectrum Protect on Spectrum Scale
• Optimized storage utilization – all Spectrum Protect servers use the same storage
• Operational efficiency with one storage system for all Spectrum Protect servers
• Scalable in multiple dimensions:
  – Capacity: concurrently add more storage to the Spectrum Scale file system
  – Performance: concurrently add more Spectrum Scale / Spectrum Protect servers or faster storage
• High performance with intelligent striping across all disk devices
• High availability in clustered file system
• Disaster protection with TSM or GPFS replication or GPFS native RAID (GNR)
• Cost efficient by utilizing standard infrastructure components
Elastic Storage Server overview
• Spectrum Scale appliance (pre-packaged)– Graphical User Interface– 3 Years Maintenance and Support
• Based on GPFS native RAID (declustered)– Predictable performance– Low impact during rebuild– 2 and 3 fault tolerance configurable– End-to-end checksums
• Provides GPFS file system– Applications are configured on extra nodes
• Different models– GS: small and fast (2 – 125 TB)– GL: large and scaling ( 150 – 1530 TB)
Elastic Storage Server(NSD Server)
File server Database BackupArchive Apps
Global name space
Native RAID SW
JBODs
JBODs
LAN / IB
Protocol / Application nodes (GPFS NSD clients)
IBM Elastic Storage Server provides:
• Scalable, extremely high performance on a low-cost storage platform
• Declustered GPFS Native RAID with options for 3- or 4-way mirroring, or double or triple parity RAID
• Data and redundancy information distributed across all disks in the JBOD
• Extremely fast rebuild times

Benefits of integrating with Spectrum Protect:
• Simplified storage configuration
• Ethernet storage attachment (10GbE or 40GbE)
• Global namespace sharable by multiple Spectrum Protect servers
• Can be shared by more than one Spectrum Protect server
Overview of Spectrum Protect with ESS
Spectrum Protect server performance on ESS – outside view
More information about the tests on Developer Works
“IBM's TSM backup product goes super-fast when backing up to the Elastic Storage parallel file system….”
“Of course, you need fast network links as well and backup/archive software that can use the links and back-end storage, as TSM can. If you have these then your backup and archive, and subsequent restores, could move data around like a dragster roaring down a speed strip.”
The Register http://www.theregister.co.uk/2014/12/22/dragster_backup_with_parallel_target_system/
TSM Blueprint: TSM with Elastic Storage Server - Available!
• Support for IBM Elastic Storage Server– Configuration instructions for large TSM server with Elastic Storage Server– Configuration script support for automating Spectrum Protect server setup with ESS– Initially published for Linux x86_64
• See https://ibm.biz/TivoliStorageManagerBlueprints
A Perfect Match:Spectrum Protect and Elastic Storage
• Spectrum Scale and Spectrum Protect together provide high performance and low latency characteristics
• Easily scale the system performance- and capacity-wise by adding more ESS to the cluster
http://escc.mainz.de.ibm.com | [email protected]
Optional:
In environments where very large numbers of very small objects are stored, consider placing the TSM DB and logs on an IBM Flash System.
Consider adding physical tape for archiving or offsite vaulting.
Spectrum Protect Clients
Spectrum Protect Server(s)
TapeSequential access
Production Data
Optional
1 or moreElastic
Storage Servers
Spectrum Scale File System(s)
Key values for Spectrum Protect on ESS
• Superior Performance
  – No additional overhead with the TSM server running on a GPFS client
  – ESS performance scales almost linearly
  – No impact during disk rebuild with GPFS Native RAID (GNR) on ESS
• Lower Cost– No extra storage required for TSM DB– Use of standard infrastructure components
• Excellent Data Protection – With GNR, TSM backup and node replication– Superior data protection with native RAID options
• Flexible Scalability– Multiple TSM servers can share a single file system and storage– Add more ESS building blocks as capacity and performance demands grow
• Ease of use with graphical user interface and TSM operation center– TSM operations center provides advanced monitoring and reporting
A Smarter Storage Approach
For more information:Website: http://www-03.ibm.com/systems/storage/spectrum/index.html
Acknowledgements:
Dominic Muller-Wicke: ([email protected])
Nils Haustein: ([email protected])
John Langlois: ([email protected])
Thank you!
The IBM Integrated Storage Portfolio