Filer Training
Simple
Fast
Reliable
Agenda
FAS Storage Overview
FAS Series Specification
LVM & Flexibly Extend
NetApp RAID Structure
WAFL & NVRAM
NetApp Function for License
FilerView
iSCSI/FCP | CLI
Questions
FAS Storage Overview
Fabric Attached Storage (FAS) Systems
NetApp FAS Storage Systems (maximum raw capacity, stand-alone and HA models):
FAS2020 / FAS2020A: 68 TB (stand-alone entry)
FAS2050 / FAS2050A: 104 TB
FAS2040 / FAS2040A: 136 TB
FAS3140 / FAS3140A: 420 TB
FAS3160 / FAS3160A: 672 TB
FAS3170 / FAS3170A: 840 TB
FAS6040 / FAS6040c: 840 TB (highest availability)
FAS6080 / FAS6080c: 1176 TB (highest availability)
Data ONTAP + One Family of Management Software
Midsize Enterprise
Remote Office/Branch Office
Enterprise Data Center
Unified FC-SAN, iSCSI, NAS Storage
Scalable Performance and Capacity
Data ONTAP® single interface across all
NetApp Unified Storage Architecture
UNIX®
Servers
Linux®
Servers
Exchange
CRM
ERP
Windows®
Servers
FC SAN
Windows
Servers
CIFS, NFS
LAN
Primary Data Center
NetApp
FAS Series
SQL Server
iSCSI LAN
Remote DR Site
LAN
NetApp
FAS Series
Home Dirs
CIFS
Exchange
Remote Branch / Office
NetApp
FAS Series
LAN
WAN
NetApp
VTL
Backup Server
Simple - Reliable - Fast
A simple, streamlined storage architecture greatly reduces management complexity and cost
Reliable and flexible, ensuring business continuity
Fast, with excellent operational efficiency that boosts enterprise productivity
Single Architecture means Seamless Scaling
* Disk shelf conversion
FAS960
FAS3100 Series
FAS6000 Series
FAS3000 Series
FAS2000 Series
Simple “head swap”
Zero Data Migration
Investment protection
Data ONTAP® single interface across all
FAS2000 Series Specification
FAS2000 Series Overview

FAS2020 / FAS2040 (New!) / FAS2050:
Maximum disk drives: 68 / 136 / 104
Maximum raw capacity: 68TB / 136TB / 104TB
System configuration: 1 or 2 controllers in a single 2U chassis / 1 or 2 controllers in a single 2U chassis / 1 or 2 controllers in a single 4U chassis
Memory (NVMEM): 1GB (128MB) / 4GB (512MB) / 2GB (256MB)
Max 4Gb/sec FC ports: 2 / 2 / 2
Max 1GbE ports: 2 / 4 / 2
Onboard SAS ports: — / 2 / (options)
10GbE card support: — / — / —
FCoE adapter support: — / — / —
Multipath HA expansion: (options) / (options) / (options)
Data ONTAP support: 7.2.2L1 and later, 7.3 and later, no 8.0 support / 7.3.2 and later, 8.0 GA (target) / 7.2.2L1 and later, 7.3 and later, no 8.0 support
* All specifications are for single configurations
Scales Easily and Affordably
FAS2020: 68 drives (SAS or SATA internal; FC or SATA expansion)
FAS2050: 104 drives (SAS or SATA internal; FC or SATA expansion)
FAS2040: 136 drives (1) (SAS or SATA internal; FC, SATA, or SAS expansion)
1 Max spindles requires a mix of (2x) DS14 and (4x) DS4243 (available for FAS2040 Dec. ’09)
More
Capacity
FAS2050 Front and Rear View
I/O Expansion Slots
Onboard Fibre Ports: mixed mode, Initiator or Target (port type can be changed while offline)
Console Port / BMC Port
Power Supply
FAS2050: Processor Controller Module
Console Port
4Gb FC ports (SFP included with FC SAN or expansion shelf only)
BMC RM port
1GbE Cu ports
PCIe Expansion Slot (FC HBA Shown)
Handle
Disk Shelf Specification
Shelf for FC Front and Rear View
DS 14MK4 (ESH4)
14 Bays (3U)
FCAL 4 Gbit speed
FC 15000rpm 300GB, 450GB (Max)
Shelf for SATA Front and Rear View
DS 14MK2 AT (AT-FCX)
14 Bays (3U)
FCAL 2 Gbit speed
SATA II 7200 rpm 500GB, 750GB, 1TB (Max)
DSxxxx – disk shelf
DS4xxx – 4U
DSx24x – 24 drive bays
DSxxx3 – 3 Gbps SAS backbone
Shelf for SAS/SATA Front View
DS4243 (IOM3)
SAS 15000 rpm 300GB, 450GB (Max); SATA II 7200 rpm 500GB, 1TB (Max)
• Only the FAS2040 has an onboard SAS port (single)
• FAS2050: X2066A dual-port SAS HBA; other models: X2065A quad-port SAS HBA (Data ONTAP 7.3.2 or later)
LVM
Logical Volume
NetApp Flexible Volumes
Disks
Volumes
File Systems& LUNs
Aggregates
RAID Groups
Concept: right-sized capacity (RAW vs USE)
A 300 GB disk right-sizes to 265 GB of usable capacity.
In the disk group, two disks serve as Parity and two as Spare; only the data disks count toward capacity.
Usable total capacity with 8 data disks:
265 GB x 8 = 2120 GB
2120 GB x 0.9 - 30 GB = 1878 GB
(-10% for WAFL use; -30 GB for the root vol0; an optional extra -5% for SyncMirror)
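The arithmetic above can be checked in a few lines; a minimal sketch assuming the slide's figures (265 GB right-sized disks, a 10% WAFL reserve, and a 30 GB root vol0):

```python
# Usable-capacity sketch for eight right-sized 265 GB data disks.
data_disks = 8
right_sized_gb = 265                      # a 300 GB disk right-sizes to ~265 GB

raw_gb = data_disks * right_sized_gb      # 2120 GB
usable_gb = raw_gb * 0.9 - 30             # -10% WAFL reserve, -30 GB root vol0
print(f"raw {raw_gb} GB -> usable {usable_gb:.0f} GB")

# An optional SyncMirror license would take a further 5%.
mirrored_gb = usable_gb * 0.95
```

Running it reproduces the slide's 2120 GB raw and 1878 GB usable.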
Usable Aggregate Capacity
RG0: SAS 300GB x 16 (12 data + parity + double parity + 2 spare)
RG1: FC 144GB x 8 (5 data + parity + double parity + spare)
A 10TB aggregate provisioned with:
/vol/vol1 500GB
/vol/vol2 1500GB
/vol/vol3/lun.file 2TB
/vol/vol4/lun.file 5TB
Served over iSCSI, FCP, CIFS, and NFS.
Aggregate (aggr0): online expand
FlexVol (vol0 for root): online expand & shrink
lun.file: online expand; shrink only for Windows 2008
FlexVols (vol1, vol3, vol4) inside an aggregate:
Aggr1 – RG1: SAS / FC 300GB, 15000 RPM (10 data + double parity + parity)
Aggr0 – RG0: SATA 1TB, 7200 RPM (10 data + double parity + parity)
Aggregate limits: a RAID-DP raid group defaults to a 16-disk limit (FC max 28), with an aggregate maximum of 16 TB (parity disks included); a RAID-4 raid group is limited to 8 disks.
Concept: mixed drive types? SATA and SAS/FC disks cannot share the same aggregate.
Data ONTAP Storage Lexicon
Raid group: a RAID 4 set with at least two disks (data & parity), or a RAID-DP set with at least three disks (data and 2 parity). Either type of raid group can grow by additional disks instantly while online.
Aggregate: a collection of one or more raid groups, used as a container to provision one or more FlexVols.
FlexVol (flexible volume): a formatted filesystem allocated as a logical subset of the available space (4KB blocks) within an aggregate.
Flexibly Extend
Online dynamic file-system growth
Capacity can be grown (or shrunk) by as little as a few MB or as much as several TB at a time. The new capacity is usable immediately, with no waiting, no impact on operation or performance, and no filesystem rebuild. For Unix, mount point settings are unchanged; for Windows, mapped network drive settings are unchanged.
Online dynamic inode-limit growth
With the filesystem capacity unchanged, the inode count (the maximum number of files) can be raised at any time. This avoids hitting a volume's file-count limit and being unable to store new files even though free space remains. System operation is completely unaffected.
Online dynamic per-directory file-count growth
Because too many files in one directory hurts performance, the Filer has a MaxDirSize option for each volume. The default limit is 1% of system memory; a MaxDirSize of 10240KB allows a directory to hold about 300,000 files.
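The limits above map onto a handful of 7-mode commands; a hedged sketch (the volume name vol1 and the values are placeholders, not from the slides):

```
Filer> vol size vol1 +10g                  # grow the volume online by 10 GB
Filer> maxfiles vol1 2000000               # raise the volume's inode (file-count) limit
Filer> vol options vol1 maxdirsize 10240   # per-directory size limit, in KB
```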
NetApp RAID Structure
The History of RAID
RAID: University of California at Berkeley, 1987. Originally "Redundant Array of Inexpensive Disks"; today "Redundant Array of Independent Disks".
RAID goals: greater capacity, higher performance, better reliability, higher availability.
RAID levels:
0: Striping (no protection)
1: Mirroring
2: ECC bit-level checksum
3: Byte-level striping, dedicated parity disk (single user)
4: Block-level striping, dedicated parity disk (multi user)
5: Block-level striping, distributed parity (multi user)
Levels 3, 4, and 5 use the XOR (exclusive OR) algorithm.
RAID 0 – Striping
Data is striped across all disks. Highest performance, but the failure of any one disk loses all data: with no protection, the risk is higher than a single disk.
D1 D2 D3
A1 A2 A3
B1 B2 B3
C1 C2 C3
* RAID 0 – NetApp not supported
RAID 1 – Mirroring
Protects against the failure of any one disk. Requires twice the storage space. Read performance is slightly better than a single disk. The most expensive option.
D1 D2
A1 A1
B1 B1
C1 C1
* RAID 1 – NetApp supported (SyncMirror license); FAS3x00 and FAS6000 only
RAID 4 – Striping + Single Parity Drive
Protects against the failure of any one disk (including the parity disk). The XOR of the data blocks is written to a dedicated parity disk. Stripe unit: block level. Suited to multi-user environments, with good read performance. In the standard design, the parity disk becomes the write-performance bottleneck.
D1 D2 D3 | P
 0  1  0 | 1
 1  1  0 | 0
 1  1  1 | 1
D D D D P
1 0 1 1 1
0 1 1 0 0
1 0 0 0 1
1 1 0 1 1
RAID 4 – adding a disk is easy

XOR (exclusive OR) truth table:
A B | XOR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

Before (new disk not yet added):
D D D D | P
1 0 1 1 | 1
0 1 1 0 | 0
1 0 0 0 | 1
1 1 0 1 | 1

After adding a zero-filled disk, parity is unchanged:
D D D D D | P
1 0 1 1 0 | 1
0 1 1 0 0 | 0
1 0 0 0 0 | 1
1 1 0 1 0 | 1

The new disk only needs to contain all zeros, so a new disk can be added to the RAID group at any time, with no risky array rebuild!
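The XOR properties behind both points can be demonstrated directly; a minimal sketch assuming 4-bit blocks (the values are illustrative):

```python
# Minimal RAID 4 XOR parity sketch.
disks = [0b1011, 0b0110, 0b1000, 0b1101]         # data disks D1..D4

# Parity disk: XOR of all data disks.
parity = 0
for d in disks:
    parity ^= d

# Recover a failed disk: XOR the survivors with parity.
failed = 2
recovered = parity
for i, d in enumerate(disks):
    if i != failed:
        recovered ^= d
assert recovered == disks[failed]

# Adding a new, zero-filled disk leaves parity unchanged (x ^ 0 == x),
# which is why the RAID group can grow online without a rebuild.
assert parity ^ 0b0000 == parity
print("parity:", bin(parity))
```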
RAID 5 – Striping + Distributed Parity
Protects against the failure of any one disk. Data and the XOR parity values are distributed across all disks. Stripe unit: block level. Suited to multi-user environments. Write performance is poor compared with other RAID levels.
(Figure: data blocks and rotating parity blocks P distributed across disks D1–D4)
Data to be stored: 001110111100100
* RAID 5 – NetApp not supported
RAID 6 – Distributed Double Parity
Performs one more parity calculation than RAID 5. Stripe unit: block level. Suited to multi-user environments. Read/write performance is even worse than RAID 5.
(Figure: data blocks with two rotating parity blocks, P and Q, distributed across disks D1–D4)
Data safety during a RAID rebuild
(Chart: degree of data protection over time. With RAID 3/4/5, protection drops to 0% from the moment a disk fails until the spare disk rebuild completes, often several hours or more; RAID 6 keeps protection throughout. The same applies during RAID expansion.)
RAID 6 greatly improves array safety
Comparison of six 8-disk RAID groups, RAID 5 vs RAID 6 (probability of data loss):
RAID 5: FC-146GB 0.79%, FC-300GB 1.60%, SATA-250GB 12.71%, SATA-500GB 24.04%
RAID 6: FC-146GB 0.00004%, FC-300GB 0.00015%, SATA-250GB 0.00160%, SATA-500GB 0.00639%
RAID Type / Rebuild IOPS Loss / Rebuild Time / Impact of a second disk failure during rebuild:
RAID-5: 50%+ IOPS loss; 15 to 50% slower than RAID-1/0; immediate data loss
RAID-1/0: 20 to 25% IOPS loss; 15 to 50% faster than RAID-5; 14% chance of data loss in an 8-disk group (1/[n-1])
RAID-1: 20 to 25% IOPS loss; 15 to 50% faster than RAID-5; immediate data loss
Source: EMC CLARiiON Best Practices for Fibre Channel Storage, CLARiiON Firmware Release 16 Update
RAID 6 is 4,000 times safer than RAID 5
D1 D3D2 P DP
NetApp RAID-DP™
RAID-DP™ (Double Parity, or Diagonal Parity)
Combined with WAFL® (Write Anywhere File Layout) and NVRAM, its performance overcomes the classic RAID 4 write bottleneck. Disks can be added without downtime or waiting and are usable immediately. RAID-DP is as safe as RAID 6: more than 4,000 times safer than RAID 5. And RAID-DP performs better than RAID 6.
Distributed Dual Parity RAID 6
RAID 5 extended by one parity disk; high overhead.
EMC CX3, DMX3, HDS AMS/USP, HP XP
RAID-6 SNIA http://www.snia.org/education/dictionary/r/
Source: HDS - Using RAID-6 with Hitachi TagmaStore™ Storage for Improved Data Protection
RAID 6 performance drops 33%!
Low performance
Diagonal Dual Parity RAID 6
RAID 4 extended by one parity disk; patented low-overhead technology.
NetApp FAS
High performance
RAID levels compared: 0, 1, 1+0 / 0+1, 4, 5, 6, and WAFL with RAID-DP
Economy: data protection at the lowest cost
Performance: the highest read and write performance
Expansion: dynamically add one or more disks at any time, without waiting
Safety: protects data against any single disk failure
Safety during rebuild: protects against *any* two disk failures
Economy + Performance + Expansion + Safety + Safety during rebuild
Characteristics of the RAID levels
Six Disk "RAID-DP" Array

Start with simple RAID 4 parity (P = the sum of the data blocks in each row):
D D D D | P
3 1 2 3 | 9

Add "Diagonal Parity": a second parity disk (DP) stores parity computed along diagonals that cross the data disks and the row-parity disk.

Fail one drive: every row can still be rebuilt from row parity, exactly as in RAID 4.

Fail a second drive: row parity alone is no longer enough.

Recalculate from diagonal parity: each diagonal misses at most one of the two failed columns, so a diagonal that lost only one block can be rebuilt from DP.

Recalculate from row parity (standard RAID 4): the block just rebuilt completes its row, so row parity can rebuild that row's other missing block.

The rest of the block: diagonals everywhere. Alternating diagonal and row reconstruction in this way recovers both failed drives completely.
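The double-failure recovery walked through above can be sketched in code; a simplified row-diagonal parity model assuming XOR parity (as real arrays use, rather than the slide's decimal sums) and prime p = 5. This is illustrative only, not NetApp's implementation:

```python
# Simplified row-diagonal parity (the scheme behind RAID-DP):
# 4 rows per stripe, 4 data disks, a row-parity disk P, a diagonal-parity disk DP.
import random

p = 5
rows, ndata = p - 1, p - 1
random.seed(7)
data = [[random.randrange(256) for _ in range(ndata)] for _ in range(rows)]

# Row parity: XOR across each stripe (plain RAID 4).
P = [0] * rows
for i in range(rows):
    for j in range(ndata):
        P[i] ^= data[i][j]

# Diagonal d covers cells (i, j) with (i + j) % p == d, where column p-1
# is the row-parity disk; diagonal p-1 is deliberately left unstored.
def diag_cells(d):
    return [(i, j) for i in range(rows) for j in range(p) if (i + j) % p == d]

DP = [0] * (p - 1)
for d in range(p - 1):
    for i, j in diag_cells(d):
        DP[d] ^= P[i] if j == p - 1 else data[i][j]

# Fail two data disks, then recover by alternating diagonal/row passes:
# a diagonal missing only one block is rebuilt from DP, which completes a
# row, which lets row parity rebuild that row's other missing block.
lost = {0, 2}
rec = [[None if j in lost else data[i][j] for j in range(ndata)]
       for i in range(rows)]
progress = True
while progress:
    progress = False
    for d in range(p - 1):                      # diagonal equations
        unknown = [(i, j) for i, j in diag_cells(d)
                   if j < ndata and rec[i][j] is None]
        if len(unknown) == 1:
            v = DP[d]
            for i, j in diag_cells(d):
                if (i, j) != unknown[0]:
                    v ^= P[i] if j == p - 1 else rec[i][j]
            rec[unknown[0][0]][unknown[0][1]] = v
            progress = True
    for i in range(rows):                       # row equations
        unknown = [j for j in range(ndata) if rec[i][j] is None]
        if len(unknown) == 1:
            v = P[i]
            for j in range(ndata):
                if j != unknown[0]:
                    v ^= rec[i][j]
            rec[i][unknown[0]] = v
            progress = True

assert rec == data
print("both failed disks recovered")
```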
The Advantages of NetApp RAID-DP
(Chart, *Source: Network Appliance)
Average annual disk failure rate*: FC about 3%, ATA up to 5%
Probability of a disk sector error* (300GB FC / 320GB SATA): FC about 0.2%, ATA up to 2.6%
Probability that RAID 3/4/5 loses data from a sector error during a disk rebuild* (8-disk RAID group): FC about 1.7%, ATA up to 17.9%
Probability that RAID-DP loses data from two sector errors during a data-disk rebuild* (16-disk RAID group): less than 1 in a billion (.0000000001%)
During a RAID-DP rebuild, the data is still protected.
NetApp Function
NetApp Snapshot™ Technology

Take snapshot 1: copy pointers only, no data movement. The blocks in the LUN or file (A, B, C) and the blocks on disk are the same; Snap 1 simply points at them.

Continue writing data: B is overwritten as B1, which is written to a new block on disk. Snap 1 still points to the original A, B, C.

Take snapshot 2: again pointers only, no data movement. Snap 2 points to A, B1, C.

Continue writing data (C becomes C2), then take snapshot 3: Snap 3 points to A, B1, C2.

Simplicity of the model = best disk utilization, fastest performance, unlimited snapshots.
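The pointer-copy model above can be sketched as follows; an assumed toy model (dicts standing in for block maps), not ONTAP internals:

```python
# Copy-on-write snapshot sketch: the active filesystem is a map of
# logical block -> physical block, and a snapshot is just a copy of that
# pointer map -- no data blocks move.
storage = {"pA": "A", "pB": "B", "pC": "C"}     # physical blocks on disk
active = {0: "pA", 1: "pB", 2: "pC"}            # logical -> physical pointers

snap1 = dict(active)                 # take snapshot 1: copy pointers only

# Overwrite logical block 1: B1 goes to a NEW physical block, then repoint.
storage["pB1"] = "B1"
active[1] = "pB1"

snap2 = dict(active)                 # take snapshot 2

assert storage[snap1[1]] == "B"      # snap 1 still sees the old B
assert storage[snap2[1]] == "B1"     # snap 2 sees B1
print("snapshots share unchanged blocks:", snap1[0] == snap2[0] == active[0])
```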
A 90MB Volume with 20% Snap Reserve
Total capacity = 100MB: an 80MB (80%) active filesystem plus a 20MB (20%) snap reserve.

Snapped Blocks May Encroach on AFS
With snap reserve = 20%, snapped blocks may encroach upon usable AFS space (e.g. 78% AFS / 22% snapshots): the AFS has a hard limit, while snapshot usage has no limit. AFS blocks, however, may not encroach upon the snap reserve area.
FlexVol for CIFS | NFS
What does "FlexVol - 20% for snapshot" mean for CIFS | NFS? The volume holds data (50% in this example) plus a 20% snapshot reserve.
Snapshots copy no blocks, so they use the least space with the highest efficiency; every volume can keep 255 snapshots.
If the data has not changed, any number of snapshots consumes no extra space.
Administrators can delete the backup data of any snapshot at any time without affecting the contents of other snapshots. Snapshots can run on mixed hourly, daily, and weekly schedules, or be taken on demand. The snapshot reserve percentage (0-50%) can be adjusted dynamically at any time.
Adjusting it does not lose existing snapshot contents. When the configured reserve limit is exceeded a warning is issued, but snapshots still proceed normally and existing snapshots are not overwritten.
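The schedule and reserve settings described here correspond to a few 7-mode commands; a hedged sketch (vol1 and the schedule values are placeholders):

```
Filer> snap sched vol1 0 2 6@8,12,16,20   # 0 weekly, 2 daily, 6 hourly at 08/12/16/20
Filer> snap reserve vol1 20               # set the snapshot reserve to 20%
Filer> snap list vol1                     # show snapshots and the space they use
Filer> snap delete vol1 hourly.5          # drop one snapshot; others are unaffected
```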
Snapshot
Snapshots optimize usable capacity
(Chart, assuming 5% data change per snapshot: to keep 1 weekly and 7 daily recovery points for 2 TB of usable data, traditional point-in-time copies consume roughly 20 TB, while NetApp Snapshots need little more than the 2 TB of usable space plus RAID overhead.)
DeDupe : A-SIS
NetApp Function
(Figure: general data in a flexible volume passes through the deduplication process, producing deduplicated data plus metadata in the flexible volume.)
Introduction to NetApp Deduplication
Eliminates duplicate blocks and reduces the data footprint
NetApp deduplication runs offline (post-process), integrated with Data ONTAP®
General-purpose volume deduplication: identifies and removes redundant data blocks
Storage-efficient VMs: 50-90% reduction in VMware® space requirements; OS image management
Application agnostic: primary storage, backup data, archival data
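The "identify and remove redundant blocks" idea can be sketched with content hashing; an assumed toy model with fixed 4 KB blocks, not the actual A-SIS fingerprint algorithm:

```python
# Block-level deduplication sketch: store each unique 4 KB block once,
# keyed by a content hash; duplicates become references to one copy.
import hashlib

BLOCK = 4096

def dedupe(data: bytes):
    store, refs = {}, []                 # hash -> block, and per-block refs
    for off in range(0, len(data), BLOCK):
        blk = data[off:off + BLOCK]
        h = hashlib.sha256(blk).hexdigest()
        store.setdefault(h, blk)         # duplicate blocks share one copy
        refs.append(h)
    return store, refs

data = b"A" * BLOCK * 3 + b"B" * BLOCK   # three identical blocks + one unique
store, refs = dedupe(data)
saving = 1 - len(store) / len(refs)
print(f"logical blocks: {len(refs)}, physical: {len(store)}, saved {saving:.0%}")
```

Here 4 logical blocks shrink to 2 physical blocks, a 50% saving.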
NetApp Deduplication 20:1 or Greater for Backup
Before After
Deduplication Storage Saving
For various environments
Data Types             | Typical | Space Savings Range
Backup Data            | 90%     | 85-95%
VMware® VMs            | 70%     | 50-90%
Geoseismic             | 55%     | 40-70%
Database Backups       | 55%     | 40-70%
Home Dirs              | 35%     | 20-50%
CIFS Shares            | 35%     | 20-50%
Email PSTs             | 30%     | 20-40%
Mixed Enterprise Data  | 30%     | 20-40%
Document Archives      | 25%     | 20-30%
Engineering Archives   | 25%     | 20-30%
Traditional Storage Utilization
VMware template clones:
• Clones are 100% identical, including OS and applications
• Clones consume storage equal to the size of the template
By design, VMware® environments are very redundant
Production Data Deduplication:
• Dedupe removes redundant data
• Supports FCP, iSCSI, and NFS
• 50-70% storage reduction; >90% with VDI
Quick Start to running A-SIS
Data ONTAP 7.3.1 maximum dedupe FlexVol size: FAS2020 1TB, FAS2050 2TB, FAS3140 4TB, FAS3160 16TB
A-SIS Example1
A 1GB volume named SISVolume holds two ISO files with different names but identical content.
After deduplication: saved 51%, with 607MB of free space remaining.
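A result like this comes from the `sis` command family; a hedged 7-mode sketch (/vol/vol1 is a placeholder volume):

```
Filer> sis on /vol/vol1          # enable deduplication on the volume
Filer> sis start -s /vol/vol1    # scan and deduplicate existing data
Filer> sis status /vol/vol1      # check progress
Filer> df -s /vol/vol1           # show the space saved by dedupe
```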
A-SIS Example2
Dedupe for VMware NFS Datastore
A-SIS Log
Filer> rdfile /etc/log/sis      (current)
Filer> rdfile /etc/log/sis.0
Built-in web-based management tool
FilerView
FilerView
Filer Status
Licenses
Every model ships with the complete feature set built in
Each feature is activated by a license key tied to the serial number
AutoSupport
System Notification from <hostname> (<content>) <severity>
Severity: INFO, WARNING, ERROR, CRITICAL
內容 : (WEEKLY_LOG) (REBOOT (reboot command))
(DISK_SCRUB!!!) (OVER_TEMPERATURE_SHUTDOWN!!!)
Troubleshooting: # telnet mailhost 25
AutoSupport – Email Events
Typical AutoSupport Email Events:
• Weekly logs (/etc/messages)
• System reboots
• NVRAM batteries low
• Disk, fan, and power supply failures
• Shelf faults
• System overheating
• Cluster events
• File system growth too large
Autosupport (Sample for WEEKLY_Log)
Report
Syslog Messages
System Date&Time
When the system is joined to AD, using NTP is recommended:
if the clock drifts 5 minutes or more, access will fail.
Halt & Reboot
Halt: shut down, then turn off the power switch. Reboot: restart the system. * Powering off without running halt drains the NVRAM battery; before a long shutdown, run halt first.
Disk
Disk Info
DISK ID Number
Aggregate
Aggregate
Configure RAID
RAID scrubbing starts automatically every Sunday at 1:00 a.m.
The default speed is medium, meaning approximately 40 percent of the CPU time is used for RAID reconstruction.
FlexVolume
FlexVolume
Expand & Shrink
*Adjust the FlexVol Storage Size.
- Expand & Shrink
Expand & Shrink (cont’d)
Adjust the FlexVol Size
Modify Volume - inode
Modify Volume – inode (cont’d)
Adjust the FlexVol Maximum Files or Directory Size
Modify Volume – inode (cont’d)
Add a new volume
*Volume Type Selection - Flexible
Add a new volume (cont’d)
*Space guarantee:
- volume (default policy): pre-allocates space
- none (thin provisioning): like a credit card
- file (thin provisioning): pre-allocates space in the volume so that any file with space reservation enabled can be completely rewritten, even if its blocks are pinned by a snapshot
Qtree Style
*Different security styles:
- UNIX (UID and GID, UNIX-style permissions, e.g. 755)
- NTFS (for CIFS requests, Windows ACL permissions)
- Mixed (some files in the qtree or volume have the UNIX security style and some have the NTFS style)
Volume Snapshot
Manage Snapshot
*Management Snapshots
- Add , Delete , View Snapshots
Add Snapshot
Configure Snapshot
Recommended values: estimate the approximate daily change volume and the desired retention (days or copies), then adjust the values on the right.
When the daily or hourly change volume is high and the snapshot reserve is set low,
snapshots will occupy a higher percentage of the volume.
The .snapshot (or ~snapshot) directory is visible.
Snapshot Delta
Filer>snap delta vol_name
LUN
How to Map ?
20TB
FlexVol
/vol/vol3/lun.file2TB
/vol/vol4//lun.file5TB
iSCSI FCP
FlexVol FlexVol FlexVol
Initiators Group 1 (virtual):
  iSCSI Node Name iqn.1991-05.com.microsoft:w2k3en1
  WWPN 21:01:00:e0:8b:a9:b9:35
Initiators Group 2 (virtual):
  iSCSI Node Name iqn.1991-05.com.microsoft:w2k3en2
LUN map: add to iGroup
Create Lun.file
Leaving the checkbox unticked creates the lun.file thin-provisioned.
Add Initiator Group
Solaris , Windows , HP-UX , AIX Linux , NetWare , Vmware,Xen,Hyper-V
iSCSI & FCP
Map Lun.file to igroup
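The create / group / map workflow shown in FilerView has direct CLI equivalents; a hedged 7-mode sketch (the igroup name ig1 and LUN ID 0 are placeholders):

```
Filer> lun create -s 2t -t windows /vol/vol3/lun.file        # create a 2 TB LUN
Filer> igroup create -i -t windows ig1 iqn.1991-05.com.microsoft:w2k3en1
Filer> lun map /vol/vol3/lun.file ig1 0                      # present it as LUN ID 0
Filer> lun show -m                                           # verify the mapping
```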
Microsoft iSCSI initiator
Add the storage IP address
MPIO
/vol/vol3/lun.file2TB
Initiators Group
WWPN21:01:00:e0:8b:a9:b9:35
Lun map
Add to iGroup
FCP
WWPN21:01:00:e0:8b:89:b9:35
Mapping the same LUN to the same host through different fibre ports requires multipath software.
Windows: NetApp multipath software for 2003 & 2008. Linux: kernel 2.6 or later, native mpath. * Dual-controller systems also require MPIO.
Lun.file for iSCSI and FCP
/vol/vol3/lun.file2TB
iSCSI
Initiators Group
iSCSI Node Nameiqn.1991-
05.com.microsoft:w2k3en1
WWPN21:01:00:e0:8b:a9:b9:35
Lun map
Add to iGroup
Add to iGroup
FCP
Default Lun (No Snapshot)
FlexVol: 100% for data, for iSCSI | FCP (lun.file). Sizing the lun.file at 90% of the FlexVol is the recommended value.
Data | Lun.file
Snapshot-Snapmirror -FlexClone
-LunClone-SnapRestore
LUNs Protect by Snapshot
FlexVol – 1 : 1.2
Data | Lun.file Snapshot
-Snapmirror
-FlexClone
-SnapRestore
Protected by snapshot (1): a 1TB lun.file
Protected by snapshots (N): a 1TB lun.file plus 200GB of snapshot space
(Figure: the lun.file takes 50% of the volume and the snap reserve the other 50%. (1) Write data until available space is consumed; (2) keep available space free for new data; (3) reserve snapshot space for a 100% overwrite, e.g. 50GB old + 50GB new.)
Why 1:1.2? Consider deleting all the data and then writing new data: the snapshots must hold the old blocks while the new data is written.
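The 1:1.2 guideline can be reproduced with a rough sizing calculation; the change rate and retention below are assumptions for illustration, not values from the slides:

```python
# Rough volume-sizing sketch for a snapshot-protected LUN.
lun_gb = 1000
daily_change = 0.05      # assumed: 5% of the LUN changes per day
keep_days = 4            # assumed: snapshots retained for 4 days

snap_gb = lun_gb * daily_change * keep_days   # space pinned by old blocks
vol_gb = lun_gb + snap_gb
print(f"LUN {lun_gb} GB + snapshots {snap_gb:.0f} GB -> "
      f"volume {vol_gb:.0f} GB ({vol_gb / lun_gb:.1f}x)")
```

With these assumptions the snapshots pin 200 GB, giving exactly the 1.2x volume-to-LUN ratio from the slide; a higher change rate or longer retention pushes the ratio up.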
Network
VIF (Virtual Interface)
Single mode: active / standby
Multi mode: EtherChannel (802.3ad / LACP)
(Figure: a single-mode VIF, one active link and one standby, requires no switch configuration; multi-mode VIFs, single-stack or multi-stack with all links active, require switch configuration.)
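Both modes are created with the `vif` command in 7-mode; a hedged sketch (interface names e0a/e0b and the address are placeholders):

```
Filer> vif create single vif0 e0a e0b        # active/standby pair, no switch config
Filer> vif create lacp vif1 -b ip e0a e0b    # LACP multi-mode, IP-based load balancing
Filer> ifconfig vif1 192.168.1.10 netmask 255.255.255.0
Filer> vif status vif1
```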
Manage Network Interface
Manage Host Files
CLI
Telnet(default) & SSH
The management host must have the Sun Java(TM) Runtime installed
1
2
Exit: CTRL+D
RSH
3
When to use RSH: * when several people manage the system (Telnet, SSH, and the web UI share a single session) * in scripts that run backup commands
CLI for Storage System
CLI
? : list all commands
version : show the Data ONTAP version
license : check the storage license status
priv set advanced : enter advanced mode (more CLI commands)
sysconfig -a : check storage hardware status
sysconfig -r : check disk status (failed, spare)
fcadmin device_map : show the current disk-loop status
cf status : check cluster status
options : list all options
CLI
sysconfig : confirm the hardware model and Data ONTAP version
CLI
sysconfig -r : check disk status (failed, spare)
CLI
environment status all : check whether the hardware has errors
CLI
df : check storage space utilization
CLI for Aggregate
112/04/19
CLI for Aggregate
Create a new aggregate (name: aggr1; size: 5 disks):
  aggr create aggr1 5@disk_size
  aggr create aggr1 5
Add 2 disks to aggr1:
  aggr add aggr1 2@disk_size
  aggr add aggr1 2
Show aggregate options:
  aggr options aggr0
Change aggregate options:
  aggr options aggr0 raidsize 28
CLI for volume
CLI for Volume
Resize Volume space vol size vol_name 1000g Vol size vol_name +50g
Show volume options vol options vol_name
Change volume options vol options vol_name nosnap on Vol options vol_name no_atime_update on
Questions?
THANK YOU