Professional and focused data storage tools
Introduction to Inspur Storage Products
Contents
1 Overview of Inspur Storage Product Line
2 Introduction to Overseas Key Product AS520N
3 Introduction to Overseas Key Product AS510H
4 Introduction to Overseas Key Product AS1100H
5 Introduction to Advanced Functions of AS510H/AS1100H
Part I: Inspur Storage Product Line
2015 Inspur Storage Product Line
[Product positioning chart, arranged by function and performance:
• Functions: online centralized storage systems, business integration storage systems, mass storage systems, and storage systems for data protection
• Products shown: AS510N, AS500N6, AS520E/G, AS520E-M1, AS520N, AS530N, AS510H, AS1000G6, AS1100H, AS5600, AS8000-M2/M3/M4, AS13000-M1/M2, AS18000, TL1000, TL2000, TL3000, BCP, DP1000-M1, DP2000, DPS-M1]
Storage Architecture: Direct Attached Storage (DAS)
Hardware components: server, SCSI card, SCSI cable, storage
Application environment:
• Servers are geographically scattered.
• Each server is directly attached to its own storage device.
Storage Architecture: Network Attached Storage (NAS)
Hardware components: server, network card, network cable, storage
Application environment:
• Data sharing over Ethernet
• File-level data transmission
Storage Architecture: Storage Area Network (SAN)
Hardware components: server, FC card, FC cable, FC switch, storage
Application environment:
• Block-level data transmission
• Massive expansion of storage space
• Complicated network environments
A sketch contrasting block-level (SAN/DAS) and file-level (NAS) access follows.
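To make the distinction concrete, here is a minimal sketch (hypothetical mount point and block-device paths) of how an application sees the two access models: a NAS export appears as files on a mounted path, while SAN/DAS storage appears as a raw block device that the host formats and addresses by offset.

```python
# File-level access (NAS): the storage exposes files over Ethernet,
# e.g. an NFS/CIFS export mounted at a hypothetical path.
with open("/mnt/nas_share/report.txt", "w") as f:
    f.write("file-level I/O goes through the filer's own file system\n")

# Block-level access (SAN/DAS): the storage exposes a raw block device
# (a hypothetical /dev/sdb presented over FC or iSCSI); the host owns the
# file system and addresses data by byte/block offset.
with open("/dev/sdb", "rb") as dev:
    dev.seek(4096)          # jump to a block boundary
    block = dev.read(512)   # read one 512-byte sector
```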
Part II: Introduction to Overseas Key Product AS520N
AS520N—Product Overview
Application scope
Small and medium-sized government departments and enterprises, campus education,
Internet and video surveillance industries, and IP SAN/NAS network architectures
Product features
• Product architecture: 3U16/4U24-bay, rack-mounted
• Host interfaces: 2 Gigabit Ethernet host interfaces standard, expandable to 6
• Cache: 8 GB standard, up to 32 GB*
• Disk type: 3.5'' high-capacity SATA/NL-SAS
• Disk expansion: supports up to 3 expansion enclosures (3U16/4U24) with the same bay count as the head unit, plus a 4U60-bay high-density expansion enclosure
Simple, efficient and manageable
AS520N / AS530N
* The maximum system cache is limited to 16 GB by the operating system.
AS520N—Overview of Product Parameters
AS520N parameters:
Storage operating system: runs from an independent DOM (disk-on-module), so no data disk is required for the system
Storage processor: 1 dedicated storage processor standard (no expansion supported)
Cache: 8 GB, expandable to 32 GB* (8 GB per memory module)
Host interfaces: 2 Gigabit Ethernet host interfaces standard, expandable to a maximum of 6
Number of stand-alone disks: 16
Disk type: 7200 RPM enterprise-level SATA/NL-SAS disks (1 TB/2 TB/3 TB/4 TB)
Max. number of disks: 76
Expandability: up to 3 × 3U16-bay JBODs or 1 × 4U60-bay expansion enclosure
RAID levels: 0, 1, 5, 6, 10, 50, 60
Supported features: host interface aggregation, online LUN expansion and snapshot
AS520N Application Scenario I—IP SAN Centralized Storage
IP SAN centralized storage: aggregates network interfaces and greatly improves bandwidth.
With a single entry-level storage system, the user can centralize the storage and management of all front-end servers. The storage system supports heterogeneous host platforms (Windows, Linux and Unix) and meets the requirements of different business applications (database, e-mail, web, etc.).
[Diagram: an application server cluster (Exchange e-mail, OA office, web and video servers) connected through a Gigabit switch to IP SAN storage]
AS520N Application Scenario II—NAS File Sharing
NAS file sharing: the NAS sharing architecture enables easy deployment
The N-series network storage products provide FTP and NAS file sharing and support file sharing across heterogeneous host platforms (different file types such as documents, pictures and video).
The whole system is easy to install and deploy: once connected directly to the LAN, it can be used and managed conveniently, and it also helps improve data availability.
[Diagram: a server group connected to NAS file sharing storage]
AS520N Application Scenario III—Video Surveillance Application
Video surveillance application: massive storage space and flexible networking
With this scheme, the front-end video servers can be put under centralized storage and management.
[Diagram: video servers at Site 1, with remote monitoring, writing to a central storage device]
Part III: Introduction to Overseas Key Product AS510H
New Generation of Fibre Channel Storage Product AS510H
Application scope
Suitable for department-level data center construction, small and medium-sized databases, virtualization
and other applications, as well as FC SAN network architectures
Product features
• Product architecture: dual-controller, rack-mounted, 2U12/2U24-bay head unit
• Cache: 4 GB (standard) or 8 GB (optional), up to 16 GB
• Optional expansion enclosures: 2U12-bay, 2U24-bay and 4U60-bay high-density expansion enclosures
• High performance and new protocols: supports the most advanced host interface technology, upgrading from 8 Gb FC to 16 Gb FC and from SAS 2.0 to SAS 3.0; 4 SAS expansion interfaces on the back-end disk channel provide higher performance for the user
High performance and new protocols: the advanced 12 Gb SAS and 16 Gb FC ports keep it ahead of comparable products from other manufacturers.
AS510H
New Generation of Fibre Channel Storage Product AS510H: Product Specifications
Name: AS510H
Controller: dual controllers (Active-Active)
Host specification: 2U12-bay / 2U24-bay
Cache: each controller provides 2 GB of cache, upgradable to 8 GB
Disk types: SSD, SAS, NL-SAS
Number of disks: up to 384
Host interfaces:
  Standard: 4 × SAS (12 Gb) + 4 × FC (16 Gb)
  Optional: 4 × SAS (12 Gb); 8 × SAS (12 Gb); 12 × SAS (12 Gb); 4 × SAS (12 Gb) + 8 × FC (16 Gb); 4 × SAS (12 Gb) + 4 × iSCSI (10 Gb, electrical)
Expansion interfaces: 4 × SAS (6 Gb)
Expansion enclosures: 2U12, 2U24, 4U60
RAID levels: 0, 1, 3, 5, 6, 10
Advanced functions: dynamic disk pools (DDP), thin provisioning, SSD cache acceleration, remote volume mirroring, volume replication, snapshot
Chassis: 552.5 mm × 482.6 mm × 86.4 mm (L × W × H)
Operating temperature: 10-40 °C
New Generation of Fibre Channel Storage Product AS510H: Application Scenarios
Simple, efficient, safe and versatile storage solutions:
• Centralized storage application (multiple application servers)
• Highly reliable dual-server (HA) application (database servers)
• Small-scale virtualization application (virtualized servers)
Hardware Architecture Analysis of the Mid-Range Dual-Controller Product AS510H
[Controller rear-panel diagram: drive expansion ports (2 × SAS 2.0 wide ports), management network port, serial port, factory maintenance port, 12 Gb SAS host channels, FC host ports and iSCSI host ports]
Optional host expansion cards:
• FC expansion card: 4 × 16 Gb FC (4/8/16 Gb auto-negotiating)
• FC expansion card: 2 × 16 Gb FC (4/8/16 Gb auto-negotiating)
• SAS expansion card: 4 × 12 Gb wide-port SAS (6/12 Gb auto-negotiating)
• SAS expansion card: 2 × 12 Gb wide-port SAS (6/12 Gb auto-negotiating)
• iSCSI expansion card: 2 × 10 Gb iSCSI electrical ports, RJ45 (1/10 Gb auto-negotiating)
Part IV: Introduction to Overseas Key Product AS1100H
New Generation of Fibre Channel Storage Product AS1100H
Application scope
Suitable for storage system construction in central computer rooms, large databases and
virtualization applications, FC SAN network architectures, and industries such as government,
public security, education, finance and telecommunications
Product features
• Product architecture: dual-controller, rack-mounted, 2U12/2U24-bay head unit
• Large cache: 24 GB standard, extendable to 48 GB
• Optional expansion enclosures: 2U12-bay, 2U24-bay and 4U60-bay high-density expansion enclosures
• High performance and new protocols: adopts advanced 16 Gb FC ports; 4 SAS expansion interfaces on the back-end disk channel provide higher performance for the user
• Rich host interfaces: supports 16 Gb FC, SAS, IB and 10 Gb iSCSI, meeting users' requirements in different network environments
High performance and competitiveness: this mid/high-end dual-controller SAN storage system improves hardware specifications and performance, enhancing the competitiveness of the mid-range dual-controller storage line in the market.
AS1100H
New Generation of Fibre Channel Storage Product AS1100H: Product Specifications
Name: AS1100H
Controller: dual controllers (Active-Active)
Host specification: 2U12-bay / 2U24-bay
Cache: each controller provides 12 GB of cache, upgradable to 24 GB
Disk types: SSD, SAS, NL-SAS
Number of disks: up to 384
Host interfaces:
  Standard: 8 × FC (16 Gb)
  Optional: 4 × IB (40 Gb); 8 × optical host interfaces (10 Gb); 8 × SAS (6 Gb)
Expansion interfaces: 4 × SAS (6 Gb)
Expansion enclosures: 2U12, 2U24, 4U60
RAID levels: 0, 1, 3, 5, 6, 10
Advanced functions: SSD cache, dynamic disk pools (DDP), thin provisioning, remote volume mirroring, volume replication, snapshot
Chassis: 552.5 mm × 482.6 mm × 86.4 mm (L × W × H)
Operating temperature: 10-40 °C
New Generation of Fibre Channel Storage Product AS1100H: Application Scenarios
Stable, efficient, safe and versatile storage solutions:
• Large-scale database cluster application (database servers)
• Large-scale virtualization application (virtualized servers)
• Data-level remote disaster recovery
Hardware Architecture Analysis of the Mid-Range Dual-Controller Product AS1100H
[Controller rear-panel diagram: host cards, drive expansion ports (2 × SAS 2.0 wide ports), management port, serial port, factory maintenance port, 6 Gb SAS host channels]
Optional host expansion cards:
• SAS expansion card: 4 × 6 Gb/s SAS wide ports
• IB expansion card: 2 × 40 Gb/s IB
• FC / iSCSI expansion card: 4 × 16 Gb/s FC or 10 Gb/s iSCSI (SFP+)
Part V: Introduction to Advanced Functions of AS510H/AS1100H
Software Functions of the Fibre Channel Storage Products
AS510H / AS1100H software functions:
• Dynamic disk pools (DDP)
• Thin provisioning
• SSD drive as cache
• Enhanced remote mirroring
• Enhanced snapshot
The Traditional RAID Data Protection Method
• Disks are organized and managed via RAID groups; volume space can only be distributed across the disks of one RAID group, so performance is limited by the number of disks in that group.
• A hot spare drive only operates when a drive has failed; the hot spare space otherwise sits idle in standby mode.
Example: a 24-drive system with two 10-drive (8+2) RAID groups and four hot spares.
The Traditional RAID Data Protection Method: Disk Failure
• Data is rebuilt onto the hot spare drive: the hot spare must absorb all write operations during the rebuild and becomes the performance bottleneck; rebuilding can only proceed sequentially, one data stripe at a time.
• All data access in the affected RAID group is degraded during the rebuild.
Example: a 24-drive system with two 10-drive (8+2) RAID groups and four hot spares.
Dynamic Disk Pool DDP – Principles
• Each stripe is 4 GB, referred to as a D-piece, and is drawn from 10 of the disks at the bottom of the DDP.
• The disks used are chosen by a placement algorithm: different stripes land on different disks, and the pseudorandom placement keeps the used space dynamically balanced across the pool (a sketch of this placement follows).
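Below is a minimal sketch (illustrative only, not Inspur's actual placement algorithm) of how pseudorandom D-piece placement spreads a volume's 4 GB stripes over a whole pool so that every disk carries a similar share:

```python
import random
from collections import Counter

# Per the slide above: a D-piece is a 4 GB stripe spread over 10 disks of the pool.
D_PIECE_GB = 4
DISKS_PER_PIECE = 10

def provision_volume(volume_gb, pool_disks, seed=0):
    """Place a volume's D-pieces pseudorandomly so usage stays balanced."""
    rng = random.Random(seed)
    used_gb = Counter()
    for _ in range(volume_gb // D_PIECE_GB):
        for disk in rng.sample(pool_disks, DISKS_PER_PIECE):  # 10 distinct disks per stripe
            used_gb[disk] += D_PIECE_GB / DISKS_PER_PIECE     # each disk holds one fragment
    return used_gb

# 24-disk pool, 1 TB volume: every disk ends up carrying a similar amount of data.
usage = provision_volume(1024, list(range(24)))
print(round(min(usage.values()), 1), round(max(usage.values()), 1))
```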
Dynamic Disk Pool DDP – Principles
• For each D-piece that had a fragment on the failed disk, the fragments on the remaining disks are read and verified, and new fragments are written elsewhere in the pool to replace the damaged data.
• Almost all disks in the pool participate in rebuilding the data.
Dynamic Disk Pool DDP – Principles
• One or more disks (1 to 12) can be added to the pool at a time.
• The data is then dynamically rebalanced across the enlarged pool to exploit the performance of the new disks: only data fragments are moved, nothing is rebuilt (a sketch of this rebalance follows).
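A minimal sketch (assumed logic, not the product's implementation) of the expansion rebalance: existing 4 GB fragments are moved from the fullest disks to the newly added, empty disks until usage is roughly even; no parity is recomputed and no data is rebuilt.

```python
FRAGMENT_GB = 4   # data moves in whole D-piece fragments

def rebalance(used_gb, new_disks):
    """Move fragments from the fullest disks to the new ones until usage evens out."""
    for disk in new_disks:
        used_gb.setdefault(disk, 0)                  # new disks start empty
    target = sum(used_gb.values()) / len(used_gb)    # fair share per disk
    moves = 0
    while True:
        src = max(used_gb, key=used_gb.get)          # fullest disk
        dst = min(used_gb, key=used_gb.get)          # emptiest disk
        if used_gb[src] - target < FRAGMENT_GB:      # balanced to within one fragment
            return moves
        used_gb[src] -= FRAGMENT_GB                  # relocate one fragment; nothing
        used_gb[dst] += FRAGMENT_GB                  # is re-read, verified or rebuilt
        moves += 1

usage = {d: 120 for d in range(24)}                  # 24 existing disks, 120 GB used each
print(rebalance(usage, range(24, 36)))               # add 12 disks -> 240 fragment moves
```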
Dynamic Disk Pool DDP – Failure Effect Comparison
[Chart: disk rebuild time (hours) for DDP vs. RAID 6 across drive capacities from 300 GB to 3 TB, based on a 24-disk, mid-level business system. RAID 6 rebuilds take from 1.3 to 2.5 days as capacity grows, with 4+ days of business impact for large drives (a 4 TB drive requires about 5.5 days), while DDP shows no impact and recovers in minutes rather than days.]
[Chart: impact of a disk failure on performance over time; DDP stays within the normal/acceptable band while a RAID rebuild degrades performance.]
Data Recovery Test: DDP Test Result Analysis
[Chart: data recovery time (hours) of AS1000G6 DDP vs. traditional RAID for five configurations: DDP (29 disks, 40 TB used), RAID 6 (29 disks, 40 TB used), RAID 6 (29 disks, 20 TB used), DDP (58 disks, 20 TB used) and DDP (58 disks, 40 TB used). Note: 3 TB 7200 RPM SAS disks, 64 KB writes.]
With 29 × 7200 RPM SAS disks and 40 TB of usable capacity, DDP data recovery took 7.24 hours versus 27.19 hours for RAID 6. With a 58-disk pool and 20 TB of usable capacity, recovery took 1.18 hours; with a 58-disk pool and 40 TB of usable capacity, it took 4.03 hours.
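For reference, the speed-up quoted on the next slide follows directly from these figures: 27.19 h (RAID 6) ÷ 7.24 h (DDP) ≈ 3.76, i.e. DDP recovery is roughly 3.75 times faster in this configuration.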
Performance Test: DDP Test Result Analysis
DDP reaches 377,703 IOPS, much higher than RAID; however, it shows no clear advantage in bandwidth compared with RAID.
With 29 × 7200 RPM SAS disks and 40 TB of usable capacity, DDP data recovery in this test is 3.75 times faster than RAID. With more disks, DDP recovers data even faster: under the top configuration, recovery is 8 times faster than traditional RAID. The advertised figure of 10 times faster is a limit value.
Strategy | Sequential read (IOPS / bandwidth) | Sequential write (IOPS / bandwidth) | Random read (IOPS / bandwidth) | Random write (IOPS / bandwidth)
DDP | 377,703.27 / 2,517.82 | 172,666.97 / 2,333.18 | 207,799.52 / 2,227.69 | 93,670.11 / 1,388.27
RAID 5 | 173,046.98 / 2,226.35 | 92,359.18 / 1,628.95 | 159,729.32 / 1,524.03 | 66,009.79 / 1,300
RAID 6 | 240,545.92 / 2,775.45 | 178,515.51 / 2,500.22 | 22,856.47 / 2,088.01 | 3,017.92 / 682.18
Software Functions of Inspur's Core Fibre Channel Storage: DDP
• The system keeps delivering performance without interruption; performance stays within the "green area".
  – A drive failure has minimal impact on system performance.
  – System recovery time is clearly shortened: data rebuilding is accelerated, up to 10 times faster than traditional RAID recovery.
• The disk pool avoids drive hotspots: all volume space is distributed across all drives in the pool, lowering the drive failure rate.
• Dynamic data distribution and redistribution run continuously in the background.
[Chart: performance impact of a drive failure over time, comparing a dynamic disk pool (DDP) with a traditional RAID rebuild against "Optimal" and "Acceptable" performance levels]
Software Functions of Inspur's Core Fibre Channel Storage: DDP
Dynamic Disk Pools (DDP)
– DDP (dynamic disk pool) is a FREE feature of the product that achieves higher IOPS than traditional RAID. It is best promoted for virtualization platforms and database applications.
– There is NO LIMIT ON THE NUMBER OF HARD DRIVES in a DDP group. With more drives, data reads/writes and the recovery of failed disks are faster, and performance is HIGHER. A DDP group of 24 to 60 disks is advisable.
– The AS510H and AS1100H support SSDs as DDP volumes; the AS500H and AS1000G6 do not.
DDP Configuration of Inspur's Core Fibre Channel Storage
Item | AS1100H | AS510H
Basic number of disks per DDP | 11 | 11
Maximum number of disks per DDP | 384 | 192
Number of disks that can be added to a DDP at a time | 1-12 | 1-12
Number of DDPs supported per storage system | 20 | 20
Largest single LUN capacity in a DDP | 64 TB | 64 TB
Supported disk types | SAS, near-line SAS, SSD | SAS, near-line SAS, SSD
A Comparison of Inspur's Core Fibre Channel Storage DDP with RAID 6
The Snapshot Function of Inspur's Core Fibre Channel Storage
Enhanced snapshot
[Diagram: a source volume with a chain of point-in-time (PiT) snapshots]
– Enhanced snapshots allow MORE snapshots than traditional snapshots: each volume can have 128 snapshots, versus 16 per volume with the traditional mechanism.
– For the same LUN, 4 snapshot groups are supported, each holding 32 snapshots; the snapshots in a group share THE SAME repository VOLUME, whose size is about 40% of the original volume (configurable). This reduces storage capacity consumption and simplifies snapshot management.
– Snapshot consistency groups are supported to guarantee data consistency when multiple LUNs of the same application are snapshotted simultaneously.
– Automatic snapshot policies can be configured, e.g. taking a snapshot every 30 minutes, 24 per day; manual snapshots have no time limit.
The Mirroring Function of Inspur's Core Fibre Channel Storage
Enhanced remote mirroring
[Diagram: asynchronous remote volume mirroring (RVM) from Site A to Site B based on point-in-time (PiT) snapshots]
– Enhanced remote mirroring is asynchronous mirroring implemented ON TOP OF THE ENHANCED SNAPSHOT mechanism. The data read/write log and data-change pointers are saved in the repository volume; by comparing the pointers of the two sites, only the residual (changed) data is transmitted to the remote site, ENSURING that the DATA between the pointers is CONSISTENT and USABLE at a given point in time.
– Remote mirroring can run over an IP LINK or a BARE (DARK) FIBER LINK, giving flexible link choices and HIGHLY CONTROLLABLE COST.
– By comparing log records and pointers, it supports BREAKPOINT RESUME of data transmission to ensure data integrity (see the sketch below).
– The mirroring link requires a switch to transmit data, and the controller must reserve a port for the mirroring link interface.
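A minimal sketch (assumed log and pointer structures, not the product's actual implementation) of the pointer-comparison idea behind breakpoint resume: only the log entries after the pointer last acknowledged by the remote site are transmitted on each cycle.

```python
# Hypothetical write-log entries: (sequence_number, block_address, data).
def sync_delta(write_log, local_ptr, remote_acked_ptr, send):
    """Transmit only the log entries between the remote and local pointers.

    After a link interruption, remote_acked_ptr still marks the last entry the
    remote site confirmed, so the next cycle simply resumes from that point.
    """
    for seq, block, data in write_log:
        if remote_acked_ptr < seq <= local_ptr:
            send(seq, block, data)      # ship only the residual (changed) data
            remote_acked_ptr = seq      # advance once the remote site acknowledges
    return remote_acked_ptr

# Example: the local log holds entries 1..100 and the remote site last acknowledged
# entry 40, so only entries 41..100 are sent on the next synchronization cycle.
log = [(i, 4096 * i, b"...") for i in range(1, 101)]
print(sync_delta(log, local_ptr=100, remote_acked_ptr=40, send=lambda *a: None))  # 100
```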
Software Specification Analysis of the Mid-Range Dual-Controller Products
Item | AS500H | AS510H | AS1000G6 | AS1100H
Number of hosts | 512 | 512 | 1024 | 1024
Partitions | 128 | 128 | 512 | 512
LUNs | 512 | 512 | 2048 | 2048
Single volume capacity limit: 64 TB maximum
The Thin Provisioning Function of Inspur's Core Fibre Channel Storage
Thin provisioning
[Diagram: virtual volumes totaling 2 TB (1 TB, 300 GB, 200 GB, 200 GB, 150 GB, 100 GB, 50 GB) provisioned on 1 TB of total physical storage]
Store maximum data with minimum costs
Advantages:
– Users can create volumes flexibly. There is no need to account for reserved disk space in the initial planning phase; physical capacity can be added later, IMPROVING CAPACITY PLANNING EFFICIENCY.
– LOWERS the PURCHASING COST of the storage system.
– SAVES electric ENERGY and computer room space and LOWERS heat output, for high efficiency and low carbon.
– A thin volume allocates space in 4 GB units; the smallest granularity is 4 GB and can be configured upward from there. Each thin volume's largest virtual capacity is 64 TB (see the sketch below).
– This function can only be used with DDP, not with normal RAID.
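A minimal sketch (illustrative only, not the controller's implementation) of thin allocation at 4 GB granularity: a physical extent is mapped to the volume only when something inside it is first written, so even a large virtual volume consumes little physical space until data actually lands.

```python
ALLOC_UNIT_GB = 4   # thin volumes allocate physical space in 4 GB extents

class ThinVolume:
    def __init__(self, virtual_capacity_gb):
        assert virtual_capacity_gb <= 64 * 1024        # 64 TB maximum virtual capacity
        self.virtual_capacity_gb = virtual_capacity_gb
        self.allocated_extents = set()                 # extents actually backed by disk

    def write(self, offset_gb):
        """Writing anywhere inside a 4 GB extent allocates that extent once."""
        if offset_gb >= self.virtual_capacity_gb:
            raise ValueError("write beyond virtual capacity")
        self.allocated_extents.add(offset_gb // ALLOC_UNIT_GB)

    def physical_usage_gb(self):
        return len(self.allocated_extents) * ALLOC_UNIT_GB

# A 2 TB virtual volume that has only seen a few scattered writes
vol = ThinVolume(2048)
for offset in (0, 1, 5, 1000, 1500):
    vol.write(offset)
print(vol.physical_usage_gb())   # 16 GB of physical space consumed, not 2 TB
```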
The SSD-as-Cache Function of Inspur's Core Fibre Channel Storage
SSD drive as cache
Description: SSDs extend the controller's read cache, expandable to 5 TB of high-speed cache, with MLC technology supported.
After SSDs are configured as read cache, host read I/O copies the data it reads into the SSD. If a subsequent I/O hits, the data is served directly from the SSD; otherwise it is read from the hard disk and copied to the SSD.
When the SSD is full, data is evicted according to its hit rate or how long it has been stored in the SSD (a sketch of this behavior follows).
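A minimal sketch (assumed eviction policy, not the product's algorithm) of the read-cache behavior just described: reads populate the SSD, repeated reads hit it, and the least-valuable entries are evicted once it fills.

```python
from collections import OrderedDict

class SSDReadCache:
    """Toy read cache: populate on miss, serve on hit, evict when full."""
    def __init__(self, capacity_blocks, read_from_disk):
        self.capacity = capacity_blocks
        self.read_from_disk = read_from_disk
        self.cache = OrderedDict()              # block -> data, ordered by recency

    def read(self, block):
        if block in self.cache:                 # hit: serve directly from the SSD
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.read_from_disk(block)       # miss: fetch from the hard disk...
        self.cache[block] = data                # ...and copy the data into the SSD
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the least recently used block
        return data

cache = SSDReadCache(capacity_blocks=4, read_from_disk=lambda b: f"data{b}")
for b in (1, 2, 1, 3, 4, 5, 1):                # repeated reads of block 1 are cache hits
    cache.read(b)
```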
FOCUS ON CUSTOMER SUCCESS
Thanks