MySQL Head-to-Head


MySQL and Ceph

Ceph Days Portland

MySQL in the Cloud: Head-to-Head Performance Lab

May 25

WHOIS

Brent Compton and Kyle Bader, Storage Solution Architectures, Red Hat

Yves Trudeau, Principal Architect, Percona

AGENDA

MySQL on Ceph:
• Why MySQL on Ceph
• Ceph architecture
• Tuning: MySQL on Ceph
• HW architectural considerations

MySQL in the Cloud Head-to-Head Performance Lab:
• MySQL on Ceph vs. AWS
• Head-to-head: performance
• Head-to-head: price/performance
• IOPS performance nodes for Ceph

MySQL on Ceph vs. AWS

MYSQL ON CEPH STORAGE CLOUD: OPS EFFICIENCY

• Shared, elastic storage pool
• Dynamic DB placement
• Flexible volume resizing
• Live instance migration
• Backup to object pool
• Read replicas via copy-on-write snapshots (see the sketch below)
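The read replicas in the last bullet lean on RBD's copy-on-write clones. Below is a minimal sketch using the python-rbd bindings; the pool, image, and snapshot names are hypothetical, and the parent image must be a format-2 image with the layering feature enabled:

```python
# Hypothetical names throughout; requires python-rados and python-rbd,
# and a format-2 parent image with the layering feature enabled.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("mysql-pool")

try:
    with rbd.Image(ioctx, "db01") as master:
        master.create_snap("replica-base")    # point-in-time snapshot
        master.protect_snap("replica-base")   # clones require a protected snap
    # Copy-on-write clone: the replica shares unmodified blocks with db01
    rbd.RBD().clone(ioctx, "db01", "replica-base", ioctx, "db01-replica",
                    features=rbd.RBD_FEATURE_LAYERING)
finally:
    ioctx.close()
    cluster.shutdown()
```

Because the clone shares unmodified blocks with its parent, a replica volume is available in seconds regardless of data set size.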

MYSQL-ON-CEPH PRIVATE CLOUD: FIDELITY TO A MYSQL-ON-AWS EXPERIENCE

• Hybrid cloud requires public/private cloud commonalities
• Developers want DevOps consistency
• Elastic block storage: Ceph RBD vs. AWS EBS
• Elastic object storage: Ceph RGW vs. AWS S3
• Users want deterministic performance

HEAD-TO-HEAD PERFORMANCE

30 IOPS/GB: AWS EBS P-IOPS TARGET

HEAD-TO-HEAD LAB: TEST ENVIRONMENTS

AWS:
• EC2 r3.2xlarge and m4.4xlarge
• EBS Provisioned IOPS and GP-SSD
• Percona Server

Ceph:
• Supermicro servers
• Red Hat Ceph Storage RBD
• Percona Server

OSD Storage Server Systems: 5x SuperStorage SSG-6028R-OSDXXX
• Dual Intel Xeon E5-2650v3 (10 cores each)
• 32GB DDR3 SDRAM
• 2x 80GB boot drives
• 4x 800GB Intel DC P3700 (hot-swap U.2 NVMe)
• 1x dual-port 10GbE network adaptor (AOC-STGN-i2S)
• 8x Seagate 6TB 7200 RPM SAS (unused in this lab)
• Mellanox 40GbE network adaptor (unused in this lab)

MySQL Client Systems: 12x SuperServer 2UTwin² nodes
• Dual Intel Xeon E5-2670v2 (cpuset-limited to 8 or 16 vCPUs)
• 64GB DDR3 SDRAM

Storage Server Software:
• Red Hat Ceph Storage 1.3.2
• Red Hat Enterprise Linux 7.2
• Percona Server

SUPERMICRO CEPH LAB ENVIRONMENT

[Diagram: 5x OSD nodes and 12x client nodes on shared 10G SFP+ networking, plus monitor nodes]

SYSBENCH BASELINE ON AWS EC2 + EBS

Sysbench requests per instance (200GB instance sizes for all three configurations, for consistency):
• P-IOPS m4.4xl: 7,996 (100% read) / 1,680 (100% write)
• P-IOPS r3.2xl: 7,956 (100% read) / 1,687 (100% write)
• GP-SSD r3.2xl: 950 (100% read) / 267 (100% write)
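For reference, a hedged sketch of how such a baseline can be driven with sysbench 1.0's OLTP scripts; the host, credentials, table count/size, and thread count below are illustrative assumptions, not the lab's actual settings:

```python
# Illustrative only: drives sysbench 1.0 OLTP workloads against a MySQL
# endpoint. Host, credentials, and sizing are hypothetical placeholders.
import subprocess

COMMON = [
    "--mysql-host=db.example.internal", "--mysql-user=sbtest",
    "--mysql-password=secret", "--mysql-db=sbtest",
    "--tables=8", "--table-size=1000000", "--threads=16", "--time=300",
]

def run(script: str, command: str) -> None:
    subprocess.run(["sysbench", script, *COMMON, command], check=True)

run("oltp_read_only", "prepare")  # load the test tables once
for script in ("oltp_read_only", "oltp_write_only"):  # 100% read, 100% write
    run(script, "run")
```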

SYSBENCH REQUESTS PER MYSQL INSTANCE

[Bar chart: sysbench requests per MySQL instance for 100% read, 100% write, and 70/30 RW workloads; all figures use 200GB instance sizes for consistency. Values shown: 7,996 / 67,144 / 40,031 (read); 1,680 / 5,677 / 1,258 (write); 20,053 / 4,752 (70/30 RW).]

CONVERTING SYSBENCH REQUESTS TO IOPS: READ PATH

sysbench read → X% served from the InnoDB buffer pool → IOPS = (read requests – X%)
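A minimal sketch of this conversion; X (the InnoDB buffer-pool hit rate) is workload-dependent, and the 25% used below is a hypothetical value, not a figure from the deck:

```python
def read_iops(read_requests: float, buffer_pool_hit_rate: float) -> float:
    """Storage read IOPS implied by sysbench read requests.

    Reads served from the InnoDB buffer pool never reach the volume,
    so only the miss fraction becomes storage IOPS.
    """
    return read_requests * (1.0 - buffer_pool_hit_rate)

# Hypothetical: 7,996 sysbench reads/sec with a 25% buffer-pool hit rate
print(read_iops(7996, 0.25))  # 5997.0 read IOPS reaching storage
```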

CONVERTING SYSBENCH REQUESTS TO IOPS: WRITE PATH

sysbench write →
• 1x read: X% served from the InnoDB buffer pool → IOPS = (read req – X%)
• 1x write: redo log + doublewrite buffer → IOPS = (write req * 2.3)
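The same conversion as a sketch; the 2.3x factor comes from the slide, while the buffer-pool hit rate is again a hypothetical input:

```python
WRITE_AMPLIFICATION = 2.3  # redo log + InnoDB doublewrite buffer (per the slide)

def write_iops(write_requests: float, buffer_pool_hit_rate: float) -> float:
    """Storage IOPS implied by sysbench write requests: each write
    triggers one read (filtered by the buffer pool) plus writes
    amplified by the redo log and doublewrite buffer."""
    read_io = write_requests * (1.0 - buffer_pool_hit_rate)
    write_io = write_requests * WRITE_AMPLIFICATION
    return read_io + write_io

# Hypothetical: 1,680 sysbench writes/sec with a 25% buffer-pool hit rate
print(write_iops(1680, 0.25))  # ≈ 5,124 total IOPS reaching storage
```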

AWS IOPS/GB BASELINE: ~ AS ADVERTISED!

IOPS/GB (200GB instance sizes for all three configurations, for consistency):
• P-IOPS m4.4xl: 30.0 (100% read) / 25.6 (100% write)
• P-IOPS r3.2xl: 29.8 (100% read) / 25.7 (100% write)
• GP-SSD r3.2xl: 3.6 (100% read) / 4.1 (100% write)
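These figures follow from the two conversion formulas above. Assuming (hypothetically) a 25% buffer-pool hit rate on 200GB volumes, the arithmetic reproduces the P-IOPS m4.4xl bars:

```python
# All arithmetic below assumes a hypothetical 25% buffer-pool hit rate.
reads_to_disk = 7996 * (1 - 0.25)          # ≈ 5,997 read IOPS
print(reads_to_disk / 200)                 # ≈ 30.0 IOPS/GB (reads)

write_io = 1680 * (1 - 0.25) + 1680 * 2.3  # read-check + amplified writes
print(write_io / 200)                      # ≈ 25.6 IOPS/GB (writes)
```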

IOPS/GB PER MYSQL INSTANCE

[Bar chart: MySQL IOPS/GB per instance across the tested configurations; reads: 30, 252, 150; writes: 26, 78, 19.]

FOCUSING ON WRITE IOPS/GB: AWS THROTTLE WATERMARK FOR DETERMINISTIC PERFORMANCE

[Bar chart: write IOPS/GB of 26, 78, and 19 across the tested configurations.]

EFFECT OF CEPH CLUSTER LOADING ON IOPS/GB

IOPS/GB by Ceph cluster capacity used:
• 14% capacity: 78 (100% write) / 134 (70/30 RW)
• 36% capacity: 37 (100% write) / 72 (70/30 RW)
• 72% capacity: 25 (100% write) / 37 (70/30 RW)
• 87% capacity: 19 (100% write) / 36 (70/30 RW)

A NOTE ON WRITE AMPLIFICATION: MYSQL ON CEPH WRITE PATH

MySQL insert → InnoDB doublewrite buffer (x2) → Ceph replication (x2) → OSD journaling (x2)
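Multiplying the stages gives the end-to-end amplification. A back-of-envelope sketch; the 2x replica count is read off the slide (with the common 3x replication the total would be 12x):

```python
# Each stage doubles the bytes a single MySQL insert writes to media.
innodb_doublewrite = 2   # InnoDB doublewrite buffer
ceph_replication   = 2   # replica count per the slide (3x is also common)
osd_journaling     = 2   # FileStore writes its journal, then the data

print(innodb_doublewrite * ceph_replication * osd_journaling)  # 8x total
```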

HEAD-TO-HEAD PERFORMANCE

30 IOPS/GB: AWS EBS P-IOPS TARGET

25 IOPS/GB: CEPH 72% CLUSTER CAPACITY (WRITES)
78 IOPS/GB: CEPH 14% CLUSTER CAPACITY (WRITES)

HEAD-TO-HEAD PRICE/PERFORMANCE

$2.50: TARGET AWS EBS P-IOPS STORAGE PER IOP

IOPS/GB ON VARIOUS CONFIGS

IOPS/GB (sysbench write):
• AWS EBS Provisioned-IOPS: 31
• Ceph on Supermicro FatTwin, 72% capacity: 18
• Ceph on Supermicro MicroCloud, 87% capacity: 18
• Ceph on Supermicro MicroCloud, 14% capacity: 78

$/STORAGE-IOP ON THE SAME CONFIGS

Storage $/IOP (sysbench write):
• AWS EBS Provisioned-IOPS: $2.40
• Ceph on Supermicro FatTwin, 72% capacity: $0.80
• Ceph on Supermicro MicroCloud, 87% capacity: $0.78
• Ceph on Supermicro MicroCloud, 14% capacity: $1.06
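As a sanity check on the AWS bar, one plausible reconstruction; every number here is an assumption (2016-era io1 list pricing of $0.065 per provisioned IOPS-month and $0.125 per GB-month, a 200GB volume at 30 IOPS/GB, amortized over 3 years), not a figure from the deck:

```python
# Illustrative pricing assumptions only; not taken from the deck.
iops = 200 * 30      # 6,000 provisioned IOPS on a 200GB volume
months = 36          # 3-year amortization
cost = iops * 0.065 * months + 200 * 0.125 * months
print(cost / iops)   # ≈ $2.49 per IOP, near the $2.40-$2.50 figures above
```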

HEAD-TO-HEAD PRICE/PERFORMANCE

$2.50: TARGET AWS P-IOPS $/IOP (EBS ONLY)
$0.78: CEPH ON SUPERMICRO MICROCLOUD CLUSTER

IOPS PERFORMANCE NODES FOR CEPH

ARCHITECTURAL CONSIDERATIONS: UNDERSTANDING THE WORKLOAD

Traditional Ceph Workload:
• $/GB
• PBs
• Unstructured data
• MB/sec

MySQL Ceph Workload:
• $/IOP
• TBs
• Structured data
• IOPS

ARCHITECTURAL CONSIDERATIONS: FUNDAMENTALLY DIFFERENT DESIGN

Traditional Ceph Workload:
• 50-300+ TB per server
• Magnetic media (HDD)
• Low CPU-core:OSD ratio
• 10GbE → 40GbE

MySQL Ceph Workload:
• < 10 TB per server
• Flash (SSD → NVMe)
• High CPU-core:OSD ratio
• 10GbE

CONSIDERING CORE-TO-FLASH RATIO

[Bar chart: IOPS/GB by CPU cores and NVMe devices per server; 100% write: 18, 18, 19, 6; 70/30 RW: 34, 34, 36, 8.]

SUPERMICRO MICROCLOUD: CEPH MYSQL PERFORMANCE SKU

8x nodes in a 3U chassis, model SYS-5038MR-OSDXXXP

Per-node configuration:
• CPU: single Intel Xeon E5-2630 v4
• Memory: 32GB
• NVMe storage: single 800GB Intel P3700
• Networking: 1x dual-port 10G SFP+

1x CPU + 1x NVMe + 1x SFP+ per node

SEE US AT PERCONA LIVE!

• Hands-on Test Drive: MySQL on Ceph (April 18, 1:30-4:30)
• MySQL on Ceph (April 19, 1:20-2:10)
• MySQL in the Cloud: Head-to-Head Performance (April 19, 2:20-3:10)
• Running MySQL Virtualized on Ceph: Which Hypervisor? (April 20, 3:30-4:20)

THANK YOU!
