The ninja elephant: Scaling the analytics database in Transferwise
Federico Campoli
Transferwise
25th January 2017
Federico Campoli (Transferwise) The ninja elephant 25th January 2017 1 / 1
First rule about talks, don’t talk about the speaker
Born in 1972
Passionate about IT since 1982, mostly because of the TRON movie
Joined the Oracle DBA secret society in 2004
Fell in love with PostgreSQL in 2006
Currently runs the Brighton PostgreSQL User group
Works at Transferwise as Data Engineer
We have an appointment, and we are late!
The Gordian Knot of analytics db
The data engineer started in July 2016
He was assigned a task that was not customer facing
However, the task was critical to the business:
Solving the performance issues on the MySQL analytics database
The performance was poor despite the considerable resources assigned to the VM
And despite the data set being only medium sized
Tactical assessment
The existing database had the following configuration:
MySQL 5.6 with InnoDB
InnoDB buffer pool size 60 GB
70 GB of available RAM
20 CPUs
600 GB used on disk
Analytics queries performed via Looker and Tableau
The main live MySQL schema replicated into the analytics database
Several schemas from the service database imported on a regular basis
One schema used for obfuscating PII and denormalising the heavy queries
The frog effect
If you drop a frog in a pot of boiling water, it will of course frantically try to clamber out. But if you place it gently in a pot of tepid water and turn the heat on low, it will be slowly boiled to death.
The performance issues worsened over a two year span
The obfuscation was made via custom views
The data size on the MySQL master increased over time
Causing the optimiser to switch to materialisation when accessing the views
The analytics tools struggled even under normal load
In busy periods the database became almost unusable
Analysts were busy tuning existing queries rather than writing new ones
A new solution was needed
The eye of the storm
One size doesn’t fit all
It was clear that MySQL was no longer a good fit.
However, the new solution had to meet some specific requirements.
Data updated in almost real time from the live database
PII obfuscated for the analysts
PII available in clear for the power users
The system should be able to scale out for several years
Modern SQL for better analytics queries
May the best database win
The analyst team shortlisted a few solutions.
Each solution partially covered the requirements.
Google BigQuery
Amazon RedShift
Snowflake
PostgreSQL
Shortlisting the shortlist
Google BigQuery and Amazon RedShift did not satisfy the analytics requirements and were removed from the list.
Both PostgreSQL and Snowflake offered very good performance and modern SQL.
Neither of them offered a replication system from the MySQL system.
Straight into the cloud
Snowflake is a cloud based data warehouse service. It is built on Amazon S3 and comes in different sizes.
Their pricing system is very appealing, and the preliminary tests showed Snowflake outperforming PostgreSQL [1].
[1] PostgreSQL single machine vs cloud based parallel processing
Streaming copy
Using FiveTran, an impressive multi technology data pipeline, the data would flow in real time from our production server to Snowflake.
Unfortunately there was just one little catch.
There was no support for obfuscation.
Customer comes first
At Transferwise we really care about our customers’ data security.
Our policy for PII data is that any personal information moving outside our perimeter shall be obfuscated.
The third party extraction and replica for Snowflake required full read access to our live systems, or at least a database configured as a cascading replica.
The data had to be obfuscated before the third party replicator could be allowed access.
Proactive development
Foreseeing the issue, the data engineer developed in his spare time a proof of concept based on the replica tool pg_chameleon, which uses a Python library to read the MySQL replica.
The tests on a small copy of the live database were successful.
The tool’s simple structure made it possible to add the obfuscation in real time with minimal changes.
And the winner is...
In this scenario PostgreSQL would be the replicated and obfuscated data source for FiveTran.
However, because the performance on PostgreSQL was quite good and the system had a good margin for scaling up, the decision was to keep the analytics data behind our perimeter.
MySQL Replica in a nutshell
A quick look at the replication system
Let’s have a quick overview of how the MySQL replica works and how the replicator interacts with it.
The following slides refer to pg_chameleon, because the custom obfuscator tool shares most of its concepts and code with pg_chameleon.
MySQL Replica
The MySQL replica protocol is logical
When MySQL is configured properly, the RDBMS saves the changed data into binary log files
The slave connects to the master and gets the replication data
The replication data is saved into the slave’s local relay logs
The local relay logs are replayed on the slave
MySQL Replica
A chameleon in the middle
pg_chameleon mimics a MySQL slave’s behaviour
It connects to the master and reads the data changes
It stores the row images into a PostgreSQL table using the jsonb format
A PL/pgSQL function decodes the rows and replays the changes
PostgreSQL acts as both relay log and replication slave
With an extra cool feature:
It initialises the PostgreSQL replica schema in just one command
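The flow above can be sketched in plain Python, with hypothetical names standing in for the real tool’s internals: the `relay` list stands in for the PostgreSQL table holding the jsonb row images, and `replay_sql` for the PL/pgSQL decoding function.

```python
import json

relay = []  # stands in for the PostgreSQL table holding jsonb row images

def capture(table, event, row):
    """Store a row image the way the replicator would: as a json document."""
    relay.append(json.dumps({"table": table, "event": event, "row": row}))

def replay_sql():
    """Decode the stored images into SQL, as the PL/pgSQL function does."""
    statements = []
    for doc in relay:
        img = json.loads(doc)
        cols, vals = zip(*sorted(img["row"].items()))
        if img["event"] == "insert":
            statements.append(
                "INSERT INTO %s (%s) VALUES (%s);"
                % (img["table"], ", ".join(cols),
                   ", ".join(repr(v) for v in vals)))
    return statements

capture("users", "insert", {"id": 1, "name": "jane"})
print(replay_sql()[0])  # INSERT INTO users (id, name) VALUES (1, 'jane');
```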
MySQL replica + pg_chameleon
Log formats
MySQL supports different formats for the binary logs.
The STATEMENT format logs the statements, which are replayed on the slave. It seems the best solution for performance. However, replaying queries with non deterministic elements generates inconsistent slaves (e.g. an insert using uuid).
The ROW format is deterministic. It logs the row image and the DDL queries. This is the format required for pg_chameleon to work.
MIXED takes the best of both worlds. The master logs the statements, unless a non deterministic element is used; in that case it logs the row image.
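On a server feeding the replicator, the ROW format is enabled in the MySQL configuration; a minimal my.cnf sketch (the server-id and log file name are illustrative):

```ini
[mysqld]
server-id     = 2
log-bin       = mysql-bin
binlog_format = ROW
```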
Maximum effort
Replica and obfuscation
The data engineer worked on pg_chameleon and built a minimum viable product.
The project was forked into a Transferwise owned repository in order to add the obfuscation capabilities and other specific functionality, like the daily procedures for the pre aggregated schema.
Mighty morphing power elephant
The replica initialisation locks the MySQL tables in read only mode.
To avoid locking the main database for several hours, a secondary MySQL replica is set up with local query logging enabled.
The cascading replica also made it possible to use the ROW binlog format, as the master uses MIXED for performance reasons.
This is what awesome looks like!
A MySQL master is replicated into a MySQL slave
The slave’s data is copied and obfuscated using a PostgreSQL database!
Replica initialisation
The replica initialisation follows the same rules as any MySQL replica setup:
Flush the tables with read lock
Get the master’s coordinates
Copy the data
Release the locks
Tricky SQL
The data copy pulls the data out of MySQL in CSV format using a very tricky SQL statement.
SELECT
    CASE
        WHEN data_type="enum"
        THEN SUBSTRING(COLUMN_TYPE,5)
    END AS enum_list,
    CASE
        WHEN data_type IN ('"""+"','".join(self.hexify)+"""')
        THEN concat('hex(',column_name,')')
        WHEN data_type IN ('bit')
        THEN concat('cast(`',column_name,'` AS unsigned)')
        ELSE concat('`',column_name,'`')
    END AS column_csv
FROM
    information_schema.COLUMNS
WHERE
    table_schema=%s
    AND table_name=%s
ORDER BY
    ordinal_position
;
Fallback on failure
The CSV data is pulled out in slices in order to avoid memory overload.
The file is then pushed into PostgreSQL using the COPY command. However...
COPY is fast, but it runs in a single transaction
One failure and the entire batch is rolled back
If this happens the procedure loads the same data using INSERT statements
Which can be very slow
But at least discards only the problematic rows
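The fallback logic can be sketched with the standard library, using sqlite3 in place of PostgreSQL and `executemany` in place of COPY; table and column names are illustrative, not the real tool’s code.

```python
import sqlite3

def load_batch(conn, rows):
    """Bulk load a batch; on failure fall back to row-by-row inserts,
    discarding only the problematic rows. Returns the discarded rows."""
    discarded = []
    try:
        with conn:  # single transaction, like COPY
            conn.executemany("INSERT INTO t(id, val) VALUES (?, ?)", rows)
    except sqlite3.Error:
        # the whole batch was rolled back: retry one row at a time
        for row in rows:
            try:
                with conn:
                    conn.execute("INSERT INTO t(id, val) VALUES (?, ?)", row)
            except sqlite3.Error:
                discarded.append(row)
    return discarded

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, val TEXT NOT NULL)")
rows = [(1, "a"), (2, None), (3, "c")]  # (2, None) violates NOT NULL
bad = load_batch(conn, rows)
print(bad)  # [(2, None)]
```

The fast path commits the whole batch at once; only when it fails does the loop pay the per-row price, and only the offending rows are lost.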
Obfuscation when initialising
The obfuscation process is quite simple and uses the pgcrypto extension for hashing in SHA-256.
When the replica is initialised the data is copied into the schema in clear
The table locks are released
The tables with PII are copied and obfuscated in a separate schema
The process builds the indices on both the clear and the obfuscated schemas
The tables without PII data are exposed to the normal users via simple views
All the varchar fields in the obfuscated schema are converted into text fields
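On the PostgreSQL side the hashing uses pgcrypto; the idea can be sketched in plain Python with hashlib (the field names and the PII list below are hypothetical):

```python
import hashlib

def obfuscate(value):
    """Hash a PII value with SHA-256, as pgcrypto's
    digest(value, 'sha256') would on the PostgreSQL side."""
    return hashlib.sha256(value.encode()).hexdigest()

row = {"id": "42", "email": "jane@example.com", "city": "Brighton"}
pii_fields = {"email"}  # hypothetical per-table PII list

masked = {k: obfuscate(v) if k in pii_fields else v for k, v in row.items()}
print(len(masked["email"]))  # 64
```

The 64 character hex digest is also why the varchar fields of the obfuscated schema are converted to text: the hash may not fit the original column size.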
Obfuscation on the fly
The obfuscation is also applied when the data is replicated. The approach is very simple.
When a row image is captured, the process checks whether the table contains PII data
In that case the process generates a second jsonb element with the PII data obfuscated
The jsonb element carries the complete information about the destination schema
The PL/pgSQL function executes the change on both the clear schema and the schema with obfuscated data
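A minimal sketch of the idea; the PII catalogue, field names and schema labels are hypothetical, and the real tool stores these payloads in a PostgreSQL jsonb table:

```python
import hashlib
import json

PII_TABLES = {"users": {"email", "phone"}}  # hypothetical PII catalogue

def row_images(table, row):
    """For a captured row image, emit one payload per destination schema:
    the clear one, plus an obfuscated twin when the table carries PII."""
    images = [{"schema": "clear", "table": table, "row": row}]
    pii = PII_TABLES.get(table)
    if pii:
        masked = {k: hashlib.sha256(v.encode()).hexdigest() if k in pii else v
                  for k, v in row.items()}
        images.append({"schema": "obfuscated", "table": table, "row": masked})
    return [json.dumps(i) for i in images]  # ready for the jsonb column

imgs = row_images("users", {"id": "1", "email": "jane@example.com"})
print(len(imgs))  # 2: one payload per destination schema
```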
The DDL. A real pain in the back
The DDL replica is possible with a little trick.
MySQL, even in ROW format, emits the DDL as statements
A regular expression traps DDL statements like CREATE/DROP TABLE or ALTER TABLE
The MySQL library gets the table’s metadata from the information schema
The metadata is used to build the DDL in the PostgreSQL dialect
This approach may not be elegant but is quite robust.
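A minimal sketch of the trap, with a simplified hypothetical pattern (the real pg_chameleon regular expressions are more complete):

```python
import re

# Trap DDL statements emitted in the binlog even in ROW format.
DDL_RE = re.compile(
    r"^\s*(CREATE\s+TABLE|DROP\s+TABLE|ALTER\s+TABLE)\s+`?(\w+)`?",
    re.IGNORECASE,
)

def trap_ddl(statement):
    """Return (ddl_kind, table_name), or None for non-DDL statements."""
    m = DDL_RE.match(statement)
    if not m:
        return None
    return (re.sub(r"\s+", " ", m.group(1)).upper(), m.group(2))

print(trap_ddl("ALTER TABLE `users` ADD COLUMN note varchar(50)"))
# ('ALTER TABLE', 'users')
print(trap_ddl("INSERT INTO users VALUES (1)"))  # None
```

The trapped table name is then looked up in the information schema, and the metadata drives the translation into the PostgreSQL dialect.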
Timing
Query                     MySQL            PostgreSQL  PostgreSQL cached
Master procedure          20 hours         4 hours     N/A
Extracting sharing ibans  didn't complete  3 minutes   1 minute
Adyen notification        6 minutes        2 minutes   6 seconds
Resource comparison
Resource         MySQL   PostgreSQL
Storage size     940 GB  664 GB
Server CPUs      18      8
Server memory    68 GB   48 GB
Shared memory    50 GB   5 GB
Max connections  500     100
Advantages using PostgreSQL
Stronger security model
Better resource optimisation (See previous slide)
No invalid views
No performance issues with views
Complex analytics functions
Partitioning (thanks pg_pathman!)
BRIN indices
“Some code was optimised inside, but actually very little - maybe 10-20% was improved. We’ll do more of that in the future, but not yet. The good thing is that the performance gains we have can mostly be attributed just to PG vs MySQL. So there’s a lot of scope to improve further.”
Jeff McClelland - Growth Analyst, data guru
Lessons learned
init_replica tuning
The replica initialisation required several improvements.
The first init_replica implementation didn’t complete: the OOM killer terminated the process when memory usage grew too high.
In order to speed up the replica, some large tables not required in the analytics db were excluded from the init_replica.
Some tables required a custom slice size because the row length triggered the OOM killer again.
Estimating the total rows for the user’s feedback is faster, but the output can be odd.
Using unbuffered cursors improves both the speed and the memory usage.
However... even after fixing the memory issues the initial copy took 6 days.
Tuning the copy with unbuffered cursors and row count estimates improved the speed: the initial copy now completes in 30 hours,
including the time required for the index build.
Strictness is an illusion. MySQL doubly so
MySQL’s lack of strictness is not a mystery. The replica broke down several times because of the funny way NOT NULL is managed by MySQL.
To prevent any further replica breakdowns, fields added as NOT NULL with ALTER TABLE are always created NULLable in PostgreSQL.
MySQL automatically truncates strings at the varchar size. This is a problem if the field is obfuscated on PostgreSQL, because the hashed string might not fit into the corresponding varchar field. Therefore all the character varying fields on the obfuscated schema are converted to text.
I feel your lack of constraint disturbing
Rubbish data can be stored in MySQL without the DBMS raising any errors.
When this happens, the replicator traps the error when the change is replayed on PostgreSQL and discards the problematic row.
The value is logged on the replica’s log, available for further actions.
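The trap and discard loop can be sketched as follows, with a hypothetical `apply_change` standing in for the real replay into PostgreSQL:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("replica")

def replay(changes, apply_change):
    """Replay captured changes; trap per-row failures, log the offending
    value and discard it instead of breaking the replica."""
    applied, discarded = 0, []
    for change in changes:
        try:
            apply_change(change)
            applied += 1
        except Exception as err:  # e.g. a constraint violation on PostgreSQL
            log.warning("discarded row %r: %s", change, err)
            discarded.append(change)
    return applied, discarded

def apply_change(change):
    # toy stand-in: reject the classic MySQL zero date
    if change["created"] == "0000-00-00":
        raise ValueError("invalid date")

changes = [{"id": 1, "created": "2017-01-25"},
           {"id": 2, "created": "0000-00-00"}]
applied, discarded = replay(changes, apply_change)
print(applied, len(discarded))  # 1 1
```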
Wrap up
Did you say hire?
WE ARE HIRING!
https://transferwise.com/jobs/
That’s all folks!
QUESTIONS?
Contacts and license
Twitter: @4thdoctor_scarf
Transferwise: https://transferwise.com/
Blog: http://www.pgdba.co.uk
Meetup: http://www.meetup.com/Brighton-PostgreSQL-Meetup/
This document is distributed under the terms of the Creative Commons license.
Boring legal stuff
The 4th doctor meme - source memecrunch.com
The eye, phantom playground, light end tunnel - Copyright Federico Campoli
The dolphin picture - Copyright artnoose
Deadpool Maximum Effort - source Deadpool Zoeiro
Deadpool Clap - source memegenerator