Rethinking Topology in Cassandra

Eric Evans
[email protected]
@jericevans

ApacheCon North America
February 28, 2013
DHT 101
DHT 101: partitioning

[Ring diagrams: nodes A, B, C, ..., Y, Z placed on a hash ring; Key = Aaa maps to a position on the ring]
DHT 101: replica placement

[Ring diagram: nodes A, B, C, ..., Y, Z; Key = Aaa is stored on multiple replicas around the ring]
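Not part of the original deck: a minimal Python sketch of what the two diagrams above show, assuming a toy five-node ring, a toy 0..99 token space, and SimpleStrategy-style placement (the owner plus the next RF-1 nodes clockwise). Node names and token values are invented.

import bisect
import hashlib

# Toy ring: one token per node, as in "classic" Cassandra.
RING = [(10, "A"), (30, "B"), (50, "C"), (70, "Y"), (90, "Z")]
TOKENS = [t for t, _ in RING]

def token_for(key):
    # Hash the key onto a tiny 0..99 token space (toy partitioner).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 100

def owner_index(key):
    # A key belongs to the first node clockwise whose token is >= the key's token.
    return bisect.bisect_left(TOKENS, token_for(key)) % len(RING)

def replicas(key, rf=3):
    # Replica placement: the owner plus the next rf-1 nodes around the ring.
    i = owner_index(key)
    return [RING[(i + k) % len(RING)][1] for k in range(rf)]

print(replicas("Aaa", rf=3))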
DHT 101: consistency

• Consistency
• Availability
• Partition tolerance
DHT 101: scenario: consistency level = one

[Diagram: a write W is acknowledged by replica A; the other two replicas are unknown (?)]
DHT 101: scenario: consistency level = all

[Diagram: a read R against replicas A, ?, ?]
DHT 101: scenario: quorum write

[Diagram: a write W is acknowledged by replicas A and B; the third replica is unknown (?); R+W > N]
DHT 101: scenario: quorum read

[Diagram: a read R is answered by replicas B and C; the third replica is unknown (?); R+W > N]
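Not in the deck: a small brute-force check of the R+W > N claim behind the two quorum slides, for the example values N=3, R=2, W=2; replica names are invented.

from itertools import combinations

N, R, W = 3, 2, 2              # replicas, read quorum, write quorum (R + W > N)
replicas = {"A", "B", "C"}

# Every possible write quorum shares at least one replica with every possible
# read quorum, so a quorum read always sees the latest quorum write.
overlap = all(
    set(w) & set(r)
    for w in combinations(replicas, W)
    for r in combinations(replicas, R)
)
print(overlap)  # True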
Awesome, yes?
Well...
Problem: poor load distribution
Distributing Load

[Ring diagrams: new nodes M and A1 join the ring of nodes A, B, C, ..., Y, Z]
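Not in the deck: a rough sketch of the problem the diagrams above illustrate. With a single token per node, a joining node splits only the range of the node whose token follows its own, so every other node keeps exactly the load it had. Token values and node names are invented.

def ownership(tokens_by_node, space=100):
    # Fraction of a toy 0..99 ring owned by each node (a range ends at its token).
    ring = sorted((t, n) for n, t in tokens_by_node.items())
    shares = {}
    for i, (t, n) in enumerate(ring):
        prev = ring[i - 1][0]          # wraps around for the first entry
        shares[n] = ((t - prev) % space) / space
    return shares

before = ownership({"A": 20, "B": 40, "C": 60, "Y": 80, "Z": 99})
after = ownership({"A": 20, "B": 40, "C": 60, "Y": 80, "Z": 99, "M": 30})

print(before)   # roughly even shares
print(after)    # M took half of B's range; A, C, Y, Z are unchanged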
Problem: poor data distribution
Distributing Data

[Ring diagrams: nodes E, F, G, H join the ring of nodes A, B, C, D and data is redistributed]
Virtual Nodes
In a nutshell...

[Diagram: many small token ranges (virtual nodes) spread across three hosts]
Benefits
• Operationally simpler (no token management)
• Better distribution of load
• Concurrent streaming involving all hosts
• Smaller partitions mean greater reliability
• Supports heterogeneous hardware
Strategies
• Automatic sharding
• Fixed partition assignment
• Random token assignment
Strategy: Automatic Sharding
• Partitions are split when data exceeds a threshold
• Newly created partitions are relocated to a host with lower data load
• Similar to sharding performed by Bigtable, or Mongo auto-sharding
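Not Cassandra code: a hand-wavy sketch of the splitting rule described above, with an invented size threshold and host names. A partition that grows past the threshold is split, and one half is relocated to the host carrying the least data.

THRESHOLD = 100  # arbitrary size limit for the example

def rebalance(partitions, load):
    # partitions: list of (size, host); load: total size per host.
    out = []
    for size, host in partitions:
        if size > THRESHOLD:
            half = size - size // 2
            target = min(load, key=load.get)   # least-loaded host gets the new half
            out += [(size // 2, host), (half, target)]
            load[host] -= half
            load[target] += half
        else:
            out.append((size, host))
    return out

load = {"h1": 150, "h2": 40, "h3": 60}
print(rebalance([(150, "h1"), (40, "h2"), (60, "h3")], load))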
Strategy: Fixed Partition Assignment
• Namespace divided into Q evenly-sized partitions
• Q/N partitions assigned per host (where N is the number of hosts)
• Joining hosts “steal” partitions evenly from existing hosts.
• Used by Dynamo and Voldemort (described in Dynamo paper as “strategy 3”)
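Not from the deck: a minimal sketch of fixed partition assignment with an invented Q and host names. The namespace is pre-split into Q partitions, and a joining host steals roughly Q/N of them, taken evenly from the existing hosts.

from collections import defaultdict

Q = 12  # number of partitions, fixed up front

def assign(hosts):
    # Deal the Q partitions out round-robin.
    return {p: hosts[p % len(hosts)] for p in range(Q)}

def join(owners, new_host):
    # The newcomer steals Q/N partitions, one at a time from each existing host.
    per_host = defaultdict(list)
    for p, h in sorted(owners.items()):
        per_host[h].append(p)
    donors = sorted(per_host)
    for i in range(Q // (len(donors) + 1)):
        donor = donors[i % len(donors)]
        owners[per_host[donor].pop()] = new_host
    return owners

owners = join(assign(["h1", "h2", "h3"]), "h4")
print(sorted(owners.items()))   # each of h1..h4 ends up with Q/4 partitions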
Strategy: Random Token Assignment
• Each host assigned T random tokens
• T random tokens generated for joining hosts; new tokens divide existing ranges
• Similar to libketama; identical to classic Cassandra when T=1
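Not from the deck: a small sketch of random token assignment with invented host names, T, and token space. Each host draws T random tokens; a joining host's tokens land inside, and split, existing ranges all over the ring (with T=1 this degenerates into the classic single-token layout).

import bisect
import random

random.seed(0)
SPACE = 1 << 32
T = 64  # tokens per host; the deck configures num_tokens: 256

ring = sorted(
    (random.randrange(SPACE), host)
    for host in ("h1", "h2", "h3")
    for _ in range(T)
)
tokens = [t for t, _ in ring]

# A joining host draws its own T random tokens; each one splits whichever
# existing range it falls into, so it takes small slices from many hosts.
donors = set()
for t in (random.randrange(SPACE) for _ in range(T)):
    i = bisect.bisect_left(tokens, t) % len(ring)
    donors.add(ring[i][1])

print(sorted(donors))  # typically ['h1', 'h2', 'h3']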
Considerations
1. Number of partitions
2. Partition size
3. How the number of partitions changes with more nodes and data
4. How the partition size changes with more nodes and data
Evaluating

Strategy        No. partitions   Partition size
Random          O(N)             O(B/N)
Fixed           O(1)             O(B)
Auto-sharding   O(B)             O(1)

(B ~ total data size, N ~ number of hosts)
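Not in the deck: the same table evaluated for one invented data point, B = 12 TiB on N = 12 hosts, with T = 256 tokens per host for the random strategy, Q = 1024 fixed partitions, and a 1 GiB auto-shard split threshold.

B = 12 * 2**40   # total data, bytes
N = 12           # hosts
T = 256          # tokens per host (random strategy)
Q = 1024         # fixed partition count (fixed strategy)
SHARD = 2**30    # auto-shard split threshold, bytes

rows = {
    # strategy: (number of partitions, approximate bytes per partition)
    "random": (N * T, B // (N * T)),
    "fixed": (Q, B // Q),
    "auto-sharding": (B // SHARD, SHARD),
}
for name, (count, size) in rows.items():
    print(f"{name:<14} {count:>6} partitions, ~{size / 2**30:.0f} GiB each")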
Evaluating

• Automatic sharding
  • Partition size constant (great)
  • Number of partitions scales linearly with data size (bad)
• Fixed partition assignment
• Random token assignment
Evaluating

• Automatic sharding
• Fixed partition assignment
  • Number of partitions is constant (good)
  • Partition size scales linearly with data size (bad)
  • Higher operational complexity (bad)
• Random token assignment
Evaluating

• Automatic sharding
• Fixed partition assignment
• Random token assignment
  • Number of partitions scales linearly with number of hosts (ok)
  • Partition size increases with more data; decreases with more hosts (good)
Cassandra
Configuration: conf/cassandra.yaml

# Comma separated list of tokens,
# (new installs only).
initial_token: <token>,<token>,<token>

or

# Number of tokens to generate.
num_tokens: 256
Configuration: nodetool info

Token            : (invoke with -T/--tokens to see all 256 tokens)
ID               : 64090651-6034-41d5-bfc6-ddd24957f164
Gossip active    : true
Thrift active    : true
Load             : 92.69 KB
Generation No    : 1351030018
Uptime (seconds) : 45
Heap Memory (MB) : 95.16 / 1956.00
Data Center      : datacenter1
Rack             : rack1
Exceptions       : 0
Key Cache        : size 240 (bytes), capacity 101711872 (bytes ...
Row Cache        : size 0 (bytes), capacity 0 (bytes), 0 hits, ...
Configuration: nodetool ring

Datacenter: datacenter1
==========
Replicas: 2

Address    Rack   Status  State   Load      Owns    Token
                                                    9022770486425350384
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -9182469192098976078
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -9054823614314102214
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8970752544645156769
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8927190060345427739
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8880475677109843259
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8817876497520861779
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8810512134942064901
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8661764562509480261
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8641550925069186492
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8636224350654790732
...
Configuration: nodetool status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load     Tokens  Owns   Host ID                               Rack
UN  10.0.0.1  97.2 KB  256     66.0%  64090651-6034-41d5-bfc6-ddd24957f164  rack1
UN  10.0.0.2  92.7 KB  256     66.2%  b3c3b03c-9202-4e7b-811a-9de89656ec4c  rack1
UN  10.0.0.3  92.6 KB  256     67.7%  e4eef159-cb77-4627-84c4-14efbc868082  rack1
Migration

[Ring diagram: an existing cluster of nodes A, B, C, one token each]
Migration: edit conf/cassandra.yaml and restart

# Number of tokens to generate.
num_tokens: 256
Migration: convert to T contiguous tokens in existing ranges

[Ring diagram: each node's existing range is subdivided into T contiguous virtual nodes]
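Not the actual migration code: a sketch of the conversion step above under an invented token space and T. Each node's existing range is cut into T contiguous pieces, so ownership (and therefore data placement) is unchanged; only the token count grows.

T = 8            # num_tokens after the restart (the deck uses 256)
SPACE = 2**64    # token space for the example

def split_range(prev_token, token, t=T, space=SPACE):
    # Cut the range (prev_token, token] into t contiguous, equal-ish sub-ranges;
    # the last new token equals the old one, so the node's overall span is the same.
    width = (token - prev_token) % space
    return [(prev_token + (i + 1) * width // t) % space for i in range(t)]

print(split_range(1000, 9000))   # [2000, 3000, ..., 9000]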
Migration: shuffle

[Ring diagram: the contiguous virtual nodes are randomly relocated around the ring]
Shuffle
• Range transfers are queued on each host
• Hosts initiate transfer of ranges to self
• Pay attention to the logs!
Shuffle: bin/shuffle

Usage: shuffle [options] <sub-command>

Sub-commands:
  create     Initialize a new shuffle operation
  ls         List pending relocations
  clear      Clear pending relocations
  en[able]   Enable shuffling
  dis[able]  Disable shuffling

Options:
  -dc,  --only-dc        Apply only to named DC (create only)
  -tp,  --thrift-port    Thrift port number (Default: 9160)
  -p,   --port           JMX port number (Default: 7199)
  -tf,  --thrift-framed  Enable framed transport for Thrift (Default: false)
  -en,  --and-enable     Immediately enable shuffling (create only)
  -H,   --help           Print help information
  -h,   --host           JMX hostname or IP address (Default: localhost)
  -th,  --thrift-host    Thrift hostname or IP address (Default: JMX host)
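Not shown in the deck, but implied by the usage text above: a plausible sequence is to create the relocation plan, list what is pending, and then enable shuffling (extra connection flags depend on your environment).

bin/shuffle create
bin/shuffle ls
bin/shuffle enable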
Performance
removenode

[Bar chart comparing Cassandra 1.2 and Cassandra 1.1; axis 0 to 400]
bootstrap

[Bar chart comparing Cassandra 1.2 and Cassandra 1.1; axis 0 to 500]
The End
• DeCandia, Giuseppe, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. “Dynamo: Amazon’s Highly Available Key-value Store.” Web.
• Low, Richard. “Improving Cassandra’s uptime with virtual nodes.” Web.
• Overton, Sam. “Virtual Nodes Strategies.” Web.
• Overton, Sam. “Virtual Nodes: Performance Results.” Web.
• Jones, Richard. “libketama - a consistent hashing algo for memcache clients.” Web.