DHT-based P2P Architecture
There is no server to store game states.
A client can store its own player's state, but what about the states of NPCs, objects, etc.?
It is not scalable to replicate the states of every object in the game in every client.
Idea: split the responsibility of storing the states among the clients.
Who stores what?

  Object  Client
  X       1
  Y       2
  Z       3
  :       :

To find out who stores what, each client can maintain such a table.
But the table must be updated at every node frequently. Still not scalable.
Idea: split the responsibility of storing the table among the clients.
DHT: Distributed Hash Table

A hash table supports:
insert(key, object)
delete(key)
obj = lookup(key)

A distributed hash table supports the same operations:
insert(key, object)
delete(key)
obj = lookup(key)
but the objects can be stored in any node in the network.
Example: given a torrent file, find the list of peers seeding or downloading the file.
How to do this in a fully distributed manner?
Idea: have a set of established rules to decide which key (object) is stored in which node.
Rule: assign IDs to nodes and objects. An object is stored in the node with the closest ID.
Two questions follow: how to assign IDs, and, given an object, how to find the closest node?
Pastry: A Distributed Hash Table
How to assign IDs?
To assign an ID, we can hash some identifier (e.g., an IP address, URL, or name) into a fixed-length string (e.g., 128 bits).
An ID is of the form d1 d2 d3 ... dm, with each digit di in {0, 1, 2, ..., n-1}.
Example IDs (n = 10, m = 4): 0514, 2736, 4090.
Example IDs (n = 3, m = 4): 1210, 1102, 2011.
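The hashing step above can be sketched in a few lines of Python. This is an illustrative sketch, not Pastry's actual hash (Pastry uses a 128-bit hash); the function name is my own.

```python
import hashlib

def node_id(name, n=3, m=4):
    """Hash an arbitrary name (IP address, URL, player name, ...) into
    an m-digit ID with each digit in {0, ..., n-1}."""
    h = int(hashlib.sha1(name.encode()).hexdigest(), 16)
    digits = ""
    for _ in range(m):
        digits += str(h % n)   # take one base-n digit at a time
        h //= n
    return digits

print(node_id("137.12.1.0"))          # a 4-digit base-3 ID
print(node_id("example.com", n=10))   # a 4-digit base-10 ID
```

The same function hashes both nodes and objects, so they land in one shared ID space.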
24
00002222
25
00002222
26
00002222
Any object whose IDs that falls within the blue region is stored in node A.
A
27
Given an object ID, how to find the closest node?
Each node only knows a small, constant number of nodes, kept in a routing table.

Routing table for node 1201:

  0121  137.12.1.0
  2001  22.31.90.9
  1021  45.24.8.233
  1121  :
  1210  :
  1222  :
  1200  :

A node knows m × (n-1) neighbors: m groups, each group with n-1 entries.
Each node i keeps a table:
next(k, d) = address of node j such that
  1. i and j share a prefix of length k
  2. the (k+1)-th digit of j is d
  3. node j is the "physically closest" such match

For node 1201, the table entries are:

  next(0,0) = 0121  (137.12.1.0)
  next(0,2) = 2001  (22.31.90.9)
  next(1,0) = 1021  (45.24.8.233)
  next(1,1) = 1121
  next(2,1) = 1210
  next(2,2) = 1222
  next(3,0) = 1200
  next(3,2) = (no entry)
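The next(k, d) rule can be sketched directly from its definition. A minimal sketch, with names of my own; the tie-breaking by physical closeness is simplified to "first node seen":

```python
def shared_prefix_len(a, b):
    """Length of the common prefix of two ID strings."""
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

def build_routing_table(me, others):
    """table[(k, d)] = a node sharing a k-digit prefix with `me` whose
    (k+1)-th digit is d. A real Pastry node would break ties by picking
    the physically closest candidate; here we keep the first seen."""
    table = {}
    for node in others:
        k = shared_prefix_len(me, node)
        table.setdefault((k, node[k]), node)
    return table

table = build_routing_table(
    "1201", ["0121", "2001", "1021", "1121", "1210", "1222", "1200"])
print(table[(0, "0")])  # 0121
print(table[(2, "1")])  # 1210
```

Running this on the seven neighbors of node 1201 reproduces the example table above, including the empty next(3,2) slot.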
In addition, each node knows the L other nodes with the closest IDs (L/2 above, L/2 below). This set is called the node's leaf set.
[Figure: example routing pointers from node 1201 to nodes 0121, 2101, 1021, and 1121 in the ID space.]
Recall that we want to find the node whose ID is closest to the ID of a given object:
node = route(object_id)
route(0212) is issued at node 1211. 1211 forwards the request to next(0,0), node 0100.
route(0212) is received at 0100. 0100 forwards the request to next(1,2), node 0201.
route(0212) is received at 0201. 0201 forwards the request to next(2,1), node 0210.
0210 finds that 0212 falls within the range of its leaf set, and forwards the request to the closest node, 0211.
After 4 lookups, we found that the node closest to 0212 is 0211.
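The walkthrough above can be sketched as follows. This is a toy model, not Pastry's actual implementation: the routing tables contain only the entries needed for this one lookup, IDs are base-3 strings, and the leaf-set fallback simply picks the numerically closest known node.

```python
def shared_prefix_len(a, b):
    """Length of the common prefix of two ID strings."""
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

# Per-node state: "next" is the routing table, "leaf" the leaf set.
tables = {
    "1211": {"next": {(0, "0"): "0100"}, "leaf": []},
    "0100": {"next": {(1, "2"): "0201"}, "leaf": []},
    "0201": {"next": {(2, "1"): "0210"}, "leaf": []},
    "0210": {"next": {}, "leaf": ["0211"]},
    "0211": {"next": {}, "leaf": ["0210"]},
}

def route(key, at):
    """Forward `key` one hop at a time until no closer node is known."""
    path = [at]
    while True:
        k = shared_prefix_len(at, key)
        nxt = tables[at]["next"].get((k, key[k])) if k < len(key) else None
        if nxt is None:
            # No routing-table entry: fall back to the leaf set and
            # pick the node numerically closest to the key.
            best = min(tables[at]["leaf"] + [at],
                       key=lambda n: abs(int(n, 3) - int(key, 3)))
            if best == at:
                return path    # no strictly closer node: we are done
            nxt = best
        at = nxt
        path.append(at)

print(route("0212", "1211"))  # ['1211', '0100', '0201', '0210', '0211']
```

Each hop fixes at least one more digit of the key's prefix, which is why the number of hops is bounded by the ID length m.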
Results of route() can be cached to avoid frequent lookups.
We can now implement the following using route():
insert(key, object)
delete(key)
obj = lookup(key)
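A minimal sketch of these three operations on top of routing. All names here are my own; `route` is a stand-in that picks the numerically closest node directly, where a real DHT would find it by forwarding hop by hop.

```python
import hashlib

# Toy three-node "network": node ID -> that node's local store.
# Node IDs reuse the earlier example IDs (n = 10, m = 4).
nodes = {"0514": {}, "2736": {}, "4090": {}}

def object_id(key, n=10, m=4):
    """Hash a key into an m-digit base-n ID."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return "".join(str((h // n**i) % n) for i in range(m))

def route(obj_id):
    """Stand-in for Pastry routing: the node with the closest ID."""
    return min(nodes, key=lambda nid: abs(int(nid) - int(obj_id)))

def insert(key, obj):
    nodes[route(object_id(key))][key] = obj

def delete(key):
    nodes[route(object_id(key))].pop(key, None)

def lookup(key):
    return nodes[route(object_id(key))].get(key)

insert("sword-17", {"owner": "X"})
print(lookup("sword-17"))  # {'owner': 'X'}
```

Because both insert and lookup hash the key and route to the same node, a lookup always lands where the matching insert stored the object.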
Joining Procedure
Suppose node 0212 wants to join. It finds an existing node (e.g., 1211) to run route(0212).
Routing table entries are copied from the nodes encountered along the way.
The leaf set is initialized from the leaf set of the node that route(0212) reaches.
Scribe: Application-Level Multicast over Pastry
Recall: we use multicast to implement interest management.
In application-level multicast, nodes forward messages to each other.
Who should forward to whom?
Idea: use Pastry's routing tables to construct the multicast tree.
[Figure: nodes 0210, 0200, 0221, 0110, 2101, 0001, and 1200 in the ID space.]
A group is assigned an ID, e.g., 0211.
The node with the ID closest to the group ID (here, 0210) becomes the rendezvous point.
Node 1200 joins the group by routing a join message towards the group ID.
The join message stops once it reaches a node already on the multicast tree for group 0211.
The path traversed by this join message becomes part of the multicast tree.
To multicast a message, send it to the rendezvous point, from which it is disseminated along the tree.
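The tree-building rule above can be sketched as follows. Function names and data layout are my own; `route_path` stands for the Pastry route from the joining node towards the group's rendezvous point.

```python
def join(member, route_path, tree):
    """Walk the route towards the rendezvous point, turning traversed
    hops into tree edges; stop at the first node already on the tree."""
    prev = member
    for hop in route_path[1:]:
        if prev in tree:          # prev already has a parent: stop here
            return
        tree[prev] = hop          # edge: prev -> parent
        prev = hop

def multicast(root, tree):
    """Nodes reached by disseminating from the rendezvous point down
    the tree (parents forward to their children)."""
    children = {}
    for child, parent in tree.items():
        children.setdefault(parent, []).append(child)
    reached, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        reached.append(node)
        frontier.extend(children.get(node, []))
    return reached

tree = {}
join("1200", ["1200", "0200", "0210"], tree)   # 1200's join message
join("0110", ["0110", "0210"], tree)           # 0110's join message
print(sorted(multicast("0210", tree)))  # ['0110', '0200', '0210', '1200']
```

Note how the second join stops immediately at 0210: the message only needs to reach any node already on the tree, not the rendezvous point itself.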
Who stores what?
Knutsson's idea: divide the game world into regions and assign a region coordinator to keep the states of each region (e.g., an object's mana=9, life=3).
When a player needs to read/write the state of an object (e.g., player X's mana=10), it contacts the coordinator.
Hash regions and nodes into the same DHT ID space. The node whose ID is closest to the ID of a region becomes that region's coordinator.
The coordinator is unlikely to be in the same region it is coordinating, which reduces the possibility of cheating.
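The region-to-coordinator mapping can be sketched as below. All names are my own, and the min-over-all-nodes scan is a stand-in for the DHT routing that would find the closest node in practice.

```python
import hashlib

def dht_id(name, bits=16):
    """Hash a region name or a node address into one shared ID space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

def coordinator(region, node_addrs):
    """The node whose ID is closest to the region's ID coordinates it."""
    rid = dht_id(region)
    return min(node_addrs, key=lambda a: abs(dht_id(a) - rid))

node_addrs = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
print(coordinator("region(3,5)", node_addrs))
```

Because the hash ignores game geography, the chosen coordinator is essentially a random node, which is exactly what makes it unlikely to sit in the region it coordinates.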
[Figure: nodes interested in a region, connected by the edges of a multicast tree.]
Once an update message reaches the coordinator, the coordinator informs all subscribers to the region through a multicast tree using Scribe.
What if the coordinator fails?
Example: messages for region 1111 are routed to the node with the closest ID (here, 1110).
If that coordinator fails, Pastry routes the messages to the next closest node (here, 1112).
Use the node with the next closest ID to the region as the backup coordinator.
The primary coordinator knows the backup from its leaf set and replicates the states to it.
If the backup receives messages for a region, it knows that the primary has failed, and it takes over the responsibility.
Issues with Knutsson's scheme
1. No defense against cheating.
2. Large latency when (i) looking up objects in a region, (ii) creating new objects, and (iii) updating the states of objects.
3. Extra load on coordinators.
4. Frequent changes of coordinator for fast-moving players.
Knutsson's design targets MMORPGs (slow-paced, latency-tolerant).
Can a similar architecture be used for FPS games?
Can we reduce the latency?
1. Cache route() results to avoid frequent lookups.
2. Prefetch objects near the AoI to reduce delay.
3. Don't use Scribe; use direct connections between peers, as in VON.
Recap
Without a trusted central server:
1. How to order events?
2. How to prevent cheating?
3. How to do interest management?
4. Who should store the states?
Many interesting proposals, but no perfect solution:
1. Increased message overhead
2. Increased latency
3. No conflict resolution
4. Cheating
5. Robustness is hard
Many of the tricks we learnt from pure P2P architectures remain useful if we have a cluster of servers for games: "P2P among servers".
Next, Part III of CS4344: Hybrid Architecture.