DISTRIBUTED MUTEX
EE324 Lecture 11
Vector Clocks
- Vector clocks overcome a shortcoming of Lamport logical clocks: L(e) < L(e') does not imply that e happened before e'.
- Goal: an ordering that matches causality, i.e., V(e) < V(e') if and only if e → e'.
- Method: label each event e with a vector V(e) = [c1, c2, ..., cn], where ci = the number of events in process i that causally precede e.
Vector Clock Algorithm
- Initially, all vectors are [0, 0, ..., 0].
- For an event on process i, increment its own entry ci.
- Label each message sent with the local vector.
- When process j receives a message carrying vector [d1, d2, ..., dn]: set each local entry k to max(ck, dk), then increment its own entry cj. (A sketch follows.)
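A minimal sketch of this algorithm in Python (the class and method names are ours, not from the lecture; a real system would carry the vectors on network messages):

    class VectorClock:
        """One process's vector clock; illustrative, not the lecture's code."""

        def __init__(self, n, i):
            self.i = i            # this process's own index
            self.v = [0] * n      # initially [0, 0, ..., 0]

        def local_event(self):
            self.v[self.i] += 1   # increment own entry ci for each local event
            return list(self.v)

        def send(self):
            # Sending is itself an event; piggyback the resulting vector.
            return self.local_event()

        def receive(self, d):
            # Set each local entry k to max(ck, dk), then increment own entry.
            self.v = [max(c, dk) for c, dk in zip(self.v, d)]
            self.v[self.i] += 1
            return list(self.v)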
Vector Clocks
- Vector timestamps are used to timestamp local events.
- They are applied in schemes for replication of data.
Vector Clocks
- At p1: a occurs at (1,0,0); b occurs at (2,0,0); piggyback (2,0,0) on m1.
- At p2, on receipt of m1: take max((0,0,0), (2,0,0)) = (2,0,0), then add 1 to p2's own element, giving (2,1,0). (This step is worked directly below.)
- The meaning of =, <=, max, etc. for vector timestamps: compare elements pairwise.
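The receive step above can be checked directly; a tiny sketch (variable names are ours):

    # p2 merges its local vector with the piggybacked one, then counts itself.
    local, piggyback, own_index = [0, 0, 0], [2, 0, 0], 1   # p2 owns index 1
    merged = [max(c, d) for c, d in zip(local, piggyback)]
    merged[own_index] += 1
    print(merged)   # [2, 1, 0]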
Vector Clocks
- Note that e → e' implies V(e) < V(e'). The converse is also true.
- Can you see a pair of parallel events? c || e (parallel), because neither V(c) <= V(e) nor V(e) <= V(c). (A sketch of this check follows.)
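A sketch of the pairwise comparison (function names are ours, and the timestamps below are assumed values chosen only to match the slide's c || e claim, not read off the figure):

    def vc_leq(a, b):
        # V(a) <= V(b) iff every component of a is <= the matching one in b
        return all(x <= y for x, y in zip(a, b))

    def concurrent(a, b):
        # a || b: neither timestamp dominates the other
        return not vc_leq(a, b) and not vc_leq(b, a)

    # Assumed example timestamps:
    print(concurrent((2, 1, 0), (0, 0, 1)))   # True: the two are parallel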
[Figure 14.6: Lamport timestamps for the events shown in Figure 14.5 (CDK5)]
[Figure 14.7: Vector timestamps for the events shown in Figure 14.5 (CDK5)]
- Logical clocks, including vector clocks, do not capture everything.
- Example: out-of-band communication, where causal influence flows through channels the clocks never see.
Distributed Mutex (Reading CDK5 15.2)
- We learned about mutexes, semaphores, and condition variables within a single system. What do they have in common? They require shared state, and we kept that state in memory.
- Distributed mutex: there is no shared memory. How do we implement it? Message passing.
- Challenges: messages can be dropped, and processes can fail.
Distributed Mutex (Reading CDK5 15.2)
- Entering/leaving a critical section (a usage sketch follows this list):
  - Enter() --- block if necessary
  - ResourceAccesses() --- access the shared resource (inside the CS)
  - Leave()
- Goal:
  - Safety: at most one process may execute in the CS at any one time.
  - Liveness: requests to enter/exit the CS eventually succeed (no deadlock or starvation).
  - Ordering: if one entry request happened before another, then entry to the CS must happen in that order.
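A usage sketch of the Enter/Access/Leave pattern, with a single-machine threading.Lock standing in for a distributed mutex (the stand-in and the resource_accesses stub are ours):

    import threading

    # Stand-in only: a real distributed mutex replaces this lock with
    # message passing among processes that share no memory.
    lock = threading.Lock()

    def resource_accesses():
        print("inside the critical section")   # placeholder for real work

    lock.acquire()            # Enter(): block if necessary
    try:
        resource_accesses()   # access the shared resource inside the CS
    finally:
        lock.release()        # Leave()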
Distributed Mutex (Reading CDK5 15.2): Ordering
- Example explained in class.
- Other performance objectives: reduce the number of messages; minimize synchronization delay.
Mutual Exclusion: A Centralized Algorithm
1. Process 1 asks the coordinator for permission to access a shared resource; permission is granted.
2. Process 2 then asks permission to access the same resource; the coordinator does not reply (the request is queued).
3. When process 1 releases the resource, it tells the coordinator, which then replies to process 2. (A sketch of the coordinator's bookkeeping follows.)
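A minimal sketch of the coordinator's bookkeeping (names and the in-process call style are ours; real requests, grants, and releases would be network messages):

    from collections import deque

    class Coordinator:
        def __init__(self):
            self.holder = None       # process currently holding the resource
            self.waiting = deque()   # queued requests, FIFO

        def request(self, pid):
            # Grant at once if free; otherwise queue the request silently.
            if self.holder is None:
                self.holder = pid
                return True          # the "permission granted" reply
            self.waiting.append(pid)
            return False             # no reply yet; the requester stays blocked

        def release(self, pid):
            assert self.holder == pid
            self.holder = self.waiting.popleft() if self.waiting else None
            return self.holder       # next process the coordinator replies to

    coord = Coordinator()
    print(coord.request(1))   # True  -- process 1 enters
    print(coord.request(2))   # False -- process 2 waits
    print(coord.release(1))   # 2     -- coordinator now replies to process 2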
Mutual Exclusion: A Centralized Algorithm
- Advantages:
  - Simple; small delay (one RTT) to acquire the mutex.
  - Only 3 messages required to enter and leave the critical section.
- Disadvantages:
  - Single point of failure.
  - Central performance bottleneck.
  - Does not ensure ordering (example?).
  - Must elect a master in a consistent fashion.
A Token Ring Algorithm
- An unordered group of processes on a network.
- A logical ring constructed in software.
- Use the ring to pass the right to access the resource. (A toy simulation follows.)
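A toy simulation of the idea (names are ours; real processes would forward the token as a network message rather than a loop variable):

    def token_ring(n, wants_cs, laps=1):
        # Circulate the token; whoever holds it may enter the critical section.
        token_at = 0
        for _ in range(laps * n):
            if wants_cs(token_at):
                print(f"process {token_at} enters the critical section")
            token_at = (token_at + 1) % n   # pass the token to the next process

    token_ring(4, wants_cs=lambda p: p == 2)   # only process 2 wants the CS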
A Token Ring Algorithm
- Benefits: simple.
- Problems:
  - Failure recovery can be difficult: a single process failure can break the ring. But a failure can be recovered by dropping the failed process from the logical ring.
  - Does not ensure ordering.
  - Long synchronization delay: may need to wait for up to N-1 messages, for N processes.
Lamport’s Shared Priority Queue
- Maintain a global priority queue of requests for the critical section.
- But each process has its own queue; the ordering inside the queues is enforced by Lamport's clock. Thus, we enforce ordering.
Lamport’s Shared Priority Queue
- Each process i locally maintains Qi (its own version of the priority queue).
- To execute the critical section, you must have replies from all other processes AND your request must be at the front of Qi.
- When you have all replies:
  - All other processes are aware of your request (because the request happens before the response).
  - You are aware of any earlier requests (assuming messages from the same process are not reordered).
Lamport’s Shared Priority Queue
- To enter the critical section at process i:
  - Stamp your request with the current time T.
  - Add the request to Qi.
  - Broadcast REQUEST(T) to all processes.
  - Wait for all replies and for T to reach the front of Qi.
- To leave: pop the head of Qi and broadcast RELEASE to all processes.
- On receipt of REQUEST(T') from process j: add T' to Qi. If you are waiting for a REPLY from j for an earlier request T, wait until j replies to you; otherwise REPLY.
- On receipt of RELEASE: pop the head of Qi.
(A sketch of this algorithm follows.)
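A structural sketch of one process's state in this algorithm, assuming reliable FIFO channels. The broadcast/reply callbacks are placeholders we introduce, and the slide's reply-deferral rule is simplified here to an immediate REPLY:

    import heapq

    class LamportMutex:
        def __init__(self, pid, peers):
            self.pid = pid
            self.peers = set(peers)   # ids of all other processes
            self.clock = 0            # Lamport clock
            self.queue = []           # Qi: heap of (timestamp, pid) requests
            self.replies = set()      # peers that replied to our open request
            self.my_request = None

        def request_cs(self, broadcast):
            self.clock += 1
            self.my_request = (self.clock, self.pid)   # stamp with time T
            heapq.heappush(self.queue, self.my_request)
            self.replies.clear()
            broadcast(("REQUEST", self.my_request))

        def can_enter(self):
            # All replies received AND our request at the front of Qi.
            return (self.replies == self.peers
                    and bool(self.queue) and self.queue[0] == self.my_request)

        def on_request(self, ts, reply):
            self.clock = max(self.clock, ts[0]) + 1
            heapq.heappush(self.queue, ts)
            reply(("REPLY", self.pid))   # deferral for an earlier own request
                                         # is elided in this sketch

        def on_reply(self, sender):
            self.replies.add(sender)

        def release_cs(self, broadcast):
            heapq.heappop(self.queue)    # our request is at the head
            self.my_request = None
            broadcast(("RELEASE", self.pid))

        def on_release(self):
            heapq.heappop(self.queue)    # pop the head of Qi

Each entry/exit generates N-1 REQUESTs, N-1 REPLYs, and N-1 RELEASEs, matching the 3(N-1) message count cited below.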
Lamport’s Shared Priority Queue
- Advantages: fair; short synchronization delay.
- Disadvantages:
  - Very unreliable (any process failure halts progress).
  - 3(N-1) messages per entry/exit.
Announcements
- Midterm: in the process of making it.
- Written exam: take-home exam. Honor code: only the textbook and lecture notes; no discussion; no Internet.
- Release at noon 10/24; due Friday evening 10/25.
- Programming part: you may use the Internet, but no discussion.