SOEN 423: Project Report
In fulfillment of SOEN 423, Fall 2009
Ver. 1.2    Project ID: 3    12/11/2009
Team Members

Date        Rev.  Description      Author(s)   Contributor(s)
10/12/2009  1.0   First Draft      Ali Ahmed   The Team
11/12/2009  1.2   Document Review  Ali Ahmed   The Team

Concordia University, Montreal
Fall 2009
Table of Contents

1. Introduction
2. Problem Statement
3. Design Description
4. Implementation Details
   4.1. Corba
   4.2. Client
   4.3. Front End
   4.4. Replica Manager
   4.5. Branch Servers / Replicas
   4.6. Byzantine Scenarios
   4.7. Reliable FIFO communication via UDP
   4.8. Synchronization
   4.9. Asynchronous call back in Corba
5. Test cases and overview
6. Team Organization and Contribution
7. Conclusion
1. Introduction
This report is in fulfillment of the requirements for the SOEN 423 Distributed System Programming project for Fall 2009. It describes the problem specified, the design and implementation of our solution, the resulting output from the system, and its verification by test cases.
2. Problem Statement
We were required to implement a Distributed Banking System (DBS), extending the core idea of the individual assignments. Our project (group of 3) was to have the following features:

A failure-free front end (FE) which receives requests from the clients as CORBA invocations, atomically broadcasts the requests to the server replicas, and sends a single correct result back to the client by properly combining the results received from the replicas. The FE should also keep track of the replica which produced an incorrect result (if any) and inform the replica manager (RM) to replace the failed replica. The FE should also be multithreaded so that it can handle multiple concurrent client requests using one thread per request.
A replica manager (RM) which creates and initializes the actively replicated server subsystem.
The RM also manages the server replica group information (which the FE uses for request
forwarding) and replaces a failed replica with another one when requested by the FE.
A reliable FIFO communication subsystem over the unreliable UDP layer for the communication
between replicas.
3. Design Description
Based on the requirements we identified a possible bottleneck: the methods in our assignment code had return types, which would cause calls to block and hence lower performance. Additionally, Branch Servers had to be destroyed and re-instantiated, so we reference them through a
Branch Server Proxy Object. Transfer operations were delegated to the FEs, which split them into deposit and withdraw requests since they may need to contact other FEs. Each FE and its associated group of replicas deals with a subset of accounts. If both accounts in a transfer map to the same FE, the withdraw for the first account is routed to that FE, and a message is then sent to the same FE to perform the deposit operation, as illustrated in the sketch after the figure.
Fig. High-level design overview of the implementation and deployment.
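For illustration only, the following sketch shows how a transfer could be split into a withdraw routed to the FE owning the source account followed by a deposit routed to the FE owning the destination account. The class and helper names (TransferSplitter, branchOf, feByBranch) are ours, not the project's actual code, and the FE-to-FE messaging is omitted.

//---------------
import dbs.corba.FailureFreeFE;

import java.util.Map;

// Simplified view of the transfer split described above: each account is
// routed to the FE responsible for its branch (first two digits).
public class TransferSplitter {

    private final Map<String, FailureFreeFE> feByBranch; // hypothetical branch-to-FE table

    public TransferSplitter(Map<String, FailureFreeFE> feByBranch) {
        this.feByBranch = feByBranch;
    }

    public void transfer(long srcAccount, long destAccount, float amount) {
        FailureFreeFE srcFe = feByBranch.get(branchOf(srcAccount));
        FailureFreeFE destFe = feByBranch.get(branchOf(destAccount));

        srcFe.withdraw(srcAccount, amount);   // withdraw on the source account's FE
        destFe.deposit(destAccount, amount);  // deposit on the destination account's FE
    }

    private static String branchOf(long accountNum) {
        return String.valueOf(accountNum).substring(0, 2);
    }
}
//---------------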
4. Implementation Details
4.1. Corba
All CORBA elements are defined in our IDL file, which is listed below. Note the callback object and the void method declarations.
Our IDL File
//---------------
module dbs
{
    module corba
    {
        interface CallBack
        {
            void responseMessage(in string message);
        };

        interface FailureFreeFE
        {
            string sayHello();
            void deposit(in long long accountNum, in float amount);
            void withdraw(in long long accountNum, in float amount);
            void balance(in long long accountNum);
            void transfer(in long long src_accountNum, in long long dest_accountNum, in float amount);
            void requestResponse(in CallBack cb);
            oneway void shutdown();
        };
    };
};
//---------------
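As an illustration of how this IDL maps to the client side, a minimal servant for the CallBack interface might look as follows, assuming the standard Java IDL (idlj) mapping that generates a dbs.corba.CallBackPOA skeleton; the class name ClientCallBack is ours and not taken from the project code.

//---------------
import dbs.corba.CallBackPOA;

// Client-side servant for the CallBack interface declared in the IDL.
// The FE invokes responseMessage() once the replica results have been combined.
public class ClientCallBack extends CallBackPOA {
    public void responseMessage(String message) {
        // Display the asynchronous response from the front end.
        System.out.println("FE response: " + message);
    }
}
//---------------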
4.2. Client
The client simply registers itself with the ORB daemon; it maps front-end references to the branch ID of the requested accounts (the first two digits of the account number). It also registers the callback object with the FE when making a request. There is no blocking at any stage, and since the FE response is asynchronous, multiple requests can be sent while the responses arrive later. A sketch of this client-side flow is given below.
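The following is a minimal sketch of that flow, reusing the ClientCallBack servant sketched in Section 4.1. It assumes the FEs are registered with the CORBA naming service under names of the form "FE" + branch ID; that naming convention, and the class name ClientSketch, are our assumptions rather than the project's actual code.

//---------------
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;
import org.omg.PortableServer.POA;
import org.omg.PortableServer.POAHelper;

import dbs.corba.CallBack;
import dbs.corba.CallBackHelper;
import dbs.corba.FailureFreeFE;
import dbs.corba.FailureFreeFEHelper;

public class ClientSketch {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);

        // Activate the client-side POA so the FE can invoke the callback servant.
        POA rootPoa = POAHelper.narrow(orb.resolve_initial_references("RootPOA"));
        rootPoa.the_POAManager().activate();
        CallBack cb = CallBackHelper.narrow(
                rootPoa.servant_to_reference(new ClientCallBack()));

        // Route the request to the FE for the account's branch (first two digits).
        long accountNum = 12123456L;
        String branchId = String.valueOf(accountNum).substring(0, 2);
        NamingContextExt nc = NamingContextExtHelper.narrow(
                orb.resolve_initial_references("NameService"));
        FailureFreeFE fe = FailureFreeFEHelper.narrow(nc.resolve_str("FE" + branchId));

        fe.requestResponse(cb);        // register the callback with the FE
        fe.deposit(accountNum, 100f);  // void operation: returns without blocking

        orb.run();                     // wait for the asynchronous responseMessage()
    }
}
//---------------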
4.3. Front End
Fig. High-level view of frontend components.
The frontend manages the communication between the clients and the branch server replicas. It provides clients with a failure-free interface to the branch servers, allowing them to perform the basic banking operations (deposit, withdraw, balance, transfer).

Clients are only required to know the location of the server running the ORB. They then obtain a reference to the frontend hosting the account they wish to manipulate. Requests are sent via CORBA invocations to the frontend, which then passes them to the FIFO UDP subsystem to broadcast to each of its branch server replicas. Each replica performs the requested operation on the account and returns a result in the form of an account balance. All the results are compared, and the response that reflects the correct result is sent back to the client (a sketch of this combining step is given below).

The CORBA middleware provides us with transparent threading and concurrency control. The UDP server also runs in its own thread, so all of its operations are asynchronous. Spawning additional threads per request was therefore no longer needed.
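As a hedged illustration of the combining step, the following sketch picks the balance reported by the majority of replicas; the class ResultCombiner and its types are illustrative and not the project's actual code.

//---------------
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class ResultCombiner {

    // Returns the balance reported by the majority of the replicas.
    // A replica whose answer differs from this value is the candidate
    // reported to the RM as possibly faulty.
    public static float majorityBalance(List<Float> replicaResults) {
        Map<Float, Integer> votes = new HashMap<Float, Integer>();
        for (float r : replicaResults) {
            Integer count = votes.get(r);
            votes.put(r, count == null ? 1 : count + 1);
        }
        float best = replicaResults.get(0);
        int bestCount = 0;
        for (Map.Entry<Float, Integer> e : votes.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }
}
//---------------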
4.4. Replica Manager
Why replication is needed
Process failure should not imply the failure of the whole system.

A distributed system needs to implement techniques guaranteeing the availability and the correctness of the result provided to the client. We use active replication, as per the specification, so that the system provides a collection of results representing a consensus that the client can rely on without being aware of the inner replication process.

A replica is a server process that executes a client request while its peers process the same request, each in its own memory space. This way we guarantee that one replica does not alter the state of another.
But replication comes at a cost. Correct, up-to-date results must be maintained by proper message ordering and by handling Byzantine failures (malicious or non-malicious). In the best case, all replicas hold the same correct value; in the worst case, the value to be returned is reached by a majority vote.
We may expect the replicas to fail for arbitrary reasons, hardware failure being among the most dramatic. If the replicas on the same host fail, the whole system fails unless another group can take their place. In this project we do not make the assumption that
replica group