
Massive Online Open Course (MOOC) Implementation


CMPE- 275: Enterprise Application Development

(Section 01)

Professor: John Gash

Project: Massive Online Open Course

Group 5

Archit Agarwal (008065434) Akshay Bapat (008020571) Akshay Wattal (008941816) Punit Sharma (009268532)

Shashank Garg (009310418)

04/16/2014


Project 1 – Massive Online Open Course


Table of Contents

Introduction
Server Architecture
    Topology
        Line Topology for Node Failure Detection
        Fully Connected Graph Topology for Message Propagation and Request Forwarding
    Leader Election
    Network Flexibility and Fault Tolerance
DNS Design & Implementation
Client Request Processing Design and Implementation
Voting inter-MOOC Server
Database Architecture
Test-Cases
    Chat Feature
    File Download
    List All Courses
    Get Course Description
    User Management
    Course Management
    C++ Client
Appendix
    Netty
    Google Protocol Buffers
    MongoDB
    Apache Ant
    Programming Languages
GitHub Repository Details


Introduction:

Project MOOC is an online learning portal for users across geographic locations. The project focuses on basic functionalities such as user signup, course listing, course CRUD, file download, and a chat client/server implementation. Python and C++ are used for implementing the clients, and Java for the server. MongoDB is used as the database, Google Protobuf as the messaging format and protocol, and Netty as the asynchronous NIO framework.

MOOC - Massive Online Open Course

A massive online open course is a portal for students to access free online courses. It is a distance education system that focuses on learning, reuse, and remixing of resources. It provides courses to users from different providers and supports lifelong learning. Any user can revisit a tutorial as many times as they want to hone their skills. The various services offered by a MOOC include research, teaching, certification, etc. The MOOC platform provides asynchronous access to courses and video learning tutorials.


Server Architecture

Topology

Our approach to selecting the network topology is a hybrid one: we use different strategies to achieve different functions of the cluster.

Line Topology for Node Failure Detection:

We have chosen the line topology for our project. In this topology, each node is assigned a unique integer id, and the nodes are connected in serial order, starting from Node 0 to Node N. The linear topology is used mainly for failure detection of neighboring nodes. The nodes at the edges have a single neighbor to monitor, while every intermediate node is connected to two neighboring nodes, one to the left and one to the right.
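To make the neighbor relationship concrete, here is a minimal sketch (the class and method names are ours, not the project's) of how a node's monitored neighbors follow directly from its id in a line of N nodes:

```java
import java.util.ArrayList;
import java.util.List;

public class LineTopology {
    /** Returns the ids of the neighbors that the given node monitors. */
    static List<Integer> neighborsOf(int nodeId, int clusterSize) {
        List<Integer> neighbors = new ArrayList<>();
        if (nodeId > 0) neighbors.add(nodeId - 1);               // left neighbor
        if (nodeId < clusterSize - 1) neighbors.add(nodeId + 1); // right neighbor
        return neighbors;
    }

    public static void main(String[] args) {
        System.out.println(neighborsOf(0, 4)); // edge node: [1]
        System.out.println(neighborsOf(2, 4)); // interior node: [1, 3]
    }
}
```

This is also why a new node can attach itself without reconfiguring the whole line: only the old last node gains a neighbor.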


Fully Connected Graph Topology for Message Propagation and Request Forwarding:

As far as node connectivity is concerned, we have a fully connected graph, wherein every node is aware of all other nodes in the network. Because of this, any node can directly address any other node, resulting in one-on-one communication without any mediator between them. We use this principle for message propagation and request forwarding, resulting in a maximum of 2 hops for any single message in the entire network.

Leader Election

We have derived inspiration from the Bully Algorithm, creating a variation of it for leader election amongst the nodes. We implemented the algorithm in such a way that the node with the lowest id in the network always becomes the leader. We call it the "Assertive Bully Algorithm". The algorithm is triggered whenever any node joins or leaves the network, irrespective of whether the node that went down was the leader.

1. When a node joins the network for the first time, it checks for all alive nodes including itself, and collects their ids. It sorts them by id and determines the lowest id among them, to "assert" that the lowest id shall become the leader. This is what makes it different from the Bully Algorithm, where every node nominates only itself as the leader depending upon its id. It broadcasts this message to all other nodes, and they update their leader information accordingly. The node whose id was chosen as the leader sets its leader flag to true and also updates the DNS server with its socket address and port.

Some Code Reference: poke.server.Server.java

2. When a node goes down, the neighboring node that detected the failure triggers the algorithm by checking all alive nodes whose id is lower than its own. Once again, it finds the lowest id of the lot, declares that id the leader, and broadcasts the winner to everyone else in the network.

Some Code Reference: poke.server.management.managers.ElectionManager.java
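The decision step of the algorithm is small enough to sketch in isolation. Assuming the set of alive node ids has already been gathered (the real ElectionManager checks open channels rather than exchanging messages), the asserted leader is simply the minimum id. The class below is illustrative, not the project's code:

```java
import java.util.Collections;
import java.util.Set;

public class AssertiveBully {
    /** The asserted leader is simply the lowest alive id. */
    static int electLeader(Set<Integer> aliveIds) {
        return Collections.min(aliveIds);
    }

    public static void main(String[] args) {
        // Node 0 went down; nodes 1, 2 and 3 remain -> node 1 is asserted leader.
        System.out.println(electLeader(Set.of(1, 2, 3))); // prints 1
    }
}
```

Because every node computes the same minimum over the same alive set, the election is deterministic, which is the property the text emphasizes.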


The advantage of this method is that the node with the lowest id is always deterministically elected as the leader. Also, while checking for alive nodes in the cluster, we do not exchange messages and only check the open channels, which makes the processing very fast. This method can also handle fragmented networks, because the lowest alive id will always be available during the election and will thus be chosen as the leader.

The potential disadvantage of this method is that the leader election is triggered every time there is a change in the network, i.e. whenever a node joins or leaves. Thus, a situation where nodes are rapidly being disconnected from and reconnected to the network will lead to several elections being triggered. However, once the network stabilizes, the lowest id at the end will always be the leader.

Considering the overall advantages of the algorithm, we felt this was the best fit for leader election, as we can deterministically declare a leader while sending the fewest possible messages and rounds.

Network Flexibility and Fault Tolerance

Coming back to our hybrid network topology, we wish to explain it in terms of the overall network flexibility and fault tolerance.

As mentioned earlier, since we are using the line topology to monitor the heartbeats of neighboring nodes for failure detection, node failures can be quickly detected by the neighbors, and the leader election is triggered instantaneously to ensure that the cluster always has an alive leader. This is needed because we have designed the system so that a request enters our cluster only through the current leader, by regularly updating the DNS server with the address of the current leader and using a DNS lookup for client requests. Also, it should be noted that messages are not propagated along the line; rather, the fully connected graph makes it possible to forward a request to any node in the network with a single hop. This greatly reduces the propagation time and the probability of message loss due to intermittent network failure.


Responses are also sent by the processing node back to the client to reply to the user. This ensures that the cluster space is always insulated by the leader of the cluster, which thus serves as a secure abstraction for our internal server network. If a new node is added to the network, it is added at the end of the line and thus does not affect the existing structure, so no neighbors have to be exchanged. Every node can determine its neighbors by virtue of its node id and the diameter of the graph; using these two parameters, a node attaches itself at the right position in the network. However, if the node is being added to the network for the very first time, it has to be added to the configuration file of all the servers before it becomes discoverable. This could be achieved by partially bringing down sections of the network, starting from the end, and updating their configuration with a script that appends the new details to their respective files. In this way we keep the network flexible and open to changes without affecting the availability and performance of the system.

DNS Design & Implementation

A critical component for server discovery was the implementation of the DNS Server, which serves as a dynamic resource that returns the address of the current leader of the cluster to an external client or server. This allows any external entity to connect directly to the current leader of the cluster without having to maintain any server-side details, except the id of the cluster it wishes to connect to. For the sake of simplicity, we created a DNS implementation in the form of a distributed hash map, which stores a key-value pair of { Cluster_id : Address:Port_of_Leader }. The following figure shows an overview of the implementation of the DNS Server:


To share the HashMap across the nodes of a cluster, we used the principle of a replicating HashMap. The general flow is as follows:

1. The DNS Server is started on a certain host:port. It initializes a global empty HashMap.

2. After the leader election of a particular cluster 'i' terminates, the elected leader of that cluster pulls the latest global copy of the HashMap into its local copy.

3. After syncing its local copy with the global HashMap, the leader node updates its own address in the local map under its own cluster id.

4. Any changes made over the local HashMap result in the invoking of an event such as ADD, EDIT, DELETE, etc.

5. This event is then forwarded to the DNS Server along with the data that was modified.

6. The DNS Server receives the incoming event, and updates the data as per the OPERATION and the values provided in the request.
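The steps above can be sketched as follows. The event names (ADD, EDIT, DELETE) come from the text; the class, record, and method names are hypothetical stand-ins for the actual ReplicatingMap implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LocalDnsMap {
    enum Op { ADD, EDIT, DELETE }
    record Event(Op op, String clusterId, String leaderAddr) {}

    private final Map<String, String> local = new HashMap<>();
    final List<Event> outbox = new ArrayList<>(); // events to forward to the DNS server

    /** Steps 2-3: sync from the global copy, then register our own address. */
    void syncAndRegister(Map<String, String> globalCopy, String clusterId, String leaderAddr) {
        local.clear();
        local.putAll(globalCopy);                         // pull the latest global state first
        Op op = local.containsKey(clusterId) ? Op.EDIT : Op.ADD;
        local.put(clusterId, leaderAddr);
        outbox.add(new Event(op, clusterId, leaderAddr)); // steps 4-5: raise and forward event
    }
}
```

Pulling the global state before applying a change is exactly the consistency precaution discussed below.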


This is how the data is replicated across the global and local HashMaps over the network. Any external client, server, or entity can then simply look up the leader's IP on the global DNS server and initiate a request with the cluster using its cluster id. The global map has been configured to handle concurrent access by multiple nodes. The only care that must be taken before updating the HashMap is to first pull in all the latest data from the global map before applying any changes, so as to preserve the consistency of the HashMap. Another point to consider is that with a single DNS implementation, the DNS server is a single point of failure. This could be mitigated by having a set of 2-3 servers acting as the DNS. In that case, additional care must be taken to ensure that all the DNS servers are updated atomically, which increases the complexity of the overall system. One technique we could consider is using the timestamp of an event to determine its order of execution on the DNS server.

Client Request Processing Design and Implementation

The above architecture diagram shows at a high level how a client request is handled. This is an intra-MOOC scenario. The following is a step-by-step description.

Assumption: for request forwarding, one Leader and at least one Slave node must be alive.


1. The client is NOT aware of the socket address of the Leader Node or any of the Slave Nodes. The client first contacts the DNS Server (more on this in the DNS Design & Implementation section), which holds the socket address of the Leader Node. In response, the DNS Server sends back the binding address of the Leader Node.

Some Code Reference: poke.demo Jab.java & poke.server.dns *.java

#1 Jab.java

    ReplicatingMap map = new ReplicatingMap("192.168.0.123", 1111);
    map.values();
    Thread.sleep(100);
    test = DataCache.cache.values();
    System.out.println("In Jab: " + test);
    socketConn = test.toString().split(":");
    host = socketConn[0].substring(1);
    port = Integer.parseInt(socketConn[1].substring(0, socketConn[1].length() - 1));
    ClientCommand cc = new ClientCommand(host, port);

#2 DNSServer.java

    ServerBootstrap bootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool()));
    // Set up the pipeline factory.
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() throws Exception {
            return Channels.pipeline(new ObjectEncoder(), new ObjectDecoder(),
                    new DNSHandler());
        }
    });
    bootstrap.bind(new InetSocketAddress("192.168.0.123", 1111));

    System.out.println("DNS Server Started...");

2. One of the functionalities our MOOC provides is User Registration. Taking that as an example, the Client sends a User Registration request as a NameSpace Operation to the Leader Node.

Some Code Reference: poke.client ClientCommand.java (or the Python or C++ client)

#1 ClientCommand.java

    User.Builder f = User.newBuilder();
    f.setUserId("awattal");
    f.setPassword("123");
    f.setUserName("Akshay Wattal");

    NameSpaceOperation.Builder b = NameSpaceOperation.newBuilder();
    b.setAction(SpaceAction.ADDSPACE);       // For Addition
    //b.setAction(SpaceAction.UPDATESPACE);  // For Update
    //b.setAction(SpaceAction.LISTSPACES);   // For Authentication
    //b.setAction(SpaceAction.REMOVESPACE);  // For Removing
    b.setUId(f.build());

    Request.Builder r = Request.newBuilder();
    eye.Comm.Payload.Builder p = Payload.newBuilder();
    p.setSpaceOp(b.build());
    eye.Comm.Header.Builder header = Header.newBuilder();
    header.setOriginator("client-1");
    header.setRoutingId(eye.Comm.Header.Routing.NAMESPACES);
    r.setHeader(header.build());
    r.setBody(p.build());

3. Once the leader gets the request, it forwards the message to any one of the "alive" slave nodes. This request forwarding is a single-hop process: the leader first finds the list of active slave nodes in its network and then forwards the request to one of them.

Request Forwarding Strategy: to select a slave node to process the request, the leader first creates a hashmap of all the alive nodes, as stated earlier. Since in this project we are not required to maintain any client session in terms of which "particular" node should "always" process the request for the "same client", we randomly choose one of the slave nodes from the hashmap to process the client's request. It is a simple yet effective technique.

Even though our MOOC servers are in a line topology, as shown in the diagram above (double link lines), this request forwarding approach saves multiple hops and achieves request processing in a single hop.

Some Code Reference: poke.server.queue PerChannelQueue.java & poke.server.resources ResourceFactory.java

#1 PerChannelQueue.java

    // Store PerChannelQueue per Client Request in HashMap
    if (ElectionManager.getInstance().isLeader()
            && !req.getHeader().hasReplyMsg()
            && !req.getBody().getJobOp().getData().getNameSpace().equals("competition")) {
        ClientQueueMap.clientMap.put(req.getHeader().getOriginator(), sq);
        i++;
    }

#2 ResourceFactory.java

    if (ElectionManager.getInstance().isLeader()
            && !req.getBody().getJobOp().getData().getNameSpace().equals("competition")) {
        group = new NioEventLoopGroup();
        Bootstrap b = new Bootstrap();
        boolean compressComm = false;
        b.group(group).channel(NioSocketChannel.class)
                .handler(new ServerInitializer(compressComm));
        b.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000);
        b.option(ChannelOption.TCP_NODELAY, true);
        b.option(ChannelOption.SO_KEEPALIVE, true);

        // Gathering information of alive nodes
        for (NodeDesc nn : cfg.getRoutingList()) {
            try {
                if (!nn.getNodeId().equals(ElectionManager.getInstance().getLeaderId())) {
                    InetSocketAddress isa = new InetSocketAddress(nn.getHost(), nn.getMgmtPort());
                    ManagementQueue.nodeMap.put(nn.getNodeId(), isa);
                    ChannelFuture cf = ManagementQueue.connect(isa);
                    cf.awaitUninterruptibly(50001);
                    if (cf.isDone() && cf.isSuccess())
                        aliveNodes.put(nn.getNodeId(), isa);
                    cf.channel().closeFuture();
                }
            } catch (Exception e) {
                logger.info("Connection refused!");
            }
        }

        // Generating a random node id from the alive nodes
        Random rand = new Random();
        int randomNum = rand.nextInt((aliveNodes.size()
                - Integer.parseInt(ElectionManager.getInstance().getLeaderId()) + 1))
                + Integer.parseInt(ElectionManager.getInstance().getLeaderId());
        // If the random node id is the same as the leader's, add one
        if (randomNum == Integer.parseInt(ElectionManager.getInstance().getLeaderId()))
            randomNum = randomNum + 1;

        localSocketAddress = new InetSocketAddress(
                cfg.getRoutingList().get(randomNum).getHost(),
                cfg.getRoutingList().get(randomNum).getPort());

        // Make the connection attempt and forward request to selected slave node
        fchannel = b.connect(localSocketAddress).syncUninterruptibly();
        Channel ch = fchannel.channel();
        ch.writeAndFlush(req);
        logger.info("I am Leader Node, request forwarded to Node: " + randomNum);
        return null;
    }


4. Once the Slave node gets the request from the Leader node, it opens a connection to the primary database (more on database architecture and replication in the Database section), processes the request, prepares a response, and sends this response back to the Leader.

Some Code Reference: poke.resources NameSpaceResource.java

#1 NameSpaceResource.java

    Request reply = buildMessage(request, PokeStatus.NOFOUND,
            "Request not fulfilled", request.getBody().getSpaceOp().getAction());

    MongoDBDAO mclient = new MongoDBDAO();
    try {
        mclient.getDBConnection();
        mclient.getDB(mclient.getDbName());
    } catch (Exception e) {
        e.printStackTrace();
    }

    // If the request is for user CRUD operations
    if (request.getBody().getSpaceOp().hasUId()) {
        mclient.getCollection("usercollection");
        User user = new User();
        user.setUserId(request.getBody().getSpaceOp().getUId().getUserId());
        user.setName(request.getBody().getSpaceOp().getUId().getUserName());
        user.setPassword(request.getBody().getSpaceOp().getUId().getPassword());
        user.setCity(request.getBody().getSpaceOp().getUId().getCity());
        user.setZipCode(request.getBody().getSpaceOp().getUId().getZipcode());
        switch (request.getBody().getSpaceOp().getAction()) {
        case ADDSPACE:
            BasicDBObject doc = new BasicDBObject("userid", user.getUserId())
                    .append("username", user.getName())
                    .append("password", user.getPassword())
                    .append("city", user.getCity())
                    .append("zipcode", user.getZipCode());
            mclient.insertData(doc);
            reply = buildMessage(request, PokeStatus.SUCCESS,
                    "User added to database", SpaceAction.ADDSPACE);
            break;
        // … (other cases)
        }
    }


5. The Leader node, upon receiving this response message from the Slave node, forwards the same response back to the client.

Some Code Reference: poke.server.queue PerChannelQueue.java

#1 PerChannelQueue.java

    // Send back response to Client
    if (req.getHeader().hasReplyMsg()) {
        sq = ClientQueueMap.clientMap.get(req.getHeader().getOriginator());
        ClientQueueMap.clientMap.remove(req.getHeader().getOriginator());
        sq.enqueueResponse(req, null);
    }

Voting inter-MOOC Server

Voting, as perceived in this project, is for inter-MOOC communication. The scenario we have implemented and simulated for inter-MOOC communication is "Competition Location". For example, consider each MOOC as a university, such as San Jose State, Stanford, Berkeley, and UC Davis. Now, a Robotics competition needs to be hosted at one of these, so voting takes place amongst them to decide where the competition should be hosted. Once the voting finishes, we have a clear winner for the competition location.

- Assumption & Standards:
  Scenario: Competition Location
  Message Resource: JobResource
  DNS Server: it should hold the Cluster ID and socket address of the Leader of each MOOC/Cluster.

- Working & Our Implementation:

Scenario 1: Our MOOC Leader receives the request directly from the client.

a) First, the client sends a request of type "competition" to one of the Leaders (in this scenario, ours) in the inter-MOOC cluster. Once the leader receives the message of type "competition", it performs a lookup on the DNS, gathers the list of all the Leader nodes in the inter-MOOC cluster, and sends the request to all the Leader nodes except itself.

b) Next, the leader waits for a message of type JobBid as a response from all the Leaders. In our implementation, a positive response is value=1 and a negative response is value=0. A positive response denotes that a particular MOOC (analogy: San Jose State) is interested in hosting the competition. The algorithm we have implemented works in such a way that the first MOOC leader to send a positive response is declared the winner and hosts the competition; we have named this algorithm "Fastest Response First". The premise for this algorithm is that each MOOC has a certain load and network latency, so the MOOC that is fastest in sending its response is considered to have the least load and least latency.
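"Fastest Response First" can be sketched in a few lines: the first positive bid to arrive wins, and an atomic compare-and-set guarantees a single winner even if bids race in concurrently. The names below are illustrative, not the project's JobManager API:

```java
import java.util.concurrent.atomic.AtomicReference;

public class FastestResponseFirst {
    private final AtomicReference<String> winner = new AtomicReference<>();

    /** Called as each JobBid arrives; value == 1 means a positive bid. */
    void onBid(String clusterId, int value) {
        if (value == 1) {
            winner.compareAndSet(null, clusterId); // the first positive bid sticks
        }
    }

    String winner() { return winner.get(); }

    public static void main(String[] args) {
        FastestResponseFirst vote = new FastestResponseFirst();
        vote.onBid("stanford", 0); // negative bid, ignored
        vote.onBid("sjsu", 1);     // first positive bid -> winner
        vote.onBid("berkeley", 1); // too late
        System.out.println(vote.winner()); // prints "sjsu"
    }
}
```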

c) Thus, once the leader gets a message of type JobBid with a positive response, the socket address and cluster id of that winning MOOC Leader are sent back to the client.

Scenario 2: Our MOOC Leader receives a request for voting, i.e. a JobProposal request.

a) When the Leader node receives the JobProposal request for voting from the broadcasting Leader, it sends this proposal request to all the slave nodes inside its MOOC.

b) Next, the Slave nodes, on receiving this request, each create a JobBid with a value of either positive, i.e. 1, or negative, i.e. 0, and send a response back to the Leader.

c) The leader waits for the responses from all the slave nodes and creates one "global" bid. This global bid is created by counting the positive and negative responses from the slave nodes: if the positive count is higher, a positive global bid is created; otherwise, a negative global bid is created.

d) Once this global bid is created, it is sent to the originating Leader (the leader that sent the JobProposal request in the first place).

Some Code Reference: poke.resources JobResource.java & poke.server.management.managers JobManager

The architecture diagram below summarizes the voting and inter-MOOC communication discussed above.
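The leader's global-bid step is a simple majority count over the slaves' bids. A hedged sketch, with an illustrative method name (ties are treated as negative here, an assumption the text does not settle):

```java
import java.util.List;

public class GlobalBid {
    /** Returns 1 (positive global bid) if positive bids are the majority, else 0. */
    static int tally(List<Integer> slaveBids) {
        long positives = slaveBids.stream().filter(b -> b == 1).count();
        return positives > slaveBids.size() - positives ? 1 : 0;
    }
}
```

For example, bids of (1, 1, 0) from three slaves would yield a positive global bid, while (0, 0, 1) would yield a negative one.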


Database Architecture

Data handling is a critical task for any application, and a robust database architecture is needed to make the database fault tolerant and able to respond to any failure. MongoDB is used as the database in our application. The replication of data provided by MongoDB is a powerful feature for countering data loss when any of the database instances over which the data is replicated fails. The synchronization of data over multiple database instances is handled by MongoDB itself.

In our application we have created multiple database instances. One MongoDB instance, known as "mongod", runs on each of the three servers; thus, our application's data lives on multiple nodes of our cluster. The instances are bound together by the concept of a replica set. A replica set consists of information about the database instances belonging to it, such as the address and port of each instance, the name of the replica set, the "dbpath" of each instance, configuration, etc. By defining a replica set we define a collection of MongoDB instances and let every database instance in that collection know about the others. One of the instances is selected as the primary while the others act as secondaries.


When a primary instance goes down, an election is carried out among the remaining secondary instances, which elects a new primary. Once the new primary is selected, it performs as per the configuration defined for the previous primary. Hence there is no single point of failure, and transactions do not have to wait for all the database instances to be active at the same time.


Test-Cases

Chat Feature

The chat feature is provided by implementing a chat client/server using Netty. The feature enables all nodes in a cluster to pass messages to each other. A chat client broadcasts a message to all the active clients in the cluster, and the other clients can do the same. Every active client is notified whenever a new client joins or leaves the chat. A channel handler takes care of framing, encoding, and decoding the messages sent by clients. Any number of clients can use this feature at any instant.
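Stripped of the Netty plumbing, the broadcast logic reduces to: keep a list of active clients and write each incoming message to all of them, including a notification when someone joins. The sketch below models a client as a message sink; it illustrates the logic, not the project's channel handler:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ChatRoom {
    private final List<Consumer<String>> clients = new ArrayList<>();

    /** Registers a client and notifies everyone (including the newcomer). */
    void join(Consumer<String> client) {
        clients.add(client);
        broadcast("a new client joined the chat");
    }

    /** Delivers the message to every active client. */
    void broadcast(String message) {
        for (Consumer<String> c : clients) c.accept(message);
    }
}
```

In the real implementation, the sink role is played by the Netty channel of each connected client.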

File Download

The file download feature enables a user to download a simple text file. This is achieved by reading the contents of the file on the server and replicating them on the client machine. The functionality could be improved by allowing the user to download a file belonging to a particular course, or by allowing multiple files to be downloaded concurrently.
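The read-and-replicate step can be sketched with the standard java.nio.file API (the class, method name, and paths are illustrative, not the project's code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileDownload {
    /** "Downloads" serverFile by copying its contents to clientCopy. */
    static void download(Path serverFile, Path clientCopy) throws IOException {
        byte[] contents = Files.readAllBytes(serverFile); // server side: read the file
        Files.write(clientCopy, contents);                // client side: replicate it
    }
}
```

In the actual system the bytes travel over the Netty channel between server and client rather than through a local copy.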


List All Courses (Standard for Class)

Users can use this feature to list all the courses available on the MOOC cluster. A request to list all courses can be initiated through the Java, Python, or C++ client. The response contains a list of all courses with their names and descriptions.

Get Course Description (Standard for Class)

Users can use this feature to fetch an available course on the MOOC cluster based on the course name. A request to get a course description can be initiated through the Java, Python, or C++ client. The response contains the description of that particular course.


User Management

All operations like creating, deleting, modifying, and authenticating users are provided under user management. Details of the users are maintained in the database and are consulted at the time of authentication.

Source Code: poke.client ClientCommand.java

Course Management

Course management is done by allowing the addition, deletion, and modification of courses. Details of all the courses are maintained in the database and are fetched whenever a client requests them.

Source Code: poke.client ClientCommand.java


C++ Client

Appendix:

Netty

Netty is a client-server framework for the development of network applications such as protocol clients and servers. It is an asynchronous, event-driven network application framework that aims to simplify network application development by providing built-in support for the HTTP protocol and tools that simplify socket programming for TCP/UDP servers. It also provides support for WebSockets, message compression, SSL/TLS, and the SPDY protocol, and can be integrated with Google Protocol Buffers. Netty improves application performance by offering lower latency and reduced resource consumption.

Google Protocol Buffers

Protocol buffers are used to serialize structured data. The structure of the data is defined only once, and from that definition source code can be generated in multiple languages like Java, Python, and C++ to access the structured data in the application. Protobufs are a replacement for XML and are better than Abstract Syntax Notation One in terms of decoding performance and message size.

MongoDB

MongoDB is a document-oriented, cross-platform, NoSQL database. It offers a number of features: indexing of fields, replication of data over multiple database instances, load balancing through sharding, file storage through GridFS, aggregation, and fixed-size (capped) collections. JavaScript can be used to write queries and aggregation functions and send them directly to the database for execution.

Apache Ant

Apache Ant is a command-line tool and Java library that uses build files to drive processes described as targets. It is primarily used for automating the software build process. Ant provides a number of tasks that allow a user to compile, test, and run Java applications; C/C++ applications can also be built with it. It provides flexibility, as there are no predefined coding conventions or directory layouts for projects that adopt it as a build tool.

Programming Languages

The server-side functionality is developed in Java; the clients are developed in Python, C++, and Java.


GitHub Repository Details Project URL: https://github.com/akshaywattal/cmpe275-project1
