Project title: A Community networking Cloud in a box.

Experimental research evaluation (final)

Deliverable number: D4.4

Version 1.1

This project has received funding from the European Union's Seventh Programme for research, technological development and demonstration under grant agreement No 317879.


Project Acronym: Clommunity
Project Full Title: A Community networking Cloud in a box
Type of contract: Small or medium-scale focused research project (STREP)
Contract No: 317879
Project URL: http://clommunity-project.eu

Editor: Roger Baig (Guifi.net), Felix Freitag (UPC)
Deliverable nature: Report (R)
Dissemination level: Public (PU)
Contractual Delivery Date: June 30, 2015
Actual Delivery Date: June 30, 2015
Suggested Readers: Project partners
Number of pages: 27
Keywords: WP4, experimental research, evaluation
Authors: Vladimir Vlassov (KTH), Hooman Peiro Sajjad (KTH), Paris Carbone (KTH), Jim Dowling (SICS), Lars Kroll (KTH, SICS), Alexandru-Adrian Ormenisan (KTH, SICS), Amin Khan (UPC), Mennan Selimi (UPC), Felix Freitag (UPC)
Peer review: Roger Pueyo (Guifi.net), Roc Meseguer (UPC)

Abstract

This document presents the work carried out in T4.3 during the second reporting period of the CLOMMUNITY project to perform experimental evaluations of research on community clouds.


Contents

1 Introduction
  1.1 Contents of the deliverable
  1.2 Relationship to other CLOMMUNITY deliverables

2 Extensions of the Community Cloud Testbed
  2.1 New Features of the Experimental System
    2.1.1 Colombia, Universidad Pontificia Bolivariana (UPB)
    2.1.2 Pakistan
      2.1.2.1 ICEPT, Iqra University, Islamabad
      2.1.2.2 Abasyn University, Islamabad
    2.1.3 Portugal, INESC-ID, IST, Universidade de Lisboa, Lisbon
    2.1.4 Venezuela, Universidad de Los Andes (ULA)

3 Cloudy on SBCs
  3.1 Cloudy on Raspberry Pi, BeagleBone and Alix boards

4 Experiments on IaaS in Cloudy
  4.1 Virtual Research Devices in Cloudy
    4.1.1 Experimental Setup
    4.1.2 Experiments and Evaluation

5 Service Discovery in Cloudy
  5.1 Evaluation of Avahi-Tinc
    5.1.1 Selection of cloud nodes
    5.1.2 Experiment scenarios
    5.1.3 Experimental Results
      5.1.3.1 Scenario 1 results: Single service discovery
      5.1.3.2 Scenario 2 results: Timely service discovery of the same service type
      5.1.3.3 Scenario 3 results: Timely service discovery of all services in the network
  5.2 Serf

6 PeerStreamer Live Video-Streaming Service
  6.1 Experiments Design
    6.1.1 Evaluation Metrics
    6.1.2 Scenario UPC
    6.1.3 Scenario ALL
    6.1.4 Scenario RTMP
  6.2 Results
    6.2.1 Scenario UPC and ALL
    6.2.2 RTMP Scenario

7 Conclusions and Outlook

Bibliography

Licence







List of Figures

2.1 Guifi node in UPB, Medellín, Colombia
2.2 Guifi node in ICEPT, Iqra University, Islamabad
2.3 Guifi node in Abasyn University, Islamabad
2.4 Guifi node in INESC-ID, IST, Universidade de Lisboa, Lisbon
2.5 Guifi node in ULA, Mérida, Venezuela

3.1 Cloudy system running on Alix board
3.2 Cloudy system running on Beaglebone Black board
3.3 Cloudy system running on Raspberry Pi board
3.4 ThingSpeak platform running on Beaglebone Black board
3.5 ThingSpeak platform running on Raspberry Pi board

4.1 Average chunks received rate at peers when Tahoe-LAFS is also running (third evaluation scenario). Baseline as in the current deployment in Community-Lab.
4.2 Average chunk playout at peers when Tahoe-LAFS is also running (third evaluation scenario). Baseline as in the current deployment in Community-Lab.

5.1 Responsiveness of service discovery (same service type)
5.2 Responsiveness of service discovery (all service types)
5.3 Services discovered at different hops

6.1 Our PeerStreamer topology
6.2 Summary of our scenario parameters
6.3 Chunk receive rate
6.4 Comparison between averaged chunks received and chunks on time and not duplicated (ALL)
6.5 Average chunk rate received within each time frame (ALL)



List of Tables

2.1 BGP data sorted by the network prefix
3.1 SBCs used: model, microprocessor and RAM
4.1 Summary of our scenarios and settings
5.1 Nodes, their location and RTT from the client node
5.2 Summary of our scenario parameters



1 Introduction

1.1 Contents of the deliverable

This deliverable describes the experimental work carried out in T4.3 of WP4 during the second reporting period of the CLOMMUNITY project. Task T4.3 supported the research work done in WP3 with experimental evaluations in the community cloud testbed. Most of the content reported in this deliverable therefore consists of the experiments carried out for the research work of WP3. The research context of the experiments reported here is explained in the WP3 deliverable D3.3, while D4.4 reports the related experiments. In addition, we report some extensions we made to the community cloud testbed as part of T4.2.

1.2 Relationship to other CLOMMUNITY deliverables

Deliverable D4.4 is closely related to D3.3 and D3.4 of WP3, since the research tasks that required experimental evaluation were conducted in WP3. While D4.4 focuses on reporting the experimental evaluation, D3.3 focuses on explaining the corresponding research issues that originated the experiments.

D4.4 is also related to D2.3, since the experiments reported in D4.4 evaluated some of the components developed in WP2.

D4.5, also produced within WP4, focuses on the pilot deployments, while D4.4 focuses on the experiments required by the research in WP3.

Finally, D4.4 relates to D4.2 (M16), since D4.2 describes the community cloud testbed used for this experimentation. The testbed was already reported in D4.2 at the end of the first reporting period.



2 Extensions of the Community Cloud Testbed

2.1 New Features of the Experimental System

In the second reporting period, the Community Cloud Testbed described in D4.2 was extended by connecting it via tunnels to other partners and collaborators. The extensions are on-going and continuous; how far they will finally develop depends on the evolution of collaborations beyond the scope of the Clommunity project.

This testbed interconnects different clouds within the Guifi.net community network. To do that, each cloud is connected to the UPC using GRE and Mikrotik EoIP tunnels (EoIP is based on GRE; see http://wiki.mikrotik.com/wiki/Manual:Interface/EoIP). Once the peer-to-peer tunnels are established, a BGP peering is configured to exchange routes between the collaborators and the Guifi.net network. At this point, all the devices in each cloud are able to reach any node in the network.

Note: BGP in Guifi.net is configured internally and does not announce or exchange prefixes with the Internet.

Table 2.1 summarizes the Guifi.net network prefixes of each cloud we established and the corresponding BGP AS numbers.

Table 2.1: BGP data sorted by the network prefix.

Node       Network prefix    BGP AS
INESC-ID   10.91.128.0/27    62318
KTH 1      10.93.0.0/24      59858
KTH 2      10.93.1.0/25      59222
ICTP       10.95.0.0/27      59671
ULA        10.97.0.0/27      66000
UPB        10.98.0.0/27      73380
ICEPT      10.99.0.0/27      75320
Abasyn     10.99.128.0/27    75321
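An address plan like the one in Table 2.1 can be sanity-checked programmatically. The following is a minimal sketch of ours (not project tooling), using Python's standard ipaddress module to verify that every cloud prefix lies inside the private 10.0.0.0/8 range used here and that no two clouds announce overlapping prefixes:

```python
import ipaddress
from itertools import combinations

# Prefixes and AS numbers from Table 2.1.
clouds = {
    "INESC-ID": ("10.91.128.0/27", 62318),
    "KTH 1":    ("10.93.0.0/24",   59858),
    "KTH 2":    ("10.93.1.0/25",   59222),
    "ICTP":     ("10.95.0.0/27",   59671),
    "ULA":      ("10.97.0.0/27",   66000),
    "UPB":      ("10.98.0.0/27",   73380),
    "ICEPT":    ("10.99.0.0/27",   75320),
    "Abasyn":   ("10.99.128.0/27", 75321),
}

guifi_space = ipaddress.ip_network("10.0.0.0/8")

# Every cloud prefix must fall inside the private address space.
for name, (prefix, asn) in clouds.items():
    net = ipaddress.ip_network(prefix)
    assert net.subnet_of(guifi_space), f"{name} ({prefix}) outside 10.0.0.0/8"

# No two clouds may announce overlapping prefixes over BGP.
for (a, (pa, _)), (b, (pb, _)) in combinations(clouds.items(), 2):
    assert not ipaddress.ip_network(pa).overlaps(ipaddress.ip_network(pb)), \
        f"{a} and {b} overlap"

print("Address plan is consistent.")
```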

The following subsections detail the new clouds that were added: Universidad Pontificia Bolivariana in Colombia, ICEPT (Iqra University) and Abasyn University in Pakistan, INESC-ID (Universidade de Lisboa) in Portugal, and Universidad de Los Andes in Venezuela.

2.1.1 Colombia, Universidad Pontificia Bolivariana (UPB)

Universidad Pontificia Bolivariana in Medellín, Colombia (http://www.upb.edu.co/medellin), set up a Guifi.net node using a MikroTik RB750 router, which was connected to UPC BarcelonaTech through a RouterOS EoIP tunnel.





Figure 2.1 shows the node at UPB (https://guifi.net/UPB).

The node is available through public IP 200.3.145.40 and Guifi IP 10.98.0.1.

Figure 2.1: Guifi node in UPB, Medellín, Colombia

2.1.2 Pakistan

UPC BarcelonaTech collaborated with two universities in Islamabad, Pakistan, to set up Guifi nodes and Cloudy installations.

2.1.2.1 ICEPT, Iqra University, Islamabad

The Iqra Center for Emerging Products and Technologies (ICEPT) at Iqra University Islamabad Campus (http://iqra.edu.pk/isl/) set up a Guifi.net node using a MikroTik RB750 router, which was connected to UPC BarcelonaTech through a RouterOS EoIP tunnel. Figure 2.2 shows the node at ICEPT (https://guifi.net/ICEPT).

The node is currently available through public IP 124.109.46.14 and Guifi IP 10.99.0.1.

ICEPT is planning to attach a server running Proxmox VE and the Cloudy Debian-based distribution. The process is currently on-going.





Figure 2.2: Guifi node in ICEPT, Iqra University, Islamabad

2.1.2.2 Abasyn University, Islamabad

Abasyn University, Islamabad Campus (http://www.abasynisb.edu.pk/), set up a Guifi node using a Cisco 2600 router, which was connected to UPC BarcelonaTech through a GRE tunnel.

Figure 2.3 shows the node at Abasyn (https://guifi.net/Abasyn).

The node has been available through public IP 115.186.163.225 and Guifi IP 10.99.128.1. However, it had some connectivity problems due to BGP configuration issues.

Abasyn University is planning to fix these issues and attach a server running Proxmox VE and the Cloudy Debian-based distribution. The process is currently on-going.

Figure 2.3: Guifi node in Abasyn University, Islamabad





2.1.3 Portugal, INESC-ID, IST, Universidade de Lisboa, Lisbon

INESC-ID, IST, Universidade de Lisboa (http://www.gsd.inesc-id.pt/), set up a Guifi.net node using a MikroTik RB750 router, which was connected to UPC BarcelonaTech through a RouterOS EoIP tunnel. Figure 2.4 shows the node at INESC-ID (https://guifi.net/INESC-ID).

The node is available through public IP 146.193.41.83 and Guifi IP 10.91.128.1.

INESC-ID also attached a server with an installation of Proxmox VE, which is connected to the Guifi cloud cluster explained in D4.2 and is used to host multiple virtual machines running the Cloudy Debian-based distribution.

Figure 2.4: Guifi node in INESC-ID, IST, Universidade de Lisboa, Lisbon

2.1.4 Venezuela, Universidad de Los Andes (ULA)

Universidad de Los Andes in Mérida, Venezuela, set up a Guifi.net node using a MikroTik RB750 router, which was connected to UPC BarcelonaTech through a RouterOS EoIP tunnel. Figure 2.5 shows the node at ULA (https://guifi.net/ULA). The node is available through public IP 190.168.2.200 and Guifi IP 10.97.0.1.





Figure 2.5: Guifi node in ULA, Mérida, Venezuela



3 Cloudy on SBCs

3.1 Cloudy on Raspberry Pi, BeagleBone and Alix boards

We tested the installation of Cloudy on several SBCs of different architectures and form factors. We foresee SBCs as future community home gateways on which Cloudy will run. Here we report some of our experiences with Raspberry Pi, BeagleBone and Alix boards, which were tested in combination with the ThingSpeak (TS) platform. The SBCs used are described in Table 3.1. This low-cost and compact solution hosted both Cloudy and the TS server.

To install Cloudy on an SBC, three steps need to be followed:

1. Add the Debian Backports, Clommunity and Guifi repositories.
2. Install the Debian packages.
3. Install the non-Debian packages related to Cloudy.

After these steps, the Cloudy web interface is reachable on port 7000. Examples of Cloudy running on SBCs are shown in Fig. 3.1 for the Alix board, Fig. 3.2 for the BeagleBone and Fig. 3.3 for the Raspberry Pi.

The ThingSpeak platform is built with Ruby on Rails, an open-source web framework optimized for sustainable productivity; it combines dependencies and package managers with Ruby, Gems and Rails. After the installation, the ThingSpeak server runs on the SBCs on port 3000. Examples of ThingSpeak running on SBCs are shown in Fig. 3.4 for the BeagleBone Black and Fig. 3.5 for the Raspberry Pi.

Table 3.1: SBCs used: model, microprocessor and RAM.

          Beaglebone Black      Raspberry Pi A+         Alix 3D2
µProc.    1 GHz ARM Cortex A8   700 MHz ARM 1176JZF-S   500 MHz AMD LX800
RAM       512 MB                512 MB                  256 MB
IP        10.95.0.25            10.95.0.24              10.95.0.23
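Since Table 3.1 lists the Guifi IP of each board and the two services listen on fixed ports (Cloudy on 7000, ThingSpeak on 3000), a basic post-install check can be scripted. The sketch below is ours, not part of Cloudy; it simply attempts a TCP connection to each service:

```python
import socket

# Guifi IPs from Table 3.1; ports from the text (Cloudy: 7000, ThingSpeak: 3000).
BOARDS = {
    "Beaglebone Black": "10.95.0.25",
    "Raspberry Pi A+":  "10.95.0.24",
    "Alix 3D2":         "10.95.0.23",
}
SERVICES = {"Cloudy web UI": 7000, "ThingSpeak": 3000}

def is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for board, ip in BOARDS.items():
    for service, port in SERVICES.items():
        state = "up" if is_open(ip, port) else "DOWN"
        print(f"{board:18} {service:14} {ip}:{port} -> {state}")
```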




Figure 3.1: Cloudy system running on Alix board

Figure 3.2: Cloudy system running on Beaglebone Black board




Figure 3.3: Cloudy system running on Raspberry Pi board

Figure 3.4: ThingSpeak platform running on Beaglebone Black board

Figure 3.5: ThingSpeak platform running on Raspberry Pi board



4 Experiments on IaaS in Cloudy

4.1 Virtual Research Devices in Cloudy

The experiments reported in this section relate to our paper published in [1]. We proposed to create in the Cloudy devices a multi-purpose environment within a single device: one environment for the device owner, and other environments to be shared with the community network.

In order to validate the proposed approach, we created a small-scale physical Community-Lab infrastructure using several low-power devices as compute and storage devices and, separately, deployed the Community-Lab controller in a virtual machine inside a desktop PC. On this experimental system we deployed two applications, Tahoe-LAFS [2, 3] and PeerStreamer [4, 5], as community cloud services and measured their performance in the environment provided by our approach.

4.1.1 Experimental Setup

For our experimental setup, we used four physical research devices with different configurations, all built with Intel Atom N2600 CPUs. Two of them have 2 GB of RAM and 60 GB of storage, one has 2 GB of RAM and 120 GB of storage, and the other has 4 GB of RAM with a 500 GB storage disk and is linked to the community network (Guifi.net). A desktop computer set up with the Proxmox system (https://www.proxmox.com), built with an Intel Core i7-3770 CPU @ 3.40 GHz (8 cores), 1 TB of storage disk and 16 GB of RAM, was used to deploy a local Community-Lab controller in a virtual machine. Such a deployment was necessary in order not to affect the performance of the current Community-Lab infrastructure, which is in a production state.

For our experiments, we set up a replica of the Community-Lab testbed architecture, i.e. one local controller which controls the overall system and a set of computing nodes (Community-Lab nodes) that can be used to deploy slivers from the controller. The local controller can be deployed either in a container or in a separate virtual machine instance, depending on the users' requirements and the resources available.

4.1.2 Experiments and Evaluation

Our experiments were performed in order to evaluate the proposed deployment, summarized in Table 4.1, by utilizing different services with different purposes. In the first evaluation scenario we used Tahoe-LAFS distributed storage and a storage benchmark application to evaluate the impact of the proposed deployment on the physical devices. In the second evaluation scenario we used PeerStreamer, a peer-to-peer video streaming application, in order to evaluate the impact on services with time-sensitive data processing. The third evaluation scenario combines both services and allows an evaluation of the concurrency of services within the same physical devices.

For each scenario we collected results and plotted them against the baseline values. The baseline values were obtained by running the same set of experiments on the Community-Lab infrastructure





Table 4.1: Summary of our scenarios and settings

Scenario                    1                   2                    3
Local Community-Lab nodes   8                   8                    8
Number of slivers           8                   8                    16
Services deployed           Tahoe-LAFS          PeerStreamer         Tahoe-LAFS and PeerStreamer
Performance metrics         storage benchmark   chunks received      storage benchmark, chunks
                                                and played-out       received and played-out

(from our group's earlier work [6]) under the same set of configurations. This gives us the behaviour of the proposed system in terms of performance and user experience.

In the Tahoe-LAFS experiments, we measured the performance of read and write operations in order to understand the impact these types of operations have on the proposed deployment. In the PeerStreamer experiments, we measured the average chunk rates (data received at the peers' side) and the average chunks played out on peers (data sent to be watched at the peers' side) in order to measure the quality of the video stream. In both cases we measured the CPU utilization to demonstrate that these low-power devices can deliver the multi-purpose execution environment while maintaining the multi-service community cloud model.

We reproduce here some of the experiments done for the evaluation. The complete set of experiments and outcomes is reported in a paper publication [1]. Deliverable D3.3 reports the results and interpretation of these experiments within the context of the research work carried out in WP3.

Fig. 4.1 and Fig. 4.2 show the results of the current deployment on Community-Lab as a baseline against the proposed deployment results. We can see a noticeable, to a certain degree, variation in the chunks played out, which affects the perceived video quality. This is due to the CPU time being shared among more processes. However, while concurrent services may differ, our results show that the loss is minimal when using the proposed deployment compared with the current Community-Lab deployment.
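To make these two metrics concrete, the following minimal sketch (ours; the per-second counter samples are hypothetical, as the deliverable does not show PeerStreamer's log format) computes the average chunk receive rate and the played-out ratio for one peer:

```python
from statistics import mean

# Hypothetical per-second samples from one peer:
# (chunks_received, chunks_played_out) for each second of the run.
samples = [(100, 96), (104, 101), (98, 95), (103, 99)]

received = [r for r, _ in samples]
played = [p for _, p in samples]

avg_receive_rate = mean(received)            # chunks per second
playout_ratio = sum(played) / sum(received)  # fraction that reached playback

print(f"average chunk receive rate: {avg_receive_rate:.1f} chunks/s")
print(f"played-out ratio:           {playout_ratio:.2%}")
```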

Figure 4.1: Average chunks received rate at peers when Tahoe-LAFS is also running (third evaluation scenario). Baseline as in the current deployment in Community-Lab.




Figure 4.2: Average chunk playout at peers when Tahoe-LAFS is also running (third evaluation scenario). Baseline as in the current deployment in Community-Lab.



5 Service Discovery in Cloudy

5.1 Evaluation of Avahi-Tinc

5.1.1 Selection of cloud nodes

We report some experiments we performed to evaluate the service discovery mechanism. The main configuration for the experiments includes nodes of two geographically distant community networks: Guifi.net in Spain and AWMN (Athens Wireless Metropolitan Network) in Greece. The nodes in our experiments are real nodes with Guifi.net and AWMN IP addresses. The connectivity between community network nodes varies significantly, and we observe that network characteristics are not symmetric. The two community networks are connected through a tunnel over the Internet to enable network federation at the IP level.

In order to deploy service discovery experiments in a realistic community cloud setting, we used the Community-Lab testbed, a distributed infrastructure provided by the CONFINE project (https://confine-project.eu), where researchers can deploy experimental services, perform experiments or access open data traces. We used some of the available nodes of that testbed for deploying the applications.

Table 5.1: Nodes, their location and RTT from the client node

Nr. of nodes   Community network   Location           RTT
13             Guifi.net           Barcelona, Spain   1-7 ms
7              Guifi.net           Catalonia, Spain   10-20 ms
5              AWMN                Athens, Greece     90-100 ms

We used in total 25 nodes spread over the two community networks (see Table 5.1): 20 nodes from the Guifi.net community network, of which 13 are located in the city of Barcelona and 7 in the Catalonia region of Spain, and 5 nodes from AWMN, distributed in Athens, Greece. Most Community-Lab nodes are built with a Jetway device equipped with an Intel Atom N2600 CPU, 4 GB of RAM and a 120 GB SSD. Nodes (i.e. research devices in CONFINE terminology) in the Community-Lab testbed run a custom firmware (based on OpenWRT, https://openwrt.org) provided by CONFINE, which allows running several slivers simultaneously on one node, implemented as Linux containers (LXC). The Community-Lab nodes deploy the Cloudy distribution in slivers.

5.1.2 Experiment scenarios

To assess the applicability of decentralized discovery mechanisms in community networks, three scenarios were chosen that reflect common use cases of service discovery. The parameters used in the scenarios are summarized in Table 5.2.





Table 5.2: Summary of our scenario parameters

Scenario                      1        2        3
Number of service providers   1        25       25
Maximum discovery time        10 sec   30 sec   1 min

1. Scenario 1: Single service discovery.
   Our first goal is to measure the responsiveness of single service discovery. In this scenario, the service network consists of one client and one provider. The client is allowed to wait up to 10 seconds for a positive response. This is a common scenario for service discovery and can be considered the baseline: only one answer needs to be received and there is enough time to wait for it (we consider 10 seconds a reasonable time for our community network settings). In this case, the client discovers a Tahoe-LAFS distributed storage service.

2. Scenario 2: Timely service discovery of the same service type.
   Service networks are populated with multiple instances of the same service type (e.g. the Tahoe-LAFS service). The client needs to discover as many instances as possible and will then choose one that optimally fits its requirements. In this scenario we have one service client and 25 service providers. The discovery is successful if all 25 provided service instances have been discovered. We measure how responsiveness increases with the time given. The client waits 30 seconds to receive responses.

3. Scenario 3: Timely service discovery of all services in the network.
   In this scenario we have one service client and 25 service providers that offer services of different types (Tahoe-LAFS, print service, etc.). The client needs to discover all instances of the different service types; here the service providers offer more than one service. The discovery is successful if all services of the different types published by the 25 providers are discovered. The client waits 1 minute to receive the responses from the service providers. (A measurement sketch for these scenarios follows the list.)
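Avahi implements mDNS/DNS-SD, so responsiveness measurements like these can be scripted against the same protocol. Below is a minimal sketch of ours using the python-zeroconf library (an assumption: the deliverable does not name its measurement tooling, and the service type string is hypothetical). It records when each instance of a service type is first seen within the scenario's discovery window:

```python
import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

SERVICE_TYPE = "_tahoe._tcp.local."  # hypothetical DNS-SD type for Tahoe-LAFS
DEADLINE = 30.0                      # seconds; Scenario 2 uses 30 s

class TimingListener(ServiceListener):
    """Record the time at which each service instance is first seen."""
    def __init__(self) -> None:
        self.start = time.monotonic()
        self.discovered: dict[str, float] = {}

    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        self.discovered.setdefault(name, time.monotonic() - self.start)

    def update_service(self, zc, type_, name): pass
    def remove_service(self, zc, type_, name): pass

zc = Zeroconf()
listener = TimingListener()
browser = ServiceBrowser(zc, SERVICE_TYPE, listener)
time.sleep(DEADLINE)  # wait out the scenario's discovery window
zc.close()

for name, t in sorted(listener.discovered.items(), key=lambda kv: kv[1]):
    print(f"{t:6.1f} s  {name}")
print(f"{len(listener.discovered)} instances discovered within {DEADLINE:.0f} s")
```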

5.1.3 Experimental Results

In this section we present the results for the three scenarios described in the previous section.

5.1.3.1 Scenario 1 results: Single service discovery

Discovery of a single service instance within ten seconds proved to be reasonably responsive. Our client node is located in the Guifi community network (UPC Campus, Barcelona). When the service provider is also located in the Guifi.net community network, it took approximately 4 seconds for the client to discover the service. On the other hand, it took 9 seconds for the client to discover a service from a service provider in the AWMN community network.

5.1.3.2 Scenario 2 results: Timely service discovery of the same service type

Figure 5.1 shows that the discovery of services increases rapidly with time; the size of the circle is proportional to the number of services discovered. In this scenario the client node waits 30 seconds to receive the responses from the service publishers. In the first 5 seconds, the client receives 5 service responses from the publishers. The service published in this case is the Tahoe-LAFS distributed




storage service. These 5 services received in the first 5 seconds are close to the client (nodes on the UPC campus). After 20 seconds the last 5 services are discovered; these are the five service providers from AWMN in Athens, Greece. In total, the client discovers 25 services published by 25 service providers spread across the two community networks.

Figure 5.1: Responsiveness of service discovery (same service type)

5.1.3.3 Scenario 3 results: Timely service discovery of all services in the network

Figure 5.2 shows the number of different types of services discovered in a time window of 1 minute. The different types of services present in the network, such as Tahoe-LAFS and a print service, are discovered. However, the location of nodes in Greece and the high diversity in the quality of wireless links in Guifi.net [7] and AWMN lead to an increase in service discovery time: around 10 services are discovered between the fortieth and the fiftieth second. Figure 5.3 shows the number of hops from the client to other Cloudy instances, and the number of services discovered at each hop.

Figure 5.2: Responsiveness of service discovery (all service types)




Figure 5.3: Services discovered at different hops

5.2 Serf

The Distributed Announcement and Discovery of Services (DADS) operates in parallel at two levels, the global community network cloud level and the micro-cloud level, with a different technological approach at each level.

At the global level, Cloudy includes a tool to announce and discover services in the CN cloud based on Serf, a decentralized solution for cluster membership, failure detection and orchestration. Serf relies on an efficient and lightweight gossip protocol to communicate with other nodes, which periodically exchange messages between each other. In practice, this protocol is a very fast and extremely efficient way to share small pieces of information. An additional byproduct is the possibility of evaluating the quality of the point-to-point connection between different Cloudy instances; this way, Cloudy users can decide which service provider to choose based on network metrics such as RTT, number of hops or packet loss (see the sketch below).

The second level of DADS is the micro-cloud, where a number of Cloudy instances are federated and share a common, private Layer 2 over Layer 3 network built with Getinconf (https://github.com/Clommunity/getinconf/). At that level, Avahi (https://avahi.org) is used for announcement and discovery. Originally this solution was to be applied to the whole CN, but as more Cloudy instances started to appear, it became clear that it would not scale beyond a few tens of nodes, as we explain in [8]. However, in the context of an orchestrated micro-cloud, it can be used not only for publishing cloud services, but also for publishing other resources such as network folder shares.
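As a toy illustration of such metric-based provider selection, the sketch below (ours, not part of Cloudy) pings each candidate provider with the system ping and picks the lowest average RTT; the provider addresses are placeholders:

```python
import re
import subprocess

# Hypothetical candidate service providers (placeholder Guifi IPs).
PROVIDERS = ["10.95.0.23", "10.95.0.24", "10.95.0.25"]

def avg_rtt_ms(host: str, count: int = 3) -> float:
    """Average RTT to host in ms using the system ping; inf if unreachable."""
    try:
        out = subprocess.run(
            ["ping", "-c", str(count), "-W", "2", host],
            capture_output=True, text=True, check=True,
        ).stdout
    except subprocess.CalledProcessError:
        return float("inf")
    # Linux ping summary line: rtt min/avg/max/mdev = 1.2/3.4/5.6/0.7 ms
    m = re.search(r"= [\d.]+/([\d.]+)/", out)
    return float(m.group(1)) if m else float("inf")

best = min(PROVIDERS, key=avg_rtt_ms)
print(f"selected provider: {best}")
```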




6 PeerStreamer Live Video-Streaming Service

6.1 Experiments Design

The main configuration of our experiments included nodes of two geographically distant CNs: Guifi.net in Spain and AWMN (Athens Wireless Metropolitan Network) in Greece. For our scenarios, we constructed the topology shown in Fig. 6.1 in order to easily manage the connectivity of the nodes. The camera streams to a local PeerStreamer service acting as the peer source, which converts the stream for use in the P2P PeerStreamer network; each of the other nodes then only needs to connect as a peer to this source. Each chunk contains one fourth of a video frame, and the source can generate and send around 104 chunks per second (i.e. roughly 26 frames per second). For our metrics, we chose RTT for time and location, chunk rate for data on network behaviour, chunks received for data coming from the P2P network, played-out ratio for image quality, and neighbourhood size for the quality of the P2P network.

Figure 6.1: Our PeerStreamer topology

6.1.1 Evaluation Metrics

To assess the applicability of PeerStreamer in CNs, two scenarios were chosen that reflect common use cases of PeerStreamer in CNs. The evaluation metrics used are summarized in Fig. 6.2.

6.1.2 Scenario UPC

In this first scenario, we connected a node from UPC, called the best node (the node we had previously measured to have the best round-trip time), to the streaming publisher (the PeerStreamer source service). We connected




Figure 6.2: Summary of our scenario parameters

the rest of the nodes to this node in order to manage our own network topology. With this topology in mind, we gathered statistical data about the current state of each node and how these nodes managed their data (chunk) exchange. Three experiments were conducted with different time frames of 30 minutes, 1 hour and 12 hours of continuous running of the PeerStreamer service. This was done in order to gather information about the chunks received and/or played out by each node, and also to analyse their progression within different time frames. These chunks contain the data used to watch the actual stream of the camera images; gathering this kind of information is therefore essential to know how PeerStreamer handles the network and the image quality.

6.1.3 Scenario ALL

For the second scenario, we used three groups of nodes comprising UPC, Guifi.net and AWMN. We selected one node of each group, the one with the best RTT to the PeerStreamer source service. These nodes connect to the PeerStreamer service, while the rest of the nodes connect to their group's best node, respecting the group each node belongs to. This way, we control the network topology, which allows us to analyze the node behaviour of each group in more detail. Having nodes located in different geographical areas of the community networks enriches our experiments, since we can better understand the effect of distance on the degradation of the image quality. For a better comparison with the first scenario, three experiments are also executed within time frames of 30 minutes, 1 hour and 12 hours of continuous live streaming, respectively, with the addition of a 6-hour experiment.

6.1.4 Scenario RTMP

The main purpose of this experiment was to determine whether our platform supports the streaming of RTMP streams. The scenario was based on one node with a good connection to the whole Guifi network. The stream we selected was the continuous stream of Guifi-TV. The node was responsible for getting the RTMP stream and republishing it in Guifi through the PeerStreamer infrastructure, so that other nodes could watch the stream using Cloudy. Since the main goal was to check whether Cloudy could be used in this field as a user-friendly video streamer, the only thing we wanted to verify was whether there was a constant video and audio flow with a rather low packet loss. Besides the Guifi network, there were also some nodes from UPC watching the stream.




6.2 Results

6.2.1 Scenario UPC and ALL

Figure 6.3 depicts, for each node, a comparison between all the chunks the node received and the chunks that arrived on time to be displayed and were not duplicated; the difference gives the number of chunks that were discarded. The two numbers are mostly the same for all nodes in each experiment, meaning that very few extra chunks flooded the network within each time frame. We can also see that each node received a different amount of chunks within the same experiment. The reason is that each node is unique, and therefore performs the tasks of receiving and sending chunks differently from the others.

Figure 6.3: Chunk receive rate

Figure 6.4 depicts the average chunks received for each group of nodes within each time frame (30 minutes, 1 hour and 12 hours). We can see that each group receives a different amount of chunks on average, and that for each group the average amount of discarded chunks is very low. This means that almost all chunks received were viable to be played out, and thus the network was not flooded with extra chunks, even with distant groups. In Figure 6.5 it can be seen that on average each group has mostly the same chunk receive ratio. The time frame of each stream also seems to affect the number of received chunks for longer time frames. This can happen because of the constant change in network behaviour: some nodes can temporarily lose their connection to the network, while others may become unavailable for short periods of time. Therefore, we see that in the additional 6-hour experiment the chunk rate is higher on average, and in the 12-hour experiment it averages between the 1-hour and the 6-hour results.
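The discard metric behind Figures 6.3 and 6.4 reduces to simple per-node arithmetic. A minimal sketch of ours, with hypothetical counter values:

```python
# Hypothetical per-node counters: (chunks received, chunks on time and not duplicated).
nodes = {"upc-1": (180_000, 178_500), "guifi-3": (176_400, 175_900)}

for name, (received, on_time_unique) in nodes.items():
    discarded = received - on_time_unique  # late or duplicated chunks
    print(f"{name}: discarded {discarded} of {received} "
          f"({discarded / received:.2%})")
```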

6.2.2 RTMP Scenario

During the first 20 minutes, the stream ran quite well: the image flow was constant (with some packet loss, but almost zero) and the audio was constant. At a certain point, however, a node from Guifi that was somewhat far away from UPC joined the stream, and afterwards the stream got stuck. When this happened, the stream was reset, but since the previous instance had ended under unexpected circumstances, the ports it used were blocked and could not be reused, so the new stream was bound to different ports. To see what was happening, all the processes were invoked manually from the node: a VLC and a PeerStreamer instance in one peer (the source), a PeerStreamer instance with UDP output in another peer, and finally a machine watching the final UDP video using VLC from the command line. When the original source was an RTSP stream, the results were clean and good, without any unusual error. However, when the original stream was RTMP, a high number of errors could be observed in the final machine's VLC instance. The hypothesis arose that




Figure 6.4: Comparison between averaged chunks received and chunks on time and not duplicated (ALL)

Figure 6.5: Average chunk rate received within each time frame (ALL)

the errors are introduced when transcoding the RTMP stream. The only possible moment for this was when an instance of avconv (from the libav-tools package) was called to transform the RTMP stream into a UDP flow so that PeerStreamer could read it and stream it again.
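For reference, the transcoding step described above can be sketched as follows. This is our reconstruction, not the command actually used; the stream URL and output address are hypothetical placeholders:

```python
import subprocess

# Hypothetical RTMP source and local UDP sink for PeerStreamer to read from.
RTMP_URL = "rtmp://example.invalid/live/guifi-tv"
UDP_OUT = "udp://127.0.0.1:1234"

# Remux the RTMP stream into an MPEG-TS flow over UDP without re-encoding.
subprocess.run(
    ["avconv", "-i", RTMP_URL, "-c", "copy", "-f", "mpegts", UDP_OUT],
    check=True,
)
```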



7 Conclusions and Outlook

The experimental evaluation for the research work in WP3 was carried out. For the experiments, the community cloud testbed was used in combination with Community-Lab, provided by the FIRE CONFINE project. The community cloud testbed had already been made operational in the first reporting period; in the second reporting period, a few new nodes and resources were added by collaborators. Cloudy was also tested on several SBCs, since they represent an important scenario for future community home gateways. The experiments addressed different levels of the community cloud system, including infrastructure services, support services and application-layer services.

We can currently observe the transition from an operational testbed used for experimentation into an environment of stable community cloud services. The research work of WP3 has already addressed issues that may arise with device owners. The goal is to achieve, in the contributed resources of the production community cloud, dual usage of the devices: for the owner's services and for community services. The vision is that the deployed devices will enable both owner usage and community services, including experimentation, in community clouds beyond the project duration.



Bibliography

[1] Nuno Apolonia, Roshan Sedar, Felix Freitag, and Leandro Navarro, "Leveraging low-power devices for cloud services in community networks," in 3rd International Conference on Future Internet of Things and Cloud (FiCloud 2015), Rome, Italy, August 2015, p. xx.

[2] Zooko Wilcox-O'Hearn and Brian Warner, "Tahoe: The least-authority filesystem," in Proceedings of the 4th ACM Int. Workshop on Storage Security and Survivability (StorageSS '08), New York, NY, USA, 2008, pp. 21–26, ACM.

[3] Mennan Selimi and Felix Freitag, "Tahoe-LAFS Distributed Storage Service in Community Network Clouds," in 4th IEEE Int. Conference on Big Data and Cloud Computing, Sydney, Australia, Dec. 2014, IEEE.

[4] Luca Baldesi, Leonardo Maccari, and Renato Lo Cigno, "Improving P2P Streaming in Community-Lab Through Local Strategies," in 10th IEEE Int. Conference on Wireless and Mobile Computing, Networking and Communications, Larnaca, Cyprus, Oct. 2014, pp. 33–39.

[5] Robert Birke et al., "Architecture of a network-aware P2P-TV application: The NAPA-WINE approach," IEEE Communications Magazine, vol. 49, no. 6, pp. 154–163, June 2011.

[6] Mennan Selimi, Felix Freitag, Roger Centelles, and Agusti Moll, "Distributed Storage and Service Discovery for Heterogeneous Community Network Clouds," in 7th IEEE/ACM Int. Conference on Utility and Cloud Computing (UCC '14), London, UK, Dec. 2014, IEEE/ACM.

[7] Davide Vega, Llorenç Cerdà-Alabern, Leandro Navarro, and Roc Meseguer, "Topology patterns of a community network: Guifi.net," in 1st International Workshop on Community Networks and Bottom-up-Broadband (CNBuB 2012), within IEEE WiMob, Barcelona, Spain, Oct. 2012, pp. 612–619.

[8] M. Selimi, F. Freitag, R. Pueyo Centelles, A. Moll, and L. Veiga, "Trobador: Service discovery for distributed community network micro-clouds," in 29th IEEE International Conference on Advanced Information Networking and Applications (AINA 2015), March 2015, pp. 642–649.



Licence

The CLOMMUNITY project, June 2015, CLOMMUNITY-201506-D4.4:

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
