
Page 1: PlanetLab Applications and Federation

PlanetLab Applications and Federation

Kiyohide NAKAUCHI, NICT, [email protected]

23rd ITRC Symposium, 2008/05/16

Aki NAKAO, UTokyo / NICT, [email protected], [email protected]

Page 2: PlanetLab Applications and Federation

(1) PlanetLab Applications

CoMon: monitoring slice-level statistics


Over 400 nodes
http://summer.cs.princeton.edu/status/index_slice.html

Page 3: PlanetLab Applications and Federation

Typical Long-running Applications

CDN: CoDeeN [Princeton], Coral [NYU], CobWeb [Cornell]

Large-file transfer: CoBlitz, CoDeploy [Princeton], SplitStream [Rice]

Routing overlays: i3 [UCB], Pluto [Princeton]

DHT / P2P middleware: Bamboo [UCB], Meridian [Cornell], Overlay Weaver [UWaseda]

Brokerage service: Sirius [UGA]

Measurement, monitoring: ScriptRoute [Maryland, UWash], S-cube [HP Labs], CoMon, CoTop, PlanetFlow [Princeton]

DNS, anomaly detection, streaming, multicast, anycast, ...

In addition, there are many short-term research projects on PlanetLab


Page 4: PlanetLab Applications and Federation

CoDeeN: Academic Content Distribution Network

Improve web performance & reliability
100+ proxy servers on PlanetLab
Running 24/7 since June 2003
Roughly 3-4 million reqs/day aggregate
One of the highest-traffic projects on PlanetLab


Page 5: PlanetLab Applications and Federation

How Does CoDeeN Work?

Each CoDeeN proxy is a forward proxy, reverse proxy, & redirector

[Diagram: request/response flow through CoDeeN proxies, showing the cache-hit and cache-miss paths; on a miss the request is redirected to another CoDeeN proxy and the response relayed back to the client.]
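A minimal sketch of the redirect-on-miss idea above, not CoDeeN's actual implementation: on a local cache miss, the proxy deterministically maps the URL to a peer proxy and fetches through it, so each object tends to be cached at a small, stable set of nodes. The peer list, helper names, and hashing scheme are illustrative assumptions.

import hashlib
import urllib.request

# Hypothetical peer CoDeeN proxies (host, port); real deployments run on PlanetLab nodes.
PEERS = [("proxy1.example.org", 3128),
         ("proxy2.example.org", 3128),
         ("proxy3.example.org", 3128)]

def pick_peer(url):
    """Map a URL to one peer deterministically (highest hash wins), so
    requests for the same object converge on the same reverse proxy."""
    return max(PEERS, key=lambda p: hashlib.sha1((url + p[0]).encode()).hexdigest())

def fetch_via_peer(peer, url):
    """Fetch the URL through the chosen peer, treating it as an HTTP proxy."""
    host, port = peer
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": "http://%s:%d" % (host, port)}))
    with opener.open(url, timeout=10) as resp:
        return resp.read()

def handle_request(url, local_cache):
    """Serve from the local cache on a hit; on a miss, redirect the request
    to the peer chosen for this URL and cache the response locally."""
    if url in local_cache:                        # cache hit
        return local_cache[url]
    body = fetch_via_peer(pick_peer(url), url)    # cache miss
    local_cache[url] = body
    return body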


Page 6: PlanetLab Applications and Federation

CoBlitz: Scalable Large-file CDN

Faster than BitTorrent by 55-86% (up to ~500%)


Only the reverse proxies (CDN nodes) cache the chunks.

[Diagram: a client's agent resolves coblitz.codeen.org through DNS and reaches a CDN node (CDN = redirector + reverse proxy); the CDN nodes fetch chunks 1-5 of the file from the origin server with HTTP range queries, cache them, and serve them to the client.]
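The key mechanism on this slide is that a large file is fetched as many small chunks via HTTP range queries, and only the CDN reverse proxies cache those chunks. The sketch below shows the client-visible half of that idea with plain HTTP range requests; the chunk size and example URL are placeholders, not CoBlitz's real parameters.

import urllib.error
import urllib.request

CHUNK_SIZE = 1 << 20  # 1 MiB per request; illustrative, not CoBlitz's real chunk size

def fetch_in_chunks(url):
    """Download a file as a sequence of HTTP range queries, the way a
    CoBlitz-style CDN would serve it chunk by chunk from its reverse proxies."""
    chunks, offset = [], 0
    while True:
        req = urllib.request.Request(
            url, headers={"Range": "bytes=%d-%d" % (offset, offset + CHUNK_SIZE - 1)})
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                data = resp.read()
        except urllib.error.HTTPError as err:
            if err.code == 416:      # requested range starts past end of file
                break
            raise
        if not data:
            break
        chunks.append(data)
        offset += len(data)
        if len(data) < CHUNK_SIZE:   # short read: end of file reached
            break
    return b"".join(chunks)

# Hypothetical usage (URL is a placeholder in the style of the diagram above):
# blob = fetch_in_chunks("http://coblitz.codeen.org/example.org/big.iso")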

Page 7: PlanetLab Applications and Federation

How Does PlanetLab Behave? Node Availability


[Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]

Page 8: PlanetLab Applications and Federation

Live Slices


[Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]

50% of nodes have 5-10 live slices

Page 9: PlanetLab Applications and Federation

Bandwidth


[Graphs: per-node bandwidth in and bandwidth out]

[Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]

Median: 500-1000 Kbps

Page 10: PlanetLab Applications and Federation

(2) Extending PlanetLab

Federation: distributed operation/management

Private PlanetLab: private use, original configuration, e.g. CORE [UTokyo, NICT]

Hardware support (C/D separation): custom hardware such as Intel IXP, NetFPGA, 10GbE, e.g. Supercharging PlanetLab [UWash]

Edge diversity: integration of wireless technologies [OneLab], e.g. HSDPA, WiFi, Bluetooth, ZigBee, 3GPP LTE

GENI, VINI


Page 11: PlanetLab Applications and Federation

Federation

Split PlanetLab: several regional PlanetLabs, each with its own policy

Interconnection: share node resources among PlanetLabs

[Diagram: PlanetLab 1, PlanetLab 2, PlanetLab 3, ..., each with its own PLC, interconnected across the Internet; each node runs a VMM with a Node Manager and virtual machines VM1 ... VMn, and the PLCs trade node resources with one another.]

Page 12: PlanetLab Applications and Federation

PlanetLab-EU Starts Federation

Emerging European portion of the public PlanetLab: 33 nodes today (migrated from PlanetLab)

Supported by the OneLab project (UPMC, INRIA); control center in Paris

PlanetLab-JP will also follow with federation


Page 13: PlanetLab Applications and Federation

MyPLC for Your Own PlanetLab

PlanetLab in a box: a complete, portable PlanetLab Central (PLC) package

Easy to install and administer: all code is isolated in a chroot jail

Single configuration file


[Diagram: the complete PLC stack (Apache, OpenSSL, PostgreSQL, pl_db, plc_www, plc_api, bootmanager, bootcd_v3) running on Linux inside a chroot jail under /plc/plc.]

Page 14: PlanetLab Applications and Federation

Resource Management

Resource sharing policy: by contributing 2 nodes to any one PlanetLab, a site can create 10 slices that span the federated PlanetLab


RSpec: a general, extensible resource description

Portals present a higher-level front-end view of resources

Portals will use RSpec as part of the back-end

Page 15: PlanetLab Applications and Federation

RSpec Example

<component type="virtual access point" requestID="siteA-ap1"
           physicalID="geni.us.utah.wireless.node45">
  <processing requestID="cpu1">
    <power units="CyclesPerSecond">
      <value>1000000000</value>
    </power>
    <function>Full</function>
  </processing>
  <storage requestID="disk1">
    <capacity units="GB">
      <value>10</value>
    </capacity>
    <access>R/W</access>
  </storage>
  <wireless:communication requestID="nic1">
    <medium>FreqShared</medium>
    <mediumtype>broadcast</mediumtype>
    <wireless:protocol>802.11g</wireless:protocol>
    <wireless:frequency type="802.11channel">16</wireless:frequency>
  </wireless:communication>
</component>
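As a rough sketch of how a portal back-end might consume such a description, the snippet below parses a simplified copy of the component above using Python's standard ElementTree. The wireless: prefix is bound to a made-up namespace URI purely so the fragment is well-formed XML; the actual RSpec schema and namespaces may differ.

import xml.etree.ElementTree as ET

# Simplified copy of the slide's component; the namespace URI is a placeholder
# added so the wireless: prefix is well-formed, not part of any real schema.
RSPEC = """
<component xmlns:wireless="urn:example:wireless"
           type="virtual access point" requestID="siteA-ap1"
           physicalID="geni.us.utah.wireless.node45">
  <processing requestID="cpu1">
    <power units="CyclesPerSecond"><value>1000000000</value></power>
    <function>Full</function>
  </processing>
  <storage requestID="disk1">
    <capacity units="GB"><value>10</value></capacity>
    <access>R/W</access>
  </storage>
  <wireless:communication requestID="nic1">
    <wireless:protocol>802.11g</wireless:protocol>
    <wireless:frequency type="802.11channel">16</wireless:frequency>
  </wireless:communication>
</component>
"""

NS = {"wireless": "urn:example:wireless"}
root = ET.fromstring(RSPEC)

print(root.attrib["physicalID"])                       # geni.us.utah.wireless.node45
print(root.findtext("processing/power/value"))         # 1000000000
print(root.findtext("storage/capacity/value"),
      root.find("storage/capacity").attrib["units"])   # 10 GB
print(root.findtext("wireless:communication/wireless:protocol", namespaces=NS))  # 802.11g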


Page 16: PlanetLab Applications and Federation

Summary

PlanetLab applications: 800+ network services, each running in its own slice

Long-running infrastructure services
Measurement using a set of useful monitoring tools reveals the extensive use of PlanetLab

Federation: distributes operation and management
Future PlanetLab = current PL + PL-EU + PL-JP + ...


Page 17: PlanetLab Applications and Federation

Backup

Page 18: PlanetLab Applications and Federation

Monitoring Tools

CoTop: monitors which slices are consuming resources on each node, like "top"

CoMon: monitors statistics for PlanetLab at both the node level and the slice level


Page 19: PlanetLab Applications and Federation

OpenDHT/OpenHash

Publicly accessible distributed hash table (DHT) service

A simple put-get interface is accessible over both Sun RPC and XML-RPC
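Since the slide only states that the put-get interface is reachable over XML-RPC, the sketch below shows what a client call could look like using Python's standard xmlrpc.client. The gateway URL, method names, argument order, and return shapes are assumptions drawn from OpenDHT-era documentation, not verified against the (long-retired) service.

import hashlib
import xmlrpc.client

# Gateway URL and interface details below are assumptions; consult the
# OpenDHT documentation for the real XML-RPC signatures.
GATEWAY = "http://opendht.nyuld.net:5851/"
proxy = xmlrpc.client.ServerProxy(GATEWAY)

key = hashlib.sha1(b"my-key").digest()      # OpenDHT keys are 20-byte SHA-1 hashes
value = b"hello planetlab"
ttl_seconds = 3600

# put(key, value, ttl, application) -> status code (0 assumed to mean success)
status = proxy.put(xmlrpc.client.Binary(key),
                   xmlrpc.client.Binary(value),
                   ttl_seconds,
                   "demo-app")

# get(key, maxvals, placemark, application) -> (values, placemark) (assumed)
values, placemark = proxy.get(xmlrpc.client.Binary(key),
                              10,
                              xmlrpc.client.Binary(b""),
                              "demo-app")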
