
Enabling near-VoD via P2P Networks



Page 1: Enabling near-VoD via P2P Networks


Enabling near-VoD via P2P Networks

Siddhartha Annapureddy

Saikat Guha, Dinan Gunawardena

Christos Gkantsidis, Pablo Rodriguez

World Wide Web, 2007

Page 2: Enabling near-VoD via P2P Networks

Motivation

• Growing demand for distribution of videos

  – Distribution of home-grown videos

  – BBC wants to make its documentaries public

  – People like sharing movies (in spite of the RIAA)

• Video distribution on the Internet is very popular

  – About 35% of Internet traffic

• What do we want? (Near) Video-on-Demand (VoD)

  – The user waits a little, then starts watching the video

Page 3: Enabling near-VoD via P2P Networks

First Generation Approaches

• Multicast support in the network

  – Hasn’t taken off yet

• Research yielded good ideas

  – Pyramid schemes

• Dedicated infrastructure: Akamai will do

  – Costs big bucks

Page 4: Enabling near-VoD via P2P Networks

Recent Approaches

• BitTorrent model

• P2P mesh-based system

• Video divided into blocks

• Blocks fetched at random

• Preference for rare blocks

+ High throughput through cooperation

- Cannot watch until entire video fetched

Page 5: Enabling near-VoD via P2P Networks

Current Approaches

• Distribute from a server farm

  – YouTube, Google Video

- Scalability problems: bandwidth costs very high

- Poor quality to support high demand

- Censorship constraints: not all videos welcome at Google Video

• Costs to run a server farm are prohibitive

Page 6: Enabling near-VoD via P2P Networks

Live Streaming Approaches

- Provides a different functionality: a user arriving in the middle of a movie cannot watch from the beginning

• Live streaming: all users are interested in the same part of the video

• Narada, SplitStream, PPLive, CoolStreaming

• P2P systems caching windows of relevant chunks

Page 7: Enabling near-VoD via P2P Networks

Challenges for VoD

• Near VoD: small setup time to play videos

• High sustainable goodput

  – Largest slope osculating the block arrival curve

  – Highest video encoding rate the system can support

Goal: Do block scheduling to achieve low setup time and high goodput

Given this motivation for VoD, what are the challenges?

Page 8: Enabling near-VoD via P2P Networks

System Design – Principles

• Keep bandwidth load on the server to a minimum

  – Server connects and gives data only to a few users

  – Leverage participants to distribute to other users

• Keep maintenance overhead for users low

• Keep bandwidth load on users to a minimum

  – Each node knows only a small subset of the user population (called its neighbourhood)

  – Nodes exchange bitmaps of the blocks they hold

Page 9: Enabling near-VoD via P2P Networks

System Design – Outline

• Components based on BitTorrent

  – Central server (tracker + seed)

  – Users interested in the data

• Central server maintains a list of nodes

• Each node finds neighbours

  – Contacts the server for this

  – Another option: use gossip protocols

• Exchange data with neighbours

  – Connections are bi-directional

Page 10: Enabling near-VoD via P2P Networks

Overlay-based Solutions

• Tree-based approach

  – Use existing research for IP Multicast

  – Complexity in maintaining structure

• Mesh-based approach

  – Simple structure of random graphs

  – Robust to churn

Page 11: Enabling near-VoD via P2P Networks

Simulator – Outline

• All nodes have unit bandwidth capacity

  – 500 nodes in most of our experiments

  – Heterogeneous nodes not described in this talk

• Each node has 4-6 neighbours

• Simulator is round-based

  – Block exchanges done at each round

  – System moves as a whole to the next round

• File divided into 250 blocks

  – A segment is a set of consecutive blocks

  – Segment size = 10

• Maximum possible goodput: 1 block/round
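The round-based model above can be sketched as a toy simulation loop (Python; scaled down from the talk's 500 nodes and 250 blocks. Node 0 stands in for the seed, every node keeps the seed as a neighbour so the toy overlay stays connected, and only download capacity is capped, not uploads; all of these are simplifications of ours):

```python
import random

FILE_BLOCKS = 25   # scaled down from the talk's 250 blocks
NUM_NODES = 20     # scaled down from 500 nodes
NEIGHBOURS = 5     # the talk gives each node 4-6 neighbours

def simulate(rounds, seed=1):
    rng = random.Random(seed)
    # node 0 plays the central server/seed and starts with the full file
    have = [set(range(FILE_BLOCKS))] + [set() for _ in range(NUM_NODES - 1)]
    # simplification: every node keeps the seed as one of its neighbours
    others = lambda i: [j for j in range(1, NUM_NODES) if j != i]
    neigh = {i: [0] + rng.sample(others(i), NEIGHBOURS - 1)
             for i in range(1, NUM_NODES)}
    for _ in range(rounds):
        # unit capacity: each node downloads at most one block per round,
        # and the whole system advances round by round
        for i in range(1, NUM_NODES):
            src = rng.choice(neigh[i])
            wanted = have[src] - have[i]
            if wanted:
                have[i].add(rng.choice(sorted(wanted)))  # random block policy
    return have
```

Swapping out the `rng.choice(sorted(wanted))` line is where the scheduling policies discussed in the following slides plug in.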

Page 12: Enabling near-VoD via P2P Networks

Why the 95th Percentile?

• A point (x, y) means: y nodes have a goodput of at least x

• The red line at the top marks the 95th percentile

• The 95th percentile is a good indicator of goodput

  – Most nodes have at least that goodput

[Figure: Nodes (0-500) vs. Goodput (0-0.8 blocks/round), with the 95th percentile marked]
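This notion, the largest goodput that at least 95% of the nodes attain, can be computed directly from per-node measurements (a small Python sketch; the function name is ours):

```python
import math

def percentile_goodput(goodputs, frac=0.95):
    """Largest goodput value attained by at least `frac` of the nodes."""
    s = sorted(goodputs)
    n = len(s)
    # n - i nodes have goodput >= s[i], so we need i <= n - ceil(frac * n)
    return s[n - math.ceil(frac * n)]
```

For example, with 20 nodes whose goodputs are 1 through 20, the function returns 2: exactly 19 of the 20 nodes (95%) have goodput at least 2.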

Page 13: Enabling near-VoD via P2P Networks

Feasibility of VoD – What does a block/round mean?

• 0.6 blocks/round with 35 rounds of setup time

• Two independent parameters

  – 1 round ≈ 10 seconds

  – 1 block/round ≈ 512 kbps

• Consider 35 rounds a good setup time

  – 35 rounds ≈ 6 min

• Goodput = 0.6 blocks/round

  – Length of video = 250 / 0.6 rounds ≈ 70 min

  – Max encoding rate = 0.6 × 512 ≈ 307 kbps > 300 kbps
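The back-of-envelope numbers above follow directly from the two parameters (a worked version in Python, variable names ours):

```python
# parameters straight from the slide
ROUND_SECONDS = 10      # 1 round ~ 10 s
FULL_RATE_KBPS = 512    # 1 block/round ~ 512 kbps
BLOCKS = 250            # blocks in the file
GOODPUT = 0.6           # sustained goodput, blocks/round
SETUP_ROUNDS = 35       # acceptable setup time

setup_minutes = SETUP_ROUNDS * ROUND_SECONDS / 60        # ~5.8 min
video_minutes = BLOCKS / GOODPUT * ROUND_SECONDS / 60    # ~69 min
max_rate_kbps = GOODPUT * FULL_RATE_KBPS                 # 307.2 kbps
```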

Page 14: Enabling near-VoD via P2P Networks

Naïve Scheduling Policies

Random:          3 6 8 1 4 2 7 5

Sequential:      1 2 3 4 5 6 7 8

Segment-Random:  4 1 3 2 6 5 8 7

Segment-Random policy:

• Divide the file into segments

• Fetch blocks in random order inside the same segment

• Download segments in order
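The three naïve policies can be sketched as block pickers over the set of blocks a node is still missing (Python; a segment size of 4 matches the slide's 8-block example, the talk itself uses 10; names ours):

```python
import random

SEGMENT = 4   # blocks per segment (the slide's example; the talk uses 10)

def pick_random(missing):
    """Random policy: any missing block."""
    return random.choice(sorted(missing))

def pick_sequential(missing):
    """Sequential policy: the earliest missing block (playback order)."""
    return min(missing)

def pick_segment_random(missing):
    """Segment-random: a random block from the segment that the
    earliest missing block falls in."""
    seg = min(missing) // SEGMENT
    return random.choice([b for b in missing if b // SEGMENT == seg])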

Page 15: Enabling near-VoD via P2P Networks

15

Naïve Approaches – Random

� Each node fetches a block at random

� Throughput – High as nodes fetch disjoint blocks � More opportunities for

block exchanges

� Goodput – Low as nodes don’t get blocks in order� 0 blocks/roundSetup Time (in rounds)

Goodput

( blo

cks /

r ound)

0 10 20 30 40 50 60 70 80 90 1000

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Page 16: Enabling near-VoD via P2P Networks

16

Naïve Approaches – Sequential

� Each node fetches blocks in order of playback� Throughput – Low as fewer opportunities for exchange

� Increase this

Sequential

Random

Setup Time (in rounds)

Goodput

( blo

cks /

r ound)

< 0.25 blocks/round

0 10 20 30 40 50 60 70 80 90 1000

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Page 17: Enabling near-VoD via P2P Networks

17

Naïve Approaches –Segment-Random Policy

� Segment-random� Fetch random block from the segment the earliest block falls in

� Increases the number of blocks in progress

Segment-Random

Sequential

Random

Setup Time (in rounds)

Goodput

( blo

cks /

r ound)

< 0.50 blocks/round

0 10 20 30 40 50 60 70 80 90 1000

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Page 18: Enabling near-VoD via P2P Networks

18

Peaks and Valleys

� Throughput could be improved at two places – Reduce time:

� To obtain last blocks of current segment

� For initial blocks of segment to propagate

� Throughput is number of block exchanges system-wide

� Period between two valleys corresponds to a segment

50 60 70 80 90 100 110 120 130 140 1500

50

100

150

200

250

300

350

400

Thr o

ughput

(in b

l ock

s )

Time (in rounds)

Page 19: Enabling near-VoD via P2P Networks

19

Mountains and Valleys

� Local rarest� Fetch the rarest block in the current segment� System progress improves 7% (195.47 to 208.61)

Segment-RandomLocal-rarest

50 60 70 80 90 100 110 120 130 140 1500

50

100

150

200

250

300

350

400

Pr o

gre

ss (

in b

lock

s )

Time (in rounds)

Page 20: Enabling near-VoD via P2P Networks

20

Naïve Approaches –Rarest-client

� How can we improve this further? � Pre-fetching

� Improve availability of blocks from next segment

Segment-RandomLocal-rarest

Goodput

( blo

cks /

r ound)

Setup Time (in rounds)0 10 20 30 40 50 60 70 80 90 100

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Page 21: Enabling near-VoD via P2P Networks

21

Pre-fetching

3 6 8 1 4 2 7 5

1 2 3 4 5 6 7 8

4 1 3 6 5 8 72

3 2 5 4 7 6 81

Random

Sequential

Segment-Random

Pre-fetching

� Pre-fetching

� Fetch blocks from the next segment with a small probability

� Creates seeds for blocks ahead of time

� Trade-off: block is not immediately useful for playback

Page 22: Enabling near-VoD via P2P Networks

22

0 10 20 30 40 50 60 70 80 90 1000

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Pre-fetching

� Pre-fetching with probability 0.2

� Throughput:

� 9% improvement

� 195.47 to 212.21

� Get better with Network Coding!!!

Goodput

( blo

cks /

r ound)

Setup Time (in rounds)

80-20Local-rarest Segment-Random

Page 23: Enabling near-VoD via P2P Networks

23

NetCoding – How it Helps?

� A has blocks 1 and 2

� B gets 1 or 2 with equal prob. from A

� C gets 1 in parallel

� If B downloaded 1

� Link B-C becomes useless

� Network coding routinely sends 1⊕2

Page 24: Enabling near-VoD via P2P Networks

24

Network Coding – Mechanics

� Coding over blocks B1, B2, …, Bn

� Choose coefficients c1, c2, …, cn from finite field

� Generate encoded block: E1 = Σi=1..n ci.Bi

� Without n coded blocks, decoding cannot be done� Setup time at least the time to fetch a segment

Page 25: Enabling near-VoD via P2P Networks

25

Benefits of NetCoding

� Throughput:

� 39% improvement

� 271.20 with NC

� 212.21 with pre-fetching and no NC

� 195.47 initial

� Pre-fetching segments moderately beneficialG

oodput

( blo

cks /

r ound)

Setup Time (in rounds)

0 10 20 30 40 50 60 70 80 90 1000

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1 Netcoding 80-20Netcoding-segment-random80-20

Page 26: Enabling near-VoD via P2P Networks

26

Implementation

� With the above insights, built a C# prototype

� 25K lines of code

� Rest of talk

� Identify problems when nodes arrive at different times, nodes with heterogeneous capacities

� Address these issues with improvements to the basic Netcoding scheme

� Note – All evaluation done with the prototype

Page 27: Enabling near-VoD via P2P Networks

27

Segment Scheduling

� NetCoding good for fetching blocks in a segment, but cannot be used across segments� Because decoding requires all encoded blocks

� Problem: How to schedule fetching of segments? � Until now, sequential segment scheduling

� Segment pre-fetching beneficial

� Results in rare segments when nodes arrive at different times

� Segment scheduling algorithm to avoid rare segments

Page 28: Enabling near-VoD via P2P Networks

28

Segment Scheduling� A has 75% of the file

� Flash crowd of 20 Bs join with empty caches

� Server shared between A and Bs

� Throughput to A decreases, as server is busy serving Bs

� Initial segments served repeatedly

� Throughput of Bs low because the same segments are fetched from server

Server

B B

B

A

Page 29: Enabling near-VoD via P2P Networks

29

Server

A

1 2 3 4 5 6 7 8

Bs

2 2 2 2 2 2 1 1Popularities

Segment #

Segment Scheduling – Problem

� 8 segments in the video; A has 75% = 6 segments, Bs have none

� Green is popularity 2, Red is popularity 1

� Popularity = # full copies in the system

Page 30: Enabling near-VoD via P2P Networks

30

Server

A

1 2 3 4 5 6 7 8

Bs

2 2 2 2 2 2 1 1Popularities

Segment #

Server

A

1 2 3 4 5 6 7 8

Bs

2 2 2 2 2 2 2 1Popularities

Segment #

Segment Scheduling – Problem

� Instead of serving A …

Page 31: Enabling near-VoD via P2P Networks

31

Segment Scheduling – Problem

� The server gives segments to Bs

Server

A

1 2 3 4 5 6 7 8

Bs

3 2 2 2 2 2 1 1Popularities

Segment #

Page 32: Enabling near-VoD via P2P Networks

32

Segment Scheduling – Problem

� A’s goodput plummets

� Get the best of both worlds – Improve A and Bs� Idea: Each node serves only rarest segments in system

� A

� Bs

Page 33: Enabling near-VoD via P2P Networks

33

Segment Scheduling – Algorithm

� When dst node connects to the src node…� Here, dst is B, src is Server

Server

A

1 2 3 4 5 6 7 8

Bs

2 2 2 2 2 2 1 1Popularities

Segment #

Page 34: Enabling near-VoD via P2P Networks

34

Segment Scheduling – Algorithm

� src node sorts segments in order of popularity

� Segments 7, 8 least popular at 1

� Segments 1-6 equally popular at 2

Server

A

1 2 3 4 5 6 7 8

Bs

2 2 2 2 2 2 1 1Popularities

Segment #

Page 35: Enabling near-VoD via P2P Networks

35

Segment Scheduling – Algorithm

� src considers segments in sorted order one-by-one, and serves dst� Either completely available at src, or

� First segment required by dst

Server

A

1 2 3 4 5 6 7 8

Bs

2 2 2 2 2 2 1 1Popularities

Segment #

Page 36: Enabling near-VoD via P2P Networks

36

Server

A

1 2 3 4 5 6 7 8

Bs

2 2 2 2 2 2 1 1Popularities

Segment #

Segment Scheduling – Algorithm

� Server injects segment needed by A into system� Avoids wasting bandwidth in serving initial segments multiple

times

Page 37: Enabling near-VoD via P2P Networks

37

Segment Scheduling – Popularities

� How does src figure out popularities?

� Centrally available at the server

� Our implementation uses this technique

� Each node maintains popularities for segments

� Could use a gossiping protocol for aggregation

Page 38: Enabling near-VoD via P2P Networks

38

Segment Scheduling

� Note that goodput of both A and Bs improves

Page 39: Enabling near-VoD via P2P Networks

39

Topology Management� A has 75% of the file

� Flash crowd of 20 Bs join with empty caches

� Limitations of segment scheduling

� Bs do not benefit from server, as they get blocks for A

� A delayed too because of re-routing of blocks from Bs

Server

B B

B

A

� Present an algorithm that avoids these problems

Page 40: Enabling near-VoD via P2P Networks

40

Topology Management – Algorithm

� Cluster nodes interested in the same part of the video

� Retain connections which have high goodput

� When dst approaches src, src serves dst only if

� src has spare capacity, or

� dst is interested in the worst-seeded segment per src’s estimate

� If so, kick a neighbour with low goodput

Server

B B

B

A

Page 41: Enabling near-VoD via P2P Networks

41

Topology Management – Algorithm

� When Bs approach server…

� Server does NOT have spare capacity

� B is NOT interested in the worst-seeded segment per server’s estimate (segment 7)

Server

B B

B

A� When dst approaches src,

src serves dst only if

� src has spare capacity, or

� dst is interested in the worst-seeded segment per src’s estimate

� If so, kick a neighbour with low goodput

Page 42: Enabling near-VoD via P2P Networks

42

Topology Management

� Note that both A and the Bs see increased goodput

Page 43: Enabling near-VoD via P2P Networks

43

Heterogeneous Capacities

� Above algorithms do not work well with heterogeneous nodes

� A is a slow node

� Flash crowd of 20 fast Bs

� Bs get choked because A is slow

Server

B B

B

A

Page 44: Enabling near-VoD via P2P Networks

44

Heterogeneous Capacities

� Server refuses to give initial segments to Bs

� Popularity calculation takes A to be a source

� Adjust segment popularity with weight proportional to the capacity of the node

� Initial segments have < 2 popularity, rounded to 1

� Server then gives to Bs

Fast

Slow

Page 45: Enabling near-VoD via P2P Networks

45

Heterogeneous Capacities

� Note the improvement in the fast/slow nodes

Fast

Slow

Page 46: Enabling near-VoD via P2P Networks

46

Conclusions

� System designed to support near-VoD using peer-to-peer networks� Hitherto, only large file distribution possible

� Three techniques to achieve low setup time and sustainable goodput� NetCoding – Improves throughput

� Segment scheduling – Prevent rare blocks

� Topology management – Cluster “similar” nodes

Page 47: Enabling near-VoD via P2P Networks

47

Questions?