
Chapter 5: Multicast and P2P

A note on the use of these ppt slides:

All material copyright 1996-2007, J.F. Kurose and K.W. Ross, All Rights Reserved.

Computer Networking: A Top-Down Approach, 4th edition. Jim Kurose, Keith Ross. Addison-Wesley, July 2007.


Broadcast Routing

Deliver packets from a source to all other nodes. Source duplication is inefficient:
  • The source must create and transmit every duplicate itself
  • How does the source determine the recipient addresses?

[Figure: source duplication vs. in-network duplication across routers R1-R4. With source duplication, the source creates and transmits all copies; with in-network duplication, routers duplicate packets inside the network.]


In-network Duplication

Flooding: when a node receives a broadcast packet, it sends a copy to all neighbors
  • Problems: cycles & broadcast storm

Controlled flooding: a node broadcasts a packet only if it hasn’t broadcast the same packet before
  • The node keeps track of packet IDs it has already broadcast
  • Or reverse path forwarding (RPF): only forward a packet if it arrived on the shortest path between the node and the source (see the sketch below)

Spanning tree
  • No redundant packets received by any node
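A compact sketch of these duplicate-suppression checks, assuming a toy node model (a neighbor list, a unicast routing table, and a send() callback, all hypothetical). The slide presents packet-ID tracking and RPF as alternatives; both checks appear in one handler here purely for illustration:

```python
class BroadcastNode:
    def __init__(self, node_id, neighbors, next_hop_to, send):
        self.id = node_id
        self.neighbors = neighbors        # ids of adjacent nodes
        self.next_hop_to = next_hop_to    # unicast table: destination -> next hop
        self.send = send                  # send(neighbor_id, packet)
        self.seen = set()                 # (source, seq) pairs already broadcast

    def on_broadcast(self, pkt, arrived_from):
        # Controlled flooding: re-broadcast only if this packet id is new.
        key = (pkt["source"], pkt["seq"])
        if key in self.seen:
            return
        self.seen.add(key)

        # Reverse path forwarding (RPF): forward only if the packet arrived on
        # the link this node would itself use to unicast toward the source.
        if arrived_from is not None and arrived_from != self.next_hop_to[pkt["source"]]:
            return

        for n in self.neighbors:
            if n != arrived_from:
                self.send(n, pkt)         # flood to every other neighbor
```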


Spanning Tree

  • First construct a spanning tree
  • Nodes forward copies only along the spanning tree

[Figure: an example graph with nodes A-G. (a) Broadcast initiated at A; (b) broadcast initiated at D. Both deliver one copy per node along the same spanning tree.]


Spanning Tree: Creation

  • A center node is defined
  • Each node sends a unicast join message to the center node
  • The message is forwarded until it arrives at a node already belonging to the spanning tree (see the sketch below)

[Figure: (a) stepwise construction of the spanning tree, with joins numbered 1-5; (b) the constructed spanning tree.]
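A minimal sketch of this center-based join, assuming each node knows its unicast next hop toward the center (names such as next_hop_to and tree_links are hypothetical):

```python
def join_tree(node, center, next_hop_to, on_tree, tree_links):
    """Walk the unicast path from `node` toward `center`, grafting links onto
    the tree until reaching a node that already belongs to it."""
    current = node
    while current not in on_tree:
        parent = next_hop_to[current][center]   # unicast next hop toward center
        tree_links.add((current, parent))       # graft this hop onto the tree
        on_tree.add(current)
        current = parent

# Usage: start with on_tree = {center}; as nodes join one by one, later join
# messages terminate early as soon as they hit an existing branch.
```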


Multicast Routing: Problem Statement

Goal: find a tree (or trees) connecting routers that have local multicast group members
  • Tree: not all paths between routers are used
  • Source-based: a different tree from each sender to the receivers
  • Shared-tree: the same tree is used by all group members

[Figure: a shared tree vs. source-based trees.]


Approaches for Building Multicast Trees

Approaches:
  • Source-based tree: one tree per source
    • Shortest-path trees
    • Reverse path forwarding
  • Group-shared tree: the group uses one tree
    • Minimal spanning (Steiner) tree
    • Center-based trees

You can read the details of these approaches in the textbook.


IP Multicast – Related Work

Seminal work by S. Deering in 1989; a huge amount of follow-on work
  • Research: thousands of papers on multicast routing, reliable multicast, multicast congestion control, layered multicast
  • Standards: IPv4 and IPv6, DVMRP/CBT/PIM
  • Development: in both routers (Cisco etc.) and end systems (Microsoft, all versions of Unix)
  • Deployment: Mbone, major ISPs


IP Multicast – Problems

  • Scalability
    • Large number of multicast groups
  • Requirement of a dynamic spanning tree
    • A practical problem in dynamic environments
  • System complexity
    • Routers maintain state information for multicast groups, deviating from the stateless router design
    • Brings higher-level features (e.g., error and congestion control) into the network
  • Autonomy
    • Difficult to maintain consistent policies across different domains


Content Distribution Networks (CDN)

  • Push content to servers at the network edge, close to users
  • Support on-demand traffic, but also broadcast
  • Reduce backbone traffic
  • CDNs like Akamai place tens of thousands of servers

[Figure: Akamai edge servers at the network edge. Source: http://esm.cs.cmu.edu/]


CDN – Stream Distribution

[Figure: in a content delivery network (CDN), a media server feeds splitter servers, which fan the stream out to viewers.]

Example: AOL webcast of the Live 8 concert (July 2, 2005)
  • 1,500 servers in 90 locations
  • 50 Gbps
  • 175,000 simultaneous viewers
  • 8M unique viewers

Slide by Bernd Girod


The Scale Problem

The aggregate capacity required:
  • To reach 1M viewers with MPEG-4 (1.5 Mbps) TV-quality video requires 1.5 Tbps of aggregate capacity
  • CBS NCAA tournament (March 2006): video at 400 Kbps with 268,000 users means an aggregate capacity of about 100 Gbps
  • Akamai, the largest CDN service provider, reports 200 Gbps aggregate capacity at peak

Implication: a self-scaling property is needed, i.e., delivery capacity that grows with the audience, as P2P provides. A quick arithmetic check follows.
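A back-of-the-envelope check of the aggregate-capacity figures above:

```python
# 1M viewers at TV quality
viewers, rate_bps = 1_000_000, 1.5e6        # 1.5 Mbps per stream
print(viewers * rate_bps / 1e12, "Tbps")    # -> 1.5 Tbps, as stated

# CBS NCAA tournament, March 2006
users, rate_bps = 268_000, 400e3            # 400 Kbps per stream
print(users * rate_bps / 1e9, "Gbps")       # -> ~107 Gbps, close to the quoted 100 Gbps
```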


Overlay Multicast – Basics

Application-layer multicast, or overlay multicast: build multicast trees at the application end
  • A virtual topology over the unicast Internet
  • End systems communicate through an overlay structure

Existing multicast approaches:
  • Swarming-based (tree-less, or data-driven)
  • Tree-based (hierarchical)

Examples:
  • End System Multicast (ESM) – Hui Zhang et al.
  • Yoid – Paul Francis et al.
  • …


Overlay Multicast

[Figure: illustration of an overlay multicast tree built among end systems over the unicast Internet.]


Overlay Multicast – Discussion

Major advantages:
  • Efficient multicast service deployment without the need for infrastructure support
  • Feasibility of implementing the multicast function at end systems
  • Easy to apply additional features (metrics)

Issues:
  • Limited topological information at the end-user side
  • How to find/determine an ideal topology?
  • Lack of practical systems and experiments


Ideal Overlay

Efficiency:
  • Routing (delay) in the constructed overlay network is close to that in the underlying network
  • Efficient use of bandwidth
    • Fewer duplicated packets on the same link
    • A proper number of connections at each node
  • Support node locality in overlay construction

Scalability:
  • The overlay remains tractable as the number of hosts and the data traffic increase
  • Small overlay-maintenance cost
  • The overlay is constructed in a distributed way and supports node locality


Locality-aware and Randomly-connected Overlays

[Figure: eight hosts (1-8) spread across two autonomous systems, AS-1 and AS-2. The randomly-connected overlay crosses the AS boundary many times; the locality-aware overlay keeps most links inside each AS.]


Locality-aware Unstructured Overlay

Objective of mOverlay [1]:
  • The ability to exploit local resources over remote ones when possible
    • Locate nearby objects without global communication
    • Permit rapid object delivery
  • Eliminate unnecessary wide-area hops for inter-domain messages
    • Eliminate traffic going through high-latency, congested stub links
    • Reduce wide-area bandwidth utilization

[1] X. Zhang, Q. Zhang, Z. Zhang, G. Song and W. Zhu, "A Construction of Locality-Aware Overlay Network: mOverlay and its performance", IEEE JSAC Special Issue on Recent Advances on Service Overlay Networks, Jan. 2004.


Key Concepts for mOverlay

Two-level hierarchical network:
  • A group consists of a set of hosts close to each other
    • For ANY position P in the underlying network, the distances between P and the hosts within a group can be considered equal
  • Neighbor groups in the overlay are the groups nearby in the underlying network
  • A desirable overlay structure has most links between hosts within a group and only a few links between groups

Approximation: use the neighbors of a group as dynamic landmarks


Locating Process

[Figure: a joining host contacts the RP; step (1) returns boot host B from Group 1, step (2) is measurement and information exchange with B's group and its neighbor groups, and steps (3)-(7) repeat this against successively closer groups.]

Four-phase locating (sketched below):
  1. Contact the RP to fetch boot hosts
  2. Measure the distance to a boot host and its neighbor groups
  3. Determine the closest group with group-criterion checking
  4. Terminate when the group criterion or the stop criterion is met
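A sketch of this locating loop, assuming a measure() latency probe and group objects that expose a boot host and their neighbor groups (all names are hypothetical):

```python
def locate_group(rp, measure, max_iters=32):
    """Hop between neighbor groups, which act as dynamic landmarks, until no
    neighbor group is closer than the current one."""
    current = rp.get_boot_group()            # phase 1: RP returns a boot host's group
    for _ in range(max_iters):               # stop criterion: bounded iterations
        best, best_dist = current, measure(current.boot_host)   # phase 2: probe
        for g in current.neighbor_groups():  # probe the dynamic landmarks
            d = measure(g.boot_host)
            if d < best_dist:
                best, best_dist = g, d
        if best is current:                  # phases 3-4: group criterion met,
            return current                   # no neighbor group is closer
        current = best                       # otherwise move to the closer group
    return current                           # stop criterion met
```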


Popular Deployed Systems

Live P2P streaming has become an increasingly popular approach, with many real deployed systems. To name a few …

CoolStreaming: Cooperative Overlay Streaming
  • First release: May 2004
  • As of Oct 2006:
    • Downloads: > 1,000,000
    • Average online users: 20,000
    • Peak-time online users: 80,000
    • Google entries (CoolStreaming): 370,000
  • CoolStreaming is the base technology of Roxbeam Corp., which launched live IPTV programs jointly with Yahoo Japan in October 2006


Popular Deployed Systems (Cont.)

PPLive: a well-known IPTV system
  • 3.5 M subscribers in 2005
  • 36.9 M subscribers predicted for 2009
  • May 2006: over 200 distinct online channels
  • Revenues could reach $10 B
  • We need to understand current systems to design better future ones

More to come …


Pull-based Streaming

Almost all real deployed P2P streaming systems are based on a pull-based protocol
  • Also called a “data-driven” or “swarming” protocol

Basic idea (sketched below):
  • Live media content is divided into segments, and every node periodically notifies its neighbors of which segments it has
  • Each node explicitly requests the segments of interest from its neighbors according to their notifications
  • Very similar to BitTorrent

The well-acknowledged advantages: robustness and simplicity
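A minimal sketch of this data-driven exchange; the message formats and the greedy request policy are illustrative assumptions, not the wire protocol of any deployed system:

```python
class PullPeer:
    def __init__(self, send):
        self.have = set()            # segment ids held locally
        self.send = send             # send(neighbor_id, message_dict)

    def announce(self, neighbors):
        # Periodically advertise our "buffer map" to all neighbors.
        for n in neighbors:
            self.send(n, {"type": "buffermap", "have": set(self.have)})

    def on_buffermap(self, neighbor, their_have):
        # Pull every segment the neighbor has and we lack. A real scheduler
        # would spread requests across neighbors and honor playback deadlines.
        for seg in sorted(their_have - self.have):
            self.send(neighbor, {"type": "request", "segment": seg})

    def on_segment(self, seg_id, data):
        # Buffer the payload; the segment is advertised in the next announce.
        self.have.add(seg_id)
```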


Hybrid Pull-Push Protocol

The pull-based protocol trades off control overhead against delay.

To minimize delay:
  • A node notifies its neighbors immediately when a packet arrives
  • Neighbors should also request the packet immediately
  • This results in remarkable control overhead

To diminish the overhead:
  • A node can wait until dozens of packets have arrived before informing its neighbors
  • Neighbors can also request a bunch of packets at a time
  • This leads to considerable delay


Push-Pull Streaming Mechanism

How can we reduce the delay of the pull mechanism while keeping its advantages?
  • Use the pull mechanism at startup to measure the partners’ ability to provide video packets
  • Use the push mechanism to reduce the delay
  • Partition the video stream according to the video packets received from each partner in the last interval (see the sketch below)
  • Packets lost during a push interval are recovered by the pull mechanism
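One way the partitioning step might look, assuming the stream is split into a fixed number of substreams and each partner's measured delivery count from the last pull interval is available (the names and the substream count are assumptions):

```python
def assign_substreams(delivered, num_substreams=16):
    """delivered: partner id -> packets received from that partner in the last
    interval. Returns substream id -> partner that will push it from now on."""
    if not delivered:
        return {}
    total = sum(delivered.values())
    assignment, next_sub = {}, 0
    # Give each partner a share of substreams proportional to what it delivered.
    for partner, count in sorted(delivered.items(), key=lambda kv: -kv[1]):
        share = round(num_substreams * count / total)
        for _ in range(share):
            if next_sub < num_substreams:
                assignment[next_sub] = partner
                next_sub += 1
    while next_sub < num_substreams:          # rounding leftovers go to the
        assignment[next_sub] = max(delivered, key=delivered.get)  # best partner
        next_sub += 1
    return assignment

# Packets lost during a push interval are then re-requested via the pull path.
```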


GridMedia

GridMedia is designed to support large-scale live video streaming over the worldwide Internet
  • http://www.gridmedia.com.cn/

The first generation, GridMedia I:
  • Mesh-based multi-sender structure
  • Combined with IP multicast
  • First release: May 2004

The second generation, GridMedia II:
  • Unstructured overlay
  • Push-pull streaming mechanism
  • First release: Jan. 2005


Real Deployment

Gala Evening for the Spring Festival, 2005 and 2006, for the largest TV station in China (CCTV)
  • Streaming server: dual-core Xeon server
  • Video encoding rate: 300 kbps
  • Maximum connections from the server: 200 in 2005, 800 in 2006
  • Number of partners: about 10
  • Buffer deadline: 20 s


Performance Analysis

Gala Evening for the Spring Festival, 2005:
  • More than 500,000 person-times in total; a maximum of 15,239 concurrent users
  • Users from 66 countries; 78.0% from China
  • Achieved 76× (15,239/200 ≈ 76) capacity amplification over the bounded server outgoing bandwidth

[Figure: concurrent online users from 21:00 to 0:00, rising from about 6,000 to roughly 15,000. Pie charts: China 78% vs. others 22%; the non-China share breaks down as Canada 20%, USA 18%, UK 15%, Japan 13%, GM 6%, others 28%.]


Performance Analysis (Cont.)

Gala Evening for the Spring Festival, 2006:
  • More than 1,800,000 person-times in total; a maximum of 224,453 concurrent users
  • Users from 69 countries; 79.2% from China
  • Achieved 280× (224,453/800 ≈ 280) capacity amplification over the bounded server outgoing bandwidth

[Figure: concurrent online users from 20:00 to 1:00, peaking near 2.4 × 10^5. Non-China users include USA, Canada, Japan, Australia, UK, IANA, New Zealand, APNIC, Singapore.]


Deployment Experience: Connection Heterogeneity

[Figure: average streaming rate from 20:30 to 23:00, incoming rate vs. outgoing rate (0-350 kbps).]

  • In 2005, about 60.8% of users were behind various types of NATs, while at least 16.0% of users (in China) accessed the Internet via DSL connections
  • In 2006, about 59.2% of users were behind various types of NATs, while at least 14.2% of users (in China) accessed the Internet via DSL connections

An effective NAT traversal scheme should therefore be considered carefully in the system design of P2P-based live streaming applications (see the sketch below).
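For illustration only, a minimal UDP hole-punching sketch, assuming a rendezvous server has already told each peer the other's public (IP, port) mapping; real traversal schemes (e.g., STUN-style) handle many more cases, such as symmetric NATs:

```python
import socket

def punch(local_port, peer_addr, attempts=10, timeout=1.0):
    """Try to open a direct UDP path to a peer behind a NAT."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.settimeout(timeout)
    for _ in range(attempts):
        # Each outbound packet creates/refreshes the NAT mapping on our side;
        # the peer does the same, so one direction eventually gets through.
        sock.sendto(b"punch", peer_addr)
        try:
            data, addr = sock.recvfrom(1024)
            if addr[0] == peer_addr[0]:
                sock.sendto(b"ack", peer_addr)    # confirm the hole is open
                return sock                        # reuse for media traffic
        except socket.timeout:
            continue
    raise ConnectionError("hole punching failed (e.g., a symmetric NAT)")

# Usage (hypothetical endpoint learned from the rendezvous server):
# conn = punch(4000, ("203.0.113.7", 4000))
```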


Deployment Experience: Online Duration

  • In 2005, nearly 50% of users spent less than 3 minutes, while about 18% stayed active for more than 30 minutes
  • In 2006, roughly 30% of users left the system within 3 minutes, and more than 35% enjoyed the show for more than 30 minutes
  • Peers with a longer online duration are expected to have a larger average remaining online time

[Figure: (left) CDF of online time (seconds) for 2005 and 2006; (right) average remaining online time vs. elapsed online time, rising from roughly 1,500 s to 6,000 s.]

Taking online-duration information into account when designing the overlay structure or selecting upstream peers can improve system performance (see the sketch below).
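A toy sketch of such duration-aware upstream-peer selection, ranking candidates by elapsed online time as a proxy for expected remaining lifetime (the heuristic and all names are illustrative assumptions):

```python
import random

def pick_upstream(candidates, k=4):
    """candidates: list of (peer_id, online_seconds) pairs.
    Prefer long-lived peers, with some randomness to spread load."""
    ranked = sorted(candidates, key=lambda pt: pt[1], reverse=True)
    pool = [p for p, _ in ranked[: 2 * k]]        # shortlist the longest-lived
    return random.sample(pool, min(k, len(pool)))

# Example:
# pick_upstream([("a", 5400), ("b", 90), ("c", 2400), ("d", 30), ("e", 7200)])
```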


Deployment Experience: Request Characteristics

[Figure: request rate per 30 seconds from 23:00 to 0:00, in 2005 (up to about 4,000) and 2006 (scale ×10^4, up to about 4 × 10^4).]

  • The average request rate stayed at hundreds of requests in 2005 and thousands in 2006
  • Occasionally the request rate rushed to peaks beyond 3,700 in 2005 and 32,000 in 2006
  • The high request rate and sporadic flash crowds pose a great challenge to the reliability and stability of the RP server and the system


Future Directions

Throughput improvement should not be the only key focus. Interesting future directions:
  • Minimize ISP core-network and cross-ISP traffic
    • Use proxy caching and locality-aware techniques to relieve link stress
  • Server bandwidth reduction
    • How to let home users broadcast video with high quality?
  • The real Internet environment
    • Connections across the peering links between ISPs have low rates
    • NATs/firewalls prevent end hosts from connecting with each other