A Collaborative Transcoding Strategy for Live Broadcasting Over Peer-to-Peer IPTV Networks



    Transactions Letters

    A Collaborative Transcoding Strategy for Live Broadcasting over Peer-to-Peer IPTV Networks

    Jui-Chieh Wu, Polly Huang, Jason J. Yao, and Homer H. Chen, Fellow, IEEE

    Abstract: Real-time video transcoding that is often needed for robust video broadcasting over heterogeneous networks is not supported in most existing devices. To address this problem, we propose a collaborative strategy that leverages the peering architecture of peer-to-peer Internet protocol television networks and makes the computational resources of peers sharable. The video transcoding task is distributed among the peers and completed collaboratively. A prototype of the live video broadcasting system is evaluated over a 100-node testbed on the PlanetLab. The experimental results show that the proposed strategy works effectively even when the majority of the peers have limited computational resource and bandwidth.

    Index Terms: IPTV, live broadcasting, multiple description coding, P2P network, video streaming.

    I. Introduction

    VIDEO CONTENT distribution over peer-to-peer (P2P) networks has evolved and become an everyday norm [1]. The evolution will continue as the deployment of broadband wireless access, such as WiFi, 3G, and WiMAX, accelerates [10] and as more people create and share contents of their own over the Internet. The problem we are concerned with in this letter is related to the support of live video broadcasting over a P2P Internet protocol television (IPTV) network.

    Multiple description coding (MDC) [2], [3] and layered video coding (LVC) [4]-[6] techniques have been developed to enhance the quality of service for networks that are heterogeneous in nature. However, most multimedia devices only support popular coding standards such as MPEG-4 [7], [8] as the native compression format and cannot perform MDC or LVC in real time.

    Manuscript received September 29, 2008; revised April 27, 2009 and November 29, 2009; accepted August 13, 2010. Date of publication January 13, 2011; date of current version March 2, 2011. This work was supported in part by the grants from the National Science Council of Taiwan, under Contracts NSC 95-2219-E-002-012, NSC 94-2220-E-002-027, and NSC 95-2219-E-002-015. This paper was recommended by Associate Editor M. Comer.

    J.-C. Wu is with the Graduate Institute of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan (e-mail: r93922115@ntu.edu.tw).

    P. Huang and J. J. Yao are with the Department of Electrical Engineering and Graduate Institute of Communication Engineering, National Taiwan University, Taipei 10617, Taiwan (e-mail: phuang@cc.ee.ntu.edu.tw; jasonyao2000@hotmail.com).

    H. H. Chen is with the Department of Electrical Engineering, Graduate Institute of Communication Engineering, and Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei 10617, Taiwan (e-mail: homer@cc.ee.ntu.edu.tw).

    Digital Object Identifier 10.1109/TCSVT.2011.2105571

    To break the computational bottleneck, we leverage the P2P streaming architecture of an IPTV system and make the computational resource, in addition to bandwidth, of the peers sharable. Under this strategy, the native stream generated by the source peer is divided into small segments and assigned to peers that have the computational resource to share the transcoding workload. That is, the transcoding task is distributed among peers and completed collaboratively.

    The notion of P2P networking for resource sharing has been adopted in various systems [11]-[14]. However, our approach is different in that the source peer does not collect the distributed results from the peers and that the transcoding is carried out in a pure P2P fashion without central governing.

    II. System Overview

    The collaborative video transcoding strategy is built upon a P2P IPTV system [2] as the baseline, which incorporates pull-based content delivery (swarming) and layered encoding to stream media to heterogeneous viewers. The system architecture is shown in Fig. 1. The partnership formation component maintains the partner relationship between the peers, and the peer information exchange component maintains a buffer map that records the data availability. The exchange period of the buffer maps is referred to as the swarming cycle. Based on the buffer map, the segment scheduling module searches for the video segments that can arrive before the display time and are downloadable within the available bandwidth.

    The baseline system is extended to incorporate the collaborative transcoding strategy for live video broadcasting. A peer passes a transcoding task to its downstream peers if it cannot handle the task, which involves decoding a segment in the native compression format and re-encoding it into the target format.

    The source peer generates a native compressed video stream and broadcasts it through the P2P network. The receivers are classified into transcoding peers (those which transcode the received segments) and transporting peers (those which only transport the received segments without transcoding them). After going through one or more overlay hops in the P2P network, a segment in the native format is converted to the target format.

    Fig. 1. P2P IPTV system architecture with collaborative transcoding for live video broadcasting. The highlighted area represents the baseline system.

    There are three requirements. First, a peer should collect the information about the available computing power of other peers. Second, to avoid assigning a transcoding task too many times, a peer should collect information about the transcoding status of a segment. Third, based on the collected information, a peer should assign a transcoding task (defined in Section IV) to each downstream peer, and each downstream peer should be able to determine if a task is acceptable and, if yes, its priority.

    Consequently, the baseline system is extended with: 1) capability estimation; 2) exchange of capability and transcoding status information; and 3) job scheduling. The capability estimation component periodically estimates the available computing power of a peer. The extended peer information exchange component manages the exchange of information with other peers about the computational capability of a peer and the transcoding status of a video segment. The job scheduling component, which is integrated with the segment scheduling component of the baseline system, determines for a partner peer which segments should be sent to the requesting peer and which requesting peers can transcode the segments. For a requesting peer, this component determines which segments to download.

    III. Computational Capability Estimation

    Computational capability estimation is a challenging issue because the available computational resource varies from device to device and depends on factors such as the central processing unit (CPU) clock speed, the random access memory (RAM) size, the current CPU load, the transcoding scheme, and the video characteristics. For each specific CPU clock speed and RAM size, it is possible to determine and record the available computing power in a lookup table. However, it is practically impossible to enumerate all possible combinations.

    Fig. 2. Structure of a buffer map.

    Fig. 3. Five phases of collaborative video transcoding and their values in binary format.

    To have a robust and universal mechanism that is independent of codec, hardware platform, and operating system, we use the average transcoding time ta of a 1-s video segment for computational capability estimation and update it whenever the peer completes a transcoding task. The number of segments a peer can handle in one swarming cycle, Cs, is determined by dividing the swarming cycle ts by ta.

    The capability estimate cannot be completely accurate since the available resource of a peer varies over time. Therefore, in the event that a peer is unable to finish all tasks within a swarming cycle due to an overestimate of its computational capability, the pending transcoding tasks must be subtracted from Cs in the next swarming cycle.
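    As a rough illustration of this bookkeeping, the Python sketch below estimates Cs = ts / ta and carries unfinished tasks into the next cycle; the class and method names, and the exponential weighting of the average, are our own assumptions rather than details given in the letter.

        # Minimal sketch of the capability estimation described above.
        # The smoothing weight and all identifiers are illustrative assumptions.
        class CapabilityEstimator:
            def __init__(self, swarming_cycle_s=4.0):
                self.ts = swarming_cycle_s   # swarming cycle length ts (s)
                self.ta = None               # average transcoding time ta of a 1-s segment
                self.pending = 0             # tasks left unfinished in the previous cycle

            def record_completed_task(self, transcode_time_s, weight=0.8):
                # Update ta whenever the peer completes a transcoding task.
                if self.ta is None:
                    self.ta = transcode_time_s
                else:
                    self.ta = weight * self.ta + (1 - weight) * transcode_time_s

            def segments_per_cycle(self):
                # Cs = ts / ta, reduced by tasks still pending from the last cycle.
                if not self.ta:
                    return 0
                return max(int(self.ts / self.ta) - self.pending, 0)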

    IV. Peer Information Exchange

    In each swarming cycle, a peer requests the buffer map from its partner peers. The request message also contains the available uplink bandwidth, the number of requesting peers, and the computational capability of the peer. The available uplink bandwidth is estimated by using the transmitted media data as the probing packets to avoid traffic overload. Upon receiving the request, a partner peer uses the information contained in the request message to determine the candidate segments and responds to the request with a buffer map for job scheduling.
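    For concreteness, the per-cycle request described above might be modeled as a small record like the one below; the field names and types are assumptions for illustration, since the letter does not specify a wire format.

        from dataclasses import dataclass

        # Hypothetical shape of the buffer map request sent each swarming cycle.
        @dataclass
        class BufferMapRequest:
            peer_id: int
            uplink_bandwidth_kbps: float  # estimated using transmitted media data as probes
            num_requesting_peers: int     # peers currently requesting segments from this peer
            capability_cs: int            # segments the peer can transcode per swarming cycle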

    The buffer map extended from the baseline system carries additional information about transcoding, as shown in Fig. 2. A buffer map consists of one 8-bit ID and 64 contiguous segment descriptors, each of which is 8 bits long and corresponds to a 1-s video segment. The last 4 bits of a segment descriptor indicate which MDC descriptions [2] are available if it is an MDC segment, or the number of requesting peers that receive the segment for transcoding if this segment is not transcoded yet. The latter is needed for job scheduling.

    The first 4 bits of the descriptor of a segment describe the transcoding status of the segment, namely, initial (1), transport (2), candidate (3), progressing (4), and complete (5), as shown in Fig. 3. The first four phases mean that the segment is still in the native compression format, whereas the last one means the segment is completely transcoded to the MDC format.

    The transcoding status of a new segment in the native compression format is set to initial. Once requested, its status is changed to transport. If a peer finds that it can transcode the segment, it changes the status of the segment to candidate. As the segment is being transcoded, its status is changed to progressing. When the segment is finally converted to the MDC format, its status is changed to complete. The buffer map is updated every second in our system to match the video rate since each live video segment is 1 s long.
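    To make the descriptor layout of Figs. 2 and 3 concrete, the sketch below packs and unpacks one 8-bit descriptor; the bit ordering inside the byte and the helper names are our own assumptions, not something the letter specifies.

        # Sketch of an 8-bit segment descriptor: the first 4 bits carry the
        # transcoding status (1=initial, 2=transport, 3=candidate,
        # 4=progressing, 5=complete); the last 4 bits carry either the
        # available MDC descriptions (once transcoded) or the number of
        # peers already asked to transcode the segment.
        INITIAL, TRANSPORT, CANDIDATE, PROGRESSING, COMPLETE = 1, 2, 3, 4, 5

        def pack_descriptor(status, low_nibble):
            assert 1 <= status <= 5 and 0 <= low_nibble <= 15
            return (status << 4) | low_nibble

        def unpack_descriptor(byte):
            status, low = (byte >> 4) & 0x0F, byte & 0x0F
            if status == COMPLETE:
                return status, {"mdc_descriptions": low}  # which descriptions are available
            return status, {"request_count": low}         # peers asked to transcode it

        # Example: a segment being transcoded that two peers were asked to handle.
        print(unpack_descriptor(pack_descriptor(PROGRESSING, 2)))  # (4, {'request_count': 2})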

    V. Job Scheduling

    The job scheduling module relies on the information periodically exchanged between peers to determine the transcoding schedule of the video segments. The message flow of the job scheduling is as follows. First, a peer sends to all its partner peers a buffer map request. Upon receiving the buffer map request, a partner peer decides which segments to announce (or expose) to the requesting peer. This operation is referred to as candidate segment selection. The descriptors of the selected segments are inserted in the buffer map and sent along with the buffer map to the requesting peer. After collecting all buffer map responses, the requesting peer decides which segments to download and schedules the transmission of these segments.

    A. Candidate Segment Selection

    The candidate segment selection runs on partner peers. If all segments in the buffer of a partner peer are MDC segments, the partner peer just has to announce all the segments to the requesting peer. However, when some of them are untranscoded, the partner peer needs to decide how many of them it can transcode and which of the remaining segments it should pass to the requesting peer. It works as follows.

    First, all MDC segments are selected and announced to the requesting peer because we want MDC segments to populate the live broadcasting network as soon as possible.

    Second, those segments in the native compression format which cannot be converted to the MDC format before the display deadline are announced to the requesting peer.

    Finally, given the computational capability C of the requesting peer, the partner peer selects additional segments and announces them to the requesting peer. These additional segments are the first C segments of the remaining segments in the native compression format sorted by three criteria: 1) transcoding status; 2) request count; and 3) display deadline of the segments. These criteria are applied in order.
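    A compact sketch of the three-step selection just described is given below; the segment fields (is_mdc, status, request_count, deadline, transcode_time) and the ascending sort order are assumptions we introduce for illustration.

        # Candidate segment selection, run on a partner peer.
        def select_candidates(segments, requester_capability_c, now):
            announced = []

            # 1) Announce all MDC segments so they spread as quickly as possible.
            announced += [s for s in segments if s["is_mdc"]]

            # 2) Announce native segments that can no longer be transcoded
            #    before their display deadline.
            native = [s for s in segments if not s["is_mdc"]]
            announced += [s for s in native if now + s["transcode_time"] > s["deadline"]]

            # 3) Of the remaining native segments, expose the first C after
            #    sorting by transcoding status, request count, and deadline.
            remaining = [s for s in native if now + s["transcode_time"] <= s["deadline"]]
            remaining.sort(key=lambda s: (s["status"], s["request_count"], s["deadline"]))
            announced += remaining[:requester_capability_c]
            return announced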

    B. Segment Transmission Scheduling

    The candidate segment transmission scheduling runs on requesting peers. After the buffer map is acquired, a requesting peer has the information about the transcoding status of the segments and can decide which segments to download for display and which for transcoding. Since our goal is to disseminate each MDC segment as soon as it becomes available, we download the MDC segments first.

    Then we download the segments in the native compression format, which are divided into urgent and non-urgent segments. If no peer, including the peer itself, can transcode or complete the transcoding of a segment before its display deadline, the segment is classified as urgent; otherwise, it is non-urgent. All urgent segments are downloaded next. Then the peer evaluates which non-urgent segments it can transcode before the other peers. C of these segments are downloaded for transcoding according to the score defined as follows:

    Ai = Σ_{j ∈ Si} α^(ri,j),  α ≥ 1    (1)

    where Ai is the score of the ith segment, Si the set of partner peers that have the ith segment, ri,j the transcoding status of the ith segment assigned by the jth partner peer, and α a parameter that controls the preference of selection. Non-urgent segments with lower scores are selected first.
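    Under this reading of (1), the selection of non-urgent segments could be sketched as follows; the data layout and tie-breaking are our own illustrative choices.

        # Score-based selection of non-urgent segments, following (1):
        # Ai is the sum over partner peers j holding segment i of alpha ** r_ij,
        # where r_ij is the transcoding status partner j reports for segment i.
        # Lower scores (fewer holders, less-advanced status) are preferred, and
        # at most C segments are downloaded for transcoding.
        def pick_nonurgent_for_transcoding(nonurgent_ids, status_reports, capability_c, alpha=2.0):
            # status_reports: dict mapping segment id -> {partner id: reported r_ij}
            def score(i):
                return sum(alpha ** r for r in status_reports.get(i, {}).values())
            return sorted(nonurgent_ids, key=score)[:capability_c]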

    VI. Experimental Results

    The performance of the collaborative transcoding strategy on a P2P live broadcasting system is tested with different coded streams as the source video data.

    A. Experimental Settings and Metrics

    The prototype of the proposed strategy is loaded into the nodes on the PlanetLab [9] for evaluation, and the experimental P2P network is constructed by using the TYPHOON partnership formation protocol of the baseline system [2]. Since the PlanetLab does not provide functions for controlling the computational resource of a node, we simulate the computing power of a node by controlling the average transcoding time of a video segment. The P2P network of our testbed consists of 100 peers, 80 in North America, 15 in Europe, and 5 in Eastern Asia. The swarming cycle is set to 4 s, and α is set to 2. Each experiment lasts 3500 s, and all peers join the network in the first 400 s.

    The system broadcasts the common international format-size Foreman sequence repeatedly in all the experiments. The frame rate is set to 30 f/s, and the native compression format is the simple profile of MPEG-4 Part 2. The bit rate of the MPEG-4 stream is set to 400 kb/s and the corresponding MDC streams to 500 kb/s. We evaluate the system performance by continuity index (CI) and transcoding load.

    CI is defined as the ratio of the number of segments received in time to the total number of segments sent by the source peer. In our experiments, four types of CIs are computed because...
