
IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 6, NO. 4, AUGUST 2004 599

Design a Progressive Video Caching Policy for Video Proxy Servers

Wei-hsiu Ma and David H. C. Du, Fellow, IEEE

Abstract—Proxy servers have been used to cache web objects to alleviate the load of web servers and to reduce network congestion on the Internet. In this paper, a central video server is connected to a proxy server via wide area networks (WANs), and the proxy server can reach many clients via local area networks (LANs). We assume a video can be either entirely or partially cached in the proxy to reduce WAN bandwidth consumption. Since the storage space and the sustained disk I/O bandwidth are limited resources in the proxy, how to efficiently utilize these resources to maximize the WAN bandwidth reduction is an important issue. We design a progressive video caching policy in which each video can be cached at several levels corresponding to cached data sizes and required WAN bandwidths. For a video, the proxy server decides either to cache a smaller amount of data at a lower level or to gradually accumulate more data to reach a higher level. The proposed progressive caching policy allows the proxy to adjust the cached amount for each video based on its resource condition and the user access pattern. We investigate the scenarios in which the access pattern is known a priori or unknown, and the effectiveness of the caching policy is evaluated.

Index Terms—Caching policy, progressive video caching, proxy server, two-constraint multiple-choice knapsack problem.

I. INTRODUCTION

The role of streaming stored video over high-speed networks and the Internet is becoming increasingly important for multimedia applications. The high bandwidth requirement and rate variation of compressed video introduce challenging issues for the end-to-end delivery of streaming video over wide area networks (WANs). A WAN is usually shared by many institutions or organizations across distant regions, and thus its bandwidth is more expensive and more difficult to expand than that of local area networks (LANs). Proxy servers have been widely used for web accesses to decrease the response time and to alleviate the load of busy web servers. The same concept can be applied to video streaming.

Originally, a central video server provides the video archive and delivers video content to the users directly. A video proxy server (proxy server or proxy) residing in the same local area as the client can assist the delivery by taking advantage of its close proximity to the client. The proxy server stores portions of a video and transmits the cached and staged data to the client through the LAN such that the average rate of the transport from the central server to the client is reduced. Ideally, the proxy server stores all the videos of interest to this local community. However, the proxy has limited resources, especially storage capacity and sustained disk transfer rate (disk I/O bandwidth) for storing and accessing cached video data. Hence, it is important for the proxy server to store the “right” video data such that the overall WAN bandwidth requirement can be reduced as much as possible. Note that we assume the bandwidth of the LAN is much less expensive and adequate.

Manuscript received December 3, 2001; revised July 18, 2002. This work was supported in part by the National Science Foundation under Grants CMS-0086602 and EIA-0224424, and by the Digital Technology Center Intelligent Storage Consortium. The associate editor coordinating the review of this paper and approving it for publication was Dr. Shue-Lee Chang.

The authors are with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TMM.2004.830819

In contrast to web caching, video caching makes it possible to store a partial video in a proxy server, as in video staging [12], [28], prefix caching [13], [25], layered video caching [21], and selective frame caching [15]. In these studies, some of the video frames (an initial portion or selected segments) or partial video frames (e.g., some layers of layer-encoded videos) are cached to achieve various objectives such as WAN bandwidth reduction, startup delay reduction, or quality adaptation.

There are good reasons to allow the proxy server to cache partial videos instead of entire videos.

1) Due to the large size of a video file, the storage space in the proxy may be wasted if the entire video cannot fit into the available space. The proxy can only store a few entire video files, and thus caching whole videos may not be efficient, especially when many users are accessing different videos.

2) In some situations, caching part of a video may be sufficient to achieve the purpose or gain much of the benefit. For example, caching the initial portion of a video can reduce the startup delay [13], [25]. Storing some video frames can achieve efficient CBR transmission over the WAN so that the larger peak rate due to smoothed VBR delivery is eliminated [12].

3) With whole video caching, it takes the proxy a longer time and much higher disk I/O bandwidth to cache a whole video. Based on a removal policy, some other videos may have to be discarded to free space to accommodate the newly cached video. It is possible that the proxy caches a potentially cold movie and gets rid of a hotter movie. A wrong decision will waste much I/O bandwidth and storage space. To reduce such a risk, partial video caching can be useful.

Theoretically, the proxy server could select an arbitrary amount of cached data for a video, but this would cause complexity and difficulty in managing the cached data. To avoid arbitrary selection while still providing the proxy server the flexibility to cache partial videos, we propose the concept of progressive video caching. Each video provides several levels (options) in terms of proxy resource consumption and bandwidth reduction amount (or required WAN bandwidth), so that the proxy server can select one level of the video to be cached based on its caching policy and resource status.

One extreme level (Level 0 in this paper) is the selection of no data caching. In this case, there is no resource consumption in the proxy server, or only a small amount of consumption (e.g., memory buffer) for the passing data. If the proxy decides to cache portions of the video, it can store a certain amount of the passing data of this video, which is defined as Level 1. Then, the WAN bandwidth requirement can be reduced due to the cached data when this video is requested later. Based on the caching policy, the proxy server can accumulate more data to reach a higher caching level. The other extreme (highest) level is to cache the entire video so that requests for this video will be completely served by the proxy server. Between the two extremes, the cached data amount is gradually increased from a lower level to a higher level. Note that a level simply corresponds to the amount of data cached for a video; the video is not necessarily multilayer encoded. However, if a video is multilayer encoded, the levels can naturally correspond to the additional layers to be cached.

The objective of our study is to design basic caching policies that determine which portion (or level) of a video should be cached in progressive video caching such that the maximum WAN bandwidth reduction can be accomplished. The disk space and disk I/O bandwidth of the proxy server are the two constraints to be considered. Our approach is twofold.

1) Investigate various caching policies for a given static user access pattern. That is, the number of concurrent accesses for each video is known and very stable. Under this assumption, the proxy has to decide which level is chosen for each video such that the overall bandwidth reduction over the WAN is maximized. The proxy may prefetch the video data offline, since dynamic adjustment is not required.

2) Design a caching policy for unknown or dynamically changing user access patterns. In this case, the proxy server does not have a priori knowledge about the user access pattern. It accumulates information from the requests to dynamically adjust the cached level of each video based on the proposed caching policy.

The rest of the paper is organized as follows. Section II introduces related work and reviews a scheme, the Chunk Algorithm, used to partition a video into segments and to decide the caching levels. Section III formulates the progressive video caching problem as a generalized knapsack problem, and several heuristic approaches are provided to solve the scenario of static user access patterns. In Section IV, we consider the dynamic situation in which the proxy server dynamically adjusts the level for each video based on the estimated access pattern. Section V concludes this paper.

II. RELATED WORK

Related work in the areas of video smoothing and video proxy caching or staging is described in this section. By taking advantage of the client buffer, video traffic can be smoothed into either constant bit rate (CBR) [14], [26] or variable bit rate (VBR) [9], [24]. The video server decides a transmission plan based on the characteristics of the stored video and the client buffer size such that buffer overflow and underflow are avoided to provide continuous playback.

A few researchers have studied issues regarding video proxy servers. One purpose is to reduce the network bandwidth requirement [12], [13], [28]. Sen et al. proposed a prefix caching scheme that stores the initial part of a video at the proxy to reduce the startup latency [25]. Smoothing between the proxy and the client is investigated in [22], [25]. Under the assumption of layered-encoded multimedia streams, a fine-grain replacement algorithm for cached data is provided in [20]. Miao and Ortega proposed caching some frames of a video to minimize the quality loss under network congestion [15].

There are many caching policies designed for web caching. Similar approaches are gathered into the same item of the following list. They are listed only for reference, so the details of each policy are not described.

1) Least Recently Used (LRU);
2) Least Frequently Used (LFU);
3) LRU-Threshold [1];
4) Size [29];
5) LRU-Min [1], Log(Size) [29], Pyramidal Selection [2];
6) Day [19];
7) Page Load Delay [30], Hybrid [30];
8) Greedy Dual-Size [4], [5];
9) Lowest Relative Value (LRV) [23];
10) Logistic Regression [10].

In general, most of the policies define a value for each web object. Then, based on this value, objects are selected for caching. A threshold may be set so that only the objects under the threshold are cached.

Another work, on resource-based caching [27] proposed by Tewari et al., is closely related to our study. Their segmentation of the multimedia objects is based on interval caching [6], which was originally designed for buffer caching in a video server. Interval caching considers two or several consecutive requests (called a run) and temporarily keeps the data between the first and the last streams. They formulated the caching model as a two-constraint knapsack problem. Our work has several different aspects.

1) We consider progressive video caching, in which the proxy has several options for caching each video under the constraints of disk storage. The problem can be formulated as a two-constraint multiple-choice 0–1 knapsack problem. Therefore, we design a different heuristic algorithm to solve the problem.

2) In interval caching, the cached data is released after the last stream of a run finishes. Our approach does not depend on how close the requests are, and the cached data is kept in the proxy until it has to be removed according to the removal policy.

3) We further consider the admission control in the proxy server and some other issues in the design of the video caching policy.

In our previous work [12], we demonstrated a delivery model, the client-synchronization model, for caching partial video in the proxy. The client allocates two buffers (separate or shared): one to receive the data flow from the central server (the central stream) over the WAN, and the other to receive the data flow from the proxy (the proxy stream) via the LAN. The client merges and synchronizes the video data from the two sources and then feeds the data to its decoder in the right order. The proxy server allows the central stream to pass through so that it is possible to cache more data.

We have also proposed an approach, the Chunk Algorithm, to decide which frames are cached in the proxy based on a given WAN CBR rate and a given client buffer size for the central stream [12]. The video frames are divided into segments, called chunks, which are stored in the central server and the proxy server alternately. The upper part of Fig. 1 demonstrates the basic mechanism of the Chunk Algorithm for the central stream. The decision of frame selection is based on maintaining the feasibility of the CBR transmission schedule; that is, no buffer underflow or overflow happens for the client buffer. The buffer underflow and overflow curves always have the same shape, and the cumulative CBR transmission schedule has to stay between them. At the beginning of the video, frames are selected to be cached in the proxy until the client buffer is close to full. This causes a flat period on the consumption curve, and the client does not display any frames of the central stream during this period. Then frames are not cached, and the consumption curve starts to increase to form an ascending period. It goes back to a flat period when the buffer is almost underflowing due to consumption. On the other hand, the buffer constraint curves for the proxy stream are the opposite. We use the optimal smoothing approach [24] to deliver the proxy frames in VBR based on the other client buffer, as shown in the lower part of Fig. 1.

The main purpose of the Chunk Algorithm is to divide a video into chunks so that the storage management at the proxy and the synchronization at the client site can be handled efficiently. It is also the assumed underlying delivery model for progressive video caching in this paper; however, we do not exclude other possible models for defining caching levels. The video service provider may decide that Video $i$ is served at one of $L_i+1$ levels, from Level 0 to Level $L_i$. Assume $R_{i,j}$ is selected as the CBR rate for Level $j$ of Video $i$. Following our convention, the proxy server stores more data at Level $j$ than at Level $j-1$; clearly, $R_{i,0} > R_{i,1} > \cdots > R_{i,L_i}$. The Chunk Algorithm can then be used to determine the cached frames for each level. After each CBR rate is decided, the resource requirements in the proxy and the client can be determined by the Chunk Algorithm for each level.

III. CACHING POLICY FOR STATIC USER ACCESS

In this section, we concentrate on the caching policy assuming the user access pattern is known and static for a long period of time. Therefore, the cached data will remain the same once it is determined. The caching policy for this scenario only decides which level of each video will be selected such that the maximum WAN bandwidth reduction can be achieved.

A. Problem Formulation

Fig. 1. Chunk Algorithm.

A user access pattern or access profile is defined by the probability of each video being accessed and thus indicates the popularity of each video. The access profile of a video-on-demand service is commonly modeled using a Zipf distribution. Let $n$ denote the total number of concurrent accesses from the community that the proxy is serving. Assume that the videos are ordered by decreasing popularity; that is, Video 1 is the most popular, Video 2 is the second most popular, and so on. Let $N$ denote the total number of available videos which may be requested. Based on the Zipf distribution, the number of accesses for Video $i$ is $n_i = n \cdot c / i^{1-\theta}$, where $c = 1\big/\sum_{k=1}^{N} k^{-(1-\theta)}$ and $0 \le \theta \le 1$ is the skew parameter. The Zipf distribution becomes uniform when $\theta = 1$. When $\theta$ is close to 0, the distribution is the most skewed and the differences between popular and cold videos are the largest. A study suggests $\theta = 0.271$ according to actual video rental data [7].
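As an illustration of this access model, the following sketch (not from the paper; the parameter values are taken from the simulation settings described later) generates the expected number of accesses per video under the Zipf profile reconstructed above.

```python
# Zipf-based access profile: the number of accesses for Video i is proportional
# to 1 / i^(1 - theta); theta = 1 gives a uniform profile, theta near 0 the most skew.
def zipf_accesses(n_total, n_videos, theta):
    weights = [1.0 / (i ** (1.0 - theta)) for i in range(1, n_videos + 1)]
    c = 1.0 / sum(weights)
    return [n_total * c * w for w in weights]

# Example: 200 concurrent accesses over 50 videos with skew factor 0.3.
profile = zipf_accesses(200, 50, 0.3)
print(round(profile[0], 1), round(profile[-1], 1))   # accesses of the hottest/coldest video
```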

The service provider defines the levels for a Video $i$ from Level 0 to Level $L_i$. The tradeoff is between the WAN bandwidth reduction and the proxy resource consumption: the more resources the proxy provides, the more bandwidth reduction can be achieved. Let $R_{i,j}$, $B_{i,j}$, $S_{i,j}$, and $O_{i,j}$ denote the required transmission rate over the WAN, the bandwidth reduction amount, the disk space requirement in the proxy, and the I/O bandwidth requirement in the proxy for Level $j$ of Video $i$, where $0 \le j \le L_i$ and $1 \le i \le N$. At the lowest level (Level 0), $B_{i,0}$, $S_{i,0}$, and $O_{i,0}$ are all 0 since no data is cached. As the level goes higher, the WAN bandwidth requirement is reduced, but the proxy stores more video data, which results in larger $S_{i,j}$ and $O_{i,j}$. $B_{i,j}$ denotes the bandwidth reduction, which is $R_{i,0} - R_{i,j}$. An example of the caching levels for Video $i$ is listed in Table I with $L_i = 4$. Assume the average rate of this video is 400 Kbps. Starting from Level 0, each level reduces the required rate by 1/4. However, the disk space and I/O bandwidth consumption also increase. $S_{i,L_i}$ is the size of the entire video.

TABLE I. An example of a caching level table for Video $i$.

For a single stream of Video $i$, the benefit in WAN bandwidth is $B_{i,j}$ if the proxy performs caching at Level $j$. When there are $n_i$ streams, the benefit accumulates to $n_i B_{i,j}$. However, the proxy server also needs to provide more disk I/O bandwidth, $n_i O_{i,j}$, to serve the $n_i$ streams at the local site. Note that the storage space occupancy is still $S_{i,j}$ no matter how many streams are served, since there is only one copy of the cached data.

After the user access pattern and the caching levels are defined, the selection becomes an optimization problem. Let $S_{total}$ and $O_{total}$ denote the total storage space and disk I/O bandwidth of the proxy, respectively. The problem is to find the level selection for each video such that the total WAN bandwidth reduction is maximized under the storage space and disk I/O bandwidth constraints in the proxy. By rounding all parameters up to integers in proper units, the problem can be formulated as the following integer program:

maximize $\sum_{i=1}^{N} \sum_{j=0}^{L_i} n_i B_{i,j}\, x_{i,j}$

subject to $\sum_{i=1}^{N} \sum_{j=0}^{L_i} S_{i,j}\, x_{i,j} \le S_{total}$, $\quad \sum_{i=1}^{N} \sum_{j=0}^{L_i} n_i O_{i,j}\, x_{i,j} \le O_{total}$,

$\sum_{j=0}^{L_i} x_{i,j} = 1$ for each $i$, and $x_{i,j} \in \{0,1\}$.

$x_{i,j}$ represents whether Level $j$ of Video $i$ is selected (1) or not (0), and exactly one level of each video has to be selected. By normalizing the resource consumption of each video with respect to the proxy's capacity, we define $s_{i,j} = S_{i,j}/S_{total}$ and $o_{i,j} = n_i O_{i,j}/O_{total}$, and thus the two resource constraints are equivalent to

$\sum_{i=1}^{N} \sum_{j=0}^{L_i} s_{i,j}\, x_{i,j} \le 1 \quad\text{and}\quad \sum_{i=1}^{N} \sum_{j=0}^{L_i} o_{i,j}\, x_{i,j} \le 1. \qquad (1)$

This formulation is identical to the two-dimensional (2-D), i.e., two-constraint, multiple-choice 0–1 knapsack problem. In our case, each video consists of several levels, like a set containing several sub-items, and one of the levels has to be included in the selection. Moreover, selecting a level of a video provides rate reduction (the value of the sub-item) and occupies two resources, storage space and disk I/O bandwidth, in the proxy. Hence, maximizing the WAN bandwidth reduction under the two resource constraints can be transformed into the 2-D multiple-choice knapsack problem [16], [17]. If the proxy only deals with whole video caching without level options, the problem becomes the 2-D knapsack problem [3], [11] after the choices are eliminated. On the other hand, if one resource is the major bottleneck and the other constraint is not critical, the situation becomes the multiple-choice knapsack problem [8], [18]. All of these variants of the knapsack problem have been proven to be NP-hard.
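To make the formulation concrete, the sketch below (with hypothetical level tables; it is not the authors' code) exhaustively enumerates the level choices of a toy instance of this two-constraint multiple-choice 0–1 knapsack and reports the best feasible selection. The enumeration is exponential in the number of videos, which is why the heuristics in the next subsection are needed.

```python
# Exhaustive search over one-level-per-video selections for a tiny instance of the
# two-constraint multiple-choice 0-1 knapsack formulation above.
from itertools import product

# videos[i][j] = (B_ij, S_ij, O_ij): reduction, storage, and per-stream I/O of Level j.
videos = [
    [(0, 0, 0), (100, 200, 100), (200, 500, 200), (300, 900, 300)],
    [(0, 0, 0), (150, 300, 150), (300, 700, 300)],
    [(0, 0, 0), (120, 250, 120), (240, 600, 240)],
]
accesses = [5, 3, 1]            # n_i
S_TOTAL, O_TOTAL = 1500, 2000   # proxy storage and disk I/O budgets

best_value, best_choice = -1, None
for choice in product(*(range(len(v)) for v in videos)):
    value = sum(accesses[i] * videos[i][j][0] for i, j in enumerate(choice))
    space = sum(videos[i][j][1] for i, j in enumerate(choice))
    io = sum(accesses[i] * videos[i][j][2] for i, j in enumerate(choice))
    if space <= S_TOTAL and io <= O_TOTAL and value > best_value:
        best_value, best_choice = value, choice

print("selected levels:", best_choice, "total reduction:", best_value)
```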

B. Heuristic Algorithms

Since the problem is NP-hard, heuristic algorithms have to be designed so that efficient near-optimal solutions can be derived. We first describe a dynamic programming approach that provides the optimal solution when only a single resource constraint is considered; it can be used for comparison with the other heuristics if one of the two constraints is the bottleneck. Then, several heuristic algorithms are proposed to solve the whole video caching or the progressive video caching problem. Whole video caching is introduced for comparison, to investigate how much progressive video caching can improve the rate reduction.

1) Dynamic Programming for Single Constraint: Dynamic programming can provide an optimal solution for the multiple-choice knapsack problem. This approach can be applied if one of the two resources is the bottleneck in the proxy server. Since the memory and computation requirements may be prohibitively high in practice when considering two constraints at the same time, only the approach for one constraint is introduced, for comparison purposes.

We only describe the dynamic programming for the storage capacity constraint; the same scheme can be applied to the disk I/O bandwidth constraint as well. Let $G_k(c)$ denote the maximum WAN bandwidth reduction when the total storage requirement is not greater than $c$ for Videos 1 to $k$. The goal is to find $G_N(S_{total})$. The recursive equation, based on [8] with slight modification, is

$G_k(c) = \max_{0 \le j \le L_k,\; S_{k,j} \le c} \left\{ G_{k-1}(c - S_{k,j}) + n_k B_{k,j} \right\}. \qquad (2)$

When Video $k$ does not contribute any storage consumption, Level 0 is selected, and in this case $G_k(c) = G_{k-1}(c)$. The computation time is $O\!\left(S_{total} \sum_{i=1}^{N}(L_i+1)\right)$ and $O\!\left(O_{total} \sum_{i=1}^{N}(L_i+1)\right)$ for the storage and I/O bandwidth constraints, respectively.
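A minimal sketch of this single-constraint dynamic program (following the recursion (2) as reconstructed above; it is not the authors' code):

```python
# Dynamic program over the storage budget only: g[c] is the best total reduction
# achievable with storage at most c using the videos processed so far.
def dp_single_constraint(videos, accesses, s_total):
    """videos[i] = [(B_i0, S_i0), (B_i1, S_i1), ...] with Level 0 = (0, 0)."""
    g = [0] * (s_total + 1)
    for levels, n_i in zip(videos, accesses):
        new_g = [0] * (s_total + 1)
        for c in range(s_total + 1):
            best = 0
            for b_ij, s_ij in levels:            # pick exactly one level for this video
                if s_ij <= c:
                    best = max(best, g[c - s_ij] + n_i * b_ij)
            new_g[c] = best
        g = new_g
    return g[s_total]
```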

2) Whole Video Caching: If the proxy does not support partial video caching, then only whole video caching is allowed. For this case, we introduce two algorithms to decide which videos are to be stored in the proxy.

Most Access First (MAF): The MAF scheme caches entire video files at the proxy according to their popularity. A similar scheme has been mentioned in [28] for a single constraint. In other words, Video 1 is staged first, then Video 2, and so on, until the I/O bandwidth or the storage constraint can no longer be satisfied. This approach takes $O(N)$ time if the videos are already sorted by the number of accesses. Formally, the first $m$ videos are selected, where $m$ is the largest index satisfying both $\sum_{i=1}^{m} S_{i,L_i} \le S_{total}$ and $\sum_{i=1}^{m} n_i O_{i,L_i} \le O_{total}$. As soon as one of the two constraints would be violated, the selection process stops, even though the other resource may still have free space. Each video is either at Level 0 or Level $L_i$ after the selection.

Most Bandwidth Reduction First (MBRF): MBRF is similar to MAF, but it selects videos based on the bandwidth reduction amount instead of the number of concurrent accesses. The video with the highest bandwidth reduction $n_i B_{i,L_i}$ is considered for caching first. This process is repeated until the I/O bandwidth or the storage constraint is violated. If all the videos have similar characteristics and the same bandwidth reduction, the video with more concurrent accesses will contribute more bandwidth reduction, and thus MBRF is the same as MAF. However, when caching heterogeneous types of videos, the video with more accesses may not always reduce more bandwidth. This approach requires sorting the bandwidth reduction values of the videos and then sequentially selecting the staged videos. Hence, it takes $O(N \log N)$ time using a standard sorting algorithm.
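A short sketch of the two whole-video heuristics (a simplified rendering under the notation above, not the authors' code):

```python
# MAF sorts by the number of accesses n_i, MBRF by the total reduction n_i * B_i of
# caching the whole video; selection stops as soon as either resource would overflow.
def whole_video_caching(candidates, s_total, o_total, policy="MAF"):
    """candidates: list of (video_id, n_i, B_whole, S_whole, O_whole)."""
    key = (lambda v: v[1]) if policy == "MAF" else (lambda v: v[1] * v[2])
    cached, space, io = [], 0, 0
    for vid, n, b, s, o in sorted(candidates, key=key, reverse=True):
        if space + s > s_total or io + n * o > o_total:
            break                # stop even if the other resource still has room
        cached.append(vid)
        space, io = space + s, io + n * o
    return cached
```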

3) Progressive Video Caching: Moser et al. propose a sophisticated heuristic in [16], [17] which solves the multidimensional multiple-choice knapsack problem. Since the knapsack problem and its variants are usually solved using greedy heuristics, we design a simple and efficient greedy approach, called Most Valuable Level Difference First (MVLDF), to solve the progressive video caching problem.

For Video $i$, assume $v_{i,j}$ is given as the beneficial value of the upgrade from Level $j-1$ to Level $j$. If Level $j$ of Video $i$ is selected, the overall beneficial value is incremented by $v_{i,j}$ compared with the selection of Level $j-1$. The exact values of $v_{i,j}$ will be defined later. The MVLDF greedy approach is illustrated in Fig. 2. Initially, Level 0 (no caching) is selected for each video. The algorithm then tries to upgrade the levels of the videos gradually. In the first round, the values of $v_{i,1}$ for all the videos, represented by the grey portion, are compared. If $v_{3,1}$ is the largest one, upgrading Video 3 to Level 1 gains the most beneficial value (the most valuable level difference) in this step; therefore, this upgrade is selected and marked with a cross. Then, in the second round, the level upgrades marked by circles are investigated. MVLDF repeats this process, upgrading one video by one level in each round, until either resource (storage space or disk I/O bandwidth) is filled up.

Fig. 2. Illustration of the greedy algorithm.

The variables $U_s$ and $U_o$ represent the current utilization of the storage space and the disk I/O bandwidth, respectively. Since the two constraints may need to be considered together, we use the normalized utilizations $U_s$ and $U_o$ instead of the absolute consumptions so that the two factors have a fair scale. $l_i$ indicates the current level selection for Video $i$. In each round, the maximum value of $v_{i,\,l_i+1}$ over all videos, say attained by Video $k$, is searched for. Clearly, among these options, the resource consumption of a level upgrade cannot be larger than the free portions of the two resources. Video $k$ is then upgraded from Level $l_k$ to $l_k+1$. Finally, the total bandwidth reduction based on the selection is reported.

Fig. 3. Greedy algorithm.

The value of $v_{i,j}$ may stay the same during the whole process and thus only needs to be calculated once. However, $v_{i,j}$ may also change from round to round according to, for example, the current status of the proxy resources; in that case, the values are recalculated based on the given condition in each round. The details of the MVLDF algorithm are described by the pseudo code in Fig. 3.
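Since the pseudo code of Fig. 3 is not reproduced here, the following is a minimal sketch of the MVLDF loop (under the notation above, with the beneficial value passed in as a function so that the Cases defined below can be plugged in; it is not the authors' exact code):

```python
# Greedy MVLDF loop: repeatedly apply the single feasible one-level upgrade with the
# largest beneficial value until neither resource can accommodate any further upgrade.
def mvldf(videos, accesses, s_total, o_total, value):
    """videos[i][j] = (B_ij, S_ij, O_ij); value(i, j, Us, Uo) returns v_ij."""
    level = [0] * len(videos)                    # current selection l_i, all at Level 0
    used_s, used_o = 0.0, 0.0
    while True:
        Us, Uo = used_s / s_total, used_o / o_total
        best_v, best_i = 0.0, None
        for i, levels in enumerate(videos):
            j = level[i] + 1
            if j >= len(levels):
                continue                         # already at the highest level
            ds = levels[j][1] - levels[j - 1][1]
            do = accesses[i] * (levels[j][2] - levels[j - 1][2])
            if used_s + ds > s_total or used_o + do > o_total:
                continue                         # upgrade does not fit the free resources
            v = value(i, j, Us, Uo)
            if v > best_v:
                best_v, best_i = v, i
        if best_i is None:
            break                                # no feasible upgrade left
        j = level[best_i] + 1
        used_s += videos[best_i][j][1] - videos[best_i][j - 1][1]
        used_o += accesses[best_i] * (videos[best_i][j][2] - videos[best_i][j - 1][2])
        level[best_i] = j
    reduction = sum(n * videos[i][level[i]][0] for i, n in enumerate(accesses))
    return level, reduction
```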

The performance of the algorithm is determined by how we decide $v_{i,j}$. The basic idea is to evaluate the relationship between the bandwidth reduction amount and the required resource consumption. We list several ways to determine $v_{i,j}$ for $1 \le i \le N$ and $1 \le j \le L_i$. Let $\Delta b_{i,j} = n_i(B_{i,j} - B_{i,j-1})$, $\Delta s_{i,j} = s_{i,j} - s_{i,j-1}$, and $\Delta o_{i,j} = o_{i,j} - o_{i,j-1}$ denote the differences in WAN bandwidth reduction, (normalized) storage space requirement in the proxy, and (normalized) total disk I/O bandwidth in the proxy between Levels $j-1$ and $j$, respectively. Note that $\Delta b_{i,j}$ and $\Delta o_{i,j}$ include the multiplier $n_i$ since multiple accesses have an aggregate effect on these two factors.

• Case 1: $v_{i,j} = \Delta b_{i,j}/\Delta s_{i,j}$. This definition of $v_{i,j}$ can be called the beneficial ratio of bandwidth reduction to storage space. It makes the algorithm focus on finding the level upgrade with the maximum benefit (bandwidth reduction) per unit of resource consumption (storage space). The I/O bandwidth consumption is not considered in this criterion, so this case assumes that the storage space is the bound and the more critical resource.

• Case 2: $v_{i,j} = \Delta b_{i,j}/\Delta o_{i,j}$. This is similar to Case 1, but only disk I/O bandwidth is considered. Although the bandwidth reduction is proportional to the popularity of the video, the I/O bandwidth is also consumed proportionally. Hence, the multiplier $n_i$ in $\Delta b_{i,j}$ and $\Delta o_{i,j}$ cancels out and is eliminated.

• Case 3: $v_{i,j} = \Delta b_{i,j}\big/\sqrt{\Delta s_{i,j}^2 + \Delta o_{i,j}^2}$. The two resource factors are included together in this case. The denominator is the norm of the required resources from Level $j-1$ to $j$. It is possible that a level upgrade requires a small amount of I/O bandwidth but occupies a large amount of storage space; hence, it may not be fair to use a single constraint as in the previous two cases. The purpose of this $v_{i,j}$ is to take the combination of the two resources into consideration.

• Case 4: $v_{i,j} = \max\!\left(U_s\,\Delta b_{i,j}/\Delta s_{i,j},\; U_o\,\Delta b_{i,j}/\Delta o_{i,j}\right)$. This criterion chooses the larger bandwidth-reduction-to-resource ratio between storage space and I/O bandwidth. The value of each $v_{i,j}$ may change based on the current resource status of the proxy server. Including $U_s$ and $U_o$ is like adding weights to the two beneficial ratios: when the utilization of a resource is large, the ratio on this resource is more emphasized, and vice versa. To simplify the explanation, let $w_s = U_s$ and $w_o = U_o$. Assume $w_s > w_o$ in the early stage. However, after some rounds, as the two resources are filled by more levels using the greedy approach, the utilizations of the two resources change and $w_s$ and $w_o$ are recalculated. It is possible that $w_o > w_s$ (after recalculation) if the utilization of I/O bandwidth grows faster than that of storage space. In this scenario, I/O bandwidth is the relatively scarce resource compared with storage space; hence, efficiently utilizing I/O bandwidth is more important for the current status of the proxy server, and a level upgrade with a higher $\Delta b_{i,j}/\Delta o_{i,j}$ tends to be selected. Note that a ratio term is ignored if its denominator ($\Delta s_{i,j}$ or $\Delta o_{i,j}$) is zero.

• Case 5: $v_{i,j} = \Delta b_{i,j}\big/\sqrt{(w_s\,\Delta s_{i,j})^2 + (w_o\,\Delta o_{i,j})^2}$, with $w_s = U_s/(U_s+U_o)$ and $w_o = U_o/(U_s+U_o)$. This criterion can be treated as the combination of the previous two cases: we consider the consumption of the two resources together as well as the status of the proxy resources. Therefore, even when the utilization of one resource is large, the other resource still plays a role in the decision, but with a lower weight. The weight for each resource consumption is $U_s/(U_s+U_o)$ or $U_o/(U_s+U_o)$ instead of $U_s$ or $U_o$ as in Case 4, because the two resource factors are combined in the denominator. A resource with larger utilization, e.g., storage space, gains a higher weight since $U_s/(U_s+U_o) > U_o/(U_s+U_o)$ when $U_s > U_o$; therefore, $\Delta s_{i,j}$ has a larger impact on the value.

For Case $c$ in the selection of $v_{i,j}$, we call the corresponding heuristic MVLDF-$c$. Since $v_{i,j}$ in MVLDF-4 and MVLDF-5 depends on the current resource utilization, it needs to be recalculated before each round; during the selection process of the greedy algorithm, $v_{i,j}$ for each level upgrade needs to be adjusted. On the other hand, $v_{i,j}$ is always fixed for MVLDF-1 to MVLDF-3 and is only calculated once in the beginning.
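For instance, the beneficial values of Cases 3 and 5 could be supplied to the `mvldf` sketch above as follows (based on the formulas as reconstructed above, with normalized resource requirements; not the authors' code):

```python
import math

# red[i][j] = n_i * B_ij, norm_s[i][j] = S_ij / S_total, norm_o[i][j] = n_i * O_ij / O_total.
# Assumes every level upgrade consumes a nonzero amount of at least one resource.
def case3_value(red, norm_s, norm_o):
    def value(i, j, Us, Uo):
        db = red[i][j] - red[i][j - 1]
        ds = norm_s[i][j] - norm_s[i][j - 1]
        do = norm_o[i][j] - norm_o[i][j - 1]
        return db / math.hypot(ds, do)           # benefit per unit of combined resources
    return value

def case5_value(red, norm_s, norm_o):
    def value(i, j, Us, Uo):
        db = red[i][j] - red[i][j - 1]
        ds = norm_s[i][j] - norm_s[i][j - 1]
        do = norm_o[i][j] - norm_o[i][j - 1]
        ws = Us / (Us + Uo) if Us + Uo > 0 else 0.5   # weight the more utilized resource
        return db / math.hypot(ws * ds, (1.0 - ws) * do)
    return value
```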

C. Performance Analysis

We define the (bandwidth) reduction ratio as the performance metric, which is the ratio of the total bandwidth reduction to the total required bandwidth when the proxy is not used. Assume $j_i$ is the level selected for Video $i$ after running an algorithm. The reduction ratio is

$\dfrac{\sum_{i=1}^{N} n_i B_{i,j_i}}{\sum_{i=1}^{N} n_i R_{i,0}},$

where $R_{i,0}$ is the WAN bandwidth requirement at Level 0 (no caching) for Video $i$. Clearly, the goal is to make the reduction ratio as high as possible under the same resource constraints of the proxy server.

1) Simulation Settings: There are nine video traces used in the simulation. Table II lists the characteristics of the videos. The first six are in MPEG-1 format and the last three are in JPEG. The display rates of the MPEG and JPEG videos are 24 and 30 frames/s, respectively. The peak rates are obtained by using an optimal smoothing scheme [24] with an 8-MByte client buffer and a 1-s startup delay.

TABLE II. Movie profile.

The levels of each video are decided using the Chunk Algorithm [12]. For each video, the multiple levels are determined by choosing several CBR rate options first and then calculating the resource requirements of the proxy server for the corresponding transmission rates. In the simulation, an 8-MByte client buffer is used to receive the video data from the central video server and the proxy server. Each video has up to five level options, from 0 to 4. The CBR rate options for the three intermediate levels (Levels 1–3) are 3/4, 2/4, and 1/4 of the video's average rate. We adopt the peak rate in Table II as $R_{i,0}$ of Video $i$. Since there are many concurrent accesses in the proxy server, we assume the I/O bandwidth consumption is the average rate of disk access for each video stream.


Fig. 4. Performance results by varying the number of concurrent accesses.

Fig. 5. Performance results by varying skew factor.

The disk model follows the real performance of the Seagate Elite-9 disk, which provides 9 GB of capacity and a 6 MB/s sustained disk I/O transfer rate. We choose 50 videos and usually 200 concurrent accesses in order to obtain reasonable results based on a single Elite-9 disk and to reveal the effectiveness of the algorithms. Real streaming systems may handle thousands of videos and concurrent accesses; our settings and simulation results can be scaled up by adding more disks and increasing the I/O bandwidth in a proportional manner. The simulation on a single disk is sufficient for the investigation of the proposed algorithms and the study of the resource bottleneck.

2) User Access Pattern: Many proxy servers may reside in different communities where the popularity of and preferences for videos vary. A good heuristic scheme must be able to adjust to various user access profiles. The access pattern includes the total number of concurrent accesses, the total number of videos, and the skew factor of the Zipf distribution. We assume the number of available videos is fixed. The performance evaluation of the various heuristics can be obtained by changing the number of concurrent accesses or the skew factor. Only a single video trace (Star Wars) is used; that is, all the videos have the same characteristics, in order to clearly reveal and compare the behavior of the heuristics.

Fig. 4 depicts the relationship between the number of concurrent accesses and the bandwidth reduction ratio. The number of videos is 50 and the skew factor is set at 0.3. It is observed that the performance is close among MVLDF-3 to MVLDF-5, where both resources are considered. MVLDF-5 provides a higher reduction ratio in general. When the number of accesses is smaller, MVLDF-1 is among the leading heuristics, but it cannot compete as the number of accesses grows. MVLDF-2 behaves in the opposite way. The reason for this behavior can be explained using the resource utilization of MVLDF-5 on the right-hand side of Fig. 4. We only use MVLDF-5 as the example, but the other heuristics follow the same trend. For 100 and 150 accesses, the storage space is the bottleneck, so MVLDF-1 handles this situation well even though it only focuses on the space resource. MVLDF-2 is appropriate for the I/O-bandwidth-bound condition that arises as the number of accesses increases. In general, the bandwidth reduction ratio drops for a larger number of accesses due to saturation of the proxy I/O bandwidth and the higher WAN bandwidth requirement.

Based on 200 concurrent accesses, Fig. 5 shows the performance for different skew factors. MVLDF-5 still provides the best result no matter how the skewness varies. The reduction ratio starts to drop when the skew factor increases beyond 0.5. Since the total number of accesses is fixed at 200, the WAN bandwidth requirement without using the proxy is the same for all the cases. The resource utilization when MVLDF-5 is used is depicted on the right-hand side of Fig. 5. It indicates that the disk I/O bandwidth of the proxy is almost fully utilized for smaller skew factors; in these cases, the proxy reaches its limit in bandwidth reduction ratio, around 0.7. When the skew factor is larger, the requests do not focus on a small number of the videos. Hence, storage space becomes the bottleneck since each video does not consume much disk I/O bandwidth. As a result, the storage space is fully occupied but some disk I/O bandwidth is wasted, so the heuristics cannot achieve the same reduction ratio for large skew factors.

Fig. 6. Disk models with various disk space.

3) Storage Capability: Another important issue in the performance evaluation of the proxy server is how much the bandwidth reduction ratio can be improved by expanding its resources, including disk I/O bandwidth and disk space. Our approach is to fix one parameter at a time to observe the effectiveness of the algorithms. Based on a real disk model, we also investigate the impact of adding more disks. It is also worthwhile to study whether the algorithms can adjust to videos with different characteristics instead of using only one video trace; in this set of simulations, each video is randomly selected from the nine video traces.

We first assume the disk I/O bandwidth of the storage system is fixed (20 MB/s) but the disk space for caching video data varies. Fig. 6 depicts the bandwidth reduction ratio using whole video caching (MAF and MBRF) and progressive video caching (MVLDF-5). The user access pattern is 200 concurrent accesses with a skew factor of 0.3. When the disk space is small, the performance of the proxy server is bounded by the disk space and its utilization is close to full; in this case, adding disk space can improve the bandwidth reduction. However, after the I/O bandwidth consumption reaches its capacity, the reduction ratio is also pushed to its limit, and it is not useful to increase the disk space since the I/O bandwidth is the bound. MVLDF-5 outperforms whole video caching in all the cases.

Then, the storage space is fixed at 15 GB and the disk I/O bandwidth varies; we study the impact of a faster disk transfer rate. Fig. 7 plots the result. The resource bound shifts from the I/O bandwidth to the storage space as the disk I/O bandwidth increases.

Fig. 7. Disk models with various disk I/O bandwidth.

Fig. 8. MVLDF-5 versus dynamic programming.

After studying the effect of extending disk space or I/O bandwidth one at a time, we simulate adding disks to the proxy. The disk model is still the Seagate Elite-9. MVLDF-5 and the dynamic programming approach are first compared in Fig. 8 to explore how closely the heuristic algorithm can approach the optimal solution. Since the dynamic programming scheme only handles a single constraint, we choose 300 accesses with a skew factor of 0.3 such that the resource bound is always the I/O bandwidth. It is observed that the difference between the optimal and heuristic solutions is very small. The curve is also close to linear; that is, the greedy scheme effectively improves the reduction ratio as more disks are added.

IV. CACHING POLICY FOR UNKNOWN OR DYNAMIC ACCESS PATTERN

Since the user access pattern may be unknown or may change from time to time, the proxy server needs to dynamically adjust the cached data based on the estimated access pattern to achieve a higher WAN bandwidth reduction. In this section, we design the progressive video caching policy for this situation. The caching policy for this scenario includes a way to estimate the access pattern and the decisions for caching and removing video data based on the current resource status in the proxy.


A. User Access Pattern Estimation

In the scenario of an unknown or dynamically changing access pattern, the popularity of each video is estimated to help make caching decisions. Counting the access frequency, i.e., the number of requests in a period of time, of a particular object is a typical way of estimation. For example, the LFU (Least Frequently Used) scheme discards the cached objects with the smallest access frequency when additional space is needed to cache a requested object, so that the most popular objects are kept in the cache.

Since the popularity of a video may change as time goes on, the proxy can focus on the most recent access behavior to estimate the current access pattern. Hence, we define a sample window with time length $W$. Only the requests in the sample window, i.e., from time $W$ ago up to the current time, are counted when making caching decisions.

Although the proxy can calculate the total number of accesses for each video in the sample window, estimating the number of concurrent accesses is even more important. At any time instant, the total disk I/O bandwidth is the constraint on serving concurrent streams; hence, the total number of accesses does not directly fit into the consideration of this I/O bandwidth constraint. Let $a_i$ denote the number of accesses observed for Video $i$ in the sample window. We define the estimated number of concurrent accesses, in the average sense, for Video $i$ as

$\hat{n}_i = \dfrac{a_i \times \text{Length of Video } i}{W}. \qquad (3)$

This definition takes the video length into account in addition to the access frequency. Each access to a long video occupies disk I/O bandwidth for a long period of time; hence, the disk accesses from multiple requests are likely to overlap with each other and cause a higher number of concurrent accesses, even though the requests are issued at different times. On the other hand, a larger $a_i$ may not create a higher $\hat{n}_i$ if the length of Video $i$ is short. For example, assume the sample window is 2 h, Video 1 is 1 h long, and Video 2 is 2 h long. If the proxy finds $a_1 = 2$, that is, there are two requests for Video 1 in the previous 2 h, then $\hat{n}_1 = 1$, which means that on average there is one access to this video at any time. If $a_2$ is also 2, then $\hat{n}_2 = 2$ indicates that a larger number of concurrent accesses is estimated for Video 2 due to its longer length.
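A tiny sketch of this estimator (with a hypothetical request log; not the authors' code), reproducing the two-hour-window example from the text:

```python
# Estimate of concurrent accesses from (3): n_hat = a_i * (video length) / W, where a_i
# counts the requests for Video i that fall inside the sample window of length W.
def estimate_concurrent(request_times, video_length, now, window):
    a_i = sum(1 for t in request_times if now - window <= t <= now)
    return a_i * video_length / window

W = 2 * 3600                                                  # 2 h window, times in seconds
print(estimate_concurrent([600, 4200], 1 * 3600, 7200, W))    # two requests, 1 h video -> 1.0
print(estimate_concurrent([600, 4200], 2 * 3600, 7200, W))    # two requests, 2 h video -> 2.0
```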

B. Dynamic Caching Policy

In this subsection, we design the caching policy suited to dynamic adjustment in progressive video caching. Whenever the proxy receives a request from a client, the requested video may be at caching Level 0 (a miss) or some data may already be cached (a hit). If the video is entirely or partially stored in the proxy, the proxy needs to check whether the available disk I/O bandwidth can handle serving (reading) the cached data from the storage. If the available bandwidth is not enough, the proxy is not allowed to serve the request; in this case, the whole video is served by the central server even though a partial or full cache hit happens.

If a cache miss or a partial-cache hit occurs for the request, the video is not entirely stored in the proxy, and upgrading the video to the next level has to be considered. A question then arises: is the video allowed to be upgraded (i.e., to store more data in the proxy for future accesses)? First, the proxy needs to make sure that it can provide enough disk I/O bandwidth for the caching process (writing video data to the disk), in addition to serving data if the video is partially cached. Second, the proxy checks whether the current free storage space is enough to hold the extra video data for the upgrade; that is, the required (normalized) space is $\Delta s_{i,\,l_i+1} = s_{i,\,l_i+1} - s_{i,\,l_i}$ if the current level of the requested Video $i$ is $l_i$. If the free space is greater than the required size, the proxy is allowed to cache one more level directly. Otherwise, the proxy has to remove some video data to free enough disk space for caching.

Another important issue is which videos are selected for removal (downgrade by one level) such that the free storage space can accommodate the upgraded level after the removal. Following the greedy algorithm of the previous section for static access patterns, we adopt the same beneficial value $v_{i,j}$ for the upgrade from Level $j-1$ to $j$ of Video $i$. However, $\Delta b_{i,j}$ and $\Delta o_{i,j}$ are redefined with the original $n_i$ replaced by the estimated number of concurrent accesses $\hat{n}_i$. Note that $\hat{n}_i$ is not involved in $\Delta s_{i,j}$, because only one copy of the video data is cached in the storage.

$v_{i,j}$ indicates the beneficial ratio of network bandwidth reduction to resource consumption in the proxy from Level $j-1$ to Level $j$. It is not reasonable to discard levels of currently cached videos with larger beneficial values in order to upgrade the requested video and gain a smaller benefit. Therefore, the upgrade of Video $i$ is not allowed if the videos selected for downgrade have a larger total beneficial value. Let $C_i$ denote the set of cached videos that are candidates for removal when Video $i$ is requested and the proxy intends to upgrade Video $i$ from Level $l_i$ to $l_i+1$. Assume a Video $k$ is currently at caching Level $l_k$; then $C_i$ contains the videos with $l_k \ge 1$ that are not currently being served by the proxy. That is, only the videos which are not served by the proxy are eligible for downgrade, and these videos are at least partially cached so that they can provide space for removal. If there are $m$ items in $C_i$, the candidates can be denoted and sorted as $k_1, k_2, \ldots, k_m$ based on their beneficial values, $v_{k_1,\,l_{k_1}} \le v_{k_2,\,l_{k_2}} \le \cdots \le v_{k_m,\,l_{k_m}}$. The sorted list is scanned from the video with the smallest value until enough space can be vacated. Formally, the removal and caching are allowed only if there exists a $p$ ($1 \le p \le m$) such that the following three conditions hold:

1) $\sum_{t=1}^{p} v_{k_t,\,l_{k_t}} < v_{i,\,l_i+1}$;
2) the free storage space plus the space released by downgrading $k_1, \ldots, k_p$ by one level each is at least $\Delta s_{i,\,l_i+1}$;
3) the free disk I/O bandwidth can handle both writing the newly cached data and serving the existing streams. (4)

The first condition guarantees that the total beneficial value of the removed data is smaller than the value of the target caching level. The second condition makes sure that the total free space after the downgrade of some videos is enough for caching. The third condition ensures that the free bandwidth can handle both writing and serving data.
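A minimal sketch of this upgrade-admission check (with hypothetical helper arguments summarizing the proxy state; it is not the authors' code):

```python
# Scan removal candidates in increasing order of beneficial value and allow the upgrade
# only when the three conditions in (4) can all be satisfied.
def can_upgrade(target_value, target_space, free_space, free_io, write_io, serve_io,
                candidates):
    """candidates: (beneficial_value, space_released) pairs for cached videos that are not
    currently being served, sorted by increasing beneficial value."""
    if free_io < write_io + serve_io:            # condition 3: I/O for writing and serving
        return False, []
    removed, removed_value, gained = [], 0.0, 0.0
    for value, space in candidates:
        if free_space + gained >= target_space:
            break                                # condition 2 already satisfied
        removed_value += value
        if removed_value >= target_value:
            return False, []                     # condition 1 violated: removal too costly
        removed.append((value, space))
        gained += space
    if free_space + gained < target_space:
        return False, []                         # still not enough space
    return True, removed                         # videos in `removed` are downgraded
```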

C. Performance Evaluation

Simulation is performed to evaluate the effectiveness of the progressive video caching policy for an unknown or dynamic user access pattern. The requests are generated by a Poisson process with mean interarrival time $\lambda$; that is, the mean of the interarrival time between two consecutive requests is $\lambda$ seconds. Clearly, a smaller $\lambda$ generates more requests in the same period of time. Which video is requested by the client each time is decided according to the probabilities of a Zipf distribution with a given skew factor. Although $\lambda$ and the parameters of Zipf's law are used in the simulation to generate the requests, it is assumed that the proxy does not have any knowledge about the access pattern. The total number of videos is fixed at 50. The disk model is still the Elite-9 disk, and one disk is used.

Fig. 9. Comparison of known and unknown access patterns.

We first investigate the difference between the scenarios of known (static) and unknown (dynamic) access patterns. In the known and static case, the proxy server prefetches the video data and the cached data then does not change. On the other hand, if the pattern is unknown, the proxy dynamically adjusts the cached data based on the results of the estimation. It is expected that the static case will provide better bandwidth reduction since no disk write operations consume disk I/O bandwidth during the serving process.

Even though the Zipf skew factor is the same for both cases, the unknown access pattern uses the mean interarrival time to produce requests along the time line instead of using a number of concurrent accesses. To make a fair comparison, we transform the mean interarrival time into an average number of concurrent accesses, $n = (\text{length of one video in seconds})/\lambda$, for the known pattern. This is like generating each request with an interval of exactly $\lambda$, so $n$ represents the total number of requests issued within the period of one video length. Since serving each request lasts for one video length, $n$ also represents the average total number of concurrent accesses. The transformation of the access pattern is related to the video length, and thus all 50 videos use the same trace (Star Wars) with the same length (120.9 minutes).

Fig. 9 depicts the results for various $\lambda$ values for both known and unknown user access patterns. The performance result is the total bandwidth reduction ratio over 24 h of simulation time, excluding the first two hours during which the proxy initially accumulates video data. The size of the sample window is set at 4 h. This is a reasonable choice given typical viewing habits; for instance, in a home entertainment scenario, most people may watch videos during the 4 h period from 6 PM to 10 PM, but fewer between 2 PM and 6 PM. From Fig. 9, the performance of dynamic adjustment is not far from that of the static situation, although the disk write overhead is introduced. For interarrival times from 60 to 120, the proxy reaches the limit of its disk I/O bandwidth and the two cases are very close.

Fig. 10. LRU, LFU, and progressive video caching policies.

The popular caching schemes LRU and LFU are used for comparison, but only whole video caching is simulated in these two schemes. The three schemes, LRU, LFU, and progressive caching, are compared in Fig. 10 by varying the mean interarrival time $\lambda$. Each of the 50 videos is randomly selected from Table II. The 4 h sample window is only applied to LFU and progressive video caching since they refer to the request frequency. Surprisingly, LRU performs almost identically to LFU. This situation does not happen in web caching, in which LRU is usually worse than LFU. According to observations made during the simulation, the two schemes have similar behavior in removing cached files, since removal does not happen very often and there are not many choices of removal items due to the disk I/O bandwidth admission control. Progressive caching indeed consistently outperforms the other two schemes.

In LRU and LFU, we found that cache removal happened many times. A significant amount of I/O bandwidth is then used for disk writes, and the overall performance of the proxy is degraded. On the other hand, the progressive caching scheme is more robust: caching and removing one level at a time also reduces the risk of caching the wrong videos. Therefore, the progressive caching scheme maintains its growth in reduction ratio as $\lambda$ increases.

Lastly, we consider an underlying access pattern that changes from time to time, to test how the proxy reacts. Fig. 11 illustrates how the underlying access pattern changes over two days in terms of the $\lambda$ value. Each period with the same pattern lasts for 8 h.

The hourly statistics are illustrated in Fig. 12 using a 4, 2, or 1 h sample window. The accumulated bandwidth reduction ratio in each hour is plotted. The steady-state performance from the previous results (Fig. 10) is also depicted for the corresponding $\lambda$ value. In general, a 1 or 2 h sample window produces a slightly lower reduction ratio. It is found that a smaller window creates more opportunities for removal, since the request frequencies of some cached videos may drop to zero within the window and thus their beneficial values are also zero.


Fig. 11. Simulation time line for dynamic user access pattern.

Fig. 12. Hourly statistics for dynamic user access pattern.

One interesting effect is the oscillation around the steady-state line in Fig. 12. The effect is more obvious when $\lambda$ is smaller, such as 60 and 30. After tracing the caching behavior in the simulation, we found that the reason is the best-effort serving of requests by the proxy server. To explain briefly, we assume a simplified scenario: there are three consecutive groups of people requesting videos in order, and Group 1 issues many requests first. The proxy tries its best to serve these people and its I/O bandwidth is quickly occupied (if the $\lambda$ value is small). Then, Group 2 will not be served by the proxy since there is not much free I/O bandwidth for them, even though the requested videos are in the proxy. Hence, the bandwidth reduction ratio is low for Group 2. After Group 2, most of the requests from Group 1 have finished and the proxy releases I/O bandwidth for Group 3's turn. The proxy can achieve a higher reduction ratio while serving Group 3. This up-and-down condition may continue. A possible solution for reducing the oscillation is to explicitly limit some accesses to the proxy if its I/O bandwidth utilization is high.

V. CONCLUSION

We have proposed progressive video caching for video proxy servers. In this approach, partial video caching is allowed and each video may be cached at one of several levels, from no caching to caching the entire video. The two most important resources in the proxy, storage space and disk I/O bandwidth, are considered as the tradeoff against the WAN bandwidth reduction. We first assume the user access pattern is known and fixed, so that the problem can be formulated as a two-constraint multiple-choice knapsack problem; several heuristic algorithms are proposed and their performance is compared. Based on the best heuristic, when the user access pattern is unknown or dynamically changing, we further enhance the caching policy such that the proxy server can dynamically adjust the cached level of each video based on the current resource condition and the estimated access behavior.

REFERENCES

[1] M. Abrams, C. R. Standridge, G. Abdulla, S. Williams, and E. A. Fox, “Caching proxies: limitations and potentials,” in Proc. 4th WWW Conf., Boston, MA, Dec. 1995, pp. 119–133.

[2] C. Aggarwal, J. L. Wolf, and P. S. Yu, “Caching on the world wide web,” IEEE Trans. Knowl. Data Eng., vol. 11, pp. 94–107, Jan./Feb. 1999.

[3] U. Belhe and A. Kusiak, “Dynamic scheduling of design activities with resource constraints,” IEEE Trans. Syst., Man, Cybern. A, vol. 27, pp. 105–111, Jan. 1997.

[4] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, “Web caching and Zipf-like distributions: evidence and implications,” in Proc. IEEE INFOCOM ’99, New York, Mar. 1999.

[5] P. Cao and S. Irani, “Cost-aware WWW proxy caching algorithms,” in Proc. 1997 USENIX Symp. Internet Technology and Systems, Dec. 1997, pp. 193–206.

[6] A. Dan and D. Sitaram, “Multimedia caching strategies for heterogeneous application and server environments,” Multimedia Tools Applicat., vol. 4, no. 3, pp. 279–312, 1997.

[7] A. Dan, D. Sitaram, and P. Shahabuddin, “Scheduling policies for an on-demand video server with batching,” in Proc. ACM Multimedia ’94, Oct. 1994, pp. 15–23.

[8] K. Dudzinski and S. Walukiewicz, “Exact methods for the knapsack problem and its generalizations,” Eur. J. Oper. Res., vol. 28, pp. 3–21, 1987.

[9] W. Feng and J. Rexford, “A comparison of bandwidth smoothing techniques for the transmission of prerecorded compressed video,” in Proc. IEEE INFOCOM ’97, Apr. 1997, pp. 58–66.

[10] A. P. Foong, Y. Hu, and D. M. Heisey, “Logistic regression in an adaptive web caching,” IEEE Internet Comput., vol. 3, no. 5, pp. 27–36, Sept./Oct. 1999.

[11] R. Loulou and E. Michaelides, “New greedy-like heuristics for the multidimensional 0–1 knapsack problem,” Oper. Res., vol. 27, no. 6, pp. 1101–1114, Nov.–Dec. 1979.

[12] W. Ma and D. H. C. Du, “Reducing bandwidth requirement for delivering video over wide area networks with proxy server,” IEEE Trans. Multimedia, vol. 4, pp. 539–550, Dec. 2002.

[13] W. Ma and D. H. C. Du, “Proxy-assisted video delivery using prefix caching,” Tech. Rep., Dept. Comput. Sci. Eng., Univ. Minnesota, Minneapolis, 2001.

[14] J. M. McManus and K. W. Ross, “Video-on-demand over ATM: constant-rate transmission and transport,” IEEE J. Select. Areas Commun., vol. 14, pp. 1087–1098, Aug. 1996.

[15] Z. Miao and A. Ortega, “Proxy caching for efficient video services over the Internet,” presented at the 9th Int. Packet Video Workshop, Apr. 1999.

[16] M. Moser, “Declarative scheduling for optimal graceful QoS degradation,” in Proc. IEEE Int. Conf. Multimedia Computing and Systems, Jun. 1996, pp. 86–94.

[17] M. Moser, D. P. Jokanovic, and N. Shiratori, “An algorithm for the multidimensional multiple-choice knapsack problem,” IEICE Trans. Fund., vol. E80-A, no. 3, pp. 582–589, Mar. 1997.

[18] D. Pisinger, “Algorithms for knapsack problems,” Ph.D. thesis, Dept. Comput. Sci., Univ. Copenhagen, Denmark, Feb. 1995.

[19] J. E. Pitkow and M. M. Recker, “A simple yet robust caching algorithm based on dynamic access patterns,” in Proc. 2nd WWW Conf., Chicago, IL, Oct. 1994, pp. 1039–1046.

[20] R. Rejaie, M. Handley, H. Yu, and D. Estrin, “Proxy caching mechanism for multimedia playback streams in the Internet,” presented at the 4th Web Cache Workshop, Mar. 1999.

[21] R. Rejaie, H. Yu, M. Handley, and D. Estrin, “Multimedia proxy caching mechanism for quality adaptive streaming applications in the Internet,” presented at IEEE INFOCOM 2000, Mar. 2000.

[22] J. Rexford, S. Sen, and A. Basso, “A smoothing proxy service for variable-bit-rate streaming video,” presented at the Global Internet Symp., Dec. 1999.

[23] L. Rizzo and L. Vicisano, “Replacement policies for a proxy cache,” UCL-CS Res. Note.

[24] J. Salehi, Z.-L. Zhang, J. Kurose, and D. Towsley, “Supporting stored video: reducing rate variability and end-to-end resource requirements through optimal smoothing,” in Proc. ACM SIGMETRICS, May 1996, pp. 222–231.

[25] S. Sen, J. Rexford, and D. Towsley, “Proxy prefix caching for multimedia streams,” presented at IEEE INFOCOM ’99, Mar. 1999.


[26] S. Sen, J. Dey, J. Kurose, J. Stankovic, and D. Towsley, “CBR transmission of VBR stored video,” in Proc. SPIE Symp. Voice, Video and Data Communications, Dallas, TX, Nov. 1997.

[27] R. Tewari, H. Vin, A. Dan, and D. Sitaram, “Resource-based caching for web servers,” presented at the SPIE/ACM Conf. Multimedia Computing and Networking, San Jose, CA, Jan. 1998.

[28] Y. Wang, Z.-L. Zhang, D. H. C. Du, and D. Su, “A network-conscious approach to end-to-end video delivery over wide area networks using proxy servers,” presented at IEEE INFOCOM ’98, Apr. 1998.

[29] S. Williams, M. Abrams, C. R. Standridge, G. Abdulla, and E. A. Fox, “Removal policies in network caches for world-wide web documents,” in Proc. ACM SIGCOMM ’96, Aug. 1996, pp. 293–305.

[30] R. Wooster and M. Abrams, “Proxy caching that estimates page load delay,” presented at the 6th WWW Conf., Santa Clara, CA, Apr. 1997.

Wei-hsiu Ma received the Ph.D. degree in computer science and engineering from the University of Minnesota, Minneapolis, in 2000.

He is a software engineer with Todd Video Network Management, Minneapolis, where he works on the development of management systems for video conferencing and broadcasting networks. His research interests include distributed multimedia systems and networking for video streaming.

David H. C. Du (M’81–SM’95–F’98) received the B.S. degree in mathematics from National Tsing-Hua University, Taiwan, R.O.C., in 1974 and the M.S. and Ph.D. degrees in computer science from the University of Washington, Seattle, in 1980 and 1981, respectively.

He is currently a Professor with the Computer Science and Engineering Department, University of Minnesota, Minneapolis. His research interests include multimedia computing, storage systems, high-speed networking, high-performance computing over clusters of workstations, database design, and CAD for VLSI circuits. He has authored and co-authored more than 140 technical papers, including 70 refereed journal publications in his research areas.

Dr. Du was an Editor of the IEEE TRANSACTIONS ON COMPUTERS from 1993 to 1997. He has also served as Conference Chair and Program Committee Chair for several conferences in the multimedia and database areas.