
Multimedia information caching for personalized video-on-demand

Christos Papadimitriou, Srinivas Ramanathan, P Venkat Rangan and Srihari SampathKumar

The synergy between computing and information systems promises to herald a new epoch in which users have access to an entirely new variety of entertainment services that are customized to suit their individual needs. In this paper, we explore the architectural considerations that underlie the efficient realization of personalized video-on-demand entertainment services over metropolitan area broadband networks. To deliver video programmes to users' homes at attractive costs, we present intelligent caching strategies that judiciously store media information at strategic locations in the network. For information caching in a metropolitan area network, we devise a simple, yet effective caching strategy that uses a Greedy heuristic to determine when, where and for how long programs must be cached, so as to minimize the cumulative storage and network costs borne by users. Preliminary simulations demonstrate that the Greedy caching strategy performs exceptionally well, in most cases providing near-optimal solutions.

Keywords: personalized information caching, video-on-demand, multimedia

With the integration of digital audio, video and computing, the conventional broadcast mode of entertainment and news delivery to residential homes, typified by radio and television, is paving the way for a new variety of video-on-demand services that provide users with a refreshing freedom to choose and procure only what they desire, and no more1,2. By permitting users to exercise fine-grain control over the programs they view and the times at which they view them, video-on-demand services are expected to usher in an exciting era of interactive multimedia entertainment3.

At the University of California, we are developing architectures and technologies that are targeted at the practical realization of interactive video-on-demand services to user communities4.

Multimedia Laboratory, Department of Computer Science and Engineering, University of California at San Diego, La Jolla, CA 92093, USA (Email: {christos,sramanat,venkat,srihari}@cs.ucsd.edu)

We believe that a key deciding factor for the feasibility of video-on-demand services is their economic viability. In this paper, we propose information caching as a fundamental means of optimizing the per user costs of multimedia distribution, and present simple, yet efficient, techniques that we have developed for multimedia distribution and caching over metropolitan area broadband networks. These techniques consider the individual costs of storage and transmission of multimedia programs, and compute caching schedules for programs by determining when, where and for how long multimedia programs must be stored at strategic locations in the network, so as to optimize the service costs borne by users.

This paper is organized as follows. In the next section we describe the network and service architectures that are evolving for multimedia delivery to users. For customizing multimedia services to suit the individual preferences of users, we propose intelligent personal service agents (PSA) that interpret and learn about users’ preferences, and schedule programs that match these preferences. We then propose information caching as the primary function of PSAs, and outline various trade-offs that must be evaluated by PSAs at the time of caching. We develop simple, yet near-optimal caching strategies that PSAs can use to determine when, where and for how long programs must be cached, so as to minimize the cumulative service costs borne by users. Performance results indicate that the caching strategies we propose can significantly reduce per user service costs, thereby making personalized video-on-demand services more attractive for users. We conclude by comparing our work with other efforts in the area of multimedia storage and distribution.

MULTIMEDIA SERVICES ARCHITECTURE

Network architecture

In the recent past, hybrid fibre-coax technology has emerged as the preferred near-term technology for broadband community networks, as illustrated by the recent trials and ventures announced by leading cable and telephone carriers. Figure 1 depicts a typical hybrid fibre-coax community network, the main components of which are:

• Cable headend: the location of high-capacity multimedia servers that are repositories of program source material for serving a community (typically, a community includes several tens of thousands of users). The headend transmits program signals over the network to the users.
• Fibre trunks carry signals from the cable headend to clusters of a few thousand homes. The fibres typically carry an amplitude modulated vestigial side-band (AM-VSB) signal to a local node, called the fibre node, where conversion between optical and electrical signals takes place.
• Coaxial cables carry signals over the short distances from the fibre nodes to users' homes. The coax networks are typically tree structured, and serve neighbourhoods of a few hundred users.
• Set-top boxes at the users' homes terminate the hybrid fibre-coax network. Each set-top device contains a tuner that tunes to the appropriate channel requested by the user, a demodulator for converting the broadband signal to baseband, a descrambler, and a decompression engine for converting an encrypted and compressed digital signal into an analogue signal that can be displayed on the user's television set.

A large metropolis may span several communities, and hence may have several cable headends. We envision that programming for the cable headends will be provided by a super-headend that is connected to all the community headends. As interactive multimedia services become prevalent, it is likely that the super-headend itself may be configured as a collection of several programming stations that are interconnected, with a hierarchical configuration being the most natural interconnection topology.

Figure 1 Architecture of a broadband community network. Multimedia programs are retrieved from a multimedia server and transmitted by a cable headend to users via a hybrid fibre-coax network

Service architecture

Multimedia services are evolving from a symbiosis of different enterprises: whereas information storage and management at multimedia servers is expected to be controlled by storage providers, media transport over the broadband network is likely to be the responsibility of network providers5. The primary function of service providers, such as entertainment houses, is expected to be electronic publishing; for instance, a news provider, such as CNN, composes different news segments pertaining to political, domestic, international, financial and sports news into a complete news document. The published documents are made available at multimedia servers, and advertised via Network Directory Servers, which function similarly to the 'Yellow Pages Directories' of today: based on descriptions provided by the service providers about their program offerings, the directory servers maintain catalogues classifying the service providers under different categories. These catalogues serve as first-level indexes in the program selection process. Once the service providers who offer services that a user desires are identified, further searches to select programs relevant to the user are performed via content-bases. These content-bases, maintained by individual service providers, are intelligent knowledge bases that contain detailed descriptions of multimedia programs offered by the service providers, and permit various types of queries for program selection.

To tailor the fabric of multimedia services to suit the individual needs and preferences of users, we envision a new class of service agents, the personal service agents (PSAs)6. A PSA navigates through the directory server's catalogues and service providers' content-bases to compose a list of programs that match a user's needs. Once the user makes a selection of programs to view, the PSA continuously interacts with the content-bases and chooses portions of programs (and even their levels of detail) to display to the user. The PSA then arranges with the service, storage and network providers to deliver the desired programs (or portions of programs) to the user at the preferred viewing times. PSAs may execute at users' set-tops, or alternatively, service providers may implement PSAs to deliver personalized programs to users.

PERSONALIZED VIDEO-ON-DEMAND

The leitmotif of our research in personalized video-on-demand has been to amortize the enormous network and storage costs involved. This is supported by a recent survey, which showed that while personalized video-on-demand has high appeal, with 44% of the respondents willing to pay for it, only a minuscule 14% were willing to pay more than the existing cable rates7. Since nearly two-thirds of the respondents owned personal computers (with nearly half of them equipped with a modem), we can infer that a majority of respondents were well aware of the advantages that video-on-demand services can offer them. The above example illustrates the significant immediate need to amortize the cost of multimedia storage and distribution, so as to reduce the per user costs and make personalized video-on-demand attractive. Since PSAs are aware of the needs and preferences of users, they are in a strategic position to negotiate with the network and storage providers to optimize the cost of multimedia distribution. In the next section, we focus on program retrieval and caching by a PSA.

Program retrieval and caching

A PSA is responsible for scheduling retrieval of the selected programs from storage providers, and subsequent transmission via network providers to the user (see Figure 2). The retrieval schedule must be judiciously determined, taking into account the user's requirements as well as storage and network conditions. For instance, when the PSA has a priori knowledge of the user's preferred viewing time, it can retrieve relevant programs from a storage server in advance, during a period when the network and server are relatively underutilized, and store them at the user's site. In practice, however, the storage capacities necessary for downloading all the programs that a user views daily on a personal channel are prohibitively large.

Resource optimizations are most likely when several users indicate similar preferences for programs (e.g. a popular movie) on their personal channels. In the best case, all the users may choose to view the same programs, at the same times. For such users, the PSA can arrange with the service providers to multicast programs of the users' common choice, thereby amortizing network transmission costs amongst all those users. More often than not, however, users' preferred viewing times may not match. In such cases, multiple independent transmissions of the requested program may have to be incurred, all the way from a storage server located at one end of a network to users' sites located at another end. As an example, consider the metropolitan-area network shown in Figure 3a. Suppose that a program P, created by a service provider at 1pm and stored at the metropolitan server S1 (located at the super-headend), is requested by users U1, U2 and U3 for viewing at 1pm, 1:30pm and 2pm, respectively. Further, suppose that (i) the storage cost for program P at a server increases with the duration of storage (the rate of increase depending upon program P's storage space requirement), (ii) the cost of each transmission of P over the network links is fixed (depending upon the bandwidth required for transmission of program P), and (iii) the delay between the transmission of program P from the servers and its reception at the users' sites is negligible compared to the total duration of program P.


Figure 2 Service architecture of a personalized video-on-demand service. Whereas the dotted lines represent interactions at the time of creation and storage of new programs, the solid lines denote program selection and retrieval. The numbered arrows indicate the chronology of interactions. The PSA first accesses the directory service and content-bases to identify a list of relevant programs. When the user makes a specific selection, the PSA verifies the user’s access rights with the service provider, determines a caching schedule for the program, and then arranges with the storage and network providers for the program (or portions of the program) to be delivered to the user at the appropriate time


For servicing users U1, U2 and U3, program P must be transmitted thrice over the network, at 1pm, 1:30pm and 2pm, respectively, incurring a cost of $1.80 each time. In fact, the network transmission cost increases linearly with the number of users who request the same program P, even though there may be only a slight shift in their viewing times. Retransmissions of the same program, since they require a higher network bandwidth, result in an increase in the service costs borne by users, and are hence undesirable.

With a view to reducing service costs, we propose that metropolitan area networks be configured with storage servers of suitable capacities installed at various strategic locations; cable headends, local nodes and switching offices are obvious choices for locating the servers. Figure 3b illustrates a possible configuration for the metropolitan area network of Figure 3a. Intermediate servers S2 and S3 are used as neighbourhood stores for programs requested by the users U1, U2 and U3; by caching programs at servers S2 and S3, PSAs can avoid the costs of retransmissions from the distant metropolitan-area server S1. For instance, caching program P at server S3 when the program is transmitted to user U1 reduces the overall service cost to $2.50, from its previous value of $5.90 (a 58% reduction).
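To make this trade-off concrete, the short Python sketch below performs the kind of cost accounting a PSA carries out when comparing repeated end-to-end transmissions against caching at a neighbourhood server. The function and the per-hour storage rates and per-link transmission charges are our own illustrative assumptions (chosen so that the two totals come out near the $5.90 and $2.50 quoted above); they are not taken from the figure.

# A minimal cost-accounting sketch under assumed rates (not the paper's figures):
# total service cost = storage rentals at servers + per-link transmission charges.
STORAGE_RATE = {"S1": 0.50, "S3": 0.10}                  # assumed $ per hour of caching P
LINK_COST = {("S1", "S3"): 1.50, ("S3", "user"): 0.30}   # assumed $ per transmission

def schedule_cost(cachings, transmissions):
    """cachings: (server, start_hr, end_hr) tuples; transmissions: link tuples."""
    storage = sum(STORAGE_RATE[s] * (end - start) for s, start, end in cachings)
    network = sum(LINK_COST[link] for link in transmissions)
    return storage + network

# Three users request P at 1pm, 1:30pm and 2pm. Retransmitting from S1 each time
# means keeping P at S1 for an hour and paying the full path cost three times ...
retransmit = schedule_cost(
    cachings=[("S1", 13.0, 14.0)],
    transmissions=[("S1", "S3"), ("S3", "user")] * 3)

# ... whereas caching P at the neighbourhood server S3 during the first delivery
# pays the S1-S3 link only once and the short S3-user hop for every viewer.
cache_at_s3 = schedule_cost(
    cachings=[("S3", 13.0, 14.0)],
    transmissions=[("S1", "S3")] + [("S3", "user")] * 3)

print(f"retransmit from S1: ${retransmit:.2f}, cache at S3: ${cache_at_s3:.2f}")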

In practice, there are interesting trade-offs between the cost of renting storage space at neighbourhood servers and the cost of repeated transmissions from a metropolitan server, which must be evaluated by PSAs and service providers before deciding to cache programs. In the following section, we elaborate on all of these trade-offs and devise simple, yet effective strategies for information caching in metropolitan area networks.

MULTIMEDIA INFORMATION CACHING

At the time of caching, PSAs or the service providers must evaluate the various storage and network alternatives. The storage costs may differ from server to server, depending upon the storage technologies used by the servers, the capacities of the servers, their data transfer rates and current demands. Typically, the maximum storage capacities of multimedia servers decrease as we move down the metropolitan network, because the storage requirements are proportional to the number of users served by each server. Consequently, the higher-level servers are multiplexed better, and yield lower storage costs. Likewise, transmission costs of programs may also differ from one network link to another, depending upon the available link bandwidth and the desired real-time performance: end-to-end delay, delay jitter and reliability of transmission.

To illustrate the storage-network trade-offs, as before, consider the simple, tandem configuration of storage servers shown in Figure 4a. Suppose that users U1, U2 and U3, all belonging to the same neighbourhood (and hence, connected to the same neighbourhood server S3), request a program P that is created and stored at the metropolitan server S1 by a service provider at 1pm (see Figure 4a). Clearly, no optimizations are possible for servicing the first user U1 at 1pm: the program must be transmitted all the way from server S1 to user U1. However, during this transmission, either of the servers S2 and S3 can cache the program for future usage. Whereas caching at S3 only entails a storage cost, caching at S2 entails an additional network cost, but a lower storage cost. Considering such trade-offs, for the period [1pm, 11pm], it is cheaper to cache program P at server S2, entailing a cost of $5.40, as compared to a cost of $6.80 if the program P were cached at server S3. The caching schedule thus derived (see Figure 4b), although it amortizes storage costs between U2 and U3, does not constitute the overall optimum. For instance, a schedule that, besides caching program P at server S1 for the period [1pm, 11pm], also caches program P at server S3 for the period [1pm, 2pm] (see Figure 4c), entails a lower overall service cost, amounting to $5.10.

In general, the storage-network optimization is carried out at the time of scheduling programs on users' personal channels. PSAs, together with the service providers, must determine when, where and for how long programs must be cached, so as to minimize the costs borne by users.

Metropolitan server S Storage/Network cost Network cost

$O.ti/hr Metropolitan sewer S $0.5/hr

Figure 3 (a) A metropolitan-area server St at which a program P that is requested by users Ii,, Uz and Us is stored; in this case, since users’ view- ing times do not match, three indepen- dent transmission of program P are necessary to satisfy the users’ needs. To avoid the cost of retransmissions from the metropolitan server, we propose to employ intermediate ser- vers S, and S,; (b) program P can be cached at server Ss while it is trans- mitted down the network, to service user Ut. Doing so introduces a 58% reduction in the overall service cost a

$1.80 Intermediate !$ ’ sarver Q $O.Whf

Viewing Times

1 pm 1:3Opm 2pm 1 pm 1:3Opm 2pm

b


Towards this end, in the next section we devise an efficient dynamic programming technique that can be used by a PSA to derive an optimal caching schedule for a program P that is requested for viewing by several users located in the same neighbourhood of a metropolis. Later, we investigate how this very same methodology can be extended to a more general scenario, in which the same program has to be delivered to users in various neighbourhoods of a metropolis.

Optimal caching during delivery of a program to a metropolitan neighbourhood

Assumptions and implications

Consider a program P that is requested for viewing by a group of users U1, ..., Un located in the same neighbourhood of a metropolis. Suppose that the metropolitan-area network N consists of m servers, S1, ..., Sm (see Figure 6a). Let SCj denote the storage cost per unit time for caching program P at a server Sj, and NCj,h denote the cost of transmission of program P over the network link interconnecting servers Sj and Sh. Suppose program P is initially available at the metropolitan server S1, and that users U1, ..., Un request to view program P at times t1, ..., tn, respectively. Clearly, if t1 = t2 = ... = tn, a single transmission of program P from S1 to the users at time t1 will be optimal. In general, however, the viewing times of the users may differ. For such a case, Proposition 1 below guarantees that in an optimal caching schedule, changes in the storage location(s) of program P only occur at users' viewing times.

Proposition 1 Consider a pair of users Ui and Uk, who prefer to view program P at times ti and tk, respectively. Suppose that (i) ti < tk, and (ii) no other user requests to view program P at any time t ∈ [ti, tk]. Then, it is not optimal to change the storage location(s) of program P at any time other than the viewing times of users Ui and Uk.

The above proposition can be validated in a straightforward manner, using a proof by contradiction. It is important to note that Proposition 1 does not preclude program P from being cached at multiple servers during the same time period; all that Proposition 1 states is that it is not optimal to change the storage locations of program P at any time other than the viewing times of users. Interestingly, it turns out that at a user's viewing time, it may not always suffice to cache a program as it is transmitted from a server to service a user. Rather, in some cases, a PSA may have to arrange for the program to be explicitly transmitted up the network for caching at a 'higher-level' server, so as to obtain a lower storage cost at the expense of a higher network cost. This relatively counter-intuitive observation is proved by Proposition 2.

Proposition 2 In an optimal caching schedule, at some user's viewing time, a PSA may be required to arrange for the explicit transmission of a program P up the network and cache it at a higher-level server.

Proof The validity of this proposition is demonstrated by means of the example in Figure 5, which shows the sequence of program transmissions in a metropolitan network. Figure 5a depicts the network configuration under consideration. While servicing user U1, no optimizations are possible. For users U2 and U3, server S2 is the optimal cache (Figures 5b and c). For servicing user U4, there are two feasible alternatives: either (i) to cache the program at server S1 for the entire period [5am, 7pm], entailing an overall cost of $16 ($14 for storage at S1 and $2 for transmission from S1), or (ii) to transmit program P up the network, from S2 to S1 at 7am, and cache it at S1 for the period [7am, 7pm], incurring a total cost of $15 ($1 for transmission from S2 to S1, $12 for storage at S1, and $2 for transmission from S1 to user U4).

Figure 4 Illustration of storage-network trade-offs. (a) Network configuration: S1 is the metropolitan-area server, S2 and S3 are the neighbourhood caches. Program P, created and stored at S1 at 1pm, is requested by users U1, U2 and U3 for viewing at 1pm, 2pm and 11pm, respectively. The storage and transmission costs for P are indicated in the figure; (b) a caching schedule that amortizes program storage between users U2 and U3. The total cost of this schedule is $5.40; (c) overall optimal schedule, the total cost of which is $5.10


Clearly, the latter alternative is optimal, illustrating that in some cases, depending upon the relative viewing times of users and the costs of storage and transmission of program P, it may even be optimal to explicitly transmit a program up the network and cache it at a higher-level server.

Based on the above results, in the next section we devise an efficient technique for computing an optimal caching schedule for program P in a metropolitan area network.

Figure 5 Illustration of the result of Proposition 2. (a) Metropolitan network, with users U1, U2, U3 and U4 requesting a program P at 5am, 6am, 7am and 7pm, respectively; (b)-(e) sequence of program transmissions and caching operations necessary for optimally servicing the four users. The arrows indicate program transmissions, and the shaded circles indicate caches for program P. Observe that in (d), it is optimal to explicitly transmit program P up the network from server S2 and cache it at server S1

Determining an optimal caching schedule

Corresponding to a network N, consider a rectilinear grid G, such that the vertices on the abscissa of G represent users' viewing times t1, ..., tn, and the vertices on the ordinate of G represent storage servers S1, ..., Sm (see Figure 6b). Suppose that the horizontal edges of grid G represent caching of program P at the storage servers, and that the vertical edges of G represent media transmission through the network N at users' viewing times. Then, the weights on the horizontal edges denote the storage costs for program P at the corresponding servers, and the weights on the vertical edges denote transmission costs.

Figure 6 Illustration of the graphical technique for determining an optimal caching schedule. (a) Metropolitan-area network N under consideration; (b) a grid G corresponding to the network N. Weights on the horizontal edges denote storage costs at the servers, and those on the vertical edges represent the network transmission costs. The directed edges in grid G denote a caching schedule for network N

Since Proposition 1 guarantees that any changes in the allocation of program P to storage servers occur only at users' viewing times, an optimal caching schedule must be a sub-graph of grid G. Moreover, under the assumption that service providers make their programs available at the metropolitan server S1, we can deduce that the sub-graph of grid G that represents an optimal caching schedule must be connected (a disconnected sub-graph implies the existence of at least two servers that initially had access to program P). Furthermore, each pair of vertices in this sub-graph must be connected via a unique path; otherwise, by deleting select edges along the duplicate paths, the cost of the schedule can be further reduced. Accounting for all of the above constraints, we infer that the sub-graph of G that represents an optimal caching schedule must be a tree. Moreover, since program P can be made available to users only via server Sm (program P can either be cached at Sm itself, or be transmitted by one of the other servers to the users, via Sm), all the vertices that lie on the bottom edge of grid G (which corresponds to server Sm) must be included in the tree that represents an optimal caching schedule.

Determining an optimal caching schedule for network N is, therefore, equivalent to determining a minimum cost tree in grid G, under the constraints that:


(i) the tree must be rooted at vertex S1 (since program P is initially available only at S1); (ii) the tree must span all the n vertices in the bottom edge of grid G; and (iii) all horizontal edges of the tree must be directed from left to right (since the time coordinate advances from left to right). This problem, of determining a minimum-cost constrained Steiner tree, is known to be intractable for general graphs8. Fortunately, however, this problem is tractable for the grid G constructed above. Next, we elaborate upon a dynamic programming technique that we have developed for determining the constrained minimum-cost Steiner tree.

To determine a minimum cost caching schedule for network N, consider a subset of users Ui, ..., Uk, arranged in chronological order of their viewing times, i.e. ti < ti+1 < ... < tk (see Figure 8). Assume that program P is available at a server Sj at user Ui's viewing time ti. To service user Ui, Sj transmits program P down the network N at time ti (this transmission is represented by a vertical edge from Sj at ti in grid G). At this time, one or more of the servers S1, ..., Sm may cache program P to service any subset of the subsequent users Ui+1, ..., Uk. By caching program P at a server Sy, y ∈ [j, ..., m], that is located at least as close to the users as server Sj, PSAs and the service providers attempt to save the cost of retransmission of the program all the way from server Sj at a later time.

Figure 7 Illustration of the statement of Proposition 3. Servers Sh and Sr cache program P at time ti; whereas server Sh transmits program P to a user at time th, server Sr transmits the program at time tr. (a), (c) Cases when th < tr and th > tr, respectively; (b), (d) represent less expensive caching schedules than those depicted in (a) and (c), respectively, thereby illustrating that in an optimal caching schedule, at most one of the higher-level servers S1, ..., Sj-1 may cache program P at time ti

Figure 8 Determination of the optimal caching schedule for servicing n users. When server Sj transmits program P to service user Ui, any of the other servers may cache program P. Users Ui, ..., Ux are serviced at times ti, ..., tx by caching program P at one or more of the servers Sj, ..., Sm at time ti, with Sy being the highest of the caches. Users Ux+1, ..., Uk are serviced by caching program P at server Sr, r ∈ [1, ..., j-1], for the period [ti, tx+1]. Considering all possible choices for user Ux and servers Sy and Sr, Algorithm 1 computes the optimal caching schedule

In some other cases, when the storage cost at some higher-level server Sr, r ∈ [1, ..., j-1], is much smaller than the storage costs at servers Sj, ..., Sm, it is, surprisingly, more profitable to transmit program P up the network and cache it at Sr. Interestingly, it turns out that in an optimal caching schedule, at time ti, only one of the higher-level servers S1, ..., Sj-1 can cache program P, whereas one or more of the other servers Sj, ..., Sm may cache program P. Proposition 3 proves this result:

Proposition 3 Suppose that program P is transmitted to user Ui by server Sj at time ti. Then, (i) one or more of the servers Sj, ..., Sm may cache program P at time ti, but (ii) only one of the higher-level servers S1, ..., Sj-1 can cache the program P at the same time.

Proof The validity of the first part of this proposition is illustrated by the example of Figure 4: as indicated by Figure 4c, caching program P at both the servers S1 and S3 at 1pm turns out to be optimal.

To prove the second part of this proposition, we resort to a proof by contradiction. Towards this end, suppose that in an optimal schedule, servers Sh and Sr (h < r, and h, r ∈ [1, ..., j-1]) cache program P at time ti (see Figure 7). As illustrated by Figures 7a and c, server Sh can either transmit program P earlier than server Sr, or vice versa:

In the first case, depicted in Figure 7a, program P transmitted by server Sh at time th is routed via server Sr. By caching program P at server Sr at time th, rather than at time ti itself, it is possible to construct a schedule (see Figure 7b) that is less expensive than the assumed optimal schedule. Figure 7c depicts the second possibility: server Sr transmits program P at a time tr that is prior to the transmission time th of server Sh. The cost of this schedule is the sum of (i) the transmission cost from Sj to Sr; (ii) the transmission cost from Sr to Sh; (iii) the cost of caching program P at server Sr for the period [ti, tr]; (iv) the cost of caching program P at server Sh for the period [ti, th]; and (v) the cost of transmissions from servers Sh and Sr to the users. Since the network transmission cost between servers Sh and Sr (the second term above) remains the same throughout the period [ti, tr], it is cheaper to cache program P at server Sh starting from time tr, rather than from time ti itself (see Figure 7d).

From the above arguments, it is clear that, irrespective of the values of th and tr, it is not optimal to transmit program P and cache it at two or more higher-level servers, such as Sh and Sr.

Proposition 3 limits the number of caching schedules that need to be considered in order to arrive at an optimal schedule.

To determine an optimal caching schedule, suppose that at time ti, when server Sj transmits program P, among the users Ui, ..., Uk, one group, say Ui, ..., Ux, is serviced by caching program P at one or more of the servers Sj, ..., Sm at time ti, and the other group, Ux+1, ..., Uk, is serviced by caching at a higher-level server Sr, r ∈ [1, ..., j-1], for the period [ti, tx+1] (see Figure 8). The total cost of servicing users Ui, ..., Uk is the sum of:

• The optimal cost of servicing Ui, ..., Ux with any of the servers Sj, ..., Sm as caches. This cost comprises the cost of transmitting program P down the network, from Sj to the highest of the caches, say Sy (which amounts to Σ_{v=j}^{y-1} NCv,v+1), plus the overall optimal cost of servicing users Ui, ..., Ux with server Sy functioning as the cache for program P at time ti; let C[i, x, y] denote this latter cost.

• The optimal cost of servicing Ux+1, ..., Uk with at most one of the higher-level servers Sr as cache. This cost is the sum of (i) the cost of explicitly transmitting program P from Sj to Sr at time ti (represented by Σ_{v=r}^{j-1} NCv,v+1); (ii) the cost of caching program P at server Sr for the period [ti, tx+1] (which is SCr * (tx+1 - ti)); and (iii) the optimal cost of servicing users Ux+1, ..., Uk with program P being available at server Sr at time tx+1 (denoted by C[x+1, k, r]).

For servicing users Ui, ..., Uk optimally, it is necessary to compute the total service cost for each of the possible values of Ux, x ∈ [i, ..., k], and of Sj, Sr and Sy (there are k - i possibilities for Ux, m for Sj, j for Sr, and m - j for Sy). The minimum of the service costs thus computed serves as the basis for determining when, where and for how long program P must be cached, so as to service users Ui, ..., Uk in the least expensive manner. Repeating the above procedure for all values of i, k (i.e. i, k ∈ [1, ..., n]) and j ∈ [1, ..., m] in grid G, we can determine the caching schedule for servicing all the users U1, ..., Un optimally. Algorithm 1 formalizes the above method.

Observe that Algorithm 1 has a run-time that is a function of the number of distinct times at which program P must be delivered to users. For popular programs, which attract a large number of viewers, applying Algorithm 1 and accounting for each and every viewer's preference may make a fast, on-line computation of the optimal caching schedule infeasible. Consequently, for such programs, rather than consider all the possible viewing instants of program P, we propose to partition the time-axis into viewing periods: for instance, each day can be conceived as 24 one-hour-long viewing periods, and changes in the caching locations of programs can take place only at the boundaries of successive viewing periods. The above transformation facilitates an efficient implementation of Algorithm 1: with this transformation, the run-time of Algorithm 1 is a function of the number of viewing periods only.
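As a small illustration of this transformation (a sketch under the assumption of aligned, hour-long periods; the function name is ours), arbitrarily many individual request times collapse into at most 24 caching decision points per day:

from math import floor

def viewing_period_boundaries(request_hours, period_length=1.0):
    """Map raw request times (in hours) to the start of their enclosing viewing
    period, so that the caching schedule only changes at period boundaries."""
    return sorted({floor(t / period_length) * period_length for t in request_hours})

print(viewing_period_boundaries([13.0, 13.4, 13.9, 14.2, 22.75]))   # [13.0, 14.0, 22.0]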

As alluded to earlier, Algorithm 1 is targeted at optimizing the cost of delivery of a program P to one neighbourhood of a metropolis. In many cases, however, programs may be of common interest to users in several neighbourhoods. While arranging for the delivery of such programs, PSAs must not only consider the different viewing preferences of users, but also the relative locations of those users. Assuming a hierarchical configuration of storage servers that mirrors the hierarchical structure of the underlying network (see Figure 9a), in the next section we extend the methodology of Algorithm 1 to develop a simple to implement, yet effective, strategy for computing caching schedules.

Caching programs during delivery to multiple neighbourhoods

As before, assume that storage costs for a program P increase linearly with the duration of its storage, and that the network costs for transmission of that program are time-invariant. The result of Proposition 1, that it is not profitable to change the location of a program between users' viewing times, is derived above for a general network configuration, and is therefore valid even for a hierarchical metropolitan area network in which users located in different neighbourhoods request the same program P. Consequently, as before, for developing a technique for optimal caching in a hierarchical network, we first construct a graph structure that reflects the storage and transmission costs of program P at the various servers and network links, respectively. During this construction, the branches of the hierarchy are each represented by different rectilinear planes, as depicted in Figure 9b. Considering each plane in isolation and applying the dynamic programming technique of the previous section to each plane individually may not suffice. Unfortunately, it turns out that considering all the possible servers and all the possible time instants at which program P can be cached results in an exhaustive search strategy that is overly complex. In view of this adverse result, we are motivated to consider heuristics.


Figure 9 (a) Hierarchical configuration of a metropolitan-area network; (b) multi-planar grid that results from applying the graphical technique described to the hierarchical network

Transform the network N into a grid G[m, n];
Initialize all optimal costs being computed to zero;

for each server Sj ∈ [S1, ..., Sm] do
    for each user Ui ∈ [U1, ..., Un] do
        C[i, i, j] = Σ_{v=j}^{m-1} NCv,v+1;    /* In the case of a solitary user, there is only one way to service that user. */

/* Next, determine all the optimal caching costs, one by one */
for each server Sj ∈ [Sm, ..., S1] do {
    for each subset of s users, s ∈ [1, ..., n] do {
        for each user Ui ∈ [U1, ..., Un-s] do {    /* We are interested in the contiguous sequence of s users, starting with user Ui */
            k = i + s;    /* User Uk is the last user in this subset */
            /* For determining the best way to service Ui, ..., Uk, first find the optimal cost of servicing Ui, ..., Uk when a server Sy ∈ [Sj, ..., Sm] caches program P at time ti */
            Cd[i, k, j] = min_{Sy ∈ [Sj, ..., Sm]} { Σ_{v=j}^{y-1} NCv,v+1 + C[i, k, y] };
            /* Next, compute the optimal cost of servicing Ui, ..., Uk using a higher-level server Sr ∈ [S1, ..., Sj] */
            for each split point x ∈ [i, ..., k] do {
                Cc[i, k, x] = C[i, x, j] + min_{Sr ∈ [S1, ..., Sj]} { Σ_{v=r}^{j-1} NCv,v+1 + SCr * (tx+1 - ti) + C[x+1, k, r] };
            }
            C[i, k, j] = min { Cd[i, k, j], min_{x ∈ [i, ..., k]} { Cc[i, k, x] } };
        }
    }
}

Optimal Caching Cost = C[1, n, 1];
Output the caching schedule that entails the above minimum cost;

Algorithm 1 Determination of an optimal caching schedule for a metropolitan-area network N comprising m multimedia servers S1, ..., Sm and n user requests U1, ..., Un, slated for times t1, ..., tn, respectively
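The Python sketch below is not a transcription of Algorithm 1; it is a small reference solver, restricted to a tandem (chain) network, that we include only to make the optimization concrete. It rests on Proposition 1 (cache locations change only at users' viewing times) and on several assumptions of our own: the programme is held only at S1 at the first viewing time, the constant last hop from the bottom server to each user is ignored, link costs are direction-independent, and viewing times are distinct. All names and numbers are illustrative.

from itertools import combinations

def optimal_schedule_cost(storage_rate, link_cost, times):
    """storage_rate[j]: $/hr at server j (0 = S1, top); link_cost[j]: $ per transmission
    over the link between servers j and j+1; times: sorted, distinct viewing hours."""
    m = len(storage_rate)
    edges = range(m - 1)

    def reachable(sources, edge_set):
        # Servers connected to some source through the chosen links of the chain.
        reached = set(sources)
        changed = True
        while changed:
            changed = False
            for e in edge_set:
                if (e in reached) != (e + 1 in reached):
                    reached.update((e, e + 1))
                    changed = True
        return reached

    def transition_cost(holders, new_holders):
        # Cheapest set of link transmissions that carries P from the current holders
        # to the bottom server (index m-1) and to every newly created cache.
        targets = ({m - 1} | set(new_holders)) - set(holders)
        best = float("inf")
        for k in range(len(edges) + 1):
            for edge_set in combinations(edges, k):
                if targets <= reachable(holders, edge_set):
                    best = min(best, sum(link_cost[e] for e in edge_set))
        return best

    # dp maps "servers caching P after the current viewing time" -> best cost so far.
    dp = {frozenset([0]): 0.0}                 # initially only S1 holds the programme
    for i, t in enumerate(times):
        gap = (times[i + 1] - t) if i + 1 < len(times) else 0.0
        nxt = {}
        for holders, cost in dp.items():
            for r in range(m + 1):
                for new_holders in map(frozenset, combinations(range(m), r)):
                    c = (cost + transition_cost(holders, new_holders)
                         + gap * sum(storage_rate[s] for s in new_holders))
                    if c < nxt.get(new_holders, float("inf")):
                        nxt[new_holders] = c
        dp = nxt
    return min(dp.values())

# Example loosely modelled on the tandem network of Figure 4 (rates are assumptions):
print(optimal_schedule_cost(storage_rate=[0.1, 0.3, 1.0],
                            link_cost=[1.0, 0.5],
                            times=[13, 14, 23]))

For tiny instances such a brute-force enumeration of cache placements per viewing time is tractable and serves as a check; Algorithm 1 is the paper's polynomial-time route to the same optimization.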


A simple and natural heuristic is one that takes a Greedy approach: at all time instants, a caching strategy that employs this heuristic makes choices that 'appear optimal' at that time.

Greedy Caching Strategy

1. Consider all users U1, ..., Un in chronological order of their viewing times of program P. The Greedy strategy executes n steps, one step for each user.

2. At the ith step, the Greedy strategy attempts to determine the least expensive manner in which user Ui can be serviced, given the allocation of program P for servicing users U1, ..., Ui-1. For doing so, the Greedy strategy determines the incremental storage and transmission costs that would be incurred if server Sj ∈ [S1, ..., Sm] were to transmit program P at time ti to user Ui.

3. The incremental transmission cost incurred when server Sj transmits program P to user Ui is simply the minimum cost of any path from server Sj to user Ui. Such a path can be easily determined using Dijkstra's shortest path algorithm9. Notice that this minimum cost is a characteristic of the network topology (and not of the viewing times of users); hence, the incremental transmission costs for all user-server pairs can even be precomputed off-line.

4. Unlike the incremental transmission cost, the incremental storage cost incurred if server Sj caches program P to service user Ui is dependent upon the decisions taken by the Greedy strategy during its previous steps, i.e. for servicing users U1, ..., Ui-1. Specifically, the incremental storage cost is the product of the cost SCj of renting storage at server Sj and the elapsed period since the last time when program P was available at server Sj. The latter time is the later of the two times: (i) the latest time at which program P was cached at server Sj; and (ii) the latest time at which program P was routed via server Sj during transmission to one of the preceding users U1, ..., Ui-1.

5. The Greedy strategy iterates through Steps (3) and (4) for all m possible values of server Sj, and chooses to cache program P at the server, among S1, ..., Sm, that entails the minimum incremental cost for servicing user Ui.

6. The Greedy strategy then proceeds to the next step, to determine the caching server for user Ui+1.

Repeating the above procedure for each of the n users, the Greedy strategy determines when, where and for how long program P must be cached in order to service users U1, ..., Un.
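A compact sketch of this procedure for a tandem network is given below. It is an illustration under our own simplifying assumptions rather than the authors' implementation: all users sit below the bottom server, so Dijkstra's shortest-path search degenerates to summing the link costs below a candidate server, and ties are simply broken by server order (the tie-breaking heuristics described next are omitted).

def greedy_schedule(storage_rate, link_cost, times, start_time=None):
    """Greedy caching for a chain S1 - ... - Sm with every user attached below Sm.
    storage_rate[j]: $/hr at server j; link_cost[j]: $ per transmission over the
    link between servers j and j+1; times: users' viewing hours."""
    m = len(storage_rate)
    # Incremental transmission cost from server j down to a user (topology-only).
    down_cost = [sum(link_cost[j:]) for j in range(m)]
    start = times[0] if start_time is None else start_time
    # last_available[j]: most recent time P was cached at, or routed via, server j.
    last_available = [start] + [None] * (m - 1)      # initially only S1 holds P
    total, schedule = 0.0, []
    for t in sorted(times):
        best_cost, best_server = float("inf"), None
        for j in range(m):
            if last_available[j] is None:
                continue                              # P has never reached this server
            cost = storage_rate[j] * (t - last_available[j]) + down_cost[j]
            if cost < best_cost:
                best_cost, best_server = cost, j
        total += best_cost
        schedule.append((t, best_server))
        # P is now cached at the chosen server and routed via every server below it.
        for j in range(best_server, m):
            last_available[j] = t
    return total, schedule

cost, plan = greedy_schedule(storage_rate=[0.1, 0.3, 1.0],
                             link_cost=[1.0, 0.5], times=[13, 14, 23])
print(cost, plan)      # total greedy cost and the (time, caching server) decisions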

It is possible that in Step (3) of the above procedure, the Greedy strategy detects a tie for the network path that incurs the minimum incremental cost between a server Sj and user Ui. Likewise, in Step (5), the Greedy strategy may encounter two or more servers that incur the same incremental cost for servicing user Ui. The tie-breaking heuristics we propose for such cases are targeted at enabling future optimizations (i.e. at servicing the subsequent users Ui+1, ..., Un better):

• We observe that routing the program P via a server at time ti lowers the incremental service cost that would be incurred if that server were to function as a cache for any of the subsequent users. Therefore, in the event of a tie among candidate network paths, the Greedy strategy chooses the path that includes the largest number of storage servers, so as to maximize the possibility of lowering the service costs for subsequent users.

• To resolve a tie amongst storage servers, during the ith step, the Greedy strategy considers all the subsequent users Ui+1, ..., Un and chooses the candidate server that has the maximum number of users in its sub-tree. Such a choice is based on the premise that, for any user Uk, in many cases a server located in the same sub-tree of the network as user Uk functions as the cache; employing as a cache a server that is not in the same sub-tree as Uk incurs an additional transmission cost.

Since the above Greedy caching strategy minimizes the incremental cost for servicing each user, given the allocations already made for the user's chronological predecessors, rather than determining the global optimum by considering all the users simultaneously, this strategy may not lead to the best overall amortization. Surprisingly, however, even in the worst case, the Greedy strategy performs no worse than twice the optimal; an elaborate analytical proof of this result is presented in the Appendix. Preliminary performance simulations indicate that in practice, the Greedy strategy performs even better. Several thousand configurations of storage servers, together with several combinations of users' viewing schedules, were generated, and the performance of the Greedy strategy was compared with that of the optimal strategy. Figure 10 illustrates the relative performance of the Greedy strategy with respect to the optimal strategy. On average, the Greedy strategy performs within 1% of the optimum, with the worst-case deviation being at most 10%.

Discussion

There are some realistic generalizations of the above caching problem. Until now, we have assumed that entire programs are cached at storage servers. In many cases, users' interests pertain only to selected portions of programs offered by service providers. Moreover, the storage-network trade-offs may differ for the different media components of a program. For example, due to the larger bandwidth and storage requirements of the video component, it may be profitable to cache the video component at a neighbourhood server.

Figure 10 Performance of the Greedy caching strategy relative to an optimal caching strategy, measured over a large number of network configurations. The worst-case deviation in service costs is about 10%

The audio component, on the other hand, requires much smaller bandwidth and storage space, and hence repeated transmissions of the audio component of the programs from a metropolitan server may be preferred. To exploit these trade-offs, we have proposed3 a system architecture for personalized multimedia services, in which we define sequences of media units (audio samples or video frames), recorded and stored at the same multimedia server, as media strands. Strands of different media tied together with simultaneity relations constitute multimedia ropes, and multiple ropes, together with the higher-level temporal relations (such as meets, follows, equals, during, etc.) amongst them, constitute multimedia chains. Users' personal channels can be represented as chains, with ropes referring to program offerings of service providers, or portions of these offerings. The Greedy caching strategy we have proposed can be employed for caching individual strands, ropes and chains. Since strands represent the lowest granularity of data sharing amongst programs, caching individual strands (rather than ropes or chains) affords the possibility of amortizing the storage costs of each strand amongst all the users who are interested in viewing that strand.
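Purely as an illustration of this hierarchy (the class names and fields below are our own sketch, not the data model of the cited architecture), the three granularities can be expressed as nested types, with the caching strategy applied at whichever granularity is chosen:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Strand:          # a sequence of media units of one medium, stored at one server
    media_type: str    # e.g. "audio" or "video"
    units: List[bytes] = field(default_factory=list)

@dataclass
class Rope:            # strands of different media tied together by simultaneity relations
    strands: List[Strand] = field(default_factory=list)

@dataclass
class Chain:           # ropes related by higher-level temporal relations (meets, follows, ...)
    segments: List[Tuple[str, Rope]] = field(default_factory=list)   # (relation to previous rope, rope)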

At the time of caching, PSAs must not only consider heterogeneity in storage costs and transmission costs, but they must also adapt to any changes that may occur in these costs. For instance, as in today’s telephone networks, there may be ‘peak hours’ when the networks are in greater demand (this holds for storage servers too), and hence more expensive to use. Cost changes may occur even on-the-fly; for example, a storage provider, upon anticipating an overcommitment of server resources, may increase the storage cost so as to discourage further demand. The Greedy caching strategy adapts easily to dynamic changes in storage and network costs. With time-variant costs, changes in allocation of programs may occur not only at user request times (as per Proposition 1) but also at instants when the network or storage costs are changed. Consequently, these cost change times must be incorporated into the dynamic programming technique described earlier. The complexity of determining efficient caching schedules is now a function of both the number of distinct viewing times of users and the number of cost-change times (and of course, the number of servers).

RELATED WORK

The problems in efficient storage and distribution of multimedia information have attracted much attention in the recent past. While Lougher and Shepherd10 and Tobagi et al.11 have addressed problems in guaranteeing timely retrieval of multimedia, Ferrari et al.12, Kurose13 and Lazar et al.14 have investigated approaches to real-time multimedia network protocol design. All of the above efforts have focused individually on either the media storage sub-system or the networking sub-systems. In contrast, in this paper we have developed strategies that evaluate the various network and storage alternatives in conjunction, and determine optimal caching schedules for retrieving multimedia information. The caching strategies we have developed rely on optimal methods for retrieving media from storage servers and for transmission of media.

Little et al.2 have developed methods for optimally replicating media storage amongst multimedia servers that are co-located. For distributed video-on-demand systems, Federighi and Rowe15 have developed a software architecture and application protocols for managing distributed storage. However, neither of these efforts addresses the network-storage trade-offs that arise in a distributed multimedia network, in which multimedia servers are geographically distributed at different sites in a metropolitan area network.

Our efforts also differ from the work of Ramarao and Ramamoorthy16, who develop an algorithm for optimal distribution of movies among multimedia servers in a hierarchical network. Ramarao and Ramamoorthy have considered the static allocation problem, in which, given a set of user requests for a time period, the system determines the optimal number of copies of each media program that must be stored so as to minimize the overall cost. In comparison, in this paper we have addressed the dynamic caching problem, in which the caches of media programs are determined not only as and when new requests arrive, but also based on the transmission routes of programs to different users, and their required service times. Moreover, Ramarao and Ramamoorthy have restricted their focus to three-level hierarchies,


whereas the caching strategies we have developed in this paper are applicable to any hierarchical network topology.

CONCLUDING REMARKS

In this paper, we have investigated some of the challenges that underlie the practical realization of personalized video-on-demand services. For supporting varying sizes and types of clientele, we have proposed that metropolitan area broadband networks be configured with multimedia storage servers located in various neighbourhoods, such as residential complex basements, university libraries, etc. To offer personalized services, we have proposed personal service agents, which are responsible for customizing multimedia services to suit users' interests and for intelligent information caching. The caching strategies we have described in this paper represent an important first step towards providing personalized video-on-demand services to users at attractive costs. Using these strategies, PSAs can determine when, where and for how long video programs must be cached, so as to minimize the overall service costs borne by users.

APPENDIX

Theorem 1 The cost of a caching schedule determined by the Greedy caching strategy is guaranteed to be no more than twice the cost of an optimal schedule.

Proof To quantify the performance of the Greedy strategy, we first devise a variant of this strategy, one that is provably less efficient than the Greedy strategy. Next, we demonstrate that the Greedy-variant strategy is guaranteed to derive a schedule that is no more than twice as expensive as an optimal schedule. Combining these two observations, we establish that a schedule produced by the Greedy strategy can cost no more than twice as much as an optimal schedule. Both of the above steps are described next:

Figure 11 Illustration of the two steps in the proof of Theorem 1. (a) Directed acyclic graph that the Greedy-variant caching strategy constructs to determine a caching schedule for the network shown in Figure 9a; (b) how an optimal caching schedule for the network in Figure 9a can be modified into a graph that contains direct paths between users' nodes. Theorem 1 proves that the cost of a schedule derived by the Greedy strategy is no more than the cost of the schedule shown in (b), which in turn costs no more than twice as much as an optimal schedule

• Greedy-Variant Caching Strategy: corresponding to network N, the Greedy-variant strategy constructs a multi-dimensional grid G, as illustrated in Figure 9. Like the Greedy strategy, its variant executes n steps, one for each user. At the ith step, the Greedy-variant strategy attempts to determine the minimum cost path in grid G through which the program P can be made available to user Ui by one of its chronological predecessors, Uk ∈ [U1, ..., Ui-1]. In comparison, at the same step, the Greedy strategy determines the cheapest way in which user Ui can have access to program P, via the m servers, based on the caching schedule determined for the preceding users U1, ..., Ui-1. Notice that at each step, the path in grid G that is chosen by the Greedy-variant strategy for inclusion in the caching schedule is one of the options considered by the Greedy strategy. Hence, the Greedy strategy always outperforms its variant, in terms of the caching schedule that it determines.

• Quantifying the performance of the Greedy strategy: suppose that we construct a directed graph G that has (n + 1) vertices, one for each of the n users, and a start vertex that marks the commencement of retrieval of program P by the users. For every pair of users (Ui, Uk) such that user Ui's viewing time is prior to that of user Uk, add a directed edge (Ui, Uk) to graph G (see Figure 11a), with the cost associated with edge (Ui, Uk) being the minimum total cost of any path between users Ui and Uk in grid G. The chronological ordering of vertices imposed by the above construction guarantees that graph G is a directed acyclic graph (DAG) if users' viewing times are unique. Figure 11a illustrates a DAG corresponding to the hierarchical network of Figure 9a. For a DAG, Papadimitriou and Steiglitz9 have shown that a greedy heuristic (such as the one used by the Greedy-variant caching strategy) produces a minimum cost spanning tree*.

*To be more precise, a greedy heuristic produces an optimum branching of G9.


This tree not only spans all the users, but also includes only direct paths between them. To quantify the deviation of the resulting caching schedule from the optimum, we observe that an optimal caching schedule, although it is a tree T that spans all the viewing times of users U1, ..., Un, may not consist entirely of direct paths between the users. We observe that by merely adding to the tree T edges from each user Ui to a server Sj that transmits program P to that user at time ti (see Figure 11b), we can construct a schedule that is guaranteed to be at most twice as expensive as the optimal schedule (since we are only duplicating the edges that represent program transmissions to users). Moreover, since this schedule includes direct paths between users' nodes in grid G, it must cost at least as much as the minimum cost spanning tree of graph G, thereby implying that the schedule derived by the Greedy-variant caching strategy is no more than twice as expensive as the optimal schedule. From the above result, it follows that the Greedy caching strategy, too, performs no worse than twice the optimal.

REFERENCES

1 Hodge, W, Mabon, S and Powers Jr., J T 'Video on demand: architecture, systems, and applications', SMPTE J. (September 1993) pp 791-803
2 Little, T D C and Venkatesh, D 'Probabilistic assignment of movies to storage devices in a video-on-demand system', Proc. 4th Int. Workshop on Network and Operating System Support for Digital Audio and Video, Lancaster, UK (November 1993)
3 Ramanathan, S and Venkat Rangan, P 'Architectures for personalized multimedia', IEEE Multimedia, Vol 1 No 1 (February 1994) pp 37-46
4 SampathKumar, S, Ramanathan, S and Venkat Rangan, P 'Technologies for distribution of interactive multimedia to residential subscribers', Proc. 1st Int. Workshop on Community Networking: Integrated Multimedia Services to the Home, Millbrae, CA (July 1994) pp 151-160
5 Delgrossi, L, Halstrick, C, Hehmann, D, Herrtwich, R G, Krone, O, Sandvoss, J and Vogt, C 'Media scaling for audiovisual communication with the Heidelberg transport system', Proc. ACM Multimedia, Anaheim, CA (August 1993) pp 99-104
6 Papadimitriou, C H, Ramanathan, S and Venkat Rangan, P 'Information caching for delivery of personalized video programs on home entertainment channels', Proc. IEEE Int. Conf. Multimedia Computing and Systems, Boston, MA (May 1994) pp 214-223
7 Jessel, H A 'Cable ready: the high appeal for interactive services', Broadcasting & Cable (May 23 1994)
8 Garey, M R and Johnson, D S Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, CA (1979)
9 Papadimitriou, C H and Steiglitz, K Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Englewood Cliffs, NJ (1982)
10 Lougher, P and Shepherd, D 'The design of a storage server for continuous media', The Computer J., Vol 36 No 1 (February 1993) pp 32-42
11 Tobagi, F A, Pang, J, Baird, R and Gang, M 'Streaming RAID: a disk storage system for video and audio files', Proc. ACM Multimedia, Anaheim, CA (August 1993) pp 393-400
12 Ferrari, D and Verma, D C 'A scheme for real-time channel establishment in wide-area networks', IEEE J. Selected Areas in Commun., Vol 8 No 3 (April 1990) pp 368-379
13 Kurose, J F 'On computing per-session performance bounds in high-speed multi-hop computer networks', Proc. SIGMETRICS, Performance Evaluation Review (June 1992) pp 128-139
14 Amenyo, J T, Lazar, A A and Pacifici, G 'Proactive cooperative scheduling and buffer management for multimedia networks', Multimedia Systems J., Vol 1 No 1 (June 1993) pp 37-50
15 Federighi, C and Rowe, L A 'A distributed hierarchical storage manager for a video-on-demand system', Proc. IS&T/SPIE Symposium on Electronic Imaging Science and Technology, Storage and Retrieval for Image and Video Databases II, San Jose, CA (February 1994)
16 Ramarao, R and Ramamoorthy, V 'Architectural design of on-demand video delivery systems: the spatio-temporal storage allocation problem', Proc. ICC (1991) pp 17.6.1-17.6.5
