
Refining Media Delivery
Intelligent Caching Comes of Age

A Transitions in Technology Series White Paper

Assessing Juniper Networks’ Media Flow Technology

Timothy Siglin
Co-Founder

Transitions, Inc.

September 2010

Refining Media Delivery: Intelligent Caching Comes of Age

Executive Summary

Rich media delivery grew at unprecedented rates last year. A recent Federal Communications Commission (FCC) paper notes that patterns of rich-media consumption—primarily around long-form SD and all forms of HD video—are driving significant bandwidth consumption.

Mobile broadband consumption is growing at a rapid pace, with the FCC describing its growth as “similar to fixed broadband and mobile voice growth in the mid- and late 1990s. . . . The historical experience of fixed broadband [mobile broadband usage] would imply that high growth could continue for some time.”

Converting the rich media challenge from a loss leader into a profit center, though, requires careful

attention to the systemic challenges of rich media delivery — and a solid plan to address delivery

solutions at both the core and the edge.

As service providers face rapid growth across the various forms of rich media, available solutions let them do more than stack a few extra servers in a disjointed fashion. Instead, providers can address the core delivery problem itself, one that can only be solved by moving beyond general-purpose operating systems and generic hardware.

Software and hardware solutions purpose-built for rich-media delivery are needed to address

inherent differences between standard data delivery and time-sensitive media delivery. These

solutions must also address intelligent caching for content delivery that has all too often been the sole domain of CDNs.

New solutions are emerging, purpose-built to address business models and network constraints

while also providing a path toward very high scalability. One of the more elegant and efficient

solutions is Juniper Networks' Media Flow technology. An example near the end of this white paper shows that potential cost savings from intelligent caching can easily exceed several million US dollars.


Big Iron Meets Rich Media

Can service providers cost-effectively move from servicing one screen—desktop, living room or

mobile—to delivering the same content to all three screens? If so, what impact does the expansion

of service have on delivery and asset management costs?

These questions are top of mind for many service providers, including cable MSOs, wireless and

wireline providers, for good reason: an explosion in online and over-the-top video content is

increasing demand curves for bandwidth well beyond projected levels.

In some cases, the growth in rich media content is so extreme that it runs the risk of saturating

available bandwidth and spectrum.

“Mobile data usage is projected to grow at very high rates,” the FCC stated in its recently released

Broadband Performance: OBI Technical Paper #4. “Data usage more than doubled from 2008 to 2009, and projections estimate that growth may continue at 40–100% a year through 2015. . . .

Wireless cards account for 75% of all mobile data consumption, while phones and wireless-

enabled devices comprise only 25%. Going forward, smartphones and wireless cards are

expected to see 30–40% growth rates.”

Devices such as the MiFi, or Palm webOS-based handsets that double as multi-user WiFi access points, allow consumers to rely on a single connectivity device whether at home or on the road.

Laptop users moving toward mobile computing via these multi-user, WiFi-enabled mobile data modems are trending toward much higher data usage rates than the average wireline cable modem or DSL user. According to the FCC's technical paper, a typical wireless broadband user consumes 7 GB per month, almost on par with wireline users in the Advanced category, which the FCC uses to denote the heaviest consumers of wireline broadband.

What’s the common denominator? The FCC and others agree that the viewing of rich media

content, primarily HD video, is driving significant consumer bandwidth consumption.

So what's a service provider to do?

This paper will examine three potential obstacles to cost-effective three-screen delivery: device and

format proliferation, delivery platform limitations and the cost of transit bandwidth versus hardware

and software scalability.

Along the way, we’ll use one proposed solution, Juniper Networks' Media Flow technology, to

compare these obstacles against industry claims. The company has a hardware-software strategy, leveraging the purpose-built software Juniper obtained earlier this year by acquiring Ankeena

Networks and its Media Flow technology, alongside just-released Juniper VXA series hardware.


Increased Caching or Increased Transit Bandwidth?

Before we dive into the three potential obstacles, let's first step back and examine the most prevalent way of lowering bandwidth and operating expense (opex) growth rates for traditional web content: caching.

Whether it is used at the edge, the mid-tier or the core of the network, caching is designed to serve multiple requests for a single piece of content. Rather than serving each request from an off-network location, caching serves the content from a location within the network.

The need for in-network caching is critical, since Tier 1 providers each only cover an average of 15% of a country's internet users. Cache hit ratios of 85% are not uncommon for HTML-based content and other small files.

When it comes to rich media, though, the cache hit ratio within a service provider's network falls off considerably. This is partly due to the size of the files: the need to place the content at various edge locations, closer to the user, yields a storage penalty in terms of increased storage and storage types (hard drives, RAM or solid-state drives).

To meet the growing demand for caching of highly popular content, content delivery networks (CDNs) emerged to cache popular web content (primarily small files, although this later grew to include the delivery of large files, such as software updates and rich media content). Commercially available since 1999, CDNs handle traffic for almost 100% of the world's busiest websites. Providers shied away from acquiring CDNs; as a result, most CDN cache traffic still traverses a provider's edge and access network.

Some companies are moving away from reliance on CDN delivery. Microsoft, for instance, noted its

intent to shift the majority of traffic for video, software downloads and small object delivery to its internal edge-computing platform, reducing dependence on third-party CDNs from 95% of

Microsoft's total traffic in 2007 to an estimated 40% by the end of 2010.


Why Juniper-Ankeena?

Why is a "big iron" company moving into content management and distribution? Why would Juniper acquire Ankeena instead of partnering?

Both are good questions. Ankeena, which has been in the public eye only since early 2008, emerging from several years of stealth mode, provides a technology that is complementary to Juniper's "big iron" strategy of enabling providers to optimize their Juniper-based networks for online media delivery.

The two companies partnered in mid-2009, announcing plans to develop a Juniper-specific version of the Media Flow technology. Based on the 2.0 version, Juniper's Media Flow—and the purpose-built hardware discussed in this paper—were intended to assist mobile operators in reducing transit traffic costs.

In other words, Ankeena's technology fit Juniper's larger strategy; when initial partnership projects proved successful, Juniper chose to bring Ankeena in house.

Yet service providers do not see any significant benefit from corporations launching in-house

deployments, as consumer requests for content still must traverse the service provider’s own access network.

CDNs also tend to jettison cached content far faster than a service provider may choose to, with the end result being that—for both long-form and short-form content—spikes in viewership of previously cached CDN content will also be borne directly by the service provider. As a result,

service providers find themselves compensating for the shortcomings of CDNs, while still hesitating

to bring CDN services in house.

Unless service providers adopt an alternate approach based on more intelligent caching of rich media and video, including the wide swath of user-generated content only minimally cached by today's CDNs, the status quo of increasing traffic across access networks will remain.

This, then, is the backdrop against which companies like Juniper are courting service providers.


US Mobile Internet Video Users*, 2010-2015 (millions)

2010: 21
2011: 28
2012: 36
2013: 48
2014: 60
2015: 74

Note: *via mobile phones only; excludes laptops/netbooks
Source: Coda Research Consultancy, "Mobile Video and TV in the US," as cited in press release, January 12, 2010

Devices, Formats and Protocols

Despite a marked uptick in content proliferation, it's worth noting that the primary challenge for

most service providers today has more to do with another type of proliferation: devices, formats

and protocols.

Let’s look briefly at devices and protocols, to gain insight into possible cost savings.

Device proliferation. Rather than slowing to a trickle, the number of video-playback devices shows few signs of abating. Mobile handsets, in particular, are both a problem—and a potential

solution—for the service provider dilemma of content delivery.

Traditionally, rich media had been constrained to desktop and laptop computers, where interactive

content was played in a standalone player or via a plug-in on most popular browsers.

Mobile browsers, on the other hand, were better at repackaging a few key websites than they were

at replicating the look and feel of a desktop browser. They also didn't support the same plug-in players favored by computer users, such as Adobe Flash Player or Microsoft's Silverlight.

As a result, video on the mobile handset was limited to content transferred from the desktop or

very niche video delivery segments such as affluent sports fans (ESPN's MVNO) or breaking news

highlights.

The advent of the iPhone and its mobile browsing experience on par with a laptop, including fully

rendered web pages, shifted the field of mobile browsing—and mobile video consumption—into

high gear.

Not only does the iPhone support standards-based H.264 video delivery via HTTP, but competing devices—from the Research In Motion BlackBerry to newer Windows Mobile handsets and

Android-based devices—also support desktop-equivalent browsing.

The end result is that the consumption of mobile video has risen exponentially.

Format and protocol proliferation. Simultaneous with device proliferation is an expansion of

rich-media protocols and formats.

While the industry has trended towards H.264 and the MP4 container format as a standard for

three-screen delivery, the reality is that new formats and protocols are being introduced at a rapid

pace: in the last three months alone, we've seen RTMFP and WebM / VP8 emerge.

Even the flavors of H.264 delivery have multiplied, most apparent in adaptive bitrate (ABR) delivery schemes that the big three—Adobe, Apple and Microsoft—have created for HTTP delivery of H.264 content. Each offers an enhanced delivery option, yet each is incompatible with the others.


Faced with proliferation of devices, delivery format and protocols, is there a way for service

providers to use the same piece of content across multiple devices?

More to the point, is there a way to eliminate one of the biggest content management and

transcoding bottlenecks—the need to maintain multiple discrete versions of the same asset, one

for each device?

Two approaches hold promise: protocol conversion and format re-containerization of ABR content.

Protocol conversion. As the H.264 codec has risen in popularity, it has garnered support from each of the major video players on the market: Adobe's Flash Player, Apple's QuickTime, and Microsoft's Silverlight and Windows Media players all support H.264 playback, as do a number of additional open-source and customized players.

Many of these players support different delivery protocols, however, so protocol conversion

emerged as a way to convert between the various protocols used for H.264 content delivery: IP transport streams, the Adobe Flash Real-Time Messaging Protocol (RTMP), the Real Time Streaming

Protocol (RTSP) and the more generic web HTTP protocol.

For service providers who use MPEG-2 transport streams to deliver IPTV, for instance, the ability to convert the protocol to RTMP or HTTP is a critical step in providing a three-screen delivery solution.

All solutions offering protocol conversion can ingest a raw H.264 stream (or an MP4 file containing

H.264 video) and then convert the protocol through which the content is delivered.

Much of the content is delivered in an MP4 container format, as this is a common container format

for H.264 video. A few solutions also change the container format for delivery to particular players,

however, which opens the door for adaptive bitrate (ABR) streaming.

Adaptive Bitrate. ABR is a newer method of delivering rich media content based, in part, on an

older streaming method called Multi Bitrate (MBR).

Unlike MBR solutions, which housed all of the bitrates within a single file, ABR separates each of the various bitrates into discrete files.

ABR uses multiple discrete H.264 files, each one at a different bitrate, on average 3-5 different files

per piece of rich media content. For a three-screen scenario, however, it is possible to use many

more bitrates, from mobile all the way up to HD.

Typically these files are housed in an MP4 container format, a common denominator between all

ABR solutions. Each file is broken down into fragments of 2-10 seconds in length. The fragments,


also known as segments or chunks, have synchronized ending points across each file, so that the

fragments are equal in length and also occur at the exact same time in each file.

These files are then "streamed" via HTTP as a series of progressive-download fragments. As

network conditions change, the stream with the most appropriate bitrate is served for that given

chunk of time. In other words, as network conditions improve or deteriorate between the HTTP

server and the end user's player, the server delivers the next set of fragments from a higher or

lower bitrate file, respectively.

This ability to dynamically adjust increases the likelihood that a viewer can watch the entire video

free of interruptions, since the video seamlessly switches from bitrate to bitrate—without

rebuffering—at the end of any given fragment.
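To make the fragment-selection logic concrete, here is a minimal Python sketch of the client-side decision described above. The bitrate ladder, the safety margin and the URL layout are illustrative assumptions rather than details of any particular ABR implementation.

```python
# Simplified adaptive bitrate selection, as described above: before each
# fragment request, pick the highest bitrate the measured throughput can
# sustain. The bitrate ladder, margin and URL layout are illustrative.

BITRATE_LADDER_KBPS = [300, 700, 1500, 3000, 6000]  # low ... best (assumed)
SAFETY_MARGIN = 0.8  # only commit 80% of measured throughput (assumed)

def select_bitrate(measured_throughput_kbps: float) -> int:
    """Return the highest ladder bitrate that fits the measured throughput."""
    usable = measured_throughput_kbps * SAFETY_MARGIN
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= usable]
    return max(candidates) if candidates else BITRATE_LADDER_KBPS[0]

def next_fragment_url(content_id: str, fragment_index: int,
                      measured_throughput_kbps: float) -> str:
    """Build the URL of the next fragment from the chosen bitrate rendition."""
    bitrate = select_bitrate(measured_throughput_kbps)
    return f"http://cache.example.net/{content_id}/{bitrate}k/frag_{fragment_index:05d}.mp4"

if __name__ == "__main__":
    # Throughput drops mid-session (network congestion), then recovers.
    for i, throughput_kbps in enumerate([5000, 4800, 1200, 900, 2500, 5200]):
        print(next_fragment_url("demo-movie", i, throughput_kbps))
```

Because the decision is made per fragment, the switch never interrupts playback; it simply changes which rendition supplies the next chunk.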


[Figure: Adaptive bitrate switching. Video quality (Low, Med, High, Best) plotted against time, with a period of network congestion indicated.]

The ability to work with multiple discrete files provides a more refined content management

separation between mobile and web delivery platforms.

A good example of this refined content management is iPad versus iPhone delivery. Mid-level

iPhone streaming uses the same pixel size (400x224) and bandwidth settings (400 kbps) as low-

end iPad streaming, so a single 400 kbps file can be used for both.

ABR can either assign this 400 kbps file to a single manifest, or playlist, along with the other

bitrates, if the service provider wants all iPad and iPhone delivery to occur equally. Conversely,

should the service provider wish to separate iPhone and iPad delivery, two manifests would be

created, both referring back to the same 400 kbps file as part of their respective ABR delivery.
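To illustrate how two manifests can share a single rendition, the Python sketch below emits two Apple HLS-style master playlists that both reference the same 400 kbps, 400x224 file discussed above. The surrounding rendition ladders and file names are assumptions made for this example; in practice a packager or encoder would generate these playlists.

```python
# Two device-specific master playlists that share one 400 kbps rendition,
# as described above. Rendition ladders other than the shared 400 kbps
# entry are assumptions for illustration.

SHARED = {"bandwidth": 400_000, "resolution": "400x224", "uri": "video_400k/index.m3u8"}

IPHONE_LADDER = [
    {"bandwidth": 150_000, "resolution": "240x136", "uri": "video_150k/index.m3u8"},
    SHARED,
    {"bandwidth": 800_000, "resolution": "640x360", "uri": "video_800k/index.m3u8"},
]

IPAD_LADDER = [
    SHARED,
    {"bandwidth": 1_200_000, "resolution": "960x540", "uri": "video_1200k/index.m3u8"},
    {"bandwidth": 2_500_000, "resolution": "1280x720", "uri": "video_2500k/index.m3u8"},
]

def master_playlist(ladder):
    """Render an HLS-style master playlist for the given rendition ladder."""
    lines = ["#EXTM3U"]
    for r in ladder:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={r['bandwidth']},RESOLUTION={r['resolution']}")
        lines.append(r["uri"])
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print("--- iphone.m3u8 ---")
    print(master_playlist(IPHONE_LADDER))
    print("--- ipad.m3u8 ---")
    print(master_playlist(IPAD_LADDER))
```

Only the manifests differ per device class; the underlying 400 kbps fragments are stored and cached once.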

Multiple ABR solutions exist in the marketplace today, including one each from the Big Three—

Adobe, Apple, Microsoft—as well as several proprietary solutions, such as Move Networks. All act

in a fairly similar manner, and all now allow delivery via HTTP.

Yet each ABR solution is optimized for its own respective player and until an ABR standard—de

facto or otherwise—emerges, a robust ABR delivery strategy must support as many of the ABR

technologies as possible.

An ideal solution would store a single version of content, across multiple bitrates to accommodate

the various end user devices on a service provider's network.

Translating the format into the various ABR solutions mentioned above, at the time of delivery,

would address the diversity of a three-screen universe that contains hundreds of varying end-user

playback devices.

Measuring up. So how does Juniper’s Media Flow technology measure up in both protocol

conversion and adaptive bitrate? Surprisingly well.

First, the Media Flow technology supports a variety of delivery protocols, including the standard RTSP and HTTP, as well as all versions of Adobe's Real-Time Messaging Protocol (RTMP): RTMP Tunneled (RTMPT) and Encrypted RTMP (RTMPE). It also supports Apple's MOV, the MP4 format and Windows Media Video (WMV and ASF).

Second, Media Flow supports MPEG-2 (MPG) and 3GPP / 3GPP2, as well as providing a path

towards HTML5 video delivery, which shuns a browser player plug-in in favor of built-in video

playback directly within an HTML5-compliant browser.

Third, Media Flow supports a variety of ABR solutions and can convert MP4 files both into various

protocols as well as into each flavor of ABR—Adobe, Apple, Microsoft, Move—for progressive

download delivery over HTTP.


Delivery Innovations and Platforms

The whole discussion of robust ABR delivery, multi-protocol delivery and virtualized server-side

players is beneficial, but it’s not practical with most media server solutions, due to the limitations in

general purpose operating systems (GPOS). As such, service providers need purpose-built

software mated with purpose-built hardware.

Limitations of General Purpose Operating Systems (GPOS). Most operating systems were designed to handle legacy file retrieval of small objects, serving content to a single user (desktop OS) or to hundreds of users (server OS). When it comes to delivering a file such as a PDF or a Word document, a GPOS performs reasonably well; click on a file on a file sharing server and it's often cached to your local machine and then loaded into your desktop application.

Yet when it comes to delivering large objects in a timely manner, over a sustained time period—the

very requirement for streaming or progressive downloads—kernel-level scheduling of a GPOS fights against efficient delivery. The scheduler processes the various requests for memory, network

connections, disk input/output and core processing—all critical elements for timely media delivery.

The most frequent fix put forward by GPOS-based media delivery vendors is to throw more

hardware at the solution. This works to a limited extent, but the threads and processes from the

scheduler for one machine's CPUs (whether single- or dual- or quad-core CPUs) add significant

switching overhead, resulting in limited CPU scalability, which I've found to be true in a variety of transcoding and encoding tests I've performed over the past few months.

Researchers conclude that scalability models based on throwing more GPOS-dependent hardware at the problem merely mask the underlying dependency and scheduling issues.

"Current GPOSes have large non-preemptible sections and use large kernel buffers to improve CPU throughput," researcher Ashvin Goel reports. "Increased latency in the kernel conflicts with

the timing requirements of low-latency applications because it reduces control over the precise

times at which applications can be scheduled."

"The underlying problem with today's operating systems is its handling of I/O," states researcher

Daniel Taranovsky. He continues:

I/O has been treated as a series of unrelated data movement operations on a set of independent devices. For example, each device driver . . . is left to coordinate one device queue independent of each other. Typically this is done on a first-come, first-serve basis. With this design, attempting to coordinate input from several devices (e.g. a video camera and a microphone) is impossible as an operating system primitive. The devices are unable to coordinate themselves with other devices to effectively schedule themselves. This is further complicated in distributed systems where devices and storage can be located at different geographical locations.


If this is true for a desktop OS, it is exponentially true for a server OS, which must coordinate and

time hundreds or thousands of simultaneous requests.

The last line in the second quote—regarding storage—is worth noting, as many video delivery

solutions are, indeed, geographically dispersed, for both operational and financial reasons. GPOS

file systems and storage devices are a bottleneck in efficient media delivery. The drives themselves lack content awareness, which, coupled with the operating system's coarse scheduling, means that these storage devices are inefficient for handling rich content.

Worse yet, the file fragmentation inherent to a GPOS and its file system often slows disk I/O enough to increase delivery latency.

Again, the GPOS answer is to throw more hardware at the problem, in this case RAM. Serving all content, even long-tail content, out of high-speed RAM isn't the solution, though. The costs associated with delivering all content out of RAM are prohibitive, and the management of significant amounts of RAM is also an issue. We'll look at potential solutions in the next section.


[Figure: Storage tiers plotted by performance versus capacity (RAM, SSD, SAS, SATA, NAS), with content ranging from hot to cold.]

Seeking GPOS Alternatives

So how does a service provider address the issues surrounding GPOS limitations, and even the

limitations of general-purpose hardware?

We suggest that, rather than relying on an OS that is good at everything but media, or that seeks to

minimize the impact on other applications running on the same server, service providers need a

one-two punch: purpose-built hardware and a protocol stack designed to handle media delivery.

To best address scalability, a media-aware operating system needs to marry purpose-built

software to a purpose-built server solution, capable of generating high throughput while also

handling a large number of simultaneous transactions.

Purpose-built software. As mentioned in the previous section, Media Flow supports multiple

ABR methods, so that any ABR-encoded content loaded into Media Flow can be delivered to its

corresponding video player: Apple iPhone Streaming, Microsoft Smooth Streaming, Move Adaptive Streaming, and two flavors of Adobe Dynamic streaming—RTMP and HTTP.

The ability to cover the major ABR solutions is impressive, but how does Media Flow differentiate

between the ABR and non-ABR requirements for optimal delivery to each device?

To handle all these requests, Media Flow Controller relies on a server-side player. The player is a server-side plug-in established for each incoming request (or session) that first identifies popular formats (FLV, MP4, MOV, MPG, WMV, F4V, 3GPP and 3GPP2) and then optimizes delivery for each individual session.

According to Juniper, the server-side player can also enforce media-specific logic for each session.

This covers both inbound requests and also outbound delivery.

“The SSP interprets a URL, detects the format of the media being requested, determines what bit rate is to be used for the requested content, handles what is to be sent, and calculates from where

to play the media,” the company notes, adding that this logic enforcement also occurs when a

specific media stream is heading out for delivery, verifying business rules and validating encryption

or subscription criteria provided by third-party encryption license servers.

The server-side player concept is intriguing because it potentially allows a service provider to create

a tiered approach to content delivery: if every session can be uniquely identified and addressed via

a virtual, server-side player, service providers can define individualized policies, or customized

group delivery policies, to address subscriber tiers. Enforcing policies on the virtual, server-side

player mimics—and potentially guarantees—policy compliance on the viewer’s client-side player.
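As a thought experiment, the sketch below shows what per-session, tier-based policies might look like in Python. The tier names, bitrate caps and stream limits are invented for illustration and do not describe Juniper's SSP policy model.

```python
# Hypothetical per-session policy lookup, illustrating the tiered-delivery
# idea discussed above. Tier names and limits are invented for illustration.

from dataclasses import dataclass

@dataclass
class DeliveryPolicy:
    max_bitrate_kbps: int        # highest ABR rendition the session may receive
    max_concurrent_streams: int  # simultaneous sessions allowed per subscriber
    allow_hd: bool

POLICIES = {
    "basic":   DeliveryPolicy(max_bitrate_kbps=700,  max_concurrent_streams=1, allow_hd=False),
    "plus":    DeliveryPolicy(max_bitrate_kbps=2500, max_concurrent_streams=2, allow_hd=True),
    "premium": DeliveryPolicy(max_bitrate_kbps=6000, max_concurrent_streams=4, allow_hd=True),
}

def policy_for_session(subscriber_tier: str) -> DeliveryPolicy:
    """Resolve the policy a virtual server-side player would enforce."""
    return POLICIES.get(subscriber_tier, POLICIES["basic"])

def cap_requested_bitrate(subscriber_tier: str, requested_kbps: int) -> int:
    """Clamp the requested rendition to the subscriber's policy ceiling."""
    return min(requested_kbps, policy_for_session(subscriber_tier).max_bitrate_kbps)

if __name__ == "__main__":
    print(cap_requested_bitrate("basic", 3000))    # -> 700
    print(cap_requested_bitrate("premium", 3000))  # -> 3000
```

Because the policy is applied server-side, compliance does not depend on the behavior of the client player.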

Purpose-built hardware. Some companies recommend increasing the use of caching proxy units toward the edge of the network, delivering content to a smaller group of users in a specific


geographic location. Yet these units also face scalability issues, especially if content is spread

across a variety of storage types. Not only does the cost of hardware go up dramatically, but the costs of energy and rack space also rise, limiting the number of locations into which edge devices

can be placed.

The Media Flow Controller software, coupled with Juniper's VXA Series of hardware platforms, can

support tens of thousands of simultaneous sessions.

VXA is also capable of significant throughput (up to 10 Gbps sustained). In other words, up to 40,000 concurrent sessions of 250 kbps each could be sustained, a phenomenal number of HTTP progressive downloads served from a single unit.

Juniper has built these VXA series units as carrier-grade devices with up to 8 TB of cache per unit, using RAM, solid state drives or hard drives. VXA units are also Network Equipment Building System (NEBS)-compliant devices that can be DC or AC powered, allowing deployment in either a data center or a service provider's central office.

Refined protocol stack. The ideal candidate would handle a large number of sessions, using a

lightweight switching mode that doesn't add unnecessarily to CPU overhead. To do so, it would

need to use a fine-grained scheduler in the protocol stack, rather than relying on the host OS's

coarse-grain scheduler.

In addition, rate management and capacity-based admission control are key elements of this type

of solution, scaling with limited latency and jitter, but also limiting the number of connections when

defined bandwidth levels are saturated.

This balance between the number of transactions per second (TPS) and bandwidth levels is key, as

a system with a low TPS rate may not handle the amount of throughput needed to saturate the

server’s network connectivity; conversely, a system with potential for a high TPS rate—if not properly assessing the amount of available bandwidth—may over-saturate the server’s network

connectivity.
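A minimal sketch of capacity-based admission control appears below, assuming a fixed link capacity, a known per-session bitrate and an illustrative utilization ceiling; it is a generic example, not Media Flow's implementation.

```python
# Capacity-based admission control, as described above: new sessions are
# admitted only while the aggregate committed bitrate stays below a defined
# ceiling, so existing sessions are not degraded. Numbers are illustrative.

class AdmissionController:
    def __init__(self, link_capacity_kbps: int, utilization_ceiling: float = 0.9):
        self.link_capacity_kbps = link_capacity_kbps
        self.utilization_ceiling = utilization_ceiling  # headroom for bursts (assumed)
        self.committed_kbps = 0

    def try_admit(self, session_bitrate_kbps: int) -> bool:
        """Admit the session only if its bitrate fits under the ceiling."""
        limit = self.link_capacity_kbps * self.utilization_ceiling
        if self.committed_kbps + session_bitrate_kbps > limit:
            return False  # reject rather than oversubscribe the link
        self.committed_kbps += session_bitrate_kbps
        return True

    def release(self, session_bitrate_kbps: int) -> None:
        """Return bandwidth to the pool when a session ends."""
        self.committed_kbps = max(0, self.committed_kbps - session_bitrate_kbps)

if __name__ == "__main__":
    ctrl = AdmissionController(link_capacity_kbps=10_000_000)  # 10 Gbps link
    admitted = sum(ctrl.try_admit(250) for _ in range(50_000))
    print(f"admitted {admitted} of 50,000 requested 250 kbps sessions")
```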

The outcome of such over-saturation would be a drop in bandwidth rates across every user session. As such, connection

rates should be independent for each session, allowing the delivery of bandwidth-specific ABR

content as the middle-mile or last-mile connectivity fluctuates. Independent connection

manipulation for each session sounds like common sense, yet is difficult in a GPOS as utilization

percentages climb.

Besides eliminating silo networks for specific protocols or applications in favor of a single converged solution enabling multi-screen delivery, Juniper's Media Flow is a media-aware delivery system. The Media Flow Controller appears to provide all of the alternative feature


requirements noted here, and also offers the ability to assign tasks asynchronously, allowing it to handle a large number of connections with lower CPU overhead than a GPOS.

This asynchronous approach is key for time-sensitive media workloads, since it eliminates the need

within GPOS-based media servers to underutilize system resources in order to scale. Without an

asynchronous task model, the OS scheduling primitives (threads and processes) add significant

switching overhead, resulting in inadequate CPU scaling.
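The difference an asynchronous task model makes can be illustrated with a small, generic Python asyncio sketch, in which a single event-loop thread paces thousands of sessions cooperatively; the session counts and pacing values are illustrative, and the example is not Media Flow's implementation.

```python
# A generic asynchronous delivery loop: one OS thread multiplexes many
# sessions cooperatively, instead of paying thread-switching overhead for
# each connection. Session count and pacing values are illustrative.

import asyncio

async def serve_session(session_id: int, fragments: int, fragment_interval_s: float):
    """Pace one session: send a fragment, then yield until the next is due."""
    for _ in range(fragments):
        # In a real server this would write the fragment to the socket.
        await asyncio.sleep(fragment_interval_s)  # yields to other sessions
    return session_id

async def main():
    # 10,000 concurrent sessions handled by a single event-loop thread.
    tasks = [serve_session(i, fragments=3, fragment_interval_s=0.01)
             for i in range(10_000)]
    done = await asyncio.gather(*tasks)
    print(f"served {len(done)} sessions on one thread")

if __name__ == "__main__":
    asyncio.run(main())
```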

In addition, the combination of the server-side player (SSP) and an asynchronous switching model

provides one more critical element: guaranteeing that the bandwidth required by each session is

available. Through a combination of rate management and capacity-based admission control, the Media Flow Controller can assure a quality viewing experience for each session, as it tracks

currently available bandwidth in real time.

For communication with storage devices, Media Flow allows FTP, HTTP and NFS connectivity.

Media Flow Controller is also aware of both the number of sessions and the amount of aggregated

bandwidth available to the server. As noted earlier, Media Flow purpose-built devices can handle a

very high number of sessions (up to 100,000) and also scale up to 10 Gbps network connectivity, but they can limit both TPS and additional bandwidth requests when conditions require.

Bandwidth Savings Through Intelligent Caching

Greater session density and efficient system resource utilization can be achieved when all aspects

of the solution are optimized for video and rich media content, yielding a reduction in the number of

servers, yet still accommodating an equivalent amount of traffic volume.

While caching for non-rich-media web content has been readily available for years, purpose-built servers running a media-aware OS that both caches and serves content need to be deployed to

yield significant bandwidth savings through intelligent rich-media caching solutions.

The best approach to this appears to be through a combination of purpose-built software along

with purpose-built hardware used to cache content at the network’s core and edge.

Popular long-form content, the type that a CDN would typically serve up, would be placed at

multiple locations along the edge. User-generated, short-form content requested by on-network

users would be retrieved and cached closer to the network’s core, until such a time as it grows in

popularity and is pushed out to the network edge. Since it’s retrieved into an on-network cache,

however, even these subsequent requests are served without reverting to off-network playback.

Automation. The last aspect of content scalability, beyond caching and purpose-built hardware,

has to do with automated content management and promotion. To reduce administrative costs, the ideal solution should not only monitor system resources and performance metrics. It should


also be able to automatically place content in the device most optimally suited for a given piece of

content, whether that be a core, mid-tier or edge caching appliance.

Juniper says its Media Flow Controller can be used to “track content according to its type and

popularity and can intelligently promote content between the cache storage tiers. This unique

hierarchical cache storage capability enables providers to rapidly adapt to the evolving preferences

of subscribers, and tightly align their caching resources with their network resources.”
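A minimal sketch of popularity-driven promotion between cache tiers follows, assuming three tiers and simple request-count thresholds; the tier names and thresholds are invented and do not describe Media Flow's internal heuristics.

```python
# Popularity-driven promotion between cache storage tiers, as described
# above: content that is requested often moves to faster media. Tier names
# and thresholds are illustrative assumptions.

TIERS = ["SATA", "SSD", "RAM"]                    # coldest/slowest -> hottest/fastest
PROMOTE_THRESHOLDS = {"SATA": 100, "SSD": 1_000}  # requests/hour to move up (assumed)

class CachedObject:
    def __init__(self, content_id: str):
        self.content_id = content_id
        self.tier = "SATA"            # new content starts on the coldest tier
        self.requests_this_hour = 0

    def record_request(self) -> None:
        self.requests_this_hour += 1
        self._maybe_promote()

    def _maybe_promote(self) -> None:
        """Move the object one tier up once it crosses its tier's threshold."""
        threshold = PROMOTE_THRESHOLDS.get(self.tier)
        if threshold is not None and self.requests_this_hour >= threshold:
            self.tier = TIERS[TIERS.index(self.tier) + 1]

if __name__ == "__main__":
    clip = CachedObject("popular-clip")
    for _ in range(1_200):
        clip.record_request()
    print(clip.tier)  # -> "RAM" once the clip becomes hot
```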


Example: Intelligent Caching

To illustrate the potential cost savings of intelligent video caching, we’ll assume five criteria, on the basis that

reducing rich media content’s role in transit traffic may also lessen the need for capacity upgrades:

• A cache hit ratio of 50%

• Cacheable bandwidth of 5 Gbps

• Cost per Mbps (per month): $50 (global average)

• Customers: approximately 500,000

• Growth rate of bandwidth of 150% (based on video growth)

On a monthly basis, 5 Gbps of cacheable bandwidth may cost the service provider $250,000, or $3,000,000 per year. Achieving a 50% cache hit ratio, then, would reduce bandwidth costs by $1,500,000 annually. Translated into average

revenue per user (ARPU) for 500,000 users, cost savings would be $3 per customer per year.

Of the $1,500,000 annual savings, a significant amount would be offset by the purchase of caching system equipment as well as ongoing operational expenses. This yields a first-year system cost of $605,000, or $1.21 in ARPU terms, with 22% of first-year costs being opex and the remainder being capex. Net savings in the first year, then, would be $895,000, or an annual ARPU of $1.79.

The second and third years show significant growth in bandwidth consumed—but also much higher savings, since the equipment expenditure was absorbed in year 1. Bandwidth costs, without caching, for years 2 and

3, would be $7,500,000 and $18,750,000, respectively.

At a 50% cache ratio, cost savings of $3,750,000 and $9,375,000 could be achieved for these same years.

Since no additional equipment capex is required, opex for years 2 and 3 are the only offsets to savings: using

year one's opex of $133,000 and factoring in a 10% increase for each of the next two years, we'd see an

annual opex cost of $146,000 for year 2 and $161,000 for year 3. This means that savings for each year are

$3.6 million for year 2 (ARPU of $7.21 annually) and $9.2 million for year 3 (ARPU of $18.43).

The aggregate savings in bandwidth costs alone, then, is $13,713,000 ($895k + $3,604k + $9,214k) over 3 years.

Given different costs across differing geographies, it’s quite possible that the cost savings will be lower, solely

dependent on the cost per Mbps per month in your specific geography.
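For readers who want to adapt the example to their own cost per Mbps, the Python sketch below reproduces the arithmetic above using the stated assumptions (50% hit ratio, 5 Gbps of cacheable bandwidth, $50 per Mbps per month, 500,000 customers, 150% annual bandwidth growth, a $605,000 first-year system cost of which 22% is opex, and 10% annual opex growth).

```python
# Reproduces the three-year savings example above. All inputs come from the
# assumptions stated in the example; opex is rounded to whole thousands to
# mirror the figures quoted in the text.

CACHE_HIT_RATIO = 0.50
CACHEABLE_BW_MBPS = 5_000            # 5 Gbps
COST_PER_MBPS_MONTH = 50             # USD, global average
CUSTOMERS = 500_000
ANNUAL_BW_GROWTH = 1.50              # bandwidth grows 150% year over year
FIRST_YEAR_SYSTEM_COST = 605_000     # caching equipment plus first-year opex
OPEX_SHARE = 0.22                    # 22% of first-year cost is opex
OPEX_GROWTH = 0.10                   # opex grows 10% per year

def yearly_savings():
    monthly_bw_cost = CACHEABLE_BW_MBPS * COST_PER_MBPS_MONTH   # $250,000
    annual_bw_cost = monthly_bw_cost * 12                       # $3,000,000
    opex = round(FIRST_YEAR_SYSTEM_COST * OPEX_SHARE, -3)       # ~$133,000
    results = []
    for year in (1, 2, 3):
        gross_saving = annual_bw_cost * CACHE_HIT_RATIO
        offset = FIRST_YEAR_SYSTEM_COST if year == 1 else opex
        net = gross_saving - offset
        results.append((year, net, net / CUSTOMERS))
        annual_bw_cost *= (1 + ANNUAL_BW_GROWTH)    # next year's bandwidth cost
        opex = round(opex * (1 + OPEX_GROWTH), -3)  # next year's opex
    return results

if __name__ == "__main__":
    total = 0
    for year, net, arpu in yearly_savings():
        total += net
        print(f"Year {year}: net savings ${net:,.0f} (ARPU ${arpu:.2f})")
    print(f"Three-year total: ${total:,.0f}")
```

Running the sketch yields the same $895,000, $3,604,000 and $9,214,000 figures, and the $13,713,000 three-year total, quoted above.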

Conclusion

Traditional service coverage has either defaulted heavily to the mobile screen or to the two in-home screens: the desktop and the living room. In addition, most laptops were assumed to use wireless data cards for basic data delivery (email or LinkedIn) but to opt for WiFi for the heavy lifting of video.

As the past few years have shown, however, rich-media delivery to all three screens—including

mobile broadband delivery—is an undeniable consumer trend. Consumers who are ready to 'cut

the cord' of cable still want the ability to watch popular television shows. Even for consumers who

choose to keep their cable subscriptions, the trend towards catch-up viewing on the laptop is

unmistakable, even within the home.

Using mobile data services on a laptop to catch up on entertainment is a far cry from the original intent of the data card as a mobile work solution, and it is growing alongside the trend toward IPTV, set-top boxes and over-the-top (OTT) content on the living room screen. While the trend toward large-scale IPTV roll-outs has been more muted in North America, OTT content delivery is on the rise, reflected in the growth of set-top boxes from Apple, Roku, Sling and others.

It's clear that video delivery to all three screens is thriving, meaning providers must proactively

engage in intelligent caching and updated delivery methods to accommodate all three screens.

To address the growing three-screen market, an ideal caching and delivery system should be

agnostic as to the delivery protocol, so that content in the cache can be shared by multiple delivery

solutions, such as the various ABR streaming formats from Adobe, Apple and Microsoft. It should

also allow content to span various storage types, so that hot content can span RAM and less-

costly solid state drives (SSDs) while long-tail content spans traditional hard drives and SSDs.

Without optimization of the total solution for rich media delivery—hardware, software, and the

operating system media stack—the number of servers needed to deliver at scale rises rapidly, but

almost as rapidly reaches a level of diminishing returns.

Given today's economics for delivering HD content to the living room or desktop, plus high-quality

SD content to mobile devices, the ideal intelligent caching solution should be capable of 1-10

Gbps of real throughput, without hardware bottlenecks for networking and storage I/O limiting

either a single device or cluster of devices.

To achieve this level of optimization, the need for a purpose-built solution is clear, regardless of whether the implementation is at the network edge, mid-tier, core—or a combination of all three. Consolidating servers into a smaller number of intelligent caching devices reduces bandwidth and energy consumption as well as administrative costs.

In short, a purpose-built solution aligns caching and network resources to work in unison.


About the Author

Tim Siglin has been involved with visual communications for over fifteen years, including market analysis, implementation and product launches. He has provided consulting services to numerous application providers and network operators, including design services for carrier-grade media asset management systems.

Siglin has also provided research and consulting services to several Big 5 consulting firms as well as internal

skunk works projects for several Fortune 10 and numerous Fortune 500 clients.

Siglin holds an MBA with emphasis in entrepreneurship, is a co-founder of Transitions, Inc. and also serves as

a contributing editor for a variety of tech publications.

About the Company

Transitions, Inc., is a technology and business development firm with extensive experience in technology design and go-to-market strategy consulting. Transitions specializes in assisting businesses seeking to identify "transition points" that hinder growth or a return to profitability.

Repeat customers account for more than 80% of all on-going business, but Transitions also takes on select

project challenges assisting startups, distressed and expanding small businesses. Based in Tennessee, Transitions' business strategy and marketing consulting clients include companies in Silicon Valley, Boston, London,

Milan, Mumbai, New York and Switzerland.

Transitions' ongoing projects of particular interest include established businesses and startups in the digital

media, financial services and global marketing industries.

© 2010 Transitions, Inc.