THE EDGE IS NOT THE END –
WHY CLOUD AND MOBILE MAKE
CDNS OBSOLETE
BY PADDY GANTI
Recently Shane Lowry, our VP of Engineering, wrote a blog post on how the next
disruption in application delivery is about eliminating human middleware.
I wanted to provide some more context and also share some data nuggets to expand
on the facts laid out in that article.
It’s no surprise that mobile adoption and the advent of cloud computing are the two
biggest disruptions we have seen in the Internet service delivery space. In this post,
we consider the implications for both client and server side given these disruptions.
We will also show that content sizes are increasing, device diversity is exploding, and the new choke point for application delivery is the Radio Resource Controller
(RRC) and the Radio Access Network (RAN). These challenges dictate a solution space
that's different from the previous approaches we have seen, and it is exactly this space
that Instart Logic is focused on.
First, let’s start by talking about the two key disruptions – mobile and its impact on the
client/device front; and cloud-based computing and what that means for the back end.
MOBILE
Globally, mobile traffic is about 30% of all Internet activity today and is increasing
rapidly, with an additional 6% of activity generated from tablets. The Cisco Visual
Networking Index (VNI) provides the following quantitative estimates of mobile data
growth, which show that we expect to see an 18x increase over 5 years (2011-2016).
[Chart: Mobile Data Growth in Exabytes per Month, 2012-2016]
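That 18x-in-5-years figure implies a compound annual growth rate of close to 80%. A quick sketch of the arithmetic, with the 18x multiplier as the only input (taken from the VNI estimate above):

```python
# Implied compound annual growth rate (CAGR) from an 18x increase over 5 years.
multiplier = 18
years = 5
cagr = multiplier ** (1 / years) - 1  # roughly 0.78, i.e. ~78% growth per year
```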
This growth is fueled by demand for better applications
and more content (mostly video) from a variety of
mobile devices. The reason this growth differs from
what we've seen historically is that desktops previously
consisted primarily of Wintel-based platforms using
wireline access to the Internet, which made it easy
to optimize for a homogeneous workload. Today's
plethora of smartphones and tablets makes it an
entirely different ballgame.
While it's tempting to bundle all mobile growth into
a single bucket, in reality the demand for content
emanates from a wide variety of devices. The variety
starts with platforms. Let’s consider the following
treemap of Android devices that are out there (Android
owns 72% of the market, while iOS accounts for 26%).
[Treemap: Android devices by device model (e.g. the GT-I9100)]
From our own logs, we see the following distribution
of device platforms:
[Chart: Android 4.4, Android 4.2, Android 4.1, iOS 7.1, iOS 8.0, Other Android, Other iOS, Miscellaneous]
To add to that, we also need to consider screen-size diversity, which ranges from
320x480 pixels (smart phones) all the way up to 1920x1080 pixels (HD displays).
The bottom line is that mobile data is growing exponentially and is being consumed
by an ever greater variety of screens and device platforms.
CLOUD SERVICE ADOPTION
While the client side is exploding, on the server side we see a trend towards cloud
adoption. For web pages, cloud computing manifests as a proliferation of third-party
components: widgets that run A/B tests, send feedback via beacons, track user
behavior, and provide analytics. This increases the number of components on a given
web page while contributing relatively little to the overall payload. We saw that
roughly 48% of the requests in the HTTP Archive are classified as third-party.
[Chart: interest in cloud services over time, 2005-2015]
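As a rough illustration of how such a third-party classification works, one can compare the registrable domain of each request against the page's own domain. The hostnames below are invented for the example, and this naive version ignores public-suffix subtleties (e.g. co.uk):

```python
from urllib.parse import urlparse

def is_third_party(request_url: str, page_url: str) -> bool:
    """A request is third-party when its base domain differs from the page's.
    Naive: compares the last two host labels only."""
    def base_domain(url):
        host = urlparse(url).hostname or ""
        return ".".join(host.split(".")[-2:])
    return base_domain(request_url) != base_domain(page_url)

page = "https://www.example.com/index.html"
reqs = [
    "https://static.example.com/app.js",       # first-party static host
    "https://widgets.abtest-vendor.com/x.js",  # A/B testing widget
    "https://beacon.analytics-co.com/px.gif",  # tracking beacon
]
third_party_share = sum(is_third_party(r, page) for r in reqs) / len(reqs)
```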
With the explosion of mobile devices and consolidation of cloud
services, and the perennial expectation that compute and network just
keep getting better and faster, the logical conclusion is that this must
mean that the mobile web is faster. But the reality is quite different.
When we say faster, we mean visually/perceptually faster.
So the question then boils down to: what metric best correlates with the visual
perception of a page load? onLoad isn't a good one, since the page load event
can be artificially triggered by sites even when no visual content is present;
neither is Start Render, which can fire after onLoad. So we finally settled on
Speed Index, a WebPagetest measurement of how quickly the screen paints
(perceived load time). The faster you paint the whole screen, the lower the
score. A Speed Index of less than a second is the holy grail of web performance.
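Conceptually, Speed Index is the area above the visual-completeness curve: the integral of (1 - completeness) over time, so a page that paints most of its pixels early scores lower. A minimal sketch, with sample curves invented for illustration:

```python
def speed_index(samples):
    """Speed Index in ms: integral of (1 - visual completeness) over time.
    `samples` is a sorted list of (time_ms, completeness in [0, 1])."""
    si = 0.0
    for (t0, c0), (t1, _) in zip(samples, samples[1:]):
        si += (1.0 - c0) * (t1 - t0)  # step-function integration
    return si

# A page that paints 80% of the viewport by 500 ms and finishes at 1000 ms...
fast = [(0, 0.0), (500, 0.8), (1000, 1.0)]
# ...versus one that shows nothing until 900 ms, then completes at 1000 ms.
slow = [(0, 0.0), (900, 0.0), (1000, 1.0)]
```

Both pages "finish" at 1000 ms, but the first scores far lower because most pixels arrive early, which is exactly why Speed Index tracks perceived speed better than onLoad.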
THE MOBILE WEB IS IN FACT GETTING SLOWER OVER TIME
[Charts: Speed Index over time (7/2012 - 7/2014) for the Top 1K, Top 10K, and Top 100K URL cohorts]
We tracked Speed Index for the top
1,000, top 10,000, and top 100,000
sites as cohorts to check if any
apparent trend is uniform, or if it
differs over the various groupings.
From what we can see, it's a uniform
trend that mobile websites over the
last 2 years are getting slower not
faster, despite all the advances that
have been made.
(Note: the collection of data changed a bit in the middle, when the
throughput of the mobile device measurement was altered to use an
emulated 3G network in June 2013. However these changes do not
affect our conclusion in any meaningful way.)
So why is the Mobile Web getting slower?
The first fairly obvious reason is the growth in richer and more
content-intensive web sites. To substantiate this claim, we took
a look at the Page weight metric.
CONTENT IS GETTING FATTER
[Charts: median page weight in bytes over time (7/2012 - 7/2014) for the Top 1K, Top 10K, and Top 100K URL cohorts]
As you can see, the uniform trend across all cohorts is a marked
increase in page bytes.
Next we wanted to see if we could pin this increase to particular
types of web traffic, so we separated out the Page weight data
by content types:
[Charts: median bytes by content type (Image, CSS, JS, HTML) over time for the Top 1K, Top 10K, and Top 100K cohorts]
Again the uniform trend shows that
content sizes are bloating across all
content types, ranging from a few
percent in HTML to a near-doubling
of Image bytes.
A quantitative study performed by Mike Belshe (one of the creators
of the SPDY protocol) on the impact of varying bandwidth vs. latency
on page load times for some of the most popular destinations on the
Web showed the following:
NETWORK LATENCY OF THE ACCESS MEDIUM
[Charts: page load time (ms) as bandwidth increases from 1 to 10 Mbps, and as latency decreases from 200 ms to 20 ms]
Looking at this graph, one would question any provider touting bandwidth
increases as a panacea for web page performance.
“As you can see from the data above, if users
double their bandwidth without reducing their RTT
significantly, the effect on Web Browsing will be a
minimal improvement. However, decreasing RTT,
regardless of current bandwidth always helps make
web browsing faster. To speed up the Internet at
large, we should look for more ways to bring down
RTT. What if we could reduce cross-atlantic RTTs
from 150ms to 100ms? This would have a larger
effect on the speed of the internet than increasing
a user’s bandwidth from 3.9Mbps to 10Mbps or
even 1Gbps.” – Mike Belshe
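Belshe's point can be reproduced with a toy model in which each of a page's requests pays one round trip plus its transfer time. The request count, object size, and sequential-fetch assumption below are ours, purely for illustration, not his methodology:

```python
def page_load_ms(n_requests, bytes_per_request, rtt_ms, bandwidth_mbps):
    """Toy model: each request costs one RTT plus transfer time, sequentially."""
    transfer_ms = bytes_per_request * 8 / (bandwidth_mbps * 1000)  # bytes -> ms
    return n_requests * (rtt_ms + transfer_ms)

# 40 requests of 25 KB each, 100 ms RTT, 5 Mbps downlink.
base = page_load_ms(40, 25_000, rtt_ms=100, bandwidth_mbps=5)      # 5600 ms
more_bw = page_load_ms(40, 25_000, rtt_ms=100, bandwidth_mbps=10)  # 4800 ms
less_rtt = page_load_ms(40, 25_000, rtt_ms=50, bandwidth_mbps=5)   # 3600 ms
```

Doubling bandwidth shaves off about 14% here, while halving RTT cuts the load time by roughly 36%, matching the shape of Belshe's curves above.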
So we ask: what is the trend in RTTs across the world? Let's consult the
active measurement database maintained by Les Cottrell (the PingER project).
[Chart: average RTT in ms to the rest of the world, 1/2011 - 7/2014, by region: South Asia, S.E. Asia, Russia, Oceania, North America, Middle East, Latin America, Europe, East Asia, Central Asia, Balkans, Africa]
As you can see, in the last couple of years there has been a small improvement
in RTTs, but by and large nothing meaningful.
Since the majority of e-commerce and hosting providers happen to be in the US,
let's look at FCC reports on latencies across DSL, Cable and Fiber.
[Chart: average latency (ms) by advertised speed (1 - 75 Mbps) for DSL, Cable, and Fiber]
In 2014, fiber-to-the-home services provided 24 ms round-trip latency on
average, while cable-based services averaged 31 ms and DSL-based services
averaged 48 ms. Compare this to 2013, when fiber-to-the-home averaged
18 ms, cable 26 ms, and DSL 43 ms.
Overall latency is not getting any better – if anything, it's getting worse. The average RTT to Google
is pretty much the same as it was in 2010, despite all the innovations brought to us by this awesome
company. An alternate study by M-Lab stresses this point of degradation in latency due to interconnections
between providers.
So far all of the above data is desktop-only, so let's focus on mobile latency numbers from AT&T:
AT&T core network latency by technology:
LTE: 40-50 ms
HSPA+: 50-200 ms
HSPA: 150-400 ms
EDGE: 600-750 ms
GPRS: 600-750 ms
To put those latencies in context, also consider the bandwidth available by technology:
Generation Data rate
2G 100-400 Kbit/s
3G 0.5-5 Mbit/s
4G 1-50 Mbit/s
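To make those rates concrete, here is how long a 1 MB page takes to transfer at a rough midpoint of each generation's range (the midpoints are our own illustrative picks, not measured values):

```python
# Transfer time for a 1 MB page at illustrative per-generation rates.
page_bytes = 1_000_000
midpoints_mbps = {"2G": 0.25, "3G": 2.5, "4G": 25.0}  # assumed midpoints
transfer_s = {gen: page_bytes * 8 / (mbps * 1_000_000)
              for gen, mbps in midpoints_mbps.items()}
# 2G: 32 s, 3G: 3.2 s, 4G: 0.32 s
```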
Since we are talking about mobile data, let's look at the overall path
a packet has to traverse to get service over the Internet:
[Diagram: the mobile data path - Radio Access (2G/3G/4G/Wi-Fi, sectors per cell site), Backhaul, Core Network, and the wireline Internet backbone (carriers, spectrums in use)]
As you can see, it’s the confluence of a lot of technologies that helps
bring information to your fingertips.
While the middle mile was the bottleneck in the
desktop world, in the mobile world the Radio
Access Network (RAN) is the new bottleneck for
mobile browsing. More specifically, let's take
a look at the capacity of a typical cell tower:
Each sector has 2 carriers and each carrier has 3.6 Mbps of capacity, so with
three sectors a major-market cell tower has 21.6 Mbps of capacity. Typically
these towers are provisioned and operate at 75% utilization, which means we have
only 16.2 Mbps to use. The average voice call takes 12 Kbps, which means a
maximum of 1,350 calls can be supported before quality degrades. Add the average
fat webpage to this mix and you are looking at a maximum of 8 webpages holding
the tower at capacity. This is the new bottleneck in the whole mobile user
experience, and there is not much a user or content publisher can do about it,
except send the most important bits of the application in the first few packets.
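The tower arithmetic is easy to check. Note two assumptions on our part: the three-sector count is implied by 21.6 / (2 x 3.6), and the ~2 Mbps per in-flight page load is our own figure, chosen to match the 8-page claim:

```python
# Back-of-the-envelope check of the cell-tower capacity figures.
sectors = 3                    # implied by 21.6 Mbps total (assumption)
carriers_per_sector = 2
mbps_per_carrier = 3.6
tower_mbps = sectors * carriers_per_sector * mbps_per_carrier  # 21.6 Mbps
usable_mbps = round(tower_mbps * 0.75, 1)                      # 16.2 Mbps at 75% utilization
max_calls = int(usable_mbps * 1000 / 12)                       # 12 Kbps per voice call -> 1350
max_pages = int(usable_mbps // 2)                              # ~2 Mbps per page load (assumption) -> 8
```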
CONCLUSIONS
So we’ve talked about a lot of different elements in this article. To summarize, we saw that web content is getting
richer while device diversity is exploding, and that we cannot pin our hopes on faster lanes, given that network
access times have been stagnant for over a decade (and will likely continue to be so in the near future). All these
forces combine to create a new pressure point on the RAN, which is already at capacity.
While I have mostly dwelt on the problems in this post, the solution space for mobile web applications is to
• make things smaller (without losing quality of experience)
• move them closer to the user (in the browser, not some server in the cloud, given the RTT)
• cache them as long as we can (existing solutions do not)
• load application resources intelligently (most significant resources first)
Sounds easy enough, yet it requires a very different approach to application delivery – one that we at Instart
Logic, with our Software-Defined Application Delivery platform, are focused on.
REFERENCES
• HTTP Archive
• Cisco VNI
• Ilya Grigorik's blog
• Android Fragmentation
• Why Mobile Apps are Slow
• More Bandwidth Doesn't Matter Much
• M-Lab Interconnection Study
• FCC Broadband America
• Netflix ISP Speed Index
• PingER Project
• High Performance Browser Networking
• Bessemer Cloud