
SA 101911 Quantifying the User Experience - REPORT PREVIEW


[Cover artwork: a collage of air-interface metrics (MCS, SNR, MIMO RI, BLER, CQI, RSSI, RSRQ, RSCP, CINR, etc.), a U.S. smartphone market-share chart (Apple, Samsung, Motorola, HTC, RIM, LG, Palm, Kyocera, Nokia, Other), and time-series plots of primary, secondary, and total DC-HSDPA PHY layer throughput (kbps) for an HSPA demo phone, Cat 8 (1Rx)]

How not to spend your summer vacation

Redefining Research

THE MOTHER OF ALL NETWORK BENCHMARK TESTS

October 19, 2011, Vol. 7 No. 13 PREVIEW

VOLUME 2 PREVIEW EDITION


Initial feedback received in the first few hours after releasing Volume 1 of our three-part series included the following comments:

“This is a real differentiator…well worth it.” –Marketing Director, infrastructure supplier

“Very detailed and above and beyond my expectations.” –CTO, tier two operator

“Great report as always.” –CTO, leading mobile operator

“High impact stuff…Nobody even comes close to delivering this kind of data. Nice, nice move.” –Marketing Director, infrastructure supplier

“YOU kick $@tt…This is great stuff.” –Managing Director, financial institution

“Great report again.” –Senior Technical Fellow, infrastructure supplier

“Great stuff…well thought out and most of all – fair.” –Network Services, leading mobile operator

This document contains a highly redacted executive summary, a complete table of contents, and our test methodology (Chapter 8 from the main report) for a Signals Ahead research product that we published on October 18th. All three reports were done with the support of Accuver, who provided us with access to its complete set of network benchmarking tools and post-processing software. Volume 1 of our special three-part series was published in late September and Volume 3 will be published in November. Additionally, this report preview provides a summary of past topics that we have covered in Signals Ahead and a list of likely topics that we plan to tackle in the coming months. The 66-page report contains 53 figures and tables, with many of the figures and tables consisting of multiple parts. This report can be purchased separately for $1,495 or it is included with any paid corporate subscription to Signals Ahead.


In Volume 1 (Network and Technology Performance) of our special three-part series of reports, we provided a thorough examination of how the leading next-generation mobile broadband wireless technologies performed, as exemplified by deployments in North America. To a large degree, the results contained in that report were somewhat contrived since no one in their right mind drives around large metropolitan areas at all times of the day and night trying to suck as much capacity from the network as possible. If for no other reason, this activity is prohibitively expensive in many markets due to existing rate plans and data usage caps.

Instead, what really matters to consumers and [hopefully] operators is the impact of the underlying network/technology on the typical user experience while consuming mobile data. In the infancy of a new commercial network launch, technology neophytes and pundits flock to their speed testing website du jour (e.g., www.speedtest.net) and bask in the glory of never-before-seen data rates. However, this activity and the use of more sophisticated network benchmark studies quickly give way to more typical user behavior.

Long-term sustained FTP downlink transfers get replaced by interactions with video delivery services (Netflix, YouTube, etc.), content providers (e.g., Apple's iTunes) or service enablers (e.g., Google's email). Uplink FTP transfers give way to more challenging scenarios involving the transmission of video content from the mobile device to the network (e.g., Skype video). Ongoing 32-byte ping tests to a test server get replaced by basic web browsing of popular mobile web sites, where the throughput potential of the network and in-network latency take a backseat to the end-to-end latency of the entire connection and how quickly the mobile device and the host site can communicate with each other and pass data back and forth.

In Volume 2 (Quantifying the User Experience), we expose readers to a large number of never-before-seen test results that we collected during the data collection portion of this exercise – how we spent our summer vacation. Although our conclusions from the first report remain intact, there are several nuances to those observations which need to be considered. To summarize, throughput still matters, but only to a certain point, since most applications require only a fraction of the throughput offered by today's mobile broadband networks; in many cases the chokepoint in the broadband connection is the wired Internet/host server and not the operator's next-generation mobile broadband network. Latency, including both the time to connect to the network and start receiving data, as well as the subsequent interactions between the device and the network/host server, can never be too low. In that regard, too much attention is paid to the "my pipe is bigger than your pipe" mantra when the real focus should be on delivering a more compelling user experience, regardless of how it is achieved.
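To make the throughput-versus-latency argument concrete, consider the following minimal Python model. It is our illustration, not the report's methodology: it assumes a hypothetical page of 500 KB spread across 40 HTTP objects and charges one serial round trip per object, whereas real browsers parallelize requests.

def page_load_time_s(throughput_mbps, rtt_ms, total_kb=500.0, n_requests=40):
    # Raw transfer time: page payload divided by the available throughput.
    transfer_s = (total_kb * 8.0) / (throughput_mbps * 1000.0)
    # Latency cost: one round trip per HTTP object (a simplification).
    latency_s = n_requests * (rtt_ms / 1000.0)
    return transfer_s + latency_s

for mbps in (1.0, 5.0, 20.0, 50.0):
    for rtt in (40.0, 150.0):
        print(f"{mbps:4.0f} Mbps @ {rtt:3.0f} ms RTT -> {page_load_time_s(mbps, rtt):5.2f} s")

With these placeholder numbers, raising throughput from 20 Mbps to 50 Mbps saves barely a tenth of a second, while cutting the round-trip time from 150 ms to 40 ms saves more than four seconds; this is the crossover behavior the report quantifies with measured data.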

Our ability to collect and analyze the user experience data would not have been possible without the support of Accuver, who allowed us to use its suite of network drive test tools, including its recently released XCAL-MO network benchmarking tool and XCAL-M drive test solution, as well as its XCAP post-processing software to analyze the results. We have used the Accuver tools several times in the past for various Signals Ahead reports and we have grown quite fond of their capabilities and their ease of use. In particular, we were able to capitalize on many of the unique features associated with the suite of test tools when we conducted our user experience tests. We look forward to leveraging other innovative capabilities associated with their suite of tools that pertain to quantifying the user experience in the near future.

For the first time, we are now able to offer their tools with our services for commission-based projects on behalf of operators, government regulators, vendors, trade associations or other interested parties on a global basis. We look forward to discussing such opportunities with anyone who is interested.

As we discuss in Chapter 3 of this report, operators should re-evaluate how they promote and market their services. To some extent it is a bit too late, since they've sold consumers on the concept that speed is all that matters. However, it is possible to replace "speed" with "fast," and then equate fast with the time required to connect to the network, as well as the time required to access and/or download content. As an analogy, operators don't advertise whether they use half-rate AMR, full-rate AMR or wideband AMR, but they do advertise voice quality, call completion rates, dropped call rates, etc. In a similar fashion, handset manufacturers don't promote lower current drain; instead they advertise longer battery lives. Those metrics truly define the user experience.

In this context, an improved user experience can be achieved with a better handset design and an improved operating system GUI that make it easier to enter commands or navigate the web, just as it can be achieved by improvements to the network. Further, operators do not necessarily need to rush out and deploy the latest technology on the market to make their networks "faster." Instead, these operators could take an existing technology and simply make it better and more efficient. This goal could be achieved through a combination of network optimization initiatives, by taking full advantage of the network's capabilities, including the widespread use of handsets that support the same set of robust features (e.g., Enhanced_FACH, CPC, CELL_PCH/URA_PCH, Cat 14, etc.), and by moving popular content closer to the edge of the network.

Chapter 2 of this report contains the introduction and Chapter 3 contains the key conclusions and observations from our tests; the latter is by far the most important chapter in this report. The remaining chapters and the appendix document the basis for our conclusions. Chapter 4 presents results from numerous tests involving HTTP sessions where we analyze how long it takes to load a popular web page for a given combination of throughput and latency, not to mention other considerations. Chapter 5 explains why throughput doesn't [always] matter and Chapter 6 explains why latency [almost] always matters. Chapter 7 focuses on bandwidth-intensive applications, such as Gmail, iTunes, Netflix, YouTube and to a lesser extent Skype Video. Chapter 8 provides our test methodology and Chapter 9 provides some closing remarks. The Appendix provides results from a few test scenarios that didn't find their way into the main body of the report.

Our three-part series of reports is included with a subscription to Signals Ahead or it can be purchased on an individual basis – the former option is far more economical since it includes at least 14 additional Signals Ahead reports. A summary of Volume 1 and Volume 3 follows in the subsequent paragraphs.


Volume 1 (Network and Technology Performance)
Next-generation network technologies are not created equal. On paper, there exist meaningful performance differences, some of which are due to channel bandwidth considerations, but also to the underlying technology itself. Further, operator deployment philosophies and the maturity of the solutions can have an overarching impact on the results.

As operators around the globe struggle to make crucial strategic decisions regarding their network technology evolution, it is imperative that they fully understand and appreciate the potential of these technologies as well as their limitations. While this report is intended to address the needs of operators worldwide by focusing on the performance of the technologies, as a secondary feature it also provides valuable insight into the performance of each major network deployment in the United States. Everyone claims that they have the best network, but only one operator can be right.

Specific topics addressed in Volume 1 include, but are not limited to, the following:

➤➤ Application and/or Physical Layer Throughput

➤➤ Mean, median and CDF plots

➤➤ Geo plots of throughput for all test scenarios using Google Earth

➤➤ Technology comparisons, including

➤➤ DC-HSDPA versus LTE (2x10MHz) with 2x2 MIMO

➤➤ LTE (2x20MHz) versus LTE (2x10MHz)

➤➤ Mobile WiMAX versus HSPA+, LTE and DC-HSDPA

➤➤ DC-HSDPA versus HSPA+

➤➤ EV-DO versus LTE, HSPA+, etc

➤➤ Single User Spectral Efficiency Results

➤➤ Throughput normalized for channel bandwidth and duplex scheme

➤➤ Does LTE with MIMO really outperform narrow bandwidth solutions

➤➤ LTE network performance with multiple devices

➤➤ DC-HSDPA and HSPA+ devices in the same 10MHz channel allocations

➤➤ Side-by-Side operator network coverage maps for drive routes used in each market

➤➤ Downlink throughput

➤➤ Uplink throughput

➤➤ Network Latency

➤➤ Variance based on time of day and network loading

➤➤ LTE network deployment philosophies (LTE cell site density relative to the legacy network) and their implications for coverage and capacity

➤➤ AT&T HSPA+ versus AT&T LTE

➤➤ Verizon Wireless EV-DO versus Verizon Wireless LTE

➤➤ AT&T LTE versus Verizon Wireless LTE

➤➤ Mobile WiMAX (2500MHz) versus LTE (700MHz)


Volume 2 (Quantifying the User Experience)
Although mobile operators, industry pundits and most well-informed consumers understand the notion that a higher megabit-per-second throughput is preferable, the typical consumer is generally clueless when it comes to understanding what these obscure marketing messages really mean for the mobile Internet experience. Most operators recognize that they need to move away from the "my pipe is bigger than your pipe" marketing mentality, but it is easier said than done.

Further, it is readily apparent that the capabilities of these next-generation networks frequently exceed the requirements of the application and/or the capabilities of the Internet itself. Very few applications and/or web site servers support high double-digit megabit-per-second throughput. Instead, it may actually be the combination of relatively high throughput and low network latency – offset by transport latency – that really defines the user experience. But to what degree do these relationships provide the most benefit to the user?

Mobile video, be it YouTube or Netflix, is driving mobile data growth and the capabilities of next-generation networks will only serve as an impetus to even higher data usage. The perceived quality of the video playback also matters, not only for consumers, but also for mobile operators, content owners, and video hosting services. As higher resolution video formats with higher encoding rates become more mainstream, this issue becomes even more important, especially if the performance of next-generation networks fails to keep up with the requirements of the video content.
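As a concrete, hypothetical illustration (ours, not the report's data), the short Python sketch below models the point at which streaming video fails: once sustained throughput drops below the video's encoding rate, playback survives only as long as the player's buffer can cover the shortfall.

def seconds_until_stall(buffer_s, encode_kbps, throughput_kbps):
    # If the network keeps up with the encoding rate, playback never stalls.
    if throughput_kbps >= encode_kbps:
        return float("inf")
    # The buffer drains at (encode - throughput) / encode seconds of video
    # per wall-clock second of playback.
    drain_rate = (encode_kbps - throughput_kbps) / encode_kbps
    return buffer_s / drain_rate

# Placeholder numbers: a 30 second buffer and a 2.5 Mbps stream fed at a
# sustained 2 Mbps stall after 150 seconds of playback.
print(seconds_until_stall(30.0, 2500.0, 2000.0))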

This report is critical for operators trying to understand how to market their broadband wireless service offering as well as how they should prioritize their network optimization activities in order to achieve the best possible user experience for their subscribers. In addition to mobile operators, this report provides invaluable insight to application developers and content providers who require a greater appreciation for how network performance characteristics impact the user experience.

Specific topics addressed in Volume 2 may include, but are not limited to, the following:

➤➤ Quantifying the user experience based on HTTP web page download times

➤➤ Popular websites, including Yahoo, CNN, iTunes, Amazon, YouTube, etc.

➤➤ Results down to the millisecond, based on device/chipset signaling messages

➤➤ Network/technology comparisons

➤➤ Comparisons based on network loading – same location over a 12-15 hour period of time

➤➤ Determining if perceived differences in network/technology performance have more to do with network loading than the actual capabilities of the network/technology itself

➤➤ Determining how the combination of throughput and latency impact the HTTP web page download time results

➤➤ 3-axis plot, showing maximum achievable throughput and network latency versus webpage load time and required throughput

➤➤ Which matters most – latency or throughput

➤➤ Does DC-HSDPA really offer a quantifiable benefit over HSPA+

➤➤ Determining the crossover point when higher throughput becomes irrelevant

➤➤ Quantifying the user experience based on downloading Google email attachments

➤➤ Quantifying the user experience based on downloading video and audio content from iTunes

➤➤ Determining the crossover point when higher throughput becomes irrelevant

➤➤ Netflix video streaming requirements

➤➤ Determining the chokepoints in the network (from end user to the original source of the content), how they vary as a function of loading, and their impact on the user experience


Volume 3 (Detailed Performance Analysis)
In our third and final installment we delve much deeper into the KPIs that we captured with the Accuver suite of drive test tools. As we have witnessed in the past, there are discernible differences in how each vendor implements a technology. Frankly, some vendors do a much better job than their peers.

Given that we collected network performance data in a number of key markets and that we knowingly included virtually every single vendor + technology combination that exists in North America, this report provides invaluable competitive intelligence for vendors, subsystem suppliers and mobile operators.

Further, by peeling back the layers of the proverbial technology onion it becomes possible to gain a greater appreciation for how each technology delivers its results.

Specific topics addressed in Volume 3 may include, but are not limited to, the following:

➤➤ Modulation Utilization (QPSK, 16QAM, and 64QAM) – by primary and/or secondary carriers as appropriate

➤➤ MIMO RI 1 and RI 2 – how MIMO performs at 700MHz

➤➤ CQI (average and median) – by primary and secondary carriers

➤➤ HS-PDSCH Codes (average, % > 10, distribution) – by primary and secondary carriers

➤➤ HS-SCCH Scheduling Success Rate – by primary and secondary carriers

➤➤ Average PHY Layer Served Rate – by primary and secondary carriers

➤➤ Maximum PHY Layer Scheduled/Served Rate – DC-HSDPA only

➤➤ UL Transmit Power (average and median)

We will also leverage the capabilities of the XCAP-M tool to analyze these KPIs by several different means, potentially including, but not limited to the following:

➤➤ MAC-HS Throughput versus RSCP scatter plot

➤➤ MAC-HS Throughput versus Reported CQI Values scatter plot

➤➤ Reported CQI Values versus 64QAM Availability scatter plot

➤➤ MAC-HS Throughput versus Cell ID (real time)

➤➤ MAC-HS Throughput versus # of Assigned HS-PDSCH Codes (real time)

➤➤ MAC-HS Throughput (primary, secondary, and combined)

➤➤ HSPA+ MAC-HS Throughput versus DC-HSDPA MAC-HS Throughput (primary, secondary and combined)

➤➤ CINR versus RSSI (scatter plot and real-time)

➤➤ Throughput versus CINR (scatter plot and real-time)

➤➤ Throughput versus Cell ID (e.g., handover performance)

➤➤ CINR versus Modulation Scheme and/or MCS

➤➤ UL Transmit Power versus Cell ID

➤➤ UL Throughput versus Transmit Power

➤➤ Modulation Scheme (antenna 1 and antenna 2)


Potential topics for the coming year include:

➤➤ The Mother of all Network Drive Tests (LTE, DC-HSDPA, HSPA+, Rev A and Mobile WiMAX)

➤➤ The challenges of delivering video in a mobile network

➤➤ How network performance (throughput and latency) impacts the user experience

➤➤ Embedded modules/netbooks

➤➤ TD-LTE network performance benchmark results

➤➤ CoMP and LTE Advanced

➤➤ Going Green – financial implications and challenges

➤➤ Smartphone signaling implications and LTE

➤➤ LTE chipset performance benchmark test results

➤➤ The impact of Type 3i receivers on UE performance (includes chipset benchmark tests of leading solutions)

➤➤ Whatever happened to IMS?

➤➤ LTE Americas

➤➤ 4G World and GSMA MAC

➤➤ HSPA+ (MIMO) network performance benchmark results

➤➤ The impact of latency

➤➤ Public Safety Options with 700MHz

➤➤ EV-DO Rev B network performance benchmark results

➤➤ LTE chipset landscape


License costs (pre-publishing and post-publishing prices)
Volume 1 – available now! Network and Technology Performance ($1,995)
Volume 2 – available now! Quantifying the User Experience ($1,495)
Volume 3 – Detailed Performance Analysis ($1,295, $1,495)
Full report – all 3 volumes ($3,300, $3,995)

Contact information
You may call us at +1 (510) 273-2439 or email us at [email protected] and we will contact you for your billing information or respond to any further inquiries that you may have. Subscription information for our Signals Ahead research product, which includes these reports, can be found on the last page of this report. You can also visit our website at www.signalsresearch.com or write us at

Signals Research Group, LLC
10 Ormindale Court
Oakland, CA 94611

Pre-order your report license now (included as part of a Signals Ahead subscription)

Coming Soon!

In Case You Missed It

➤➤ 9/27/11 "The Mother of all Network Benchmark Tests" In Volume 1 of a special three-part series of reports we leverage the capabilities of Accuver's complete suite of drive test tools to analyze the performance of all next-generation wireless technologies as exemplified by operator deployments in North America. In addition to looking at basic parameters, such as throughput and latency, the report looks at end-user spectral efficiency and the different network deployment philosophies of the leading operators (e.g., their cell grid density).

➤➤ 7/6/11 “Mobile Platforms – the center of mobile networks” In this report we discuss the recent trends impacting the various mobile platforms that exist and what has transpired since our piece from three years ago on Web 2.0. We address the state of the mobile platforms that exist, provide our thoughts on the current and future prospects and look at the various trends that are driving the industry.

➤➤ 6/8/2011 “United we stand, fragmented we fail” We provide the key takeaways from the LTE World Summit, held in Amsterdam. Spectrum fragmentation tops the list of key LTE topics, although a growing focus on the use of 1800MHz for those operators that have access to it is encouraging. VoLTE, or the lack thereof, is still on everyone’s minds, but in the interim CSFB isn’t even working as promised. Finally, there was a lot of talk about Mobile WiMAX, but the emphasis seemed to be on how to best move away from the technology and adopt TD-LTE.

➤➤ 5/16/2011 “HetNet: When big cells and small cells collide” In addition to covering the basics of heterogeneous networks (HetNet), a key LTE-Advanced (R10) feature, we present a compelling series of analytical studies which demonstrate the need for macro network offload, starting as early as 2015. We also get into the technical details of how HetNet works, including discussions on eICIC, ABS and the importance of interference cancellation in the handset. Finally, we look at what is being done with legacy 3G femtocells to limit interference-related problems that they introduce, both with the macro network and between each other.

➤➤ 4/26/2011 "Chips and Salsa XIII: Now Seasoned with Soy Sauce" In collaboration with Spirent Communications we provide results from the industry's only independent performance benchmark tests of HSPA+/HSPA chipsets. In the most recent benchmark study we tested 16 different device configurations, representing chipsets from 9 different suppliers, including new entrants, such as Samsung (HSPA+), Intel (HSPA+), MediaTek and HiSilicon. We provide the results, based on a total of 42 HSPA+ test scenarios and 26 HSPA test scenarios.

➤➤ 3/15/2011 "Looking beyond HSPA+: keeping up with the Joneses" Based on interviews with 3GPP member companies and a thorough review of 3GPP submissions, we offer an in-depth look at the future of HSPA+ (Release 11 and beyond). Ultimately, we conclude that many of the features that are being incorporated into LTE will find their way into HSPA+, thus blurring the performance differences between the two technologies. Latency and the impact of new features on legacy devices are two areas of prime importance where HSPA+ could face challenges relative to LTE.

➤➤ 1/12/2011 “DC-HSDPA: Double the Bandwidth, Double the Pleasure, Part II” In collaboration with Accuver, who provided us with its XCAL-W drive test tool and XCAP-W post-processing software, we provide results and analysis from an extensive drive test of Telstra’s DC-HSDPA network. We compare DC-HSDPA with HSPA+ performance in a number of side-by-side tests.

➤➤ 1/12/2011 "DC-HSDPA: Double the Bandwidth, Double the Pleasure, Part I" In collaboration with Accuver, who provided us with its XCAL-W drive test tool and XCAP-W post-processing software, we provide results and analysis from an extensive drive test of Telstra's DC-HSDPA network. We compare DC-HSDPA with HSPA+ performance in a number of side-by-side tests.

➤➤ 12/10/2010 "Can you schedule me now?" In collaboration with Sanjole we examine how some of today's commercial LTE eNodeBs allocate network resources when serving multiple devices. We determine that while LTE may deliver a compelling user experience, it is largely due to an empty network and the large channel bandwidths, and that further improvements are necessary if LTE is going to support multiple users in an efficient manner.

➤➤ 12/3/2010 "A Perspective from LTE Americas and the GSMA Mobile Asia Congress" We provide and discuss various data points which stem from our participation at the LTE Americas event in Dallas and the GSMA MAC event in Hong Kong. We provide an LTE market update, including TD-LTE, discuss the debate about a smart or dumb pipe strategy, and the impact of smartphones and social networking services, including the use of cloud computing, intelligent networks, network offloading and data caching.

Come join us!
4G World, Chicago, IL Oct. 24-27
LTE Americas, Dallas, TX Nov. 8-9 Invited Speaker
RCR Wireless OC Event, Orange County, CA Nov. 10 Invited Speaker

Consumer Electronics Show, Las Vegas, NV Jan. 10-13
Mobile World Congress, Barcelona, Spain Feb. 20-23


Table of Contents of Volume 2
1.0 Executive Summary ……………………………………………………………………………………………………………………………………… 3

2.0 Introduction ……………………………………………………………………………………………………………………………………………… 10

3.0 Key Conclusions and Observations …………………………………………………………………………………………………………… 12

4.0 HTTP Web Browsing Analysis …………………………………………………………………………………………………………………… 20

4.1 Downtown San Francisco – July 29th, 0430 - 1730 ……………………………………………………… 20

4.1.1 Throughput and Latency Measurements (Downtown San Francisco – July 29th, 0430 – 1730) ……………………… 21

4.1.2 HTTP Web Page Load Time Results (Downtown San Francisco – July 29th, 0430 – 1730) ……………………………… 23

4.2 HTTP Web Page Load Time Results (SRG Headquarters – Deep Within the Oakland Hills – August 8, 1800-2200) ………………………………………………… 27

4.3 Downtown Oakland – August 14th, 1330 - 1530 ………………………………………………………… 28

4.3.1 Throughput and Latency Measurements (Downtown Oakland – August 14th, 1330-1530) …………………………… 28

4.3.2 HTTP Web Page Load Time Results (Downtown Oakland – August 14th, 1330-1530) …………………………………… 29

4.4 Santa Clara – August 29th – August 31st ………………………………………………………………… 31

4.4.1 Throughput and Latency Measurements (Santa Clara – August 29th – August 31st) …………………………………… 31

4.4.2 HTTP Web Page Load Time Results (Santa Clara – August 29th – August 31st) …………………………………………… 32

4.5 EV-DO Rev A and LTE ………………………………………………………………………………………… 33

4.5.1 Throughput, Latency and HTTP Web Page Load Time Results – EV-DO Rev A and LTE (Kansas City, June 17th, 1700-1900) ………………………………………………………………………………………………………………… 33

4.5.2 Throughput, Latency and HTTP Web Page Load Time Results – EV-DO Rev A and LTE (SRG Headquarters, August 8th, 1800-2200) …………………………………………………………………………………………………… 34

5.0 Why Throughput Doesn’t [Always] Matter ……………………………………………………………………………………………… 36

6.0 Why Latency [Almost] Always Matters …………………………………………………………………………………………………… 40

6.1 Web Page 101 …………………………………………………………………………………………………… 40

6.2 Tracing the "Real Latency" to Various Web Sites – by Network ………………………………………… 43

6.3 Connection Time Latency ……………………………………………………………………………………… 46

6.4 Low Latency and Its Impact on the User Experience ………………………………………………… 47

7.0 The User Experience and High-Bandwidth Applications …………………………………………………………………………… 49

7.1 iTunes and Gmail ………………………………………………………………………………………………… 49

7.2 YouTube and Netflix …………………………………………………………………………………………… 52

7.3 Time of Day Considerations …………………………………………………………………………………… 56

7.4 Skype Video ……………………………………………………………………………………………………… 57

8.0 Test Methodology …………………………………………………………………………………………………………………………………… 58

9.0 Concluding Remarks ………………………………………………………………………………………………………………………………… 61

10.0 Appendix 1 ……………………………………………………………………………………………………………………………………………… 62


Index of Figures
Figure 1. Downlink Application Layer Median Throughput – San Francisco (0430 – 1730) … 21
Figure 2. Downlink Application Layer Maximum Throughput – San Francisco (0430 – 1730) … 21
Figure 3. Network Latency – San Francisco (0430 – 1730) … 22
Figure 4. Amazon Web Page Load Times – by throughput + latency combinations … 24
Figure 5. Yahoo Web Page Load Times – by throughput + latency combinations … 24
Figure 6. iTunes Web Page Load Times – by throughput + latency combinations … 26
Figure 7. iTunes Web Page Load Times – by operator … 26
Figure 8. CNN Web Page Load Times – by throughput + latency combinations (SRG HQ) … 27
Figure 9. Throughput and Latency Results – by Operator (Oakland) … 28
Figure 10. CNN Web Page Load Times – by throughput + latency combinations (Oakland) … 29
Figure 11. Amazon Web Page Load Times – by throughput + latency combinations (Oakland) … 30
Figure 12. Yahoo Web Page Load Times – by throughput + latency combinations (Oakland) … 30
Figure 13. Throughput and Latency Results – by operator (Santa Clara) … 31
Figure 14. Yahoo Web Page Load Times – by throughput + latency combinations (Santa Clara) … 32
Figure 15. Amazon Web Page Load Times – by throughput + latency combinations (Santa Clara) … 32
Figure 16. Throughput and Latency Results – EV-DO Rev A and LTE (Kansas City) … 33
Figure 17. Web Page Load Times – by throughput + latency combinations (Kansas City) … 34
Figure 18. Throughput and Latency Results – EV-DO Rev A and LTE (SRG Headquarters) … 35
Figure 19. Wikipedia Web Page Load Times – by throughput + latency combinations (SRG Headquarters) … 35
Figure 20. iTunes Web Page Load Times – by throughput + latency combinations (SRG Headquarters) … 35
Figure 21. DC-HSDPA MAC Layer Throughput – HTTP Web Browsing (San Francisco, July 29th, 0500) … 36
Figure 22. DC-HSDPA MAC Layer Throughput – HTTP Web Browsing (San Francisco, July 29th, 1700) … 37
Figure 23. HTTP Application Layer Throughput – by technology (Oakland, August 14th) … 38
Figure 24. LTE Application Layer Requirements – an entire user experience test script … 39
Figure 25. The Theoretical Impact of Varying Throughput and Latency on Web Page Load Times … 41
Figure 26. Connection Latency – by technology … 46
Figure 27. The Impact of Ultra Low Latency and Low Throughput – YouTube … 47
Figure 28. The Impact of Ultra Low Latency and Low Throughput – PornHub … 47
Figure 29. The Impact of Ultra Low Latency and Low Throughput – MSN … 48
Figure 30. Gmail and iTunes Throughput Requirements – Clearwire's 2x20MHz LTE Network … 49
Figure 31. Application Layer Throughput while using iTunes – with and without a background FTP session … 50
Figure 32. Gmail and iTunes Throughput Requirements – T-Mobile's HSPA+ Network (San Francisco, July 29th) … 51
Figure 33. Gmail and iTunes Throughput Requirements – AT&T's LTE Network (Houston, September 7th) … 51
Figure 34. YouTube and Netflix Throughput Requirements – Verizon Wireless LTE Network (Oakland, August 14th) … 52
Figure 35. Netflix Throughput Requirements – T-Mobile DC-HSDPA Network (Oakland, August 14th) … 53
Figure 36. YouTube and Netflix Throughput Requirements – AT&T LTE Network (Houston, September 7th) … 53
Figure 37. Netflix Throughput Requirements – T-Mobile DC-HSDPA Network (Santa Clara, August 29th, 2100) … 55
Figure 38. iTunes Throughput Requirements as a Function of Time of Day – Verizon Wireless LTE Network (San Francisco, July 29th) … 56
Figure 39. Skype Downlink Throughput Requirements – Device 1 and Device 2 on the Verizon Wireless LTE Network (Oakland, August 18th) … 57
Figure 40. Skype Uplink Throughput Requirements – Device 1 and Device 2 on the Verizon Wireless LTE Network (Oakland, August 18th) … 57
Figure 41. XCAL-M Drive Test Tool in Action – DL performance … 58
Figure 42. XCAL-M Drive Test Tool in Action – UL performance … 59
Figure 43. iTunes Web Page Load Times – by throughput + latency combinations (SRG Headquarters) … 62
Figure 44. iTunes Web Page Load Times – by operator (SRG Headquarters) … 62
Figure 45. Amazon Web Page Load Times – by operator (SRG Headquarters) … 63
Figure 46. Yahoo Web Page Load Times – by throughput + latency combinations (SRG Headquarters) … 63
Figure 47. LTE Application Layer Throughput – HTTP Web Browsing (San Francisco, July 29th, 0601) … 64
Figure 48. LTE Application Layer Throughput – HTTP Web Browsing (San Francisco, July 29th, 1635) … 64
Figure 49. Gmail Throughput Requirements as a Function of Time of Day – Verizon Wireless LTE Network (San Francisco, July 29th) … 65

Index of Tables
Table 1. URL Analysis – Size and HTTP Requests … 42
Table 2. HTTP Activity Count – by URL … 42
Table 3. Real World Latency Calculations to Target URLs – by network/technology … 43
Table 4. Trace Route Results – by operator, technology and URL … 44


8.0 Test Methodology
For the drive tests that we have been conducting this summer we primarily used the Accuver XCAL-MO network benchmarking tool along with the Accuver XCAL drive test tool to collect the underlying performance indicators and to conduct the user experience tests. For purposes of our tests, we "limited" the XCAL-MO to only four dongles – one dongle for each network/technology that we wanted to test. In theory we could have installed multiple dongles for each network/technology.

We used the Accuver XCAP post-processing tool to analyze the data and to help us create the figures which appear in this summary report. Thanks to a combination of the powerful tool and countless hours spent on the road, we are convinced that we have witnessed network performance – both good and bad – that would have otherwise not been observed.

Figure 41 and Figure 42 illustrate a typical user display that we used when collecting the data. We have included two figures since they also help prove that we observed downlink data rates greater than 61Mbps (Figure 41) and uplink data rates in excess of 23Mbps (Figure 42).


Figure 41. XCAL-M Drive Test Tool in Action – DL performance

Source: Accuver XCAL and SRG


Each operator provided us with at least two dongles, although in the case of operators, such as Clearwire, with multiple network/technology deployments (e.g., 2x20MHz LTE and 1x10MHz Mobile WiMAX), we received multiple dongles.

Unlike the drive test results which we presented in Volume 1, the results shown in this report were collected from a stationary position. We took this approach in order to reduce the variability in performance that can occur when moving throughout the network. Eliminating, or at least reducing, this variability was important since the objective of the user experience tests was to determine various user experience metrics for a known set of inputs – primarily throughput and latency.

Testing in each market took place from as early as 4AM local time until the early evening hours. We also did a lot of user experience testing during the dead of the night when we suspect the networks were wide open. Since we were using test equipment we had the ability to determine whether or not network loading was impacting the results. Suffice it to say that in the early morning hours network loading was not a concern for any of the networks. Later in the day, network loading impacted the performance of certain networks/technologies while it was not even a consideration with other networks/technologies. We take this phenomenon into consideration when doing our analysis.

Each user experience test involving HTTP web browsing consisted of the following series of tests:

➤➤ A 60 second multi-session FTP test to determine the maximum throughput potential of the network at the time.

➤➤ A 60 second latency test in which we pinged a local server to determine the median latency.

➤➤ Accessing a series of popular websites and downloading their contents with an Internet Explorer browser. In order to ensure a degree of consistency, we downloaded each web page up to 50 times and then used the median download time. The download time calculations leveraged very detailed information captured by the Accuver drive test tools.

➤➤ A 60 second multi-session FTP test to determine the maximum throughput capabilities of the network at the time. The results from this test were combined with the results from the first FTP test to determine the median throughput. It was our assumption that this potential throughput was relatively constant throughout the entire test, although this assumption may not have always been correct.
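For readers who want a rough feel for this sequence, the following minimal Python sketch reproduces the latency and page-load measurements in spirit. It is our reconstruction for illustration only: SRG's actual numbers came from the Accuver tooling and device signaling, and the target URL, repeat counts, and the use of a TCP connect time as a latency proxy are our placeholder choices. It requires the third-party requests package.

import socket
import statistics
import time

import requests

def median_connect_ms(host="www.yahoo.com", port=443, count=20):
    # Approximate network latency with the TCP handshake time (roughly one
    # round trip); the report instead pinged a local server for 60 seconds.
    samples = []
    for _ in range(count):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples)

def median_page_load_s(url="https://www.yahoo.com", repeats=50):
    # Download the page repeatedly and keep the median, mirroring the
    # up-to-50-downloads approach described above. This times only the base
    # HTML document; a real browser also fetches the embedded objects.
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        requests.get(url, timeout=30)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

print(f"median connect latency: {median_connect_ms():.1f} ms")
print(f"median page load time: {median_page_load_s():.2f} s")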

Figure 42. XCAL-M Drive Test Tool in Action – UL performance

Source: Accuver XCAL and SRG



For the high-bandwidth application tests, such as iTunes, Netflix and YouTube, we conducted a multi-session FTP test to determine the maximum throughput potential of the network. This test was followed by interacting with these applications (watching a video, downloading an email attachment) and using the Accuver tools to record the underlying interactions between the mobile device and the network.

Like all Signals Ahead reports, we received no sponsorship or funding from the participating companies, in order to maintain our independence. As such, we foot the bill for all of our travel expenses, not to mention an inordinate amount of time and effort collecting the data and writing this series of reports.

We also could not have done this report without the support of Accuver, who provided us with its suite of drive test tools and post-processing software. SRG takes full responsibility for the analysis and conclusions that are documented in this report and in our forthcoming report that analyzes and compares technology- and vendor-specific performance characteristics.

Michael Thelander
Michael Thelander is the CEO and Founder of Signals Research Group. In his current endeavor he leads a team of industry experts providing technical and operator economics analysis for clients on a global basis. Mr. Thelander is also responsible for the consultancy's Signals Ahead research product, including its widely acclaimed "Chips and Salsa" series of reports that focus on the wireless IC industry.

Previously, Mr. Thelander was an analyst with Deutsche Bank Equity Research. Prior to joining Deutsche Bank, Mr. Thelander was a consultant with KPMG (now known as BearingPoint) and a communications officer with the United States Army. Mr. Thelander has also published numerous articles for leading trade publications and engineering journals throughout his career.

He has been an invited speaker at industry conferences around the world and he is frequently quoted by major news sources and industry newsletters, including The Economist, The Wall Street Journal, Investor's Business Daily, Reuters, Bloomberg News, and The China Daily. Mr. Thelander earned a Master of Science in Solid State Physics from North Carolina State University and a Master of Business Administration from the University of Chicago, Graduate School of Business.


Please note the following disclaimer: The views expressed in this newsletter reflect those of Signals Research Group, LLC and are based on our understanding of past and current events shaping the wireless industry. This report is provided for informational purposes only and on the condition that it will not form a basis for any investment decision. The information has been obtained from sources believed to be reliable, but Signals Research Group, LLC makes no representation as to the accuracy or completeness of such information. Opinions, estimates, projections or forecasts in this report constitute the current judgment of the author(s) as of the date of this report. Signals Research Group, LLC has no obligation to update, modify or amend this report or to otherwise notify a reader thereof in the event that any matter stated herein, or any opinion, projection, forecast or estimate set forth herein, changes or subsequently becomes inaccurate. If you feel our opinions, analysis or interpretations of events are inaccurate, please feel free to contact Signals Research Group, LLC. We are always seeking a more accurate understanding of the topics that influence the wireless industry. Reference in the newsletter to a company that is publicly traded is not a recommendation to buy or sell the shares of such company. Signals Research Group, LLC and/or its affiliates/investors may hold securities positions in the companies discussed in this report and may frequently trade in such positions. Such investment activity may be inconsistent with the analysis provided in this report. Signals Research Group, LLC seeks to do business and may currently be doing business with companies discussed in this report. Readers should be aware that Signals Research Group, LLC might have a conflict of interest that could affect the objectivity of this report. Additional information and disclosures can be found at our website at www.signalsresearch.com. This report may not be reproduced, copied, distributed or published without the prior written authorization of Signals Research Group, LLC (copyright ©2011, all rights reserved by Signals Research Group, LLC).

Signals Ahead Subscription
The Signals Ahead newsletter is available on a subscription basis. We offer four distinct packages that have been tailored to address the needs of our corporate users. The Group License includes up to five users from the same company. The Global License is the most attractive package for companies that have several readers since it is offered to an unlimited number of employees from the same organization. Finally, the Platinum package includes the Global License, plus up to five hours of analyst time. Other packages are available.

Corporate Rates (18 issues)
❒ Group License ($3,995) ❒ Global License ($7,995) ❒ Platinum ($9,495)
Payment Terms
❒ American Express ❒ Visa ❒ MasterCard Credit Card # Exp Date / /
❒ Check Check Number
❒ Purchase Order PO Number
Name: Title: Affiliation: Phone: ( )
Mailing Address:

Mailing Address
Signals Research Group, LLC – ATTN: Sales
10 Ormindale Court
Oakland, CA 94611

Our fax number is (510) 338-1284.

Alternatively, you may contact us at (510) 273-2439 or at [email protected] and we will contact you for your billing information. We will not process your payment until after the trial subscription period is completed.

Terms and Conditions: Any copying, redistributing, or republishing of this material, including unauthorized sharing of user accounts, is strictly prohibited without the written consent of SRG.