SERVICE DIFFERENTIATION USING MANAGED SLEEP IN CSMA/CA NETWORKS
By
CHRISTOPHER JAMES WEITZEN
A Thesis Submitted to the Graduate Faculty of
WAKE FOREST UNIVERSITY
in Partial Fulfillment of the Requirements
for the Degree of
MASTER OF SCIENCE
in the Department of Computer Science
August, 2009
Winston-Salem, North Carolina
Approved By:
Errin W. Fulp, Ph.D., Advisor
Examining Committee:
Stan J. Thomas, Ph.D., Chairperson
V. Paul Pauca, Ph.D.
Acknowledgments
Thank you Dr. Fulp for your astounding patience. To the Wake Forest Computer
Science department, I have appreciated and enjoyed these two years. Thank you to
Kristin for helping me proofread during my busiest hours. Thanks to my mother and
father for their endless love and support. Special thanks to Junko and Kenkichi for
providing such a loving home away from home.
Abstract
Performance of multimedia and real-time applications such as streaming video
and voice over IP is easily degraded by a high network traffic load. Current Internet
infrastructure provides no special assistance for these sensitive applications. One
solution to this problem is to introduce Quality of Service (QoS) into today’s most
common networking protocols.
Many different mechanisms for bringing quality of service to computer networks
have been proposed, but less research has been aimed specifically at the data link
layer of wireless networks. Implementing quality of service in wireless networks is an
especially daunting challenge, due to the dynamic medium of wireless communication.
Wireless networks are becoming an increasingly important component of modern
computer networks, so this shortcoming cannot be ignored.
This thesis proposes a new protocol called Carrier Sense Multiple Access with Col-
lision Avoidance and Managed Sleep, or CSMA/CA/MS. This protocol uses admis-
sion control techniques to bring better than best effort quality of service to wireless
networks. A lightweight microeconomic-based pricing model makes use of wireless
power management features to determine which stations have access to the network
resources. The entire system is practical to implement based on current wireless
technologies and protocols. CSMA/CA/MS is shown to provide dynamic service dif-
ferentiation while maintaining high utilization of network resources.
Table of Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
List of Figures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Chapter 1 Wireless Networking and the Need for Quality of Service . . . . 5
1.1 Wireless Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.1 Ad Hoc Networks . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.2 Infrastructure Networks . . . . . . . . . . . . . . . . . . . . . 6
1.2 802.11’s Location in Network Models . . . . . . . . . . . . . . . . . . 7
1.2.1 The OSI Model . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Multimedia Applications Networking Requirements . . . . . . . . . . 10
1.4 Thesis Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Chapter 2 Medium Access Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1 Channel Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.1 Static Allocation . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.2 Dynamic Allocation . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Wireless Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.1 Dynamic Physical Medium . . . . . . . . . . . . . . . . . . . . 16
2.2.2 Collision Detection and the Hidden Node Problem . . . . . . . 17
2.2.3 CSMA/CA . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.4 RTS/CTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Chapter 3 Quality of Service in Wireless Networks . . . . . . . . . . . . . . . . . 21
3.1 Performance Requirements . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2 Deterministic and Statistical QoS . . . . . . . . . . . . . . . . . . . . 23
3.3 Mechanisms for Wireless QoS . . . . . . . . . . . . . . . . . . . . . . 24
3.3.1 Service Differentiation . . . . . . . . . . . . . . . . . . . . . . 24
3.3.2 Admission Control . . . . . . . . . . . . . . . . . . . . . . . . 26
Chapter 4 Wireless Network Power Management . . . . . . . . . . . . . . . . . . 28
4.1 On vs. Off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2 Role of the Access Point . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3 Periods of Sleep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Chapter 5 Microeconomic-Based Differentiated Service . . . . . . . . . . . . 31
5.1 Economic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.1.1 Pricing Models . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.2 Pricing Models for Allocating Network Resources . . . . . . . . . . . 32
5.2.1 Pricing in Ad Hoc Networks . . . . . . . . . . . . . . . . . . . 33
5.2.2 Power Management via Pricing . . . . . . . . . . . . . . . . . 33
Chapter 6 Microeconomic-Based Wireless Service Differentiation. . . 35
6.1 A New Approach: Pay to Wake Up . . . . . . . . . . . . . . . . . . . 35
6.2 Dynamic Price-Based Service Differentiation . . . . . . . . . . . . . . 36
6.2.1 Budget . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.2.2 Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.2.3 Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.3 Analysis of Pricing Sleep . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.3.1 Single Class Case . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.3.2 Multiple Class Case . . . . . . . . . . . . . . . . . . . . . . . . 40
6.4 Access and Bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Chapter 7 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7.1 Simulation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7.2 Standard CSMA/CA . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.3 Single Class Performance . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.4 Multiple Class Performance . . . . . . . . . . . . . . . . . . . . . . . 46
7.4.1 Drop Off Scenario . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.5 CSMA/CA/MS Performance . . . . . . . . . . . . . . . . . . . . . . . 48
7.6 Impact of Distance from AP . . . . . . . . . . . . . . . . . . . . . . . 51
7.7 Results Summarized . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Chapter 8 Further Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
List of Figures
1.1 The IEEE 802 family [1]. . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 The OSI reference model [3]. . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Frequency division multiplexing [3]. . . . . . . . . . . . . . . . . . . . 13
2.2 Basic algorithm of CSMA/CD. . . . . . . . . . . . . . . . . . . . . . 15
2.3 Stations 1 and 3 are hidden nodes. . . . . . . . . . . . . . . . . . . . 17
2.4 CSMA/CA interframe spacing [1]. . . . . . . . . . . . . . . . . . . . . 18
2.5 Change in the NAV with three data fragments [1]. . . . . . . . . . . . 19
3.1 Application QoS requirements [3] . . . . . . . . . . . . . . . . . . . . 22
4.1 Wireless Transceiver Power Consumption [14] . . . . . . . . . . . . . 29
7.1 Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.2 CSMA/CA total network utilization shown in (a). Individual stations' utilization for CSMA/CA shown in (b). . . . . . . . . . . . . . . . . . . . . 44
7.3 Network utilization overhead introduced by a 50% wake up probability. 45
7.4 Total class network utilization with two static classes shown in (a). Network utilization of individual stations with two static classes shown in (b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.5 Average class delay with one 0% chance sleep class and one 50% chance sleep class. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.6 Total class network utilization during dropoff scenario for two static classes shown in (a). Network utilization during dropoff scenario for individual stations of two static classes shown in (b). . . . . . . . . . . 48
7.7 Total class network utilization during dropoff scenario for price-based CSMA/CA/MS shown in (a). Network utilization during dropoff scenario for individual stations of price-based CSMA/CA/MS shown in (b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.8 Network utilization after high class dropoff. . . . . . . . . . . . . . . 49
7.9 Change of price and class chance of normal wake up in the dropoff scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
7.10 Impact of distance from AP in CSMA/CA shown in (a). Impact of distance from AP in CSMA/CA/MS shown in (b). Both graphs show only low priority stations, and the excessively distant station is represented by a solid line. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7.11 Utilization and delay results for each experiment. . . . . . . . . . . . 52
Chapter 1: Wireless Networking and the Need
for Quality of Service
Wireless networks have become ubiquitous, and represent a primary method of
connection for many users [1]. Although adequate for some network applications,
wireless networks are unable to provide the service guarantees required by applica-
tions such as streaming video or live voice chat. This thesis describes a protocol to
improve upon these drawbacks. An understanding of several core concepts of networking, wireless communication, power management, and Quality of Service (QoS) is necessary to fully appreciate the new protocol.
This chapter introduces IEEE 802.11, the wireless standard this thesis focuses on.
We will describe the difference between ad hoc and infrastructure wireless networks.
Then we will show how 802.11 relates to layered network models as well as some of
the most commonly used networking standards.
1.1 Wireless Networks
The protocol proposed by this thesis aims to improve some of the shortcomings in-
volved in wireless networking. There are many different technologies used for wireless
communication. For example, 802.16, or WiMAX, is sometimes used in metropolitan area networks, but is rarely used to connect end users. 802.15.1, commonly referred to as Bluetooth, is used by small devices for short-range communication. Modern
cellphones rely on wide-area wireless cellular telephone networks for transmission of
data.
The most commonly used wireless computer network standard is IEEE 802.11.
In recent years, 802.11 networks have seen greatly increased usage and have become a vital component of modern networking [2]. Due to their pervasiveness, this thesis focuses exclusively on 802.11 from this point forward.
Before addressing shortcomings, we will first examine some of the fundamental
aspects of 802.11 networking. The basic building block of a wireless network is a
group of stations that can communicate with each other. There are two main types
of wireless networks: independent networks and infrastructure networks. Independent
networks are typically referred to as ad hoc networks.
Infrastructure networks are defined by the use of an Access Point (AP). An AP is a
device that allows mobile stations in a wireless network to connect to a wired backbone
network. The AP also plays a large role in managing communication among all
participating stations. We will now briefly highlight some of the important differences
between these two types of wireless networks.
1.1.1 Ad Hoc Networks
Ad hoc, or independent networks, do not rely on an AP for communication, and are
instead decentralized. A gathering of two or more 802.11 stations in communication
range can create an independent network. Generally, ad hoc networks are set up
temporarily for a short period of time with a specific purpose. Typical examples
include business meetings, quick file sharing, and LAN parties. Although ad hoc
networks are useful for many circumstances, infrastructure networks are in far more
common use [1].
1.1.2 Infrastructure Networks
The service area of an infrastructure network is defined by distance from the AP.
Every mobile station must be within communication range of the AP in order to
participate in the network. Distance between individual stations does not place a
restriction on communication. No communication takes place directly between two
stations. All communication must be relayed through the AP. For one station to send a message to another station in the same service area, two hops are required: first from the sending station to the AP, then from the AP to the receiving station.
While the centralized nature of the AP may seem like a waste of transmission
capacity, it actually provides some significant benefits. Since mobile stations do not
communicate directly with their peers, they do not need to maintain a list of distance
relationships for other stations on the network. Also, APs are able to assist all stations
in the network with power management. We will examine these power management features in greater detail in Chapter 4 of this thesis.
1.2 802.11’s Location in Network Models
The IEEE 802 family is a series of specifications for a variety of local and metropoli-
tan area network technologies. A second number designates an individual specification
in the 802 series. 802.3 defines the commonly used Ethernet standard. Unsurpris-
ingly, 802.11 networks, commonly referred to as wireless networks, are a part of the
IEEE 802 family as well. The IEEE 802 family tree is illustrated in Figure 1.1.
Figure 1.1: The IEEE 802 family [1].
IEEE 802.11 is only a small part of the network architecture needed for comput-
ers to intercommunicate. Several other components are needed, such as connection
management, message routing, and congestion control. Given this complexity, net-
work architecture is commonly described using one of two reference models, OSI and
TCP/IP. Although this thesis only concerns one layer of the model, it is still important
to understand how the different layers interact to provide communication.
1.2.1 The OSI Model
The Open Systems Interconnection (OSI) Reference Model provides an abstract
description for layered communications. It was developed by the International Or-
ganization for Standardization to promote international protocol standardization [3].
Breaking network communication into layers provides a powerful degree of abstraction and compartmentalization. As shown in Figure 1.2, each of the seven layers groups together conceptually similar functions.
Figure 1.2: The OSI reference model [3].
The most attractive benefit of the layered approach comes into play with new
protocols or modifications. Instead of rewriting every aspect of the communication
standard, a single layer can be targeted for improvement. Wireless networks provide
an excellent example of this. The IEEE 802 specifications focus on the two lowest layers of the OSI or TCP/IP model: the physical and data link layers.
The purpose of the physical layer is to transmit raw bits over a physical medium.
This layer’s major design concerns include mechanical, electrical, timing and physical
aspects of the medium [3]. The primary goal of the physical layer is to ensure that
when one station sends a 0 bit, the receiving station properly receives a 0 bit, not a 1
bit, and vice-versa. In 802.11 networks, a trailing letter (such as b, g, or n) indicates
a difference in only the physical layer.
The data link layer allows communication between stations to be established on
a single link [3]. Data is broken up into data frames and sent sequentially. Acknowl-
edgment frames are used to confirm the correct receipt of each frame. This layer is
also concerned with flow control and frame synchronization. 802.11 splits the data
link layer into two sub-layers: Logical Link Control (LLC) and Media Access Control
(MAC).
The goal of the network layer is to allow heterogeneous networks to be interconnected [3]. This is accomplished by controlling how packets are routed from source
to destination. Routes can be static, usually based on rarely changed routing tables.
Routes can also be determined dynamically to reflect current network conditions.
This layer can also employ congestion control to alleviate traffic bottlenecks.
The transport layer provides error recovery and flow control [3]. It receives data
from higher layers and ensures that the data correctly arrives at the destination
station. This is the first layer to provide true end-to-end communication. A program
on the source station can communicate with the destination station program. In
the lower layers communication takes place between neighboring machines, not the
ultimate source and destination station.
The session layer allows communication between users on different stations by
creating sessions between them [3]. Dialog control keeps track of whose turn it is
to transmit, preventing two stations from performing critical operations at the same
time. This layer also provides checkpointing, which keeps track of the communication
state between two stations. Checkpointing enables the recovery of long transmissions
in the case of a crash or error. The session layer is also responsible for allowing
stations to gracefully close sessions.
The presentation layer ensures that transmitted information uses proper syntax
and semantics. The goal of the layer is to provide independence from any differences
in data representation. Standard encoding for data types such as names, dates, and
currency can be enforced in this layer. This allows higher-level data structures to be
defined and thus easily exchanged between stations [3].
The application layer is the highest layer of the OSI model. This layer interacts with software applications to enable a network communication component.
1.3 Multimedia Applications Networking Requirements
It is important to note that current Internet service provides best effort delivery.
This characteristic holds for the most commonly used 802.11 variants. Best effort
networking means that there are no guarantees that data will be successfully trans-
mitted from sender to receiver. Best effort networking is also notable for its lack of
support for quality of service. The lack of a built-in mechanism for quality of service
can cripple multimedia applications and other real-time services [2]. Applications
such as streaming video and voice over IP require low delay and jitter to provide an
acceptable user experience. Quality of service allows the network to give a different
priority to different applications, transmission flows, or users.
As we will explore in the next two chapters, providing quality of service is espe-
cially difficult in wireless networks. Wireless networks are rising as a primary method
for computer networking, so these shortcomings are an increasingly important issue
[1]. Support for service guarantees has been extensively studied at layers 3 and 4,
but limited work has been proposed at layer 2, especially in wireless networks. Un-
fortunately, effective guarantees are only possible if all layers provide proper support
[2].
1.4 Thesis Contributions
The goal of this thesis is to address the lack of quality of service in wireless networking. We propose a protocol called Carrier Sense Multiple Access with Collision Avoidance and Managed Sleep, or CSMA/CA/MS. It is a new system that provides statistically based quality of service at layer 2.
CSMA/CA/MS enhances the current functionality of 802.11 networks to provide
a lightweight mechanism for high network utilization and service differentiation in
dynamic environments. This proposal requires very limited changes to the CSMA/CA
protocol. Fundamental to any layer 2 quality of service scheme is channel allocation.
In the next chapter we will discuss the mechanisms used for channel allocation in
both wired and wireless networks.
Chapter 2: Medium Access Control
The enhanced protocol proposed by this thesis achieves service differentiation by
expanding upon the Media Access Control (MAC) sublayer of the 802.11 standard.
In both the OSI and TCP/IP models MAC is a sublayer of layer 2, the data link layer.
The MAC sublayer plays a very important role in networks with a shared physical
medium, also referred to as a channel. Since several stations will attempt to use the
same channel, determining which station will send data next is a critical issue. This
thesis focuses on granting the right stations at the right time access to the shared
channel, so proper understanding of the MAC sublayer is of paramount importance.
As multimedia and other rich applications play an increasingly important role in common Internet usage, there is a very real need for improved quality of service strategies.
One way to improve network quality of service is to enhance the ways in which sta-
tions access a shared medium. The goal of this thesis is to propose a new technique
for channel allocation in CSMA/CA wireless networks. In the following sections we discuss the mechanisms used for channel allocation in both wired and wireless networks.
2.1 Channel Allocation
Deciding which station will have access to a shared channel is called channel allocation. If two stations attempt to send data at the same time, a collision will occur, ruining both sets of data. The goal of the MAC sublayer is to allow all stations to communicate on the shared medium in a way that avoids collisions and increases overall performance. Channel allocation is often an important part of providing Quality of Service (QoS), which we will explore in greater detail in Chapter 3.
If the channel is not properly shared among the stations on the network, the
quality and performance of all stations’ communications will suffer. The MAC can
divide up a shared channel either statically or dynamically. We will briefly review
static approaches. As dynamic channel allocation is highly relevant to this thesis, we
will cover it in more detail.
2.1.1 Static Allocation
The classic example of static channel allocation is the telephone system. Multiple
conversations can be sent over a single phone line, but over different frequencies.
This is called Frequency Division Multiplexing (FDM). The frequency spectrum is
divided into n different equal sized portions, as shown in Figure 2.1, where n is 3.
Time division multiplexing (TDM) is a similar approach, where instead of dividing
the frequency spectrum, n time slots are created.
Figure 2.1: Frequency division multiplexing [3].
There are two significant drawbacks to static channel allocation. If more than
n users attempt to use the shared channel, some users will be denied access. The
opposite scenario reveals an equally undesirable drawback. If fewer than n users are using the channel, portions of the bandwidth are wasted. This is inherently inefficient, and leads to poor performance for static channel allocation under common conditions.
This is a significant problem with computer networking traffic, which is rarely
evenly distributed, and often comes in large bursts. Since static channel allocation
does not provide an adequate solution, we will now move on to dynamic approaches.
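The inefficiency of static allocation can be made concrete with a small sketch (the `fdm_utilization` helper is hypothetical, written only for illustration; the three-band split mirrors Figure 2.1):

```python
def fdm_utilization(n_bands: int, active_users: int) -> float:
    """Fraction of total channel capacity actually used under static FDM.

    Each of the n_bands sub-channels is permanently assigned to one user,
    so capacity in idle bands is wasted, and users beyond n_bands are
    denied access entirely.
    """
    served = min(active_users, n_bands)  # excess users are locked out
    return served / n_bands

# With 3 bands but only 1 active user, two-thirds of the capacity sits idle.
print(fdm_utilization(3, 1))  # one third of capacity used
print(fdm_utilization(3, 5))  # 1.0, but two users are denied access
```

The same arithmetic applies to TDM with n time slots in place of n frequency bands.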
2.1.2 Dynamic Allocation
There are three general approaches for dynamic medium access control: round
robin, reservation, and contention. The goal of dynamic MACs is typically to allow
multiple stations to better utilize the channel, while avoiding collisions. Dynamic
allocation boasts greater performance when compared to static methods, but requires additional complexity.
IEEE 802.5 Token Ring is an example of a round robin MAC protocol. The overall
idea is that stations will take turns sending data in order to avoid collisions. When no
station is transmitting data, stations circulate a small frame referred to as the token
around the network. If a station wants to send data it must wait until it possesses
the token. As only one station at a time can possess the token, and thus send data,
this approach has the benefit of completely eliminating collisions.
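The turn-taking above can be sketched as follows (a simplified model of one token circulation, ignoring token-holding timers and frame contents):

```python
def token_ring_round(stations_with_data: set, ring_order: list) -> list:
    """One circulation of the token around the ring.

    A station may transmit only while it holds the token, so at most one
    station sends at a time and collisions are impossible by construction.
    """
    transmissions = []
    for station in ring_order:             # token visits stations in ring order
        if station in stations_with_data:
            transmissions.append(station)  # send one frame, then pass the token
    return transmissions

# Stations 2 and 5 have queued frames; the token circulates through 1..5.
print(token_ring_round({2, 5}, [1, 2, 3, 4, 5]))  # [2, 5]
```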
In reservation approaches stations request a slot of time before sending data. Once
a time slot is granted, only that one station may transmit data until the time expires.
Fiber Distributed Data Interface (FDDI) and Distributed Queue Dual Bus (DQDB)
are both examples of networks that use reservation-based allocation [3].
In contention-based MAC networks, stations compete to determine which will send
data. 802.3, generally referred to as Ethernet, is one of the most common contention-
based protocols. Due to its wide use and importance we will examine Ethernet in
greater detail in the following section.
Ethernet
The specific MAC protocol Ethernet uses is CSMA/CD (Carrier Sense Multiple
Access with Collision Detection). Carrier sense means that before a station attempts
to transmit data, it will first listen to make sure the channel is not currently in use.
Multiple access means that multiple stations are able to send and receive data on the
shared channel.
Collision detection in CSMA/CD means that a transmitting station can detect
when another station transmits data and a collision occurs. When a station detects a
collision it sends a jamming signal, telling all stations to cease transmission. Stations
begin an exponential back-off process, and then resume transmission attempts. This
procedure is presented in the flowchart shown in Figure 2.2.
Figure 2.2: Basic algorithm of CSMA/CD.
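The flow of Figure 2.2 can be summarized in a short sketch. The `channel_busy` and `transmit` callbacks are hypothetical stand-ins for carrier sensing and collision detection, and the jamming signal is noted but not modelled:

```python
import random

def csma_cd_send(channel_busy, transmit, max_attempts: int = 16) -> bool:
    """Sketch of the CSMA/CD transmission flow.

    channel_busy() reports the carrier-sense result; transmit() returns
    True when the frame is sent without a collision being detected.
    """
    for attempt in range(max_attempts):
        while channel_busy():      # carrier sense: wait for an idle channel
            pass                   # 1-persistent: send as soon as it frees
        if transmit():             # collisions are detected during the send
            return True            # frame delivered
        # Collision: a jamming signal is sent (not modelled here), then the
        # station backs off a random number of slot times before retrying.
        slots = random.randrange(2 ** min(attempt + 1, 10))
        _ = slots  # a real station would now wait `slots` slot times
    return False                   # give up after max_attempts collisions

# A channel that is always idle and never collides succeeds immediately.
print(csma_cd_send(lambda: False, lambda: True))  # True
```

Capping the back-off exponent (here at 10) mirrors the truncated binary exponential back-off used by Ethernet.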
Collision detection allows the system to quickly terminate transmission of damaged
frames. This saves time and bandwidth, increasing overall utilization. One of the
drawbacks of Ethernet is that it uses 1-persistent carrier sensing [3]. This means that
if a station wants to use a busy channel, that station will wait until the channel is
free and then immediately attempt transmission. Many stations may also be waiting,
and will all attempt transmission as soon as the channel’s transmission period ends.
This leads to a high number of collisions occurring during the contention period that
occurs whenever a frame finishes transmission.
2.2 Wireless Challenges
802.11 networks, commonly referred to as wireless networks, use a protocol called
Carrier Sense Multiple Access with Collision Avoidance, or CSMA/CA. This protocol
was designed to provide an experience similar to that of CSMA/CD Ethernet. There
are several key differences between the two protocols that we will explore in this
section.
When comparing Ethernet to wireless, only the physical and data link layers differ.
The reasons for the differences at the physical layer are easy to understand. Whereas Ethernet networks transmit data over a cable, wireless networks transmit radio waves through the air. These differences give rise to several challenges that wireless
protocols must overcome. The details for this section were extracted from [1, 4].
2.2.1 Dynamic Physical Medium
A properly installed wired network tends to act in a highly predictable manner.
Once put in place, cables typically operate the same way day in and day out. The
physical medium rarely experiences any changes, and performance and reliability
will generally remain at a constant level. However, this cannot be said for wireless
networks.
By their very nature, the physical medium of wireless networks is dynamic, and
thus less predictable. Radio wave transmission is susceptible to a number of propagation problems. Radio waves can bounce off objects, may fail to penetrate walls, and can be affected by many forms of interference. Even simple household appliances
such as cordless phones, microwaves, and baby monitors can have significant negative
effects on wireless network performance.
Due to the rather unreliable nature of wireless transmission, a station cannot make
the assumption that the transmitted data will be properly received. To counter this,
CSMA/CA relies on a positive acknowledgment system to make sure data properly
reaches its destination.
2.2.2 Collision Detection and the Hidden Node Problem
The hidden node problem is a classic problem associated with wireless networks.
When compared to wired networks, wireless networks have fuzzier boundaries. A
station may not always be in range to communicate with every other station on the
wireless network. This gives rise to the hidden node problem, illustrated in Figure
2.3.
Figure 2.3: Stations 1 and 3 are hidden nodes.
As shown in Figure 2.3, station 2 is able to communicate with both stations 1
and 3, while stations 1 and 3 are not within range to communicate directly. Station
3 is hidden from station 1, as is station 1 from station 3. If stations 1 and 3 were to
attempt to simultaneously transmit to station 2, a collision would occur and station
2 would be unable to receive either of the transmissions. This problem can occur in
both ad hoc and infrastructure networks. Station 2 in Figure 2.3 could instead be an
AP and the hidden node problem could still occur.
To make matters worse, the collision described in the above scenario would only
be detectable by station 2. The collision was local to station 2, so both stations
1 and 3 would have no idea that a collision occurred. Wireless transceivers are al-
most always half-duplex, meaning that they cannot receive and transmit at the same
time. The possibility of hidden stations makes complete knowledge of network activity impossible. To combat this drawback, 802.11 networks make use of
CSMA/CA.
2.2.3 CSMA/CA
CSMA/CA modifies the CSMA/CD algorithm to better suit the conditions of
wireless networks. CSMA/CA introduces four different interframe spacing durations:
SIFS (Short Interframe Space), PIFS (PCF Interframe Space), DIFS (Distributed Interframe Space), and EIFS (Extended Interframe Space). Figure 2.4 shows the relationship of these different interframe spacings.
Figure 2.4: CSMA/CA interframe spacing [1].
SIFS is the shortest spacing, and allows atomic operations to seize the medium
before any other type of frame. It also allows a station to finish transmitting any
fragmented data once it has begun. PIFS is part of the rarely used Point Coordina-
tion Function ruleset of 802.11 [1]. DIFS defines the minimum wait time preceding
contention based services. Once the DIFS ends any station has the right to attempt
to begin transmission of data. The EIFS is used when errors or collisions occur in
frame transmission.
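The priority ordering of these interframe spaces follows directly from their lengths: a frame class that waits less seizes the medium first. As a concrete illustration, using the 802.11b DSSS timing values (SIFS of 10 µs and a 20 µs slot time; other physical layers use different durations):

```python
# Illustrative interframe space values for the 802.11b DSSS physical layer.
SLOT_US, SIFS_US = 20, 10
PIFS_US = SIFS_US + SLOT_US        # PCF traffic waits one slot longer than SIFS
DIFS_US = SIFS_US + 2 * SLOT_US    # contention-based traffic waits longer still
# EIFS is longer again, derived from the time needed to transmit an ACK.

# Shorter waits grant higher-priority access to the medium.
print(SIFS_US < PIFS_US < DIFS_US)  # True
```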
As described in the previous section, true collision detection is not possible in
wireless networks. Therefore, a virtual carrier sensing mechanism called Collision
Avoidance is used. This is accomplished through the use of the Network Allocation Vector (NAV). Most CSMA/CA frames include a duration field, which reserves the channel for a set amount of time in microseconds. Transmitting stations set the duration field to indicate the amount of time they expect to use the channel.
Figure 2.5: Change in the NAV with three data fragments [1].
Figure 2.5 shows how the transmission of three data fragments affects the NAV
of other stations on the network. When a listening station encounters a frame with a duration field, that station updates its NAV counter and then begins counting down until reaching zero. So long as the NAV is nonzero the medium is
assumed to be busy. When the NAV is zero, the medium is assumed to be idle.
This virtual carrier sensing approach does not completely eliminate the hidden node
problem, but it does work well for power conservation, which we will explore in greater
detail in Chapter 4.
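The NAV bookkeeping described above can be sketched as a small class (a minimal illustration, not the full 802.11 state machine):

```python
class Station:
    """Minimal sketch of virtual carrier sensing via the NAV."""

    def __init__(self):
        self.nav = 0  # remaining channel reservation, in microseconds

    def hear_duration(self, duration):
        # A heard duration field only extends the NAV, never shortens it.
        self.nav = max(self.nav, duration)

    def tick(self, elapsed):
        # The NAV counts down toward zero as time passes.
        self.nav = max(0, self.nav - elapsed)

    def medium_idle(self):
        # Nonzero NAV means the medium is assumed busy.
        return self.nav == 0
```

For example, a station that hears a frame reserving 300 microseconds treats the medium as busy until its NAV counts back down to zero.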
2.2.4 RTS/CTS
802.11 also provides support for a contention and reservation hybrid system known
as RTS/CTS. In a RTS/CTS network a station wishing to transmit data first sends
a Request to Send (RTS) frame. The destination station will then respond with a
Clear to Send (CTS) frame. Any other station that hears a RTS or CTS frame will
update their NAV based on the frame’s duration field. The starting station is then
free to transmit data for the requested duration, free of worry over collisions. If the
transmitting station does not receive a CTS frame, it will enter the exponential back
off mode.
RTS/CTS extends the virtual carrier sensing of normal 802.11. Since stations
that were in range to receive either the RTS or the CTS will not attempt to
transmit for the requested duration, there is little chance of a collision occurring
during transmission. This effectively solves the hidden node problem.
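Why the handshake silences hidden nodes can be shown with a small sketch: any third party that hears either control frame defers, so a node inside the receiver's range but outside the sender's is still silenced by the CTS (station names here are hypothetical):

```python
def deferring_stations(heard_rts, heard_cts, sender, receiver):
    """Sketch of RTS/CTS virtual carrier sensing: every third-party station
    that hears either control frame sets its NAV and stays silent for the
    reserved duration."""
    return (set(heard_rts) | set(heard_cts)) - {sender, receiver}

# A sends to B. C is a hidden node: out of A's range, but inside B's range.
silenced = deferring_stations(heard_rts={"B"}, heard_cts={"A", "C"},
                              sender="A", receiver="B")
assert silenced == {"C"}   # the hidden node defers for the reservation
```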
Despite these benefits, most networks do not use RTS/CTS. Support is generally
limited to expensive high-end hardware. The handshake process adds a great degree
of overhead to a network's transmissions. For most general use wireless networks,
RTS/CTS is not worthwhile, or even an option [1].
The CSMA/CA protocol used by 802.11 attempts to overcome the challenges
inherent in wireless networking. However, quality of service is not provided by the
protocol. In the next chapter we will examine why quality of service is difficult in
wireless networks, and some of the proposed solutions.
Chapter 3: Quality of Service in Wireless
Networks
Current Internet service is best effort. This means that all Internet data are treated
the same. No preferential treatment is given to any particular user or data flow, and
there are no guarantees provided during transmission. The network will simply make
its best effort to successfully transmit each packet from sender to destination. If a
packet fails to reach its destination, it must be retransmitted.
Unfortunately, this impartial approach has its drawbacks, especially with bursty
or sensitive traffic. Multimedia applications in particular are vulnerable to the nature
of best effort transmission. Example applications include streaming video, voice over
IP (VoIP), and interactive multimedia. In these applications, if a packet takes too
long to arrive, that packet is often useless and must be discarded. It is entirely up
to the sender and receiver to compensate for the unreliable best effort nature of the
Internet.
Current Internet infrastructure provides no special assistance for these sensitive
applications. The highly sought after fix for this problem is known as Quality of
Service (QoS). QoS enhances the network with the ability to give different priority to
different applications, transmission flows, or users. The best effort approach used by
most networks today does not provide any QoS, so a new solution is needed.
An important aspect of QoS is that all the layers must participate in maintain-
ing the proper allocation requirements. Most research has focused on the network
layer, which provides guarantees for connections traveling across multiple networks.
Although network layer QoS mechanisms are critical for true end-to-end QoS, they
require QoS guarantees to also be provided by the data link layer. In this chapter we
will describe common QoS strategies, and examine some of the wireless QoS research
that focuses on the data link layer.
3.1 Performance Requirements
QoS is essentially an exercise in resource allocation. There is a finite amount of
network resources, while network demand is ever changing. Without QoS, a high
capacity data flow can saturate a channel, preventing other stations from gaining
access. A QoS mechanism must dictate how resources on the network are shared
among stations. Limited network resources must be allocated in a fair and efficient
manner. This controls the performance experienced by individual stations and the
network as a whole.
QoS takes several performance requirements into consideration. These require-
ments include delay, jitter, packet loss, and bandwidth [3]. A summary of the QoS
demands of several applications is shown in Table 3.1.
Table 3.1: Application QoS requirements [3]

Application         Delay    Jitter    Bandwidth
E-mail              Low      Low       Low
FTP                 Low      Low       Medium
Web access          Medium   Low       Medium
Streaming Audio     Low      High      Medium
Streaming Video     Low      High      High
Telephony           High     High      Low
Videoconferencing   High     High      High
Delay measures the amount of time for a packet to successfully reach its desti-
nation. Multimedia applications are very sensitive to high delay. The live audio
associated with VoIP is an obvious example of this limitation. If an audio packet has
a delay of more than a few hundred milliseconds, the packet’s information will be too
old to contribute to the live conversation.
Jitter, or delay variation, describes the amount of variance experienced in the
delay of received packets. The amount of jitter can change from packet to packet.
When packets are routed across the Internet they are not all guaranteed to follow the
same path, resulting in varying degrees of delay for each packet. This can also result
in packets being received out of order. High jitter and out-of-order delivery have a
noticeable negative effect on live audio and video applications.
Packet loss measures the number of packets dropped due to lack of buffer space.
This often occurs when a router becomes overwhelmed with too many incoming pack-
ets. Obviously, when packets are dropped, the station that would have received those
packets experiences a drop in quality. Since some degree of packet loss is inevitable
in best effort networks, many applications employ error-recovery mechanisms to com-
pensate [3].
3.2 Deterministic and Statistical QoS
QoS can be broken down into two general categories: deterministic or statisti-
cal [3]. Deterministic approaches provide actual QoS guarantees, but are fixed over
time. These approaches often yield overly conservative results, and can even waste
significant amounts of network resources. An example of deterministic QoS would
be granting a video stream a static resource allocation based on the peak video data
rate.
Statistical QoS tends to be more complex than deterministic mechanisms, but
can provide more efficient allocation of network resources. Statistical mechanisms
can react to changing network conditions. In an attempt to accommodate more
users and provide higher utilization of resources, statistical methods may reallocate
resources at certain periods of time, even with the potential of over-allocation.
Service differentiation is an important form of statistical QoS. In service
differentiation, certain stations or data flows are given higher priority than others. This
allows a high priority class to experience improved QoS at the expense of the low
priority class. This thesis implements a service differentiation-based statistical QoS
approach.
The absence of either deterministic or statistical QoS results in best effort net-
works. As multimedia traffic continues to grow, best effort networking will no longer
suffice; an alternative is required. QoS is a difficult challenge in computer networks,
especially wireless ones. The additional challenges associated
with wireless networks described in the previous chapter make implementing QoS in
wireless networks especially difficult.
3.3 Mechanisms for Wireless QoS
There are a variety of techniques used to implement QoS, each better suited for
certain conditions. Most QoS techniques have been implemented at the network layer,
leaving a need for more data link layer solutions. Unless all layers provide support,
QoS efforts will be hindered.
As this thesis deals with the data link layer, we will now provide an overview of
research that has been conducted targeting that layer. Wireless QoS mechanisms can
be broken down into two major categories: service differentiation and admission control [5].
3.3.1 Service Differentiation
Service differentiation is one of the more common approaches to statistical
QoS [5]. These approaches do not make actual guarantees for a fraction of the network
resources. Instead, certain stations or data flows are given a higher priority compared
to their peers. One way to implement these systems is to give high priority stations an
advantage when accessing the shared medium during the contention window preceding
the sending of any data. In this section we will review two examples of this technique:
802.11e and p-persistent CSMA/CA.
802.11e
802.11e is one of the most prominent mechanisms for QoS in wireless networks.
It is an IEEE standard specifically designed to bring QoS to wireless
networks. Like most QoS strategies, its enhancements are achieved by modifying
the MAC sublayer. The standard aims to improve the performance of the kinds of
applications that suffer most from the delay and unreliability associated with best
effort networks. 802.11e does not provide service guarantees, but does establish
a probabilistic priority mechanism for allocating channel bandwidth [6]. The
disadvantage is an increased degree of protocol complexity compared to standard
802.11.
802.11e achieves its QoS through eight traffic categories. When sending high
priority traffic, a station waits a shorter period of time before beginning to send
[4]. This means high priority traffic has a greater chance of being transmitted, since
low priority traffic must wait longer when trying to access the channel. To prevent
collisions within a traffic category, stations of that category wait a small random
period of time before attempting to send.
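The shorter-wait idea can be sketched as follows; the category names and slot counts are illustrative, not values from the 802.11e standard:

```python
import random

# Illustrative priority access in the spirit of 802.11e: higher priority
# categories wait a shorter fixed number of slots, plus a small random
# backoff to separate stations within the same category.
FIXED_WAIT_SLOTS = {"voice": 2, "video": 3, "best_effort": 7, "background": 9}

def slots_before_send(category, rng):
    """Total idle slots a station waits before attempting transmission."""
    return FIXED_WAIT_SLOTS[category] + rng.randint(0, 3)
```

With the random backoff bounded at 3 slots, a voice station (2 to 5 slots) always reaches the channel before a background station (9 to 12 slots), while stations within one category are separated by their random draws.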
p-persistent CSMA/CA
Modifying the CSMA/CA protocol to be p-persistent is a proposed mechanism
for bringing service differentiation to wireless networks [7]. In a p-persistent CSMA
protocol, when a station senses that the channel is idle, instead of immediately
sending, the station transmits with probability p, and defers transmission with
probability 1 − p.
p-persistent protocols have been shown to lower collisions and improve network
throughput [3]. Service differentiation can be achieved by associating traffic classes
with different p values. A high priority class will have a higher chance to access the
medium, and thus experience improved performance. However, determining an ap-
propriate value for p based on actual network conditions that does not waste resources
is a difficult problem [8].
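The per-class access decision can be sketched in a few lines; the p values here are illustrative, chosen only to show the differentiation effect:

```python
import random

def transmits(p, rng):
    """p-persistent access decision on an idle channel: send with
    probability p, defer with probability 1 - p."""
    return rng.random() < p

# Service differentiation through per-class p values (illustrative numbers).
P_HIGH, P_LOW = 0.8, 0.2

rng = random.Random(1)
trials = 50_000
high = sum(transmits(P_HIGH, rng) for _ in range(trials))
low = sum(transmits(P_LOW, rng) for _ in range(trials))
assert high > low   # the high priority class accesses the medium more often
```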
3.3.2 Admission Control
Admission control provides a method for accepting or rejecting new connections
based on current network conditions and capacity. This can be used as a mechanism
for creating service differentiation-based QoS. An entity on the network (usually the
router or AP) determines whether there are enough resources on the network to handle
a new connection, and then either accepts or rejects the new connection. Admission
control is useful in situations where many stations will attempt to utilize a shared
channel, and if too many stations access the channel, the performance of all stations
will suffer.
Admission control schemes distinguish themselves based on how they monitor the
channel to determine current network conditions. Barry et al propose a mechanism for
admission control that monitors MAC layer frames and the NAV to estimate the local
throughput and delay [9]. Others have proposed using actual data packets to measure
network load [10, 11]. Instead of using direct measurement to determine network load,
Kazantzidis et al propose a heuristic solution that calculates permissible throughput
based on piggybacked basic information about the network [12].
Chaporkar et al propose a queuing system designed specifically for scheduling and
queuing in wireless networks entitled maximal scheduling [13]. Their system uses
distributed scheduling which provides a guaranteed fraction of network throughput.
Their proposal is not specific to 802.11 and can be applied to any wireless network.
The crux of maximal scheduling is to ensure that only transmitter-receiver flows that
do not interfere with each other be allowed to transmit at the same time.
This thesis achieves service differentiated QoS through admission control tech-
niques. These related works show that admission control can provide fair and high
utilization QoS. However, these mechanisms require a high degree of knowledge of the
network topology, which is often impractical or impossible for high mobility wireless
networks. These approaches also require a greater amount of modifications to existing
CSMA/CA and 802.11 standards when compared to the mechanism proposed by this
thesis.
Chapter 4: Wireless Network Power
Management
The major advantage of wireless networks is mobility. Stations do not have to
stay at one particular location to connect to the wired component of the network.
Obviously, having to stay plugged into a power outlet undermines this mobility a
great deal. Batteries restore this mobility, but can only supply so much energy before
needing to be recharged. As battery power is a scarce resource, wireless networks must take
measures to conserve energy.
This thesis proposes an original mechanism that utilizes the existing power man-
agement functions of 802.11 to also provide service differentiation. We will now pro-
vide an overview of the power management features wireless networks rely on to
conserve energy without sacrificing mobility or performance. The details for this
overview were extracted from [1].
4.1 On vs. Off
The easiest way for a mobile station to conserve energy is to power down its
wireless transceiver. Wireless transceivers have an on state and off state. The power
consumed when the transceiver is on is significantly higher than when it is off. The
basic power conservation strategy of 802.11 networks is simple: maximize the time
spent with the transceiver off without severely sacrificing connectivity. It is
also worth noting that using the transceiver to just listen consumes less energy than
actually transmitting. Figure 4.1 shows the power consumption of various operation
modes of commercial 802.11 transceivers.
A wireless transceiver in the on state is said to be awake, active, or on. The
off state is often referred to as sleeping, dozing, or power saving. In an effort for
Figure 4.1: Wireless Transceiver Power Consumption [14]

Mode       802.11b    802.11a    802.11g
Sleep      132 mW     132 mW     132 mW
Idle       544 mW     990 mW     990 mW
Receive    726 mW     1320 mW    1320 mW
Transmit   1089 mW    1815 mW    1980 mW
consistency, this thesis will use the terms awake and sleep to describe these two states
from here forward.
802.11 power management is most effective on infrastructure networks [1]. In ad hoc
networks, there is no logical central coordinator, and the sender
must ensure that the receiver is active. Power saving potential is much higher in
infrastructure networks than in ad hoc networks. The reason that infrastructure net-
works are better suited for power management boils down to one factor: the Access
Point.
4.2 Role of the Access Point
Access Points (APs) are ideal for overseeing a network's power management for
a variety of reasons. APs must remain active at all times, so they are almost always
connected to a continuous power source. And since all traffic must be routed through the
AP, it is an ideal candidate for buffering traffic.
An AP has three responsibilities for facilitating power management on the net-
work. First, the AP maintains information on the power saving state of each station
on the network. Second, the AP buffers frames destined for stations it knows to be
asleep. Third, the AP periodically transmits a beacon frame to announce which
stations have buffered frames waiting for them.
This approach allows mobile stations to conserve a large amount of energy. Sta-
tions can go to sleep knowing that the AP will buffer any traffic meant for them.
Waking stations simply have to listen to a beacon frame, which the AP regularly
transmits. If the beacon indicates that there are buffered frames for the station, that
station sends a PS-Poll frame to the AP, requesting the buffered frames.
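The AP's three duties can be sketched as a small class (a minimal illustration; real beacons carry a traffic indication map rather than a Python set):

```python
from collections import defaultdict

class AccessPoint:
    """Sketch of the AP's three power management duties."""

    def __init__(self):
        self.asleep = set()               # 1. power state of each station
        self.buffers = defaultdict(list)  # 2. frames held for sleepers

    def deliver(self, station, frame):
        if station in self.asleep:
            self.buffers[station].append(frame)  # hold until polled
            return None
        return frame                             # awake: deliver directly

    def beacon(self):
        # 3. the beacon announces which stations have buffered traffic
        return {st for st, queue in self.buffers.items() if queue}

    def ps_poll(self, station):
        # A waking station sends a PS-Poll to retrieve its buffered frames.
        frames, self.buffers[station] = self.buffers[station], []
        return frames
```

A sleeping station can thus ignore the channel entirely between beacons; it only spends transmit energy on a PS-Poll when the beacon names it.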
4.3 Periods of Sleep
There are two main conditions that can cause a station to put its wireless transceiver
into the sleep state. If the station has no data waiting to be sent, it will go to sleep.
A station will also go to sleep if the NAV indicates that the channel will be reserved
for a sufficiently long period of time.
The amount of time a station stays asleep is defined by that station’s listen interval.
The listen interval is one of the parameters a station specifies when associating with
an AP; each station may specify its own. The listen interval tells the AP how many
beacon intervals the station will sleep through before waking up. Long listen intervals
allow stations to sleep for longer periods of time, and greatly extend battery life [5].
There are two drawbacks to long listen intervals. APs must buffer frames intended
for sleeping stations, so longer listen intervals require larger amounts of AP buffer space.
Longer listen intervals also increase the delay experienced by the station. In chapter
3, we touched on the impact of delay on quality of service. For some applications the
increased delay may be worth the extended battery life, but for other applications
the delay may be unacceptable.
The mechanisms used by 802.11 for power conservation center around a particular
strategy: maximize the time spent sleeping, without sacrificing performance. This
thesis capitalizes on this concept to bring service differentiation to wireless networks.
In the next chapter we will introduce the microeconomic-based system used by this
new protocol.
Chapter 5: Microeconomic-Based Differentiated
Service
As described in chapter 3, QoS is essentially an exercise in resource allocation.
Networks must ensure that resources are allocated in a fair and efficient manner. The
protocol proposed by this thesis utilizes a microeconomic mechanism for multi-user
resource allocation. This chapter begins with a brief discussion of basic economic and
microeconomic principles. Following that is a review of related works that have used
a microeconomic approach to allocate network resources.
5.1 Economic Models
Economics is usually defined as the study of “the allocation of scarce resources
among competing end uses” [15]. This simple definition highlights two fundamental
concepts of economics. First, resources are scarce, and do not exist in amounts large
enough to always satisfy all wants. Second, choices must be made to determine how
available resources are allocated.
Economic theory can be further divided into two categories: macroeconomics and
microeconomics. Macroeconomics deals with the performance, structure, and behav-
ior of an economy as a whole. In contrast, microeconomics is concerned with the
behavior of individuals and their interactions and effects on the economy. Microeco-
nomics is commonly referred to as price theory [15].
Three basic components make up a microeconomic model: a finite amount of
resources, a set of agents, and rules specifying their interactions. Agents in the
economy acquire resources in an attempt to optimize some metric. That
metric is generally defined by a utility function, which maps a resource amount to
a satisfaction level. An agent can use the utility function to rank possible resource
allocations that maximize received satisfaction.
5.1.1 Pricing Models
Pricing models use microeconomic theory to allocate network resources [15]. This
approach has several advantages. Assigning resources a price provides a disincentive
to over-allocate those resources. Economic-based techniques allow for distributed
allocation without a central controlling entity, and can scale to support large net-
works. The goal of these models is efficient and fair resource utilization.
Four entities are important in pricing models: producers, consumers, price, and
budget. Producers provide and sell resources. Consumers seek to purchase resources
to meet their own needs. Price is used to represent the value of a resource. Consumers
have a budget which governs the amount of resources they are able to purchase.
This thesis models a perfectly competitive market. This means that the producers
and consumers deal with only one type of homogeneous product [15]. Such markets
can reach a point of equilibrium. This occurs when both buyers and sellers are content
with the amount of product available, and the product price [15].
Another advantage is that these models are generally easy to understand. For
example, complex admission control questions can be boiled down to scenarios such
as: “The ringmaster charges $8 for admission to his circus. Jimmy has a budget of
$10 to enjoy his evening, so he can afford to go to the circus.”
5.2 Pricing Models for Allocating Network Resources
When allocating network resources among multiple stations, the goal is to strike
a balance between throughput and QoS while preserving network fairness. Network
fairness can be defined in a number of ways depending on the problem at hand,
but can generally be summarized as two requirements. First, network resources must
be efficiently utilized. Next, individual stations or data flows should be given a fair
portion of the network resources.
Microeconomic models can be applied to network resource allocation, to provide
QoS while preserving fairness. We will now highlight some related works that provide
significant contributions applying pricing models to wireless networking.
5.2.1 Pricing in Ad Hoc Networks
Much of the research involving wireless microeconomic models is applied towards
ad hoc networks. The research proposed by Liu et al is most relevant to this thesis.
In their system a price-based routing scheme for wireless networks is introduced [16].
Destination stations must pay sending nodes for each packet delivered. The stations
are selfish, meaning they will only send data if they are properly compensated.
Their simulated results show that even with a selfish pricing scheme reliable rout-
ing paths can still be established. Unlike this thesis, the mechanism developed by Liu
et al revolves around ad hoc networks. The goal of their algorithms is to determine
paths along multiple stations that make up an ad hoc network. This is of little use
in an infrastructure network, where network boundaries are defined by the reception
area of the AP, and all data must be routed through the AP.
5.2.2 Power Management via Pricing
Saraydar et al propose an interesting system to improve power management in
wireless networks [17]. A pricing system is used to manage the power consumption
of stations and improve overall power efficiency. Their approach also provides a
statistical mechanism for QoS. Service differentiation is achieved by adjusting the
transmit powers of stations based on current network price.
The pricing mechanism proposed by Saraydar et al is shown to be particularly
beneficial in a heavily loaded network. They also demonstrate that a pricing model
can be simple enough to allow the AP to periodically broadcast the network price to
all terminals. Their research shares some important similarities with this thesis.
As we will see in the next chapter, the CSMA/CA/MS protocol proposed by this
thesis introduces a new and unique method for using wireless power conservation fea-
tures and microeconomic models. The proposed protocol achieves admission control
based QoS, with an approach more lightweight than most related works.
Chapter 6: Microeconomic-Based Wireless
Service Differentiation
Chapter 3 described the need for quality of service in wireless networks. In chap-
ter 4, we introduced the techniques 802.11 networks utilize for power management.
The previous chapter outlined the advantages associated with microeconomic based
network resource allocation. This chapter will draw upon all of these concepts to in-
troduce a new wireless MAC protocol that brings quality of service to CSMA/CA net-
works: Carrier Sense Multiple Access with Collision Avoidance and Managed Sleep,
or CSMA/CA/MS.
6.1 A New Approach: Pay to Wake Up
As described in chapter 4, stations on wireless networks often put their wireless
transceivers to sleep to conserve energy. Various conditions can cause a mobile station to
enter a sleep state. After a predetermined period of time, the station awakens from
the sleep state. Typically the station then sends a poll to ask the
AP for any buffered frames that arrived while the station was asleep.
CSMA/CA/MS capitalizes on these sleep states to introduce service differentiation
into the protocol. A number of service classes can be created based on a predefined
station priority. Once asleep, a lower priority station will have less of a chance of
waking up from a sleep state. A higher priority station will have a greater chance of
waking up from a sleep state.
In other words, this new approach achieves service differentiation by
forcing low priority stations to remain in sleep states for longer periods of time. This
effectively regulates which stations are allowed admission to the network resources.
CSMA/CA/MS only differs from the standard CSMA/CA protocol when a station is
already in a sleep state. Stations are never forced to go to sleep when they would not
normally be entering a sleep state.
The key idea of CSMA/CA/MS is that once stations enter a sleep state, lower
priority stations will have a greater probability for remaining in that sleep state for
a longer period of time. This is actually a connection admission control technique,
as described in chapter 3. CSMA/CA/MS manages the relative number of high and
low class stations that are allowed to access the shared channel at a given time.
A static service differentiation model is easy to implement using this technique.
Each station is assigned, ahead of time, a probability of remaining asleep. A low
priority station will have a higher chance of remaining asleep. A high
priority station will have a higher chance of being allowed to wake up. Since the high
priority station has more frequent access to the channel, it will have an improved
QoS.
Static models are a good starting point for understanding the mechanics of a
system. However, this static approach does nothing to compensate and adjust to
changing network conditions. A dynamic model offers much more practical use.
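The static model can be sketched as a small simulation; the per-class stay-asleep probabilities below are illustrative, chosen only to show the resulting differentiation:

```python
import random

# Static per-class stay-asleep probabilities (illustrative values).
STAY_ASLEEP = {"high": 0.1, "low": 0.6}

def wakes(priority, rng):
    """A sleeping station wakes only if it beats its class's stay-asleep chance."""
    return rng.random() >= STAY_ASLEEP[priority]

rng = random.Random(7)
n = 50_000
high_awake = sum(wakes("high", rng) for _ in range(n))
low_awake = sum(wakes("low", rng) for _ in range(n))
assert high_awake > low_awake   # high priority stations reach the channel more
```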
6.2 Dynamic Price-Based Service Differentiation
In this section we will describe how microeconomic models can be used to create
service differentiation in wireless networks. In this model the AP acts as a producer,
the stations act as consumers, and the resource is access to the shared channel. The
model dynamically adjusts to high and low loads, and provides high utilization of
channel resources.
6.2.1 Budget
Each station on the network is assigned a budget. For example, high priority
stations are given a larger budget, while lower priority stations are given a smaller
budget. This difference in allocated budget is all that is required to achieve service
prioritization. Furthermore, the relative difference in budget amounts will translate
to a difference in bandwidth. The budget is represented as a rate, not a cumulative
sum. Each station has a certain amount to spend during each pricing interval, after
which its budget is reset to the starting value. The “product”
is perishable, meaning stations cannot defer transmissions in an attempt to save up
a larger budget for future transmissions.
6.2.2 Demand
A station’s budget and the current price determine that station’s probability of
waking up from a sleep state, which can be viewed as the demand. A station whose
budget is greater than the price will always wake up as normal. A station whose
budget is less than the price will have a decreased probability of waking up as normal.
The probability g of an over-budget station waking up is given by the following
demand equation,

g = min{β / p, 1}    (6.1)
where β is the budget of a station, and p is the current offered price. This is based
on the widely accepted Cobb-Douglas demand equation [15],

g = β · p^α    (6.2)
A station that fails this probability check will remain asleep for an additional listen
interval, and then attempt to wake up again.
All stations have an α value of −1, meaning their demand is unit elastic. This is
why the Cobb-Douglas demand equation 6.2 simplifies to equation 6.1 for this
thesis. Unit elasticity means the consumers (in this case each station) react
proportionally to any change in the price.
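Equation 6.1 is simple enough to state directly in code (the budget and price values below are illustrative):

```python
def wake_probability(budget, price):
    """Demand equation (6.1): g = min(budget / price, 1)."""
    return min(budget / price, 1.0)

assert wake_probability(10.0, 8.0) == 1.0  # budget covers price: always wakes
assert wake_probability(4.0, 8.0) == 0.5   # over-priced: wakes half the time
```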
6.2.3 Pricing
Stations use their budget to purchase the right to use the network bandwidth. The
price for using the network fluctuates based on the amount of traffic on the network.
The price of bandwidth is evaluated on regular pricing intervals. In the simulated
experiments of this thesis, each pricing interval lasted 1 second. At the end of a pricing
interval, if traffic on the network exceeded the network supply, the price to
access the network rises; if traffic was less than the supply, the price falls.
The new price is broadcast by the AP as
part of the beacon frame.
To handle this dynamic change, a formula is used to price network resources,

pc+1 = pc · (dc / s)    (6.3)

where pc+1 is the new price, pc is the current price, dc is the current measured demand,
and s is 95% of the supply of network resources. In order to gauge network overload,
s must be just below 100% of the network resources. The price will move towards
equilibrium. If dc is greater than s, pc+1 will rise, which will cause dc+1 (the new
demand) to drop. If dc is less than s, pc+1 will fall, allowing a higher dc+1.
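A toy iteration of equation 6.3 shows the equilibrating behavior; the budget and capacity numbers are illustrative, and demand is modeled with the unit-elastic form d = total_budget / p assumed by this thesis:

```python
def next_price(price, demand, supply):
    """Price update (6.3): p_{c+1} = p_c * (d_c / s)."""
    return price * (demand / supply)

# Toy iteration with unit-elastic demand (d = total_budget / p): the price
# jumps to total_budget / s, where demand exactly matches the supply target.
total_budget = 100.0
s = 0.95 * 50.0            # s is 95% of the raw network capacity
p = 1.0
for _ in range(5):
    d = total_budget / p
    p = next_price(p, d, s)
assert abs(total_budget / p - s) < 1e-9   # demand settles at the supply target
```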
6.3 Analysis of Pricing Sleep
In this section the behavior of the proposed CSMA/CA/MS protocol is analyzed
to determine if proper utilization and differentiation is achieved. The analysis shows
that the system resources will be fully utilized in a fair manner.
Consider n equally privileged stations that need to transmit a fixed size frame. The
proposed approach attempts to limit the number of stations allowed to transmit by
requiring stations to pay to awaken from a sleep state. This is a connection admission
control approach, where the AP seeks to have k stations active at any time, where
k ≤ n. The probability of k stations being awake from a total of n is given by the
binomial random variable equation

Prob{k awake} = [n! / (k!(n − k)!)] · g^k · (1 − g)^(n−k)    (6.4)
where g is the probability of a single station awaking from a sleep state. This
probability is maximized when g = k/n, since k/n is an unbiased estimator of g.
Given s as the maximum number of awake stations the AP can support, then the
system is fully utilized when k = s. The important question is whether the pricing
approach will maximize the probability given in equation 6.4.
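A quick numeric check of equation 6.4 confirms that the probability of exactly k awake stations peaks at g = k/n (the station counts below are arbitrary examples):

```python
from math import comb

def prob_k_awake(n, k, g):
    """Equation (6.4): probability that exactly k of n stations wake,
    each independently with probability g."""
    return comb(n, k) * g**k * (1 - g)**(n - k)

# Scan g over a grid: the maximizer is g = k/n.
n, k = 20, 5
grid = [i / 100 for i in range(1, 100)]
best_g = max(grid, key=lambda g: prob_k_awake(n, k, g))
assert best_g == k / n   # maximized at g = 5/20 = 0.25
```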
6.3.1 Single Class Case
Consider n equal stations where s is the maximum number of stations the AP
can support. The system is in equilibrium at the price p∗ that causes
demand to equal supply, or k = s. The probability of waking g is governed by the
demand, expressed in equation 6.1. Given that p∗ causes supply to equal demand, set
the demand equation equal to the waking probability that maximizes equation 6.4,
β / p∗ = s / n    (6.5)

Solving for p∗,

p∗ = n · β / s    (6.6)
Theorem 6.1 Given the CSMA/CA/MS mechanism and a single class of equally
prioritized stations, the equilibrium price will maximize the probability that the system
is fully utilized.
Proof. The equilibrium price will maximize the probability in equation 6.4 since

    g = β / p*        (6.7)

Substituting equation 6.6 gives

    β / p* = β / (n·β / s)        (6.8)

which results in

    β / (n·β / s) = s / n = g        (6.9)

showing that the equilibrium price results in the waking probability that
maximizes system utilization.
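The single-class equilibrium can be verified in a few lines; β, n, and s below are illustrative values, not parameters taken from the thesis experiments:

```python
n, s = 16, 8    # total stations and AP supply (illustrative)
beta = 50.0     # per-station budget (hypothetical)

p_star = n * beta / s   # equation 6.6: equilibrium price
g = beta / p_star       # equation 6.7: waking probability at equilibrium

# g equals s/n, exactly the probability that maximizes equation 6.4.
print(g, s / n)
```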
6.3.2 Multiple Class Case
The previous section showed that the system will maximize utilization. This
section will define the differentiation provided for multiple classes. Assume two classes
of stations, A and B, where the budget for class A is α times the class B budget
(α > 0). This analysis expands upon the single class case. The number of stations
awake, k, will consist of both class A and class B stations. At the equilibrium price,
equation 6.5 becomes
    α·β / p* = s_A / n_A        (6.10)

where n_A is the total number of high class stations and s_A is the number of class A
stations that are awake. Similarly, for class B stations,

    β / p* = s_B / n_B        (6.11)
Solving for p∗ in the previous two equations and setting them equal
    α·β·n_A / s_A = β·n_B / s_B        (6.12)

Assuming the number of class A and class B stations are equal (n_A = n_B), α times
as many class A stations will be awake as class B stations. Note that the values
of s_A and s_B depend on the number of stations per class. If n_B = α·n_A, then
the number of stations awake in each class will be equal. However, this does not
imply there is no difference between the two classes. A station in class B will
have a lower chance of waking than a station in class A. Specifically,

    g_B = s_B / n_B = s_B / (α·n_A)        (6.13)
This is more formally stated in the following proof.
Theorem 6.2 Given the CSMA/CA/MS mechanism and two classes of stations A
and B, where class A has a budget α times greater than class B, class A stations
will have an α times greater chance of waking than class B stations.
Proof. Equation 6.10 can be rewritten as the ratio of budget to price for class A
stations,

    β / p* = s_A / (α·n_A)        (6.14)
Setting equation 6.11 and equation 6.14 equal,
    s_B / n_B = s_A / (α·n_A)        (6.15)

At the equilibrium price these ratios are the optimal probabilities of waking, so

    g_B = (1/α)·g_A        (6.16)

Therefore the probability of a class B station waking is 1/α times that of a
higher-budget class A station.
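The two-class relationship can also be checked numerically. The clearing price below is obtained by requiring the expected number of awake stations to equal the supply s (an assumption consistent with the derivation above); α, β, and the station counts are illustrative:

```python
alpha, beta = 2.0, 50.0   # class A budget = alpha * class B budget (illustrative)
n_a, n_b, s = 8, 8, 6     # stations per class and AP supply (hypothetical)

# Clearing price: expected awake stations (g_a*n_a + g_b*n_b) must equal s.
p_star = beta * (alpha * n_a + n_b) / s
g_a = alpha * beta / p_star   # class A waking probability (form of eq. 6.10)
g_b = beta / p_star           # class B waking probability (form of eq. 6.11)

print(g_a / g_b)              # alpha: class A wakes alpha times as often
print(g_a * n_a + g_b * n_b)  # s: the supply is fully used
```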
6.4 Access and Bandwidth
It is important to note that these calculations do not point directly towards band-
width guarantees. An adequate budget compared to network price simply grants a
station access to the network channel. It does not buy a specific fraction of band-
width. However, knowing that s/n provides full utilization, expected bandwidth can
be derived from the formulas Bianchi describes for performance analysis of the 802.11
standard [18]. The formula Bianchi describes gives a method for predicting system
throughput by dividing average frame size by the time to transmit a frame.
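As a rough illustration of this idea (a deliberate simplification, not Bianchi's full model, which derives transmission and collision probabilities from the backoff process), throughput can be estimated as payload bits delivered per unit of channel time. All numbers below are hypothetical:

```python
# Simplified throughput estimate in the spirit of [18]:
# throughput = average frame size / average time to deliver a frame.
frame_bits = 1400          # uniform frame size used in the simulations
avg_frame_time = 0.4e-3    # average delivery time per frame, seconds (assumed)

throughput_bps = frame_bits / avg_frame_time
utilization = throughput_bps / 10e6   # relative to a 10Mbps channel
print(round(utilization, 3))
```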
Although there is no difference between stations that are awake, service differ-
entiation is provided. The likelihood of being allowed to enter the awake state is
governed by the station class. The next chapter will demonstrate the validity of
CSMA/CA/MS. We will use simulated experiments to measure the performance and
behavior of this new protocol.
Chapter 7: Experimental Results
In the previous chapter we proposed a new wireless protocol, CSMA/CA/MS.
Our proposal builds upon the basic functionality already provided by the CSMA/CA
protocol. CSMA/CA/MS brings QoS to wireless networks through admission
control-based service differentiation. Microeconomic-based pricing models are used to
dynamically adjust to current network conditions over time.
To complement the previous chapter’s analytical methods, this chapter will focus
on simulated experiments. For this thesis, it is necessary to accurately model the
behavior of 802.11 networks. Simulated experiments are an attractive solution, as
implementing and testing a new protocol on actual networking hardware would be a costly
and difficult venture. We will begin by outlining our simulation method, then proceed
to the results found during simulation.
7.1 Simulation Method
The network simulator ns-2 was used for all experiments. ns-2 is a discrete event
simulator designed specifically for academic networking research [19]. Both wired
and wireless networks can be supported by ns-2. While primarily built in C++, ns-2
allows simulation parameters to be defined through a scripting language. This thesis
uses scripts to describe network topology and experiment flow of events. To actually
bring the functionality of CSMA/CA/MS to the ns-2 wireless MAC, the concept of a
probability-based extended sleep interval had to be added to the C++ code.
In these experiments, 16 mobile stations are evenly distributed around a central
AP. The stations work in pairs, with one station sending data at a constant bit rate
of 10Mbps to a receiving station, creating 8 different data flows. The AP also has a
maximum rate of 10Mbps. A simple visual representation of this network topology is
shown in Figure 7.1. A uniform packet size of 1400 bits is used by all stations. This
is an infrastructure network so all data is routed through the AP before arriving at
the receiving station.
Figure 7.1: Network Topology
Performance will be measured using network utilization and delay. Network uti-
lization is the ratio of network throughput (observed data rate) compared to the
maximum potential throughput that the network can achieve, 50Mbps. Delay is the
amount of time required to transmit a packet successfully.
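As a small illustration of these two metrics (the measured values below are hypothetical stand-ins, not simulation output):

```python
max_throughput = 50e6      # bps: the network's maximum potential throughput
observed_bps = 17.45e6     # hypothetical measured throughput
utilization = observed_bps / max_throughput   # ratio of observed to maximum

send_time, recv_time = 1.0000, 1.0106   # hypothetical packet timestamps, seconds
delay = recv_time - send_time           # time to deliver the packet successfully

print(round(utilization, 3), round(delay, 4))
```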
Before delving into simulation results of CSMA/CA/MS, we will first examine the
performance provided by the standard CSMA/CA protocol. Then we will investigate
the degree of overhead caused simply by introducing the extended sleep probability
mechanism. Following that we will examine a completely static probability class
scenario. Finally we will present the full system provided by CSMA/CA/MS.
7.2 Standard CSMA/CA
In the first experiment, the stations operate using the standard features of the
CSMA/CA protocol. No station has any special priority over any other stations. No
station has any chance of remaining in a special prolonged sleep state. Stations will
go to sleep and wake up as normally prescribed by the standard protocol. Each station
has the same chance to access the medium, leading us to expect that each station will
have similar individual utilization.
Figure 7.2: CSMA/CA total network utilization shown in (a). Individual stations' utilization for CSMA/CA shown in (b).
Figure 7.2 shows the network utilization and individual utilization of each station
from the simulation. The simulation verifies our assumption that each station will
receive a similar share of the total network utilization. Since no station has a particu-
lar advantage over any other, they all experience similar performance. Total network
utilization was 0.349, and each station experienced utilization very close to 0.044.
The fluctuations in utilization visible on both graphs are due to the dynamic
physical medium and contention-based behavior of wireless networking. This is to be
expected when measuring wireless traffic. The contention nature of wireless network-
ing is also the reason utilization does not approach 1. As more stations attempt to
transmit data over the same period of time, network utilization falls.
7.3 Single Class Performance
CSMA/CA/MS can achieve service differentiation by utilizing the power saving
sleep functionality that already exists in 802.11. Before delving into the performance
of static and dynamic class based differentiation, we first must gauge overhead impact
of simply introducing this mechanism. This experiment consists of eight stations
sending data. All stations are given an equal 50% chance of successfully waking up
when trying to awaken from a sleep state.
Figure 7.3 shows the results of this experiment. Introducing a 50% wake up
chance for all stations slightly lowered average network utilization when compared
to standard CSMA/CA. The standard deviation was also affected, representing an
increase in the amount of performance variance.
                      Average   Std Dev
Standard CSMA/CA      0.3491    0.0504
Single Class          0.3354    0.0596

Figure 7.3: Network utilization overhead introduced by a 50% wake up probability.
These results are to be expected. Even though a probability to stay asleep is
introduced, no single station is given a special advantage. Each station has the same
chance to wake up, which leads to the same chance to access the channel, which leads
to similar utilization among stations. Each station experienced utilization very close
to 0.042.
The overhead caused by forcing stations to have a chance of sleeping for longer
periods also makes sense. A chance of longer sleep times will decrease the amount of
time on average each station spends trying to access the channel. This results in a
3.9% drop in network utilization.
It is important to consider the overhead caused by simply introducing probabilistic
prolonged sleep into the protocol. However, examining only one class of stations is
far from exciting. In the next section we will begin our experiments dealing with
multiple priority classes of stations.
7.4 Multiple Class Performance
In this experiment we measure the results of static sleep probability based service
differentiation with two classes. Each class has four transmitting stations. The low
priority class is assigned a 50% chance of remaining asleep when attempting to awaken
from a sleep state. The high priority class will have a 0% chance of remaining asleep,
meaning that it will awaken from sleep states in the same manner as in standard
CSMA/CA. The sleep probabilities remain static for the duration of the experiment.
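The static mechanism amounts to a coin flip at each wake-up attempt. A minimal Monte Carlo sketch (the 0% and 50% probabilities come from this experiment; the sampling loop, seed, and attempt count are illustrative):

```python
import random

def awakens(p_sleep, rng):
    # At each wake-up attempt the station remains asleep with
    # probability p_sleep, and otherwise wakes normally.
    return rng.random() >= p_sleep

rng = random.Random(1)   # fixed seed for repeatability
attempts = 100_000       # number of wake-up attempts (illustrative)

high = sum(awakens(0.0, rng) for _ in range(attempts))  # high class: 0% sleep chance
low = sum(awakens(0.5, rng) for _ in range(attempts))   # low class: 50% sleep chance

print(high / attempts)           # 1.0: the high class always wakes normally
print(round(low / attempts, 2))  # roughly 0.5: half the wake-ups succeed
```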
Figure 7.4: Total class network utilization with two static classes shown in (a). Network utilization of individual stations with two static classes shown in (b).
Figure 7.4 shows the resulting utilization for each class. The high priority class
utilization was 0.214, while the low class utilization was 0.120. This gives a total
network utilization of 0.334, similar to that experienced in the single class experiment.
A 50% chance to remain in the sleep state means that the low priority class will have
access to the channel 50% as often as the high priority class. That is why the low
priority stations have nearly half the utilization of the high priority class.
Throughput is not the only characteristic that matters for QoS. Delay and jitter
both have a significant effect on multimedia and other real time applications. Figure
7.5 shows the delay over time for both classes. Similar to the utilization results, the
Figure 7.5: Average class delay with one 0% chance sleep class and one 50% chance sleep class.
delay for the low priority stations roughly doubles. However, the jitter for the low
priority stations is much higher. This is indicated by the large amount of fluctuation
in delay experienced by the low priority class. In exchange, the high priority class
experiences a very minor amount of jitter.
The results of the static service differentiation experiments are promising. How-
ever, this approach does nothing to adjust to network conditions. In the next section
we will detail a scenario that results in wasted resources when using the static model.
7.4.1 Drop Off Scenario
This experiment begins with an identical setup as used in the previous section.
However, halfway through the simulation, all high priority stations cease transmission,
and do not attempt to send any more data. We will refer to this high class drop
off scenario several more times in this chapter.
Figure 7.6 shows the result of this experiment. Midway through the simulation
the utilization of the high class drops to zero and the low class utilization rises
significantly. This large increase in utilization is due to a smaller number of stations competing
for access to the channel.
Despite the jump in utilization experienced by the low class, network resources are
being wasted in this scenario. The low priority stations no longer have any need to
Figure 7.6: Total class network utilization during dropoff scenario for two static classes shown in (a). Network utilization during dropoff scenario for individual stations of two static classes shown in (b).
defer to any high priority stations. However, the low priority stations retain their 50%
sleep chance. This results in a lowering of network utilization similar to that shown
in Figure 7.3. In the next experiment we will explore a solution to this problem.
7.5 CSMA/CA/MS Performance
Fully implemented CSMA/CA/MS uses a microeconomic pricing model to dynamically
allocate network resources through admission control. A price for accessing
the channel is periodically set based on current measured demand. Each class of
stations is assigned a budget rate. The high priority class is given a budget rate of
100 tokens per price interval, while the low priority class is given a budget rate of 50
tokens per price interval.
As the network price rises above a station’s budget, that station’s probability of
experiencing a prolonged sleep state rises. The price is calculated using equation 6.3.
The probability of a station remaining in a sleep state is determined by comparing
budget to price as shown in equation 6.1.
In these experiments a regular interval must be established for the pricing model.
During every interval the AP measures network demand based on total transmitted
frames. At the end of each interval the network price is updated based on measured
demand. If the pricing interval is too short, the AP will not have a chance to monitor
a sufficient amount of network traffic. An excessively long pricing interval will cause
the AP to be unable to react to changes in network conditions. Rudimentary testing
suggested that a 1 second long pricing interval was appropriate for these simulations.
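The per-interval control loop can be sketched as follows. Equations 6.1 and 6.3 are defined earlier in the thesis; the multiplicative price update and the capped budget-to-price waking probability below are assumed stand-ins for them, and all numeric values are hypothetical:

```python
def update_price(price, demand, supply, step=0.1):
    # Assumed multiplicative update standing in for equation 6.3:
    # raise the price when demand exceeds supply, lower it otherwise.
    return max(1e-6, price * (1 + step * (demand - supply) / supply))

def wake_probability(budget, price):
    # Demand-style waking probability standing in for equation 6.1,
    # capped at 1 when the budget exceeds the price.
    return min(1.0, budget / price)

price, supply = 100.0, 8.0            # starting price and AP capacity (hypothetical)
for demand in [16, 16, 12, 8, 4, 4]:  # measured demand per pricing interval
    price = update_price(price, demand, supply)

p_low = wake_probability(50.0, price)  # low priority budget rate of 50 tokens
print(round(price, 2), round(p_low, 2))
```

As demand falls below supply in later intervals, the price drifts down and the low-priority waking probability rises, mirroring the behavior in the drop off scenario.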
Figure 7.7: Total class network utilization during dropoff scenario for price-based CSMA/CA/MS shown in (a). Network utilization during dropoff scenario for individual stations of price-based CSMA/CA/MS shown in (b).
Figure 7.7 shows the performance of the pricing model when applied to the drop
off scenario introduced in the previous section. The start of the simulation shows the
utilization of the high and low priority classes diverging as the pricing model takes
effect.
After the midpoint of the simulation the low priority stations experience a rise
in utilization similar to that seen in Figure 7.6. However, CSMA/CA/MS achieves greater
utilization after the dropoff than experienced in the previous section, as shown in
Figure 7.8. CSMA/CA/MS achieved 10% higher network utilization than that seen
in the second half of the two static classes experiment.
                       Average   Std Dev
Two Static Classes     0.5130    0.0695
CSMA/CA/MS             0.5637    0.0561
Figure 7.8: Network utilization after high class dropoff.
The improved performance seen in the second half of the simulation is achieved
thanks to the dynamic price adjustment. Figure 7.9 shows how the network price
changes throughout the experiment. In the first half of the simulation the network
is saturated, yielding high demand and a price that remains high. The low priority
class has half the budget of the high priority class, leading to proportionately lower
utilization.
Figure 7.9: Change of price and class chance of normal wake up in the dropoff scenario.
In the second half of the simulation the high priority stations cease transmission,
causing a large drop in network demand. With only the low priority stations still
transmitting the network does not experience full demand, and the network price
gradually falls. Since the network price eventually falls below the low priority budget
rate, those stations will no longer have any chance of experiencing an extended sleep
state. This price-driven adjustment of class sleep probabilities is the reason for the
improved performance seen in Figure 7.8.
7.6 Impact of Distance from AP
One of the hallmark features of wireless networks is mobility. However, when a
station begins to travel too far from the AP, a drop in performance is inevitable. The
goal of this final experiment is to determine if excessive distance from the AP has an
overly problematic effect on the CSMA/CA/MS protocol. Will the excessive distance
combined with the beacon transmitted pricing interval cause the far away station to
suffer from starvation, or gain an unfair advantage over the other stations?
To begin, we obtain a baseline reading for the expected drop in performance
caused by excessive distance in standard CSMA/CA. Three transmitting stations are
positioned around the base station similarly to all previous experiments. One station
is placed far enough away from the AP to experience a noticeable drop in performance,
as shown in Figure 7.10.
Figure 7.10: Impact of distance from AP in CSMA/CA shown in (a). Impact of distance from AP in CSMA/CA/MS shown in (b). Both graphs show only low priority stations, and the excessively distant station is represented by a solid line.
Figure 7.10 also depicts the results of a single distant station in the pricing model
based drop off scenario. The graph omits the high priority stations, and instead
only shows the 4 low priority stations, one of which is positioned excessively far from
the AP. Under the CSMA/CA/MS protocol the distant station receives a drop in
performance similar to that experienced in standard CSMA/CA. There is no evidence
of any abnormalities from distance-caused delay when receiving the updated pricing
interval.
7.7 Results Summarized
We have observed that the new CSMA/CA/MS protocol provides service differen-
tiation and improved QoS in 802.11 networks. Figure 7.11 summarizes the outcome
of each experiment. An increased probability of a station remaining in a sleep state
results in a proportional drop in performance. Utilization for that station is lowered,
and delay and jitter increase. This allows stations with a higher probability of exiting
the sleep state to experience priority-based, better than best effort QoS.
Simulation Experiment             Utilization           Delay
                               Average  Std Dev    Average  Std Dev
Standard CSMA/CA               0.3491   0.0504     0.0106   0.0049
Single Class                   0.3354   0.0596     0.0150   0.0051
Two Static Classes
  High Class                   0.2145   0.0481     0.0085   0.0015
  Low Class                    0.1202   0.0296     0.0181   0.0076
Static Dropoff Scenario
  High Class, 1st Half         0.2140   0.0463     0.0092   0.0017
  Low Class, 1st Half          0.1190   0.0302     0.0178   0.0085
  Low Class, 2nd Half          0.5130   0.0695     0.0088   0.0008
CSMA/CA/MS Dropoff Scenario
  High Class, 1st Half         0.2118   0.0450     0.0086   0.0034
  Low Class, 1st Half          0.1244   0.0288     0.0183   0.0097
  Low Class, 2nd Half          0.5637   0.0561     0.0078   0.0005
Figure 7.11: Utilization and delay results for each experiment.
The microeconomic-based pricing model grants CSMA/CA/MS the ability to
continuously adjust class sleep probabilities based on current network conditions. The
pricing model is dynamic and allows greater utilization of network resources. This
approach succeeds even in severe examples such as the drop off scenario used in several
of the experiments.
Lastly, the impact of excessive distance from the AP is considered. Station mo-
bility will always lead to varying distance from the AP, which introduces a degree of
inherent unfairness to wireless networks. Excessive distance is shown to provide no
significant advantage or disadvantage under CSMA/CA/MS when compared to the
standard CSMA/CA protocol.
The CSMA/CA/MS protocol proposed by this thesis provides an effective mech-
anism for service differentiation based QoS. The protocol introduces very little over-
head, and is relatively simple and practical to implement using existing wireless tech-
nologies. This new protocol can improve the service quality of multimedia and real-
time applications.
Chapter 8: Further Work
Without QoS, performance of multimedia and real-time applications will suffer.
Unfortunately, current Internet infrastructure is best effort in nature and provides no
special assistance to these kinds of applications. This thesis proposes a new wireless
protocol called Carrier Sense Multiple Access with Collision Avoidance and Managed
Sleep, or CSMA/CA/MS. It combines the existing power conservation mechanisms
of 802.11 with a very lightweight pricing model, achieving admission control-based
service differentiation. CSMA/CA/MS provides a new method for bringing QoS to
wireless networks at the data link layer.
This thesis raised a question regarding the optimal timing interval for calculating
the price of network resources when implementing a microeconomic-based mechanism.
While excessively small or excessively large intervals will obviously have a negative
impact on the pricing system, does an ideal interval length exist? And does this ideal interval
change over time along with network conditions? Research involving the optimal time
interval could provide a significant benefit to many microeconomic-based systems.
The simulated experiments demonstrated that a station being a far distance away
from the AP did not derail the functionality of the CSMA/CA/MS protocol. However,
the distance still had a negative effect on that station's performance, even when using
the pricing model. Future research should investigate the possibility of a wireless
pricing model that detects and compensates for performance degradation caused by
distance or other forms of interference. Since mobility is one of the primary features
of wireless networks, a system to dynamically compensate for a station's distance has
far reaching potential.
The mechanism proposed by this thesis only considers networks with a single AP.
In reality, a large wireless network can be composed of many APs. Stations can
change which AP they are associated with as signal strength and other conditions
change. CSMA/CA/MS could be extended to support a multi-market price-based
model. When choosing an AP, there is potential for stations to consider not just
signal strength, but also the demand-based network price at that AP.
Bibliography
[1] M. Gast, 802.11 Wireless Networks: The Definitive Guide, O'Reilly, 2002.
[2] J. Kurose, Computer Networking: A Top-Down Approach, Addison Wesley, 2007.
[3] A. Tanenbaum, Computer Networks, Prentice Hall PTR, 2002.
[4] B. O'Hara, The IEEE 802.11 Handbook: A Designer's Companion, Institute of Electrical and Electronics Engineers, 2005.
[5] H. Zhu et al., A Survey of Quality of Service in IEEE 802.11 Networks, IEEE Wireless Communications, August 2004.
[6] S. Mangold, Analysis of IEEE 802.11e for QoS Support in Wireless LANs, IEEE Wireless Communications, December 2003.
[7] Y. Ge and G. Hou, An Analytical Model for Service Differentiation in IEEE 802.11, IEEE ICC '03, Vol. 2, May 2003.
[8] R. Kester, Service Differentiation Using p-Persistent CSMA/CD Protocols, Master's thesis, Wake Forest University, 2004.
[9] M. Barry et al., Distributed Admission Control for IEEE 802.11 Ad Hoc Networks, in IEEE INFOCOM, 2001.
[10] S. Valaee, Distributed Call Admission Control in Wireless Ad Hoc Networks, in IEEE VTC 2002, September 2002.
[11] S. H. Shah et al., Dynamic Bandwidth Management for Single-Hop Ad Hoc Wireless Networks, in IEEE International Conference on Pervasive Computing and Communication, 2003.
[12] M. Kazantzidis et al., Permissible Throughput Network Feedback for Adaptive Multimedia in AODV MANETs, IEEE ICC '01, Vol. 5, June 2001.
[13] P. Chaporkar et al., Throughput and Fairness Guarantees Through Maximal Scheduling in Wireless Networks, IEEE Transactions on Information Theory, 2008.
[14] R. Manghram et al., Optimal Fixed and Scalable Energy Management for Wireless Networks, in IEEE INFOCOM, 2005.
[15] W. Nicholson, Microeconomic Theory: Basic Principles and Extensions, South-Western College Pub, 2004.
[16] H. Liu and B. Krishnamachari, A Price-based Reliable Routing Game in Wireless Networks, ACM International Conference Proceeding Series, Vol. 199, 2006.
[17] C. Saraydar et al., Efficient Power Control via Pricing in Wireless Data Networks, IEEE Transactions on Communications, 2002.
[18] G. Bianchi, Performance Analysis of the IEEE 802.11 Distributed Coordination Function, IEEE Journal on Selected Areas in Communications, 2000.
[19] S. McCanne and S. Floyd, ns Network Simulator, http://www.isi.edu/nsnam/ns/.