Proactive resource allocation optimization in LTE with inter-cellinterference coordination
Michael Brehm • Ravi Prakash
Published online: 27 October 2013
© Springer Science+Business Media New York 2013
Abstract This paper presents a distributed, dynamic self-
organizing network (SON) solution for downlink resources
in an LTE network, triggered by support vector regression
instances predicting various traffic loads on the nodes in
the network. The proposed SON algorithm pro-actively
allocates resources to nodes which are expected to expe-
rience traffic spikes before the higher traffic load occurs, as
opposed to overloaded nodes reacting to a resource-
exhaustion condition. In addition, the solution ensures
inter-cell interference coordination is maintained across the
LTE cells/sectors.
Keywords LTE · SON · Machine learning · Support vector regression · Ricart–Agrawala · ICIC
1 Introduction
The rise of long term evolution (LTE) is being fueled
largely by subscriber demand for improved data through-
put, which in turn has fueled a host of research into time-
and frequency-domain scheduling at the LTE eNodeB.
However, just as critical to the scheduling problem is the
optimization of the amount of downlink resources to
schedule at each node. In other words, the scheduler needs
to know not only how to properly allocate its current set of
resources but also how to negotiate for the proper amount
of resources to schedule. To this end, this paper presents a
self-organizing network (SON) solution which facilitates
the sharing of downlink resources between neighboring
cells/sectors in both the time and frequency domains. These
resources will be used by the eNodeB schedulers in those
cells with the goal of allowing unused resources in one cell/
sector to be shared with an overloaded neighbor.
One critical design element is to also incorporate inter-
cell interference coordination (ICIC) within the SON
solution. A SON solution must not allow resources to be
moved across the network such that the resulting resource
allocation produces interference between neighboring
cells/sectors. Since this solution would run mainly during periods of network congestion, allowing such interference would likely degrade subscriber quality of experience rather than improve network utilization.
The problems of inter-cell interference coordination and optimization of spectral resource allocation to the network nodes are left as open design problems in the 3GPP specifications. Several very promising techniques have been proposed to address this problem, generally favoring a variation of a soft frequency reuse (SFR) scheme [1]. These proposals either put forth a static SFR implementation or utilize SON solutions to optimize the frequency reuse per observed traffic parameters. Notably, the dynamic solutions to date are reactive in nature, triggered by observed resource shortages or sub-optimal throughput, and focused on correcting the sub-optimal condition as fast as possible.
In contrast, this paper presents a pro-active SON solu-
tion. Since traffic conditions have been shown to be gen-
erally repeatable processes [7, 8], this solution leverages
constructs from the field of machine learning to provide
estimated traffic loads as triggers for the SON algorithm. If
the estimated traffic loads indicate an overload condition is
M. Brehm (✉) · R. Prakash
University of Texas at Dallas, 800 West Campbell Road,
Richardson, TX, USA
e-mail: [email protected]
R. Prakash
e-mail: [email protected]
Wireless Netw (2014) 20:945–960
DOI 10.1007/s11276-013-0657-y
imminent, the SON algorithm will coordinate with neigh-
boring cells/sectors to find new resources for the cell.
Consequently, the downlink scheduler has a greater chance
of having resources available to schedule when the traffic
spike occurs.
2 LTE cell/sector background
Due to the use of orthogonal frequency-division multiple
access (OFDMA), LTE networks are commonly being built
with a frequency reuse factor of 1, meaning that all
available spectrum is allocated to each cell in the network.
To achieve a frequency reuse factor of 1, the SFR scheme
[1, 2] divides each cell (or sector) into an inner and outer
region. The resulting cell/sector has an appearance similar
to the overlaid-cell concept presented by MacDonald [3, 4].
The inner portion will have potentially full access to the
entire set of resource blocks (RBs) (i.e. entire spectrum at
any point of time), minus any RBs being used by the outer
ring, by servicing devices at a power level not to exceed a
pre-defined power threshold. This power threshold is
defined such that a transmission from an eNodeB to a
device in its inner region would not result in visible
interference with a neighboring cell’s outer region. The
outer region of the cell is configured with a set of RBs
which does not overlap with the set of RBs allocated for the
outer region of the neighbor cells. Power levels for trans-
missions to devices in the cell’s outer region are allowed to
exceed the pre-defined power threshold and, thus, would
potentially cause interference to neighboring cells’ outer
regions.
3 Problem formulation
The problem to be addressed is to dynamically coordinate
the sets of RBs for the outer region of each cell per their
expected traffic loads without the need for a centralized
controller. The resource coordination algorithm is triggered
when a node predicts that it does not have sufficient
resources to maintain an acceptable level of throughput for
an upcoming time interval.
More formally, the problem definition is as follows.
Since a reuse factor of 1 is assumed, let S be the set of
all RBs available to an eNodeB, per the spectrum alloca-
tion of the operator in the given market. Let Si.inner be the
set of RBs allocated to the inner region at node i and
Si.outer be the set of RBs allocated to the outer region at
node i. Since the total spectrum allocation is provided to a
given cell and then split into the inner and outer region, it
implies the inner region has full access to all spectrum not
allocated to the outer region:
Si.inner ∪ Si.outer ⊆ S ⇒ Si.inner ⊆ S − Si.outer    (1)
Further, let Ni be the set of neighbors for node i, which is
the set of nodes within interference range of node i. Since
the subset of RBs allocated to the outer region of node i
cannot overlap with the RBs allocated to any neighboring
outer region:
Si.outer ⊆ S − ⋃y∈Ni Sy.outer    (2)
These set relationships can be visualized per Fig. 1.
Let time be divided into fixed intervals of duration Δt, with the xth interval starting at time tx. Let Ti.inner and Ti.outer be the traffic load expected for node i's inner and outer regions, respectively, for an upcoming Δt interval. The problem statement is then: given a fixed set S, at each time interval Δt ensure that the following statement is true:
∀i:  |Si.inner| ≥ |Ti.inner|  ∧  |Si.outer| ≥ |Ti.outer|
     ∧  Si.outer ⊆ S − ⋃y∈Ni Sy.outer
     ∧  Si.inner ∪ Si.outer ⊆ S
     ∧  Si.inner ∩ Si.outer = ∅    (3)
where |X| is the cardinality of set X, and ^ is the logical
conjunction operator (i.e. AND).
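The feasibility test of Eq. 3 reduces to simple set operations. The following sketch (Python; the dictionary-based node state is an illustrative assumption, not the paper's data model) validates one node's allocation:

```python
def allocation_valid(i, S, inner, outer, traffic_inner, traffic_outer, neighbors):
    """Check node i's portion of Eq. 3: enough RBs for the expected traffic,
    no overlap with neighbors' outer regions, and a clean inner/outer split."""
    neighbor_outer = set()
    for y in neighbors[i]:
        neighbor_outer |= outer[y]
    return (len(inner[i]) >= traffic_inner[i]          # |Si.inner| >= |Ti.inner|
            and len(outer[i]) >= traffic_outer[i]      # |Si.outer| >= |Ti.outer|
            and outer[i] <= S - neighbor_outer         # Eq. 2: ICIC constraint
            and inner[i] | outer[i] <= S               # Eq. 1: within spectrum
            and not (inner[i] & outer[i]))             # disjoint regions
```

In a distributed deployment each node would evaluate only its own constraints, since the neighbor outer-region sets in Ni are already exchanged by the SON protocol.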
4 Related work
In addition to the static SFR approach, several approaches
for dynamic resource coordination between neighboring
LTE nodes have been proposed in existing literature.
In one example, Stolyar and Viswanathan [5] have
proposed a power level optimization scheme based on
maximizing a utility function via gradient descent tech-
niques. In this scheme, the utility function estimates the
average bitrate for each flow given the power level possible
for transmission on each frequency. Feedback mechanisms
from each user device provide SINR measurements and
frequency preferences at regular intervals, which aid in this
calculation. Further, each node shares partial derivatives of
the utility function with its neighbors, allowing each node
to calculate power levels which optimize utility in a dis-
tributed manner. The results of these optimization calcu-
lations are then handed off to a separate scheduler for
resource allocation.
While this solution is based on theoretically sound
principles, the implementation of the solution becomes
problematic. As noted above, the reactive nature of the
algorithm translates to potentially large amounts of cal-
culations which must be performed at very fine-grained
time intervals. In the implementation provided in the
paper, each iteration was run in a virtual scheduling
window, which occurs 30 times per actual scheduling
window. Given the scheduling window of 1 ms in the
paper, the implementation thus required 30 full passes
through the optimization calculations per millisecond
before handing off to a scheduler, which must then per-
form its calculations for the scheduling interval. As such,
Stolyar and Viswanathan's implementation requires significant computational resources to be effective, creating competition
for computing resources among the myriad of real-time
functions on an LTE node.
Alternatively, the end-to-end efficiency (E3) Project
also set forward a plan on the use of machine learning
algorithms to assist in SON techniques for wireless net-
works [6]. In their approach, they leveraged genetic
algorithms with the most recent 200 data points of traffic
measurements to estimate the imminent traffic needs.
Their simulation found that machine learning algorithms
were very effective in predicting upcoming traffic con-
ditions, and used this to optimize traffic conditions for a
given cell. The SON algorithm was presented in central-
ized and distributed variations, which ultimately would
allow use in an LTE network without the need for a
centralized controller.
However, the solution presented by E3 was not concerned
with ICIC, as their assumptions were based on a study
showing that spectrum is abundant and most spectrum is idle,
which is not in line with the spectrum usage for today’s
mobile networks. As such, the distributed algorithm simply
allows a node to choose the ‘‘best’’ spectrum for its traffic
without regard for its neighbor, most likely resulting in
interference during heavy traffic periods. Further, the
machine learning algorithm chosen runs approximately a
thousand parallel tasks, which the paper noted would impose
significant resource requirements on the LTE node. This
machine learning algorithm (and, thus, its computational
resource requirements) was selected due to a stated
assumption that traffic is chaotic, which is not consistent with
the daily and weekly traffic trends for voice and data com-
monly observed in today’s mobile networks [7, 8].
Thus, while these algorithms provide strong steps for-
ward in dynamic resource allocation between LTE nodes,
there is still a critical need for a SON algorithm which can
coordinate the LTE resources in time for the scheduler, not
after it is needed, with proper safeguards for interference
avoidance, and within reasonable means for computational
requirements. To this end, this paper proposes the follow-
ing pro-active SON solution.
Fig. 1 Resource set
relationships
5 Solution overview
The goal of the proposed solution is to create a SON
algorithm which accomplishes the following:
• incorporates ICIC
• will allow dynamic reallocation of resources based on
fluctuating traffic levels
• will account for anomalies such as sudden hot-spot
appearances
• will allocate resources in advance of a traffic spike rather
than in response to the traffic spike, eliminating loss or
delay due to execution time of a reallocation algorithm
First, we propose using constructs from machine learn-
ing to predict traffic levels given traffic metric inputs at the
eNodeB. This allows the eNodeB to not only quantify their
current traffic levels, but also the expected traffic levels for
the next Δt time interval. These estimated traffic levels can
be compared against the current resource allocation of the
eNodeB, which can then determine if a resource shortage is
likely to occur. Thus, the machine learning output serves as
a triggering event for the SON algorithm.
Second, we propose a distributed, dynamic SON algo-
rithm which allows a node that has predicted it will enter
an overload state (i.e. where traffic exceeds the available
resources) to negotiate the transfer of resources from its
neighbors. This algorithm draws from the area of distrib-
uted mutual exclusion, specifically the Ricart–Agrawala
algorithm [9], as a basis for determining which node is
allowed to use each specific resource. Optimizations to the
general form of the Ricart–Agrawala algorithm are
explored for this use case, specifically to limit the lengths
of dependency chains for resource requests.
The high-level flow of the proposed SON algorithm is
depicted in Fig. 2.
It is important to note that each portion of this proposal
could be extracted and used independently of the other. For
example, an implementer may wish to utilize the distrib-
uted mutual exclusion SON algorithm without taking
advantage of the pro-active machine learning trigger. In
this case, the implementer would sacrifice the ability to
predict and reallocate resources prior to a traffic spike, but
still retain the benefits of the distributed, dynamic resource
allocation protocol. This may result in temporary overload
conditions during the SON protocol execution until the
resources are reallocated. However, this approach may ease
the overall implementation by allowing a reactive trigger.
This paper will consider the requirements and benefits of
the entire algorithm.
6 Predicting bandwidth requirements
The machine learning tool chosen for the task of predicting
imminent traffic loads for a node is support vector regression
(SVR), using a Gaussian radial basis function (RBF) kernel.
SVR is a statistical learning model which maps input vectors
into a higher-dimensional space via a kernel function and
makes predictions based on calculated weights of the avail-
able ‘‘support vectors’’. In this case, the input vector will
consist of a measure of eNodeB downlink resources used at a
given time as well as pertinent traffic metadata, such as the
time of day and the day of the week. The output would be the
number of RBs required to process the expected load.
The reason for choosing the SVR construct is twofold: its proven effectiveness in the field and its light computational requirements. SVR has been effectively
used for a wide range of applications, including traffic
control [10] and geo-science data analysis [11]. Further, as
noted in [12] and shown in Eqs. 4 and 5, the computational requirements to calculate the predicted traffic load are minimal. An important caveat is that training the SVR
instance does require significantly more computational
resources, as it must solve for the support vector weights
Fig. 2 High-level flowchart of proposed SON algorithm
over the entire collection of captured training data. How-
ever, this is mitigated by scheduling these training calcu-
lations to non-peak traffic times each day on the LTE node.
A generalized formula for the SVR model is provided by
the following equation.
f(x⃗) = w0 + Σ_{i=1}^{m} wi k(x⃗i, x⃗)    (4)
where
• x⃗i is a support vector, derived from the training data set collected over a recent sample period
• x⃗ is the input vector, consisting of the current traffic metric data and metadata
• wi is the weight assigned to a support vector, calculated by the SVR training exercise
• w0 is the bias constant
The RBF kernel k is defined as:
k(x⃗i, x⃗) = e^{−‖x⃗i − x⃗‖² / (2σ²)}    (5)
where σ is a constant which controls the amount of "smoothing" in the function f [12]. In other words, σ defines the width of each Gaussian, with a smaller σ implying that the support vector(s) closest to the input vector contribute more significantly to the overall result. This becomes intuitive by recognizing that the kernel function in Eq. 5 is closely related to distance-weighted regression [13]. Through experimentation, we propose a relatively low value of σ = 0.1 for the SVR implementation, which limits the overall smoothing of the function f.
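Evaluating Eqs. 4 and 5 is inexpensive once the weights are known. A minimal sketch in Python/NumPy (the support vectors and weights below are illustrative placeholders, not trained values):

```python
import numpy as np

def rbf_kernel(xi, x, sigma=0.1):
    """Gaussian RBF kernel of Eq. 5: exp(-||xi - x||^2 / (2*sigma^2)),
    with the paper's proposed sigma = 0.1 as the default."""
    d = np.asarray(xi, dtype=float) - np.asarray(x, dtype=float)
    return float(np.exp(-d.dot(d) / (2.0 * sigma ** 2)))

def svr_predict(x, support_vectors, weights, w0):
    """Evaluate Eq. 4: f(x) = w0 + sum_i wi * k(xi, x)."""
    return w0 + sum(w * rbf_kernel(sv, x) for sv, w in zip(support_vectors, weights))
```

The per-prediction cost is one kernel evaluation per retained support vector, which is what keeps the runtime footprint on the eNodeB small.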
A critical integration detail of the SVR is to choose input
variables which significantly affect the target values. Traffic
measurement data from deployed mobile networks show that
the time of day and the day of the week significantly con-
tribute to the estimated load of the system, making them ideal
candidates for inclusion in the input vector. Of course, time
and day variables are not sufficient on their own for an input
vector, as traffic is not likely to always be equal at, say, 8 a.m.
on a Monday. In addition to the time variables, a feedback
mechanism is also incorporated which will input the last
measured resource load. This resource load incorporates
both the average amount of downlink resources scheduled as
well as the average data queue size.
As such, the support and input feature vectors are defined
as x~i ¼ time of day; day of week; lasttraffic measurementf g.Thus, using a Boolean for each hour of the day and day of the
week, we have a x~i 2 <32 with a mapping f : <32 ! <.
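One way to realize this 32-dimensional encoding is sketched below (Python; the exact index layout of the one-hot fields is our assumption, as the paper does not specify it):

```python
def encode_input(hour, day_of_week, last_traffic):
    """Build the 32-dimensional input vector: 24 hour-of-day Booleans,
    7 day-of-week Booleans, and one traffic measurement in [0, 1]."""
    v = [0.0] * 31
    v[hour] = 1.0                # hour in 0..23
    v[24 + day_of_week] = 1.0    # day in 0..6 (0 = Monday, by assumption)
    v.append(last_traffic)       # last measured resource load, normalized
    return v
```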
Another important note from LTE traffic analysis is that
voice and data usage trend differently in each time seg-
ment. For example, ‘‘off-peak’’ hours for voice are gener-
ally defined as after 7 p.m. or 8 p.m. on a weekday, which
is generally shown to be a peak time for web browsing data
traffic [7, 8]. Thus, multiple SVR instances will be required
to properly model the expected traffic loads for a single
LTE node.
To account for this behavior, our solution proposes
unique instances to predict voice and data traffic loads for
both the inner and outer regions of the cell. In the model
used by this paper, the LTE guaranteed bitrate (GBR)
traffic classes are dominated by voice traffic while the Non-
GBR traffic classes are dominated by data traffic. Thus, the
voice SVR instances will track GBR traffic, while the data
SVR instances will track the non-GBR traffic.
To optimize the feedback mechanism of each SVR instance,
the last traffic measurement value will consist only of the
component of the observed traffic for which the SVR instance
predicts. For example, if the SVR instance is providing esti-
mations for the GBR traffic on the inner region, the last traffic
measurement will be equal to the GBR traffic observed from
the previous time interval in the inner region only.
In addition, to combat noise in training samples, an ε-insensitive loss function will be employed, in which the input vector will not contribute to the overall weight if it is within ε of the current predicted function. The loss function provides an additional advantage of limiting the number of support vectors to maintain. Sparse data sets increase computational speed and prevent over-fitting of the training data. By normalizing the input vectors to the [0, 1] range, a common ε = 0.1 value will be used in the SVR simulation.
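The ε-insensitive loss is zero inside the ε tube around the prediction and linear outside it, which can be stated in two lines (a sketch, using the ε = 0.1 value proposed above as the default):

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Epsilon-insensitive loss: zero within eps of the prediction,
    linear in the residual beyond it."""
    return max(0.0, abs(y_true - y_pred) - eps)
```

Training samples whose loss is zero contribute nothing to the weight update, which is what keeps the set of retained support vectors sparse.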
With the input vector and loss function defined, and with a set of m training data elements {(x⃗1, y1), (x⃗2, y2), …, (x⃗m, ym)} collected by an active eNodeB, each SVR instance
support vector. As mentioned previously, training the SVR
instance carries an additional computation expense,
prompting the recommendation to execute the training
exercise nightly during low-traffic periods for voice and data.
The general form of the training exercise is to solve an optimization problem [14] to create a hyperplane defined by w⃗ and b, with slack parameters ξ, ξ* as upper and lower constraints:

min_{w,b,ξ,ξ*}  (1/2) w⃗ᵀw⃗ + C Σ_{i=1}^{m} ξi + C Σ_{i=1}^{m} ξ*i

subject to
  w⃗ᵀφ(xi) + b − yi ≤ ε + ξi,
  yi − w⃗ᵀφ(xi) − b ≤ ε + ξ*i,
  ξi, ξ*i ≥ 0, ∀i = 1, …, m    (6)
where C is a constant > 0 and φ(xi) maps xi into a higher-dimensional space. Given that w⃗ is a high-dimensional vector, this optimization is difficult to solve, prompting focus on the dual problem:
min_{α,α*}  (1/2)(α − α*)ᵀQ(α − α*) + ε Σ_{i=1}^{m} (αi + α*i) + Σ_{i=1}^{m} yi(αi − α*i)

subject to
  eᵀ(α − α*) = 0,
  0 ≤ αi, α*i ≤ C, ∀i = 1, …, m    (7)
where Qij = k(x⃗i, x⃗j) and e = [1, …, 1]ᵀ is a vector of all ones. Solving for the Lagrange multipliers α, α* yields the following approximation function, matching the general form in Eq. 4:

f(x⃗) = Σ_{i=1}^{m} (αi − α*i) k(x⃗i, x⃗) + b    (8)
For excellent background material on kernel functions and
support vector regression, please refer to [13, 15–17].
7 Reallocating resources
Upon determining an overload state, a node must be able to
negotiate the transfer of resource blocks with its neighbors.
As noted by Prakash et al. [18], a distributed spectrum
allocation problem shares many attributes with a distrib-
uted mutual exclusion problem. Instead of negotiating for
the execution of a critical section, the nodes negotiate the
use of common resources. Prakash et al. successfully
applied this principle in adapting the Ricart–Agrawala
distributed mutual exclusion protocol for channel-based
networks. However, due to the stark differences between
OFDMA and channel-based allocation schemes, the algo-
rithm presented by Prakash et al. cannot directly be exe-
cuted in an LTE network. Specifically, the previous
algorithm negotiated the transfer of a channel dedicated to
a single voice call. In LTE, a resource block (RB) can be
used for multiple calls or data sessions. Conversely, a
single call or data session normally spans multiple RBs due
to frequency diversity and scheduling logic implementa-
tions. In addition, each call and data session in LTE is
assigned unique QoS characteristics as opposed to each call
in previous wireless technologies essentially being inter-
changeable. Thus, while the use of distributed mutual
exclusion logic can be carried forward, the algorithm for
resource transfer must be fundamentally rewritten for LTE.
7.1 General distributed mutual exclusion algorithm
As a basis, the algorithm proposed by this paper starts with
the Ricart–Agrawala distributed mutual exclusion algo-
rithm. As a brief overview of the algorithm, each process
i has a set of processes, denoted Ri, from which it requires
permission to enter a critical section. In Ricart–Agrawala,
Ri is the set of all processes. Messages are timestamped via
Lamport’s clock [19]. When i wishes to enter the critical
section, i sends a timestamped REQUEST message to each
process in Ri. Upon receipt of a REQUEST message, a
process j will perform one of the following actions:
• If process j is not executing or requesting access to the
critical section, process j sends a REPLY message.
• If process j is also requesting access to the critical
section, but the REQUEST from process i has a smaller
timestamp, process j sends a REPLY message.
• Otherwise, process j defers sending the REPLY to
process i until process j has completed execution of its
critical section.
When process i receives a REPLY message from each
process in Ri, it is allowed to enter the critical section.
Upon completion of the critical section, a process will send
any deferred REPLY messages, allowing the next process
to unblock and enter the critical section.
The Ricart–Agrawala algorithm requires 2(|Ri| − 1) messages per critical section request: (|Ri| − 1) REQUEST messages and (|Ri| − 1) REPLY messages.
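The REQUEST-handling rules above can be sketched as a single decision function (Python; the NodeState structure and the tie-break on node id are illustrative assumptions consistent with Ricart–Agrawala's use of Lamport timestamps):

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    node_id: int
    requesting: bool = False   # node has an outstanding REQUEST of its own
    in_cs: bool = False        # node is executing its critical section
    req_ts: int = 0            # Lamport timestamp of this node's own request

def on_request(me, sender_ts, sender_id):
    """Decide whether to REPLY immediately or defer, per Ricart-Agrawala.
    Equal timestamps are broken by node id."""
    if not me.requesting and not me.in_cs:
        return "REPLY"
    if me.requesting and (sender_ts, sender_id) < (me.req_ts, me.node_id):
        return "REPLY"
    return "DEFER"
```

Deferred replies are sent once the local critical section completes, which is the mechanism the later sections adapt for releasing contested resource blocks.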
While the Ricart–Agrawala algorithm is proven to
ensure mutual exclusion, several adaptations and optimi-
zations are required to develop a distributed SON algo-
rithm for an LTE network.
The most obvious adaptation is to convert critical sec-
tion execution to negotiating access to RBs in the outer
region. This can be achieved by simply equating the use of
an RB in the outer region to a critical section in a dis-
tributed mutual exclusion algorithm. Instead of concurrent
access to shared resources in a critical section causing
problems such as consistency errors, the concurrent access
in LTE will cause interference between neighboring cells.
As with critical section execution, all neighboring nodes
which potentially have access to the shared resource must
grant access to a node which is requesting to use the resource.
So, in the case of LTE, Ri becomes all nodes within inter-
ference range of the outer ring of node i, or Ni from Eq. 2.
7.2 Determining request sets
As opposed to a process in a distributed mutual exclusion
algorithm, a node in this algorithm is only aware of the
number of RBs to request and not necessarily which RBs to
request. Since each node is autonomous in its resource
scheduling logic, a node is not fully aware of which RBs
are available for transfer from a neighboring node at any
given moment in time. Thus, the node needs to determine a
‘‘request set’’ to fulfill its RB needs.
A first step in determining a request set is to determine if
there are RBs which are not in use in any of the
neighboring nodes’ outer region as well as its own outer
region. Such a gap can be created during initialization of
the network or, more commonly, after a neighboring node
has transferred a set of RBs to a node outside of interference range. In other words, if a node j ∈ Ni transfers RB z to a neighbor k ∉ Ni, then node i could request RB z without impacting the current resource allocation for node j. More formally, this search for ''vacated'' RBs is defined as node i determining whether an RB exists in S which does not exist in ⋃y∈Ni Sy.outer ∪ Si.outer.
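This vacated-RB search is a direct set difference. As a sketch (Python; the dictionary-based representation is an illustrative assumption):

```python
def vacated_rbs(i, S, outer, neighbors):
    """RBs in S used neither by node i's outer region nor by any
    neighbor's outer region; these can be claimed without a transfer."""
    used = set(outer[i])
    for y in neighbors[i]:
        used |= outer[y]
    return S - used
```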
The second, more intrusive, step in determining a
request set is to query neighboring nodes in Ni for resources
which can be borrowed from their outer regions. To this
end, a node will query one of its neighbors for an ‘‘offer’’
of available RBs, which will be returned if the queried
node has RBs to share. Then the node needs to confirm that
the RBs offered by this neighbor are not being used by
other neighbors.
For system-wide traffic prioritization, a node must also
ensure that RBs regularly used for GBR traffic are not
shared to a neighboring node for the neighboring node’s
Non-GBR traffic. Thus, the request sets must be delineated
as GBR and Non-GBR request sets.
7.3 Completing the resource transfer
With the generated request sets, a node can send the
request message for obtaining access to the shared resource
as in the Ricart–Agrawala algorithm to all nodes in Ni.
However, instead of negotiating each resource block
transfer individually, an eNodeB will request access to its
entire request set at one time. Upon receipt of the request
message, the queried node will reply with any resources
that are ‘‘blocked’’ due to a resource conflict with the
request sets. This blocking mechanism ensures ICIC is
enforced, preventing a neighboring node from acquiring a
resource which would conflict with a node’s outer region.
After all reply messages are received, the requesting
node will finalize the transfer by sending a message to all
members of Ni which identifies any blocked resources. As
such, each node in Ni will only remove the request set
minus the blocked resources from its outer region alloca-
tion, and only after the transfer is finalized.
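The reply-and-finalize exchange can be sketched with set operations (Python; the function names and the explicit offer parameter are illustrative assumptions about how a node tracks its own pending offer):

```python
def blocked_rbs(request_set, my_outer, my_offer=frozenset()):
    """A queried neighbor blocks any requested RB that conflicts with its
    outer region, excluding RBs it offered to transfer itself."""
    return (request_set & my_outer) - my_offer

def on_finalize(request_set, all_blocked, my_outer):
    """On the finalize message, release only the granted RBs (the request
    set minus every neighbor's blocked RBs) from the outer allocation."""
    return my_outer - (request_set - all_blocked)
```

Because a neighbor removes RBs only after the finalize message, no RB is ever simultaneously usable in two overlapping outer regions, which is the ICIC guarantee argued in Sect. 7.4.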
7.4 Correctness
Before moving into optimizations, at this point it is still
fairly obvious that the algorithm provides a distributed
means of resource sharing without risk of interference
between neighboring cells.
A high-level proof by contradiction can be presented as
follows: Provided that initialization created non-
overlapping outer region allocations, assume the algorithm
results in neighboring nodes i and j each using resource
w in their outer rings, creating interference. Walking
through the possible cases, either i and j each received
w when requesting resources or one received w as part of
the resource sharing algorithm while its neighbor was still
allowed to use w.
For the first case, Ricart–Agrawala would prevent both
i and j from each simultaneously claiming w, as the node
with the higher timestamp would have been forced to defer
the request until the other completed the transfer of
w. Without loss of generality, assume the request from node
i has the smaller timestamp. Since i is requesting resources,
by definition it does not have resources available to share.
When node i sends the deferred reply to j, it will not offer or
approve the transfer of w as w is assigned to traffic in node
i. Thus, node i would block the transfer of resource w to node
j, preventing node j from also claiming w.
For the second case, and again without loss of gener-
ality, assume i was the requesting node and j initially
owned w. If i includes w on its request set and j does not
include w in a blocked list, w is placed on a transfer list for
i. When i finalizes the transfer, node j removes w from its
allocation and i is allowed to start using w. For w to be used
in both i and j simultaneously, either i started using
w before the transfer was finalized or j continued using
w after the transfer was finalized, neither of which are
allowed in the algorithm.
7.5 Message complexity
These modifications to the base Ricart–Agrawala algorithm
significantly impact the message complexity of the algo-
rithm. First, the number of nodes is greatly reduced from
|Ri| to |Ni|, which in the absolute worst case is limited to 32
by 3GPP standards [20]. Second, the algorithm is executed
for the entire set of requested resources, as opposed to a 1:1
mapping of requests to critical sections. Thus, the message
complexity improves from once per critical section to once
per Δt.
However, the number of messages in each pass is also
altered due to the nuances required for the LTE environ-
ment. Specifically, the ‘‘offer creation’’ process will require
two additional messages per execution round and the
completion notification will require an additional |Ni| - 1
messages. Finally, the execution may have to parse mul-
tiple offers (up to |Ni| - 1 worst case), in addition to the
‘‘vacated’’ resource round, until the resource deficit is
fulfilled.
Thus, for each Δt interval, the worst case message complexity of a requesting node is O(|Ni|²). Improvements
to O(|Ni|) are possible by techniques such as aggregating
offers across all of Ni (as opposed to processing offers one
at a time), but doing so increases the likelihood of a
message being deferred/blocked at a given neighbor. Once
an offer is presented, a node would not present another
offer of available resources until a decision was made on
the first. Otherwise, there is a possibility of offering
overlapping resources, violating ICIC. Thus, with the
message blocked for the entire duration of the request
process, the end-to-end time delay grows significantly. In
the current form, the increased message count is the trade-
off for a relatively small end-to-end time delay, which will
be further explored in the next section.
However, applying heuristics to the creation of the
request set would improve both the message complexity
and end-to-end time delay. When an offer is received, a
node may review the outer region deployments of its
neighbors to aid in choosing a request set that is least likely
to be rejected/blocked. While the outer region resource
deployment in the neighbors may also be changing at this
time, applying these heuristics would improve the number
of resources approved from each offer, reducing the num-
ber of offers required and eliminating several rounds of
messaging.
7.6 Limiting dependency chains
While the current form of the proposed algorithm will
provide a safe mechanism for resource sharing, an opti-
mization to the algorithm can be easily applied to greatly
improve execution time and fairness. This optimization
limits the length of ‘‘waiting chains’’ (or ‘‘dependency
chains’’), as defined by Lynch et al. [21].
As background on the topic, Lynch found that in dis-
tributed systems consisting of multiple users asynchro-
nously requesting from a pool of shared local resources,
long dependency chains can be created. In a dependency
chain, a user is blocked from executing its critical section,
waiting for a resource currently locked by another user,
which in turn is blocked waiting on a locked resource from
another user, and so on. Translated to this application,
consider if neighboring nodes i and j each send request
messages, with i having the lower timestamp. Node j would
then block waiting for i’s request to complete. Further, if
node k ∈ N_j then sends a request message, k is blocked by
j, which is still blocked by i. This chain can potentially
continue to build across the network.
Dependency chains for the general form of Ricart–
Agrawala can extend to the number of processes in the
system. Thus, in the case of this algorithm, the longest
potential dependency chain would be the number of eNo-
deBs deployed in the LTE network, creating significant
delays in execution as nodes block for the length of the
chain. The algorithm by Prakash et al. [18] is also susceptible to
this dependency chain problem.
As such, we propose to integrate a graph-coloring
solution proposed by Lynch which creates a relatively
small upper limit on the length of any dependency chain.
Lynch discovered that by performing vertex coloring on a
graph where the resources are the vertices and an edge
exists between resources with a common user, the colors of
vertices can create an order in which to request resources
such that the longest possible chain will be dependent on
the number of colors in the graph. Since the number of
colors is tied to the number of common users for a shared
resource, the potential dependency chains will be signifi-
cantly less than the chains dependent on the number of
resources or number of users in the system.
To properly integrate Lynch’s solution, a mapping must
be created between the "resources" and "users" in the
distributed systems model and the LTE network model. A
user is an entity which requires permission to perform a
given task, while a resource (at this level of abstraction) is
an entity whose permission must be secured by the user
prior to performing the task. Once the user has secured
permission for a given set of resources, it can execute its
task.
Intuitively, a ‘‘user’’ will map to the ‘‘eNodeB’’. How-
ever, a ‘‘resource’’ will map to the ‘‘neighbors of the
eNodeB’’, not the individual RBs.
The more intuitive mapping of ‘‘resources’’ to ‘‘RBs’’
does not map to the Lynch solution. In the proposed
algorithm, an eNodeB is not waiting for a neighbor to
release a unilaterally determined RB. Instead, for pro-
cessing to occur, the eNodeB requires a known quantity of
interchangeable RBs to be added to its outer region. Fur-
ther, since the node is requesting an entire set of RBs at any
given time, the eNodeB blocks when its neighbor has any
outstanding resource request which has a timestamp less
than its own request. In other words, the actual RBs
requested are somewhat inconsequential in this matter.
Instead, as seen in the example above, the creation of the
dependency chain in the proposed algorithm is more tightly
linked to the neighbors of the eNodeB.
With this classification, the vertices of the graph will be
the eNodeB outer regions, with edges between eNodeBs
sharing a common neighbor. For example, consider node
13 in the example, wrap-around network presented in
Fig. 3. Assuming omnidirectional antennas in the cells, the
interference relationship (and, thus, the "neighbor" status)
between the network nodes is fairly obvious, as highlighted
in the figure.
The next step is to create a graph with edges between
nodes with a common neighbor. In the example, a common
neighbor can be found for all nodes within two hops of
node 13. As such, the edges for node 13 can be shown in
Fig. 4, which consist of edges to all nodes in the example
except nodes 1, 2, 4, 5, 21 and 25. This exercise can be
repeated for all nodes to create the required graph for the
dependency chain graph-coloring exercise. The resulting
colors will create an ordering for resource requests to be
used by the proposed SON algorithm. Further, the number
of colors, which will equate to the number of nodes within
two hops of any given node, becomes the upper limit of the
dependency chain size in the proposed algorithm.
8 Proposed SON algorithm
At a high-level, the SON resource allocation algorithm
introduced by this paper executes the following phases:
Start-up phase
1. Initialize inner and outer region RB allocation sets per
a static allocation scheme.
2. Collect traffic measurements as training data for the
SVR instances.
3. When a sufficient number of training samples have
been collected, train each SVR instance. Re-train each
SVR instance daily from this point forward.
Execution phase
1. Predict the traffic load for GBR and Non-GBR traffic
in the inner and outer regions via the four SVR
instances.
2. If an overload state is predicted, execute a distributed
algorithm based on the SVR output to reallocate RB
assignments between neighboring nodes.
8.1 Initialization
Initially, all cells are deployed with a reuse factor of 1. How-
ever, the outer region of each cell must be allocated with a
subset of RBs which does not overlap any of its neighbors’ outer
region RB allocation. Thus, the initialization phase must create
these non-overlapping sets. To this end, an offline vertex graph-
coloring exercise is leveraged, where graph G = (V, E) is
created such that V = {v | v is an eNodeB} and E = {(u, v) | u,
v ∈ V, u is within interference range of v}. The vertex coloring
exercise assigns each vertex a color c_x from the set of colors C such that no
neighboring vertices have the same color. Each color in the
resulting colored graph G is assigned a particular subset of RBs,
where {RBs designated for c_x} ∩ {RBs designated for c_y} = ∅ for any two distinct colors.

Fig. 3 Nodes within interference range of node 13

Fig. 4 Dependency chain graph subset—node 13
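As an illustrative sketch of this per-color RB assignment (the function and data layout are ours, not the paper's), each color can simply receive a disjoint slice of the RB index space:

```python
def rb_partition(colors, total_rbs):
    """Assign each color a disjoint slice of the RB index space, so
    cells with the same color (never neighbors, by the coloring) reuse
    the same outer-region RBs while neighboring cells' sets never overlap."""
    n_colors = max(colors.values()) + 1
    per_color = total_rbs // n_colors
    return {c: set(range(c * per_color, (c + 1) * per_color))
            for c in range(n_colors)}
```

A node's initial outer region set is then the RB subset associated with its color.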
As these steps need only be performed on initialization
of the network or when a node is added to a previously
colored network, the graph-coloring exercise can be
implemented from a centralized network planning and
engineering location. However, if desired, the implemen-
tation can further leverage a distributed graph-coloring
SON algorithm such as the one presented by Gerlach [22].
After initialization, the nodes stay apprised of their
neighbors' outer region allocations via periodic update
messages. These update messages consist of the current
S_i.outer RB set, which is the current set of RBs allocated to
the outer region of node i.
Per the discussion on dependency chains, a second round
of graph-coloring is also required to combat the dependency
chains common in distributed mutual exclusion algorithms.
In this second round, the outer regions of the eNodeBs will be
the vertices of the graph, with edges between nodes with
common neighbors. From graph G, representing the physical
layout of the eNodeBs, the neighbors of each eNodeB
are already known. Therefore, graph G′ = (V′, E′) is created
where V′ = {v′ | v′ is the outer region of an eNodeB} and
E′ = {(u′, v′) | u′ and v′ share a common neighbor in G}.
Again, an offline vertex graph-coloring exercise or a dis-
tributed graph-coloring SON algorithm is performed, which
ensures eNodeBs sharing a common neighbor do not share
the same color. The colors are ordered, creating an ordering
for resource requests by an eNodeB.
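The second-round coloring can be sketched as follows, with graph G supplied as a neighbor map; a simple greedy coloring stands in here for whichever offline or distributed coloring scheme [22] is actually deployed:

```python
from itertools import combinations

def request_order_colors(neighbors):
    """Greedy vertex coloring of G': two nodes conflict when they share
    a common neighbor in G, and the resulting color index gives the
    order in which each eNodeB issues its resource requests."""
    nodes = sorted(neighbors)
    conflict = {v: set() for v in nodes}
    for u, v in combinations(nodes, 2):
        if neighbors[u] & neighbors[v]:      # common neighbor in G
            conflict[u].add(v)
            conflict[v].add(u)
    colors = {}
    for v in nodes:                          # smallest color unused by conflicts
        used = {colors[w] for w in conflict[v] if w in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors
```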
8.2 Building training data
After initialization of the network, training data must be
collected for the SVR instances. During this phase, as the
eNodeB processes traffic, it records input vectors for each
SVR instance and records the observed value as each SVR
instance’s expected output. The training data vectors are
recorded at pre-defined time intervals (Δt), which is also
the interval at which the SVR instances will later be run.
When a sufficient number of training data vectors and their
associated expected output have been recorded, the SVR
instances are trained on this data and subsequently are
enabled for traffic predictions. The time interval Δt will be
further explored in Sect. 9.
An interesting note is that, per the nature of LTE, a
particular RB in a radio frame can be designated as part of
a GBR bearer in one radio frame, free the next frame, and
part of a non-GBR bearer the next. Thus, the algorithm
cannot simply count a particular RB as a part of a specific
traffic class with a snapshot of an arbitrary radio frame.
Instead, the algorithm will keep running percentages of
times each RB is allocated for a given type of traffic: GBR,
Non-GBR, or unallocated. Let P_{i,r} be the running percentage
of assignments of type i to resource block r. When the
training data is collected, the percentages for each traffic
type across all RBs will be summed, providing an average
number of RBs used for the traffic type.
An average must also be maintained for the queued traffic
for each traffic type. Let Q_{b_i} be the bytes stored in the data
queue for bearer b, which is of type i. Note that by considering
the queued traffic load in addition to the node
throughput, the SVR is allowed to output values above the
egress traffic threshold of the eNodeB, which would be the
case for a node in overload. Otherwise, the output of the
function will always show the node is under or at capacity,
even if a large amount of traffic is queued. In this case, the
current queue size, measured in bytes, must be converted to
an RB-based measurement, using a conversion factor Cb (for
user device owning bearer b). This can be performed at the
eNodeB, as the eNodeB is aware of each device’s current
modulation and coding scheme (MCS). For example, assume
a bearer is reporting a wideband channel quality index of 15,
which is associated with 64QAM and a high code rate [23].
Simplifying to ignore some control traffic overhead, the
factor to convert bytes queued to RBs queued would be:

C_b = (6 bits/symbol) × (7 symbols/RB) × (948/1024 code rate) × (1 byte/8 bits) = 4.86 bytes/RB
With the number of downlink resource blocks denoted by N_RB^DL,
this calculation can be described as:

∀ i ∈ {GBR, Non-GBR, unallocated}:

Avg RBs for type i = Σ_{r=1..N_RB^DL} P_{i,r} + Σ_{b ∈ bearers of type i} Q_{b_i} / C_b

where dividing the queued bytes Q_{b_i} by the conversion factor C_b (in bytes/RB) expresses each bearer's queue in RBs.
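As a sketch of this bookkeeping (data structures and names are illustrative, not from the paper), the per-type average combines the running allocation percentages with the byte-to-RB conversion of the queued traffic:

```python
def avg_rbs_per_type(P, queues, C):
    """Average RB demand per traffic type: summed running allocation
    percentages across RBs, plus queued bytes converted to RBs via
    each bearer's MCS-derived conversion factor C_b (bytes per RB)."""
    out = {}
    for traffic_type, fractions in P.items():
        in_use = sum(fractions)                       # sum over r of P_{i,r}
        queued = sum(q_bytes / C[bearer]
                     for q_bytes, bearer in queues.get(traffic_type, []))
        out[traffic_type] = in_use + queued
    return out

# CQI 15 -> 64QAM, 6 bits/symbol, code rate 948/1024 (paper's simplified example)
c_b = 6 * 7 * (948 / 1024) / 8   # bytes per RB
```

The final line reproduces the CQI-15 worked example, yielding roughly 4.86 bytes per RB.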
8.3 Predict overload state
Once the SVR instances are trained, the eNodeB executes
each instance at the pre-defined Δt time intervals to predict
the expected traffic load for the cell. The eNodeB must
then determine if it has sufficient resources to handle the
expected traffic load or if it finds itself in an overload state.
We define several unique overload states for specific
resource deficiencies:
• None: Node has sufficient resources.
• Aggregate_GBR: The predicted GBR traffic exceeds all
potential resources available (entire spectrum); it is
impossible to meet demand.
• Outer_GBR: The outer region has insufficient resources
to process the predicted GBR traffic.
• Outer_NGBR: The outer region has insufficient
resources to process the predicted Non-GBR traffic,
after all predicted GBR traffic is handled.
In addition, there is a "Sharing" flag defined to specify
if a node is allowed to transfer RBs to a neighbor, which is
set if the outer region has a sufficient surplus of RBs.
The overload state is defined by the following logic, run
at the eNodeB. Let NextOverloadState be the predicted
overload state of the node in the next Δt interval, Deficit_GBR
be the number of RBs required in addition to the
node's current RB set to meet the expected GBR traffic
load, and Deficit_NGBR be the number of RBs required in
addition to the node's current RB set to meet the expected
Non-GBR traffic load.
• If the sum of the SVR outputs for inner GBR and outer
GBR is greater than |S|, then NextOverloadState is set
to "Aggregate_GBR".
• Else, if the SVR output for outer GBR is greater than
|S_i.outer|, then NextOverloadState is set to "Outer_GBR"
and Deficit_GBR equals the outer GBR
SVR output minus |S_i.outer|.
• Else, if the SVR output for inner GBR is greater than
|S_i.inner|, then allow the inner region to service GBR
traffic by grabbing RBs from the outer region before
servicing the outer region Non-GBR traffic.
• If the SVR output for outer Non-GBR is greater than
the remaining outer RBs and no other overload state has
been reported, NextOverloadState is set to "Outer_NGBR"
and Deficit_NGBR equals the outer Non-GBR
SVR output minus the remaining outer RBs.
• If |S_i.outer| is less than or equal to a pre-defined
minimum RB threshold, then NextOverloadState
includes the "Non-Sharing" flag.
• If none of these conditions apply, then NextOverloadState
is set to "None" and "Sharing".
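A minimal sketch of this decision logic, assuming the SVR outputs arrive as a dictionary of predicted RB demands (the names and the exact handling of the Sharing flag are illustrative, not from the paper):

```python
def next_overload_state(svr, S, S_inner, S_outer, min_outer):
    """Classify the predicted overload state. svr holds predicted RB
    demand per region/type; S, S_inner, S_outer are the sizes of the
    total, inner-region, and outer-region RB sets."""
    state, deficit_gbr, deficit_ngbr = "None", 0, 0
    remaining_outer = S_outer - svr["outer_gbr"]
    if svr["inner_gbr"] + svr["outer_gbr"] > S:
        state = "Aggregate_GBR"
    elif svr["outer_gbr"] > S_outer:
        state = "Outer_GBR"
        deficit_gbr = svr["outer_gbr"] - S_outer
    elif svr["inner_gbr"] > S_inner:
        # inner region borrows outer RBs for GBR before outer Non-GBR is served
        remaining_outer -= svr["inner_gbr"] - S_inner
    if state == "None" and svr["outer_ngbr"] > remaining_outer:
        state = "Outer_NGBR"
        deficit_ngbr = svr["outer_ngbr"] - remaining_outer
    sharing = state == "None" and S_outer > min_outer
    return state, deficit_gbr, deficit_ngbr, sharing
```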
8.4 Negotiate RB transfer
If a node finds itself in an Outer_GBR and/or Outer_NGBR
overload state, it initiates the Ricart–Agrawala-based SON
algorithm described above to negotiate the transfer of RBs from
neighboring nodes. At a high-level, the approach is as follows:
1. Determine if any RBs have been vacated by previous
executions of this sharing algorithm and attempt to claim
these vacated RBs. A vacated RB is defined as an RB in S
which does not exist in (⋃_{y ∈ N_i} S_y.outer) ∪ S_i.outer.
2. Query the set of neighbors in ascending order of color
from G′, asking for their current overload status and, if
not in overload, a list of RBs eligible to share for GBR
and Non-GBR traffic.
3. For each response with resources available to share,
attempt to claim a set of RBs for both GBR and Non-
GBR traffic by asking each neighbor for permission.
Each neighbor can either grant permission for the
entire set or grant permission to a subset of the RBs
requested which excludes RBs conflicting with local
resource utilization.
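Step 1 above can be sketched as a simple set computation (names are illustrative):

```python
def vacated_rbs(S, Si_outer, neighbor_outers):
    """RBs in the full set S that appear neither in this node's outer
    region nor in any neighbor's outer region (S_y.outer for y in N_i)."""
    claimed = set(Si_outer)
    for outer in neighbor_outers:
        claimed |= set(outer)
    return set(S) - claimed
```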
9 Simulation and results
In the simulation exercise, the proposed algorithm is
executed across a set of 25 cells, represented by their
respective eNodeBs, for a simulated time of 2 weeks. To
increase the number of interfering neighbors, a wrap-
around network structure was used (see Fig. 3).
9.1 Traffic generation
To generate traffic for each node, traffic pumps were cre-
ated for voice and data traffic for each node.
The voice traffic pump was fed by an M/M/m queue,
with λ values varying per hour, roughly following voice
traffic trends in mobile networks today [7], and μ values
held constant throughout the day. After a voice call arrives,
voice packets are created per the 3GPP verification
framework [24], which defines VoIP traffic characteristics
for LTE validation. These characteristics include geometric
distributions for transitioning between talking and silent
states, per a voice activity factor of 50 %, a mean talk
duration of 5 s, and a 20 ms encoding assumption. The talk
spurts create packet trains, which drive the traffic rate for
voice applications. The traffic rate was then averaged over
a given time span (corresponding to the Δt value) and
recorded as input to the SON simulator.
The data traffic pump was also fed by an M/M/m queue,
but the λ values varied per data traffic trends observed in
high-speed mobile data traffic networks [8]. The number of
active data sessions then drives a collection of Pareto-distributed
subsources, which are then multiplexed together to
generate a self-similar traffic model [25, 26], simulating a
traditional data network [27–30]. In this model, the Hurst
parameter was maintained at 0.9. As with the voice traffic,
the self-similar model would create data packet trains,
which drive the data rate for the node. The data rate was
then averaged over the desired time span and fed into the
SON simulator.
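The multiplexing of heavy-tailed ON/OFF subsources can be sketched as follows. This is an illustrative model, not the authors' implementation; it uses the standard result that Pareto ON/OFF periods with shape α yield a self-similar aggregate with Hurst parameter H = (3 − α)/2, so α = 1.2 targets the H = 0.9 used in the simulation:

```python
import random

def pareto_on_off_rate(n_sources, alpha=1.2, t_slots=60, seed=1):
    """Multiplex n ON/OFF subsources whose ON and OFF durations are
    Pareto-distributed; the per-slot aggregate load is the number of
    sources in the ON state at each slot."""
    rng = random.Random(seed)
    rate = [0] * t_slots
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < t_slots:
            dur = int(rng.paretovariate(alpha)) + 1   # heavy-tailed duration
            if on:
                for s in range(t, min(t + dur, t_slots)):
                    rate[s] += 1                      # one unit per active source
            t += dur
            on = not on
    return rate
```

The per-slot aggregate would then be averaged over the desired time span and fed to the SON simulator, as described above.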
To simulate ‘‘unexpected’’ hot-spots in the LTE net-
work, a node would generate a random Boolean value
every 30 min to determine if it would experience an
additional spike in traffic. If the node deems that it is a hot-
spot, the λ values which feed the traffic pumps were
multiplied by a factor of 3. This translates to much shorter
inter-arrival times, increasing the amount of traffic on the
node.
9.2 Determining Δt
To properly estimate imminent traffic loads, the Δt time
interval becomes a critical element. In an experimental
approach to determine the Δt value, the traffic pumps were
run for 5 h, capturing traffic metric statistics at various
time granularities. The time granularities under review
were 100 ms, 1 s, 10 s, 30 s, 1 min, 5 min and 10 min,
with overall metrics shown in Table 1 and Fig. 5.
A review of the results showed that the 1 min interval
was optimal in this case. The shorter granularities, such as
the 100 ms view, fluctuated wildly due to the burstiness of
the data-heavy traffic. At this time granularity, the indi-
vidual data queues would absorb the traffic bursts and
troughs, making it unnecessary to capture these peak data
points. Similarly, the 1 s traffic view still showed a rela-
tively high standard deviation.
On the opposite end of the spectrum, the 5 and 10 min
intervals did not adequately capture the traffic measure-
ments. The shorter-duration windows all follow a curved
trend line per the simulated traffic load. By the inherent
nature of the longer-duration windows, the temporal vari-
ations of the traffic measurements are masked, as evi-
denced in Fig. 5. Further, since the time interval is on the
order of minutes, data queuing will not be sufficient to
handle these variations. Thus, the resulting SVR output
based on this captured traffic data would not be as accurate
as desired.
For the remaining time scales, the statistical measures
are roughly in line with one another, meaning the SVR
output based on any of these inputs would be almost
identical. As such, there is little benefit to choose a time
scale below 1 min intervals. Thus, choosing a 1 min win-
dow creates an accurate summary of the input data while
providing more downtime for the SON algorithm, reducing
the computational, storage, and communication-related
resource strain on the hosting LTE node.
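The granularity sweep behind Table 1 can be sketched as follows (an illustrative helper, not from the paper):

```python
import statistics

def window_stats(samples, slots_per_window):
    """Average fine-grained (e.g. 100 ms) load samples into coarser
    windows and report the mean and population standard deviation of
    the window averages."""
    windows = [sum(samples[i:i + slots_per_window]) / slots_per_window
               for i in range(0, len(samples) - slots_per_window + 1,
                              slots_per_window)]
    return statistics.mean(windows), statistics.pstdev(windows)
```

Coarser windows smooth the burstiness of the input, which is why the standard deviation in Table 1 falls as the window grows.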
9.3 SON simulator
The nodes were given a set of traffic metric data points for
every minute of the 2-week window, with the SVR
instances executing every minute. The expected overload
state, calculated based on the output of the SVR instances,
had the potential to trigger the distributed SON resource
allocation algorithm.
As described previously, the SVR instances were
implemented as being statically trained at regular intervals.
The preference is to move this cost to low-traffic times to
limit the computational cost on the node. As such, the SVR
Table 1 Traffic metrics by measurement window granularity

Window   Mean (RBs)   Std dev   Min      Max
100 ms   109.92       68.55     94.22    587.11
1 s      109.92       38.27     106.33   286.49
10 s     109.96       31.02     107.34   200.80
30 s     109.99       30.11     107.68   188.19
1 min    110.08       29.55     107.34   183.35
5 min    111.01       27.10     107.54   167.32
10 min   111.92       23.63     110.42   156.96
Fig. 5 Traffic measurement window comparison
instances will be retrained every night at 2 a.m., which is a
low-traffic time for both voice and data.
To allow this nightly retraining of the SVR instances, each
node will store the most recent 3000 traffic data samples.
Note that the number of support vectors used in the SVR
execution will be less due to the loss function ("ε-tube").
The simulator implementation leveraged an existing,
proven SVR library named libSVM [14]. The libSVM-based
SVR instances were tuned per the parameters presented
in this paper for the simulation runs.
10 Results
The results of the simulation were extremely positive.
First, the output of the SVR instances showed that they
were able to identify the traffic trends presented, including
the ability to successfully identify the hot-spot use case
via the previous Δt traffic measurement feedback mechanism.
The Spearman correlation coefficient of the actual
and predicted loads was ρ = 0.999969, showing that a
trend upward or downward by the actual data traffic
coincided with an analogous change in the predicted load.
This is illustrated in Fig. 6, which maps the SVR-pre-
dicted load against the actual presented load on the sys-
tem over the 2-week span. Of course, as observed in
Fig. 6, the start-up period of the simulation contains no
predictions while the SVR instances are collecting their
initial training data.
Due to the extremely large number of data points, the
finer details of Fig. 6 are difficult to distinguish. It is
apparent that the macro trends are accurately captured, but
the graph does not provide enough confidence in the
accuracy at a smaller scale. To illustrate the solution
effectiveness at this level, a 10 h data set from the above
graph is magnified in Fig. 7 to show the minute-by-minute
comparison of actual versus predicted traffic loads.
The control traffic was also analyzed, to ensure the X2 links
between the eNodeBs were not being excessively saturated by
the algorithm. When resource requests are not occurring,
only one message is sent every 30 min, which contained a
12-byte payload in the simulation. When the node was a
neighbor being queried in a resource request, the total payload
sent by the node for that minute increased to a range of
80–250 bytes. Finally, when the node was requesting
resources from its neighbors, the total payload sent by the
node for the minute ranged from 2,400–3,100 bytes. Thus, in
all cases, the control traffic injected by the algorithm was
minimal, as depicted in an example 10 h window in Fig. 8.
As a result of effectively predicting imminent traffic
loads, aggregate data throughput in the network was dra-
matically improved. For a point of reference, the traffic
data was run through a static SFR solution, which is
common in wireless networks today. In this ‘‘baseline’’
solution, the allowed resource set for the node’s outer
region was pre-defined and static. However, the scheduling
behavior of the node for the inner and outer regions is
identical to the proposed SON solution.
Fig. 6 SVR predicted load
versus actual load presented
Fig. 7 SVR predicted load
versus actual load presented—
magnified
Fig. 8 Control traffic on X2 interface from sample node—magnified
The execution of the SON algorithm, given the SVR
input, showed marked improvement as opposed to a
baseline static SFR scenario. Figure 9 illustrates this point,
showing the throughput (positive y axis) and loss (negative
y axis) of an arbitrary node in the environment for both
resource allocation strategies. The SVR solution shows
significant increases in throughput and almost no loss.
While this figure shows a half-day of execution data
to better illustrate the throughput and loss differences, the
improvement was observed consistently throughout
the run. In short, when demand was projected to increase,
the actual traffic did in fact increase, and resources were
transferred in time to meet the expected demand.
11 Conclusion
As shown in the simulation, the proposed algorithm provides
a means for significant improvement in aggregate
network throughput. By divorcing the algorithm from the
constraints of a reaction-based approach, the proactive
nature of the solution allows network resources to be
dynamically reallocated to optimal locations for expected
traffic loads. Further, by leveraging a proven machine
learning tool in support vector regression, the estimates for
expected traffic loads are shown to be reasonably accurate,
providing an effective triggering mechanism for this
dynamic reallocation process.
References
1. Xiang, Y., Luo, J., & Hartmann, C. (2007). Inter-cell interference
mitigation through flexible resource reuse in OFDMA based
communication networks. In European wireless conference.
2. Sesia, S., Toufik, I., & Baker, M. (Eds.). (2009). LTE: The UMTS
long term evolution, from theory to practice. London: Wiley.
3. MacDonald, V. (1979). The cellular concept. Bell System Tech-
nical Journal, 58(1), 15–41.
4. Yum, T. S. P., & Wong, W. S. (1993). Hot-spot traffic relief in
cellular systems. IEEE Journal on Selected Areas of Communi-
cation, 11(6), 934–940.
5. Stolyar, A. L., & Viswanathan, H. (2009). Self-organizing dynamic
fractional frequency reuse for best-effort traffic through distributed
inter-cell coordination. Cambridge: INFOCOM, IEEE.
6. Jeux, S., Mange, G., Arnold, P. & Bernardo F. (2009). End-to-end
efficiency deliverable D3.3: Simulation based recommendations
for DSA and self-management, E3. https://ict-e3.eu/project/
deliverables/executive-summaries/E3_WP3_D3.3_090731_ES.pdf.
Accessed May 2011.
7. Gupta, A., Jharia, B., & Manna, G. C. (2011). Analysis of mobile
traffic based on fixed line tele-traffic models. International Journal
of Computer Science and Information Security, 9(7), 61–67.
8. Nokia Siemens Networks (2010). Mobile broadband with HSPA
and LTE—capacity and cost aspects. http://www.nokiasiemensnetworks.com/sites/default/files/document/MobilebroadbandA426041.pdf.
Accessed May 2011.
9. Ricart, G., & Agrawala, A. K. (1981). An optimal algorithm for
mutual exclusion in computer networks. Communications of the
ACM, 24(1), 9–17.
10. Vanajakshi, L., & Rilett, L. (2004). A comparison of the per-
formance of artificial neural networks and support vector
machines for the prediction of traffic speed. In Intelligent vehicles
symposium (pp. 194–199). IEEE.
Fig. 9 Example node throughput and loss comparison
11. Kanevski, M., Wong, P., & Canu, S. (2000). Spatial data mapping
with support vector regression and geostatistics. In 7th interna-
tional conference on neural information processing (pp.
1307–1311).
12. Herbrich, R. (2002). Learning kernel classifiers: Theory and
algorithms. Cambridge: The MIT Press.
13. Mitchell, T. M. (1997). Machine learning. New York: WCB/
McGraw-Hill.
14. Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support
vector machines. ACM Transactions on Intelligent Systems
and Technology, 2, 27:1–27:27. Software available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm.
15. Vapnik, V., Golowich, S. E., & Smola, A. (1996). Support vector
method for function approximation, regression estimation, and
signal processing. In Advances in neural information processing
systems vol. 9 (pp. 281–287). MIT Press.
16. Gunn, S. R. (1998). Support vector machines for classification
and regression. University of Southampton. Technical
Report. http://users.ecs.soton.ac.uk/srg/publications/pdf/SVM.pdf.
Accessed April 2011.
17. Smola, A. J., & Schölkopf, B. (2003). A tutorial on support vector
regression. NeuroCOLT, Technical Report TR-98-030.
18. Prakash, R., Shivaratri, N. G., & Singhal, M. (1995). Distributed
dynamic fault-tolerant channel allocation for mobile computing.
In 14th ACM symposium on principles of distributed computing.
19. Lamport, L. (1978). Time, clocks and the ordering of events in a
distributed system. Communications of the ACM, 21(7), 558–565.
20. 3GPP (2010). Evolved Universal Terrestrial Radio Access (E-
UTRA); Radio Resource Control (RRC); Protocol specification,
TS 36.331 v9.2.0.
21. Lynch, N. A. (1981). Upper bounds for static resource allocation
in a distributed system. Journal of Computer and System Sci-
ences, 23(2), 254–278.
22. Gerlach, C. G., Karla, I., Weber, A., Ewe, L., Bakker, H., Kuehn,
E., et al. (2010). ICIC in DL and UL with network distributed and
self-organized resource assignment algorithms in LTE. Bell Labs
Technical Journal, 15, 47–62.
23. 3GPP (2011). Evolved Universal Terrestrial Radio Access (E-
UTRA); Physical layer procedures, TS 36.213 v10.1.0.
24. 3GPP (2007), R1-070674: LTE physical layer framework for
performance verification.
25. Kulkarni, S. S. (2002). Adaptive load-balancing over multiple
routes in mobile ad hoc networks. USA: University of Texas at
Dallas.
26. Kramer, G. (2001). On generating self-similar traffic using
pseudo-Pareto distribution. University of California, Davis.
Available at http://www.glenkramer.com/ucdavis/papers/self_sim.pdf.
Accessed Oct 2010.
27. Leland, W. E., Willinger, W., Taqqu, M. S., & Wilson, D. V.
(1995). On the self-similar nature of Ethernet traffic. ACM SIG-
COMM Computer Communication Review, 25, 202–213.
28. Willinger, W., Taqqu, M. S., Sherman, R., & Wilson, D. V.
(1997). Self-similarity through high-variability: Statistical ana-
lysis of ethernet LAN traffic at the source level. IEEE/ACM
Transactions on Networking, 5, 71–86.
29. Willinger, W., Paxson, V., & Taqqu, M. S. (1996). Self-similarity
and heavy tails: Structural modeling of network traffic.
30. Crovella, M. E., & Bestavros, A. (1996). Self-similarity in world
wide web traffic: Evidence and possible causes.
Author Biographies
Michael Brehm is presently a
principal solution architect at
Microsoft for IPTV solutions
featuring Microsoft’s Media-
room platform. Michael’s pro-
fessional background revolves
around development and per-
formance optimization in dis-
tributed environments and
includes eleven patent filings in
the fields of IPTV, LTE, and
VoIP. Michael’s educational
background includes a Bache-
lor’s Degree in Computer Sci-
ence, a Master’s Degree in
Computer Science (Major in Software Engineering), and Ph.D. in
Computer Science, all from the University of Texas at Dallas.
Ravi Prakash received the
B.Tech. degree in computer
science and engineering from
the Indian Institute of Technol-
ogy, Delhi in 1990 and the M.S.
and Ph.D. degrees in computer
and information science from
The Ohio State University,
Columbus, in 1991 and 1996,
respectively. He joined the
Department of Computer Sci-
ence, University of Texas at
Dallas in July 1997, where he is
a Professor. During 1996–1997
he was a Visiting Assistant
Professor in the Computer Science Department, University of
Rochester. His areas of research are mobile computing, distributed
computing, and sensor networks. He has published his results in
various journals and conferences and has been involved in the orga-
nization of various IEEE and ACM conferences and workshops as
technical program chair, technical program committee member, etc.
He received the National Science Foundation CAREER award. He
has conducted research in the areas of efficient channel allocation and
location management for cellular systems, efficient dependency
tracking and causally ordered message delivery in mobile systems,
routing, node configuration and reliable broadcasting in mobile ad hoc
networks and vehicular ad hoc networks, energy-efficient routing and
contention-free channel access for sensor networks, and channel
access in wireless mesh networks and cognitive radio networks. He is
leading an NSF-funded project to build a wireless networking testbed.