
Multiple MAC Protocols Selection Strategies

Presented by Chen-Hsiang Feng

Outline

- Motivation and Goal
- Simulation Environment
- MAC Selection Strategies
- Conclusions

Motivation

Today's devices have multiple PHY and MAC layers. Examples:

- Cell phone: 3G, 4G, WiMAX, Wi-Fi, Bluetooth, Infrared, USB, ... each using an independent PHY/MAC

- Personal computer: multiple Ethernet cards, multiple Wi-Fi cards, 3G, USB, ... each using an independent PHY/MAC

Motivation - 2

Communication conditions change dynamically; none of these PHY/MAC pairs is definitively better or worse than the others.

Goal: if we can freely select any of the PHY/MAC pairs to use at run time, what is the best selection strategy?

Optimize Packet Success Rate

Outline

- Motivation and Goal
- Simulation Environment, Assumptions
- Methods
- Conclusions

System Model

- Can reach any neighbor using any MAC/PHY
- Currently only one node can select its MAC

Optimize packet success rate of that node

Traffic Model

Poisson traffic is not realistic (it is memoryless). See: Wide-Area Traffic: The Failure of Poisson Modeling (1995), by Vern Paxson and Sally Floyd.

If there is no memory, you can learn nothing more than the arrival rate λ.

Traffic Model - 2

In general, consecutive traffic samples are correlated in time.

Short-range dependence (SRD): ACF of the form ρ_k ~ e^{−βk}, β > 0

Long-range dependence (LRD): ACF of the form ρ_k ~ k^{−β} = e^{−β log k}, β > 0

Definition: ρ_k = E[(X_t − μ)(X_{t−k} − μ)] / σ²

Poisson vs. LRD

Example traces with similar mean traffic.

Generating LRD Traffic

Modeling Video Traffic Using M|G|∞ Input Processes: A Compromise Between Markovian and LRD Models (1998), by Marwan M. Krunz and Armand M. Makowski

Use M/G/∞ input processes:
- M: exponential inter-arrival time distribution (Poisson)
- G: general (arbitrary) service time distribution
- ∞: infinite number of servers

The pdf of G can be derived from the ACF ρ_k.
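As an illustration of the M/G/∞ idea, here is a minimal Python sketch. The function name, the per-slot Poisson arrival rate, and the choice of a Pareto distribution as the heavy-tailed G are assumptions for illustration, not the exact model used in the emulation.

    import numpy as np

    def mg_infinity_traffic(num_slots, arrival_rate=2.0, pareto_shape=1.5, rng=None):
        """Sketch of a discrete-time M/G/infinity traffic source.

        Each slot, a Poisson(arrival_rate) batch of customers arrives (the 'M');
        each customer stays active for a heavy-tailed Pareto number of slots
        (one possible choice of 'G'); there is no limit on simultaneously
        active customers (the 'infinity').  The instantaneous traffic is the
        number of active customers per slot; a heavy-tailed holding time gives
        long-range dependence.
        """
        rng = rng or np.random.default_rng()
        traffic = np.zeros(num_slots)
        for t in range(num_slots):
            for _ in range(rng.poisson(arrival_rate)):
                # Pareto holding time, at least one slot long
                duration = int(np.ceil(rng.pareto(pareto_shape) + 1.0))
                traffic[t:min(t + duration, num_slots)] += 1
        return traffic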

[Figure: example M/G/∞ sample path over time t, showing per-slot arrival counts A(1) = 1, A(2) = 2, A(3) = 1, A(4) = 0, ... and the resulting number of busy servers.]

Instantaneous Traffic

Average Traffic Load G

Average traffic is the average of the instantaneous traffic over a window period (e.g., 10 time instances).

Packet Success Rate

Throughput S vs. traffic load G:

Aloha: S = G e^{−G}

Slotted non-persistent CSMA: S = a G e^{−aG} / (1 − e^{−aG} + a)

p-persistent CSMA: S = (1 − e^{−aG}) [P_s' π_0 + P_s (1 − π_0)] / ((1 − e^{−aG}) [a t̄' π_0 + a t̄ (1 − π_0) + 1 + a] + a π_0)

Packet Success Rate = S / G

Ref: Packet Switching in Radio Channels: Part I - Carrier Sense Multiple-Access Modes and Their Throughput-Delay Characteristics (1975), by Leonard Kleinrock and Fouad A. Tobagi
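For reference, a small sketch that evaluates the Aloha and slotted non-persistent CSMA formulas above and the resulting PSR = S/G. The p-persistent expression needs π_0, P_s, P_s', and the mean delays t̄, t̄' from Kleinrock and Tobagi, so it is omitted; the value of the normalized propagation delay a is an assumed example.

    import math

    def throughput_aloha(G):
        """Aloha throughput as given above: S = G * exp(-G)."""
        return G * math.exp(-G)

    def throughput_slotted_np_csma(G, a=0.01):
        """Slotted non-persistent CSMA: S = a*G*exp(-a*G) / (1 - exp(-a*G) + a);
        a is the normalized propagation delay (example value)."""
        return a * G * math.exp(-a * G) / (1.0 - math.exp(-a * G) + a)

    def packet_success_rate(S, G):
        """Packet Success Rate = S / G."""
        return S / G if G > 0 else 1.0

    # Example: PSR of both protocols at a few offered loads G
    for G in (0.5, 1.0, 2.0, 5.0):
        print(G,
              packet_success_rate(throughput_aloha(G), G),
              packet_success_rate(throughput_slotted_np_csma(G), G))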

Measurement of PSR

We assume the MAC protocol can measure the previous slots' packet success rate.

The measurement is the exact value plus noise, where ω[s] is a Gaussian noise term with fixed variance σ_ω² = 0.09 in the emulation:

P̂_s[s] = P_s[s] + ω[s]

Ref: http://en.wikipedia.org/wiki/Standard_deviation


Measurement of PSR - 2

In practice we can count the number of packets (collisions & successes) during a window period, and thus compute the packet success rate.
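A small sketch of the two measurement views above (function names are illustrative): the emulation's noisy-observation model with σ_ω² = 0.09, and the practical window-based count of successes versus collisions. The clipping to [0, 1] is an extra safeguard, not stated on the slides.

    import random

    def measured_psr(true_psr, noise_std=0.3):
        """Emulation model: observed PSR = true PSR + Gaussian noise with
        variance 0.09 (std 0.3), clipped to the valid range [0, 1]."""
        return min(1.0, max(0.0, true_psr + random.gauss(0.0, noise_std)))

    def counted_psr(outcomes):
        """Practical estimate over a window: outcomes is a list of booleans,
        True for a successful packet and False for a collision."""
        return sum(outcomes) / len(outcomes) if outcomes else 0.0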

Outline

- Motivation and Goal
- Emulation Environment
- MAC Selection Strategies
- Conclusions

Setting

- Two MACs, both non-persistent CSMA
- Traffic: both long-range dependent (LRD)

Saturating Counter

Optimal Stopping

Choose an optimal stopping time for using a MAC so as to minimize the expected total cost. The problem is to find a threshold V* such that:

- If x < V*, switch to the next MAC
- If x >= V*, stay with the current MAC

Optimal Stopping - 2

The reward sequence is defined as

Y_0 = −∞, Y_1 = X_1 − c, ..., Y_n = X_n − nc, ..., Y_∞ = −∞

Assumption: X_1, X_2, ... are observations of the PSR, i.i.d. with known distribution F(x). The real distribution F(x) is intractable, so assume F(x) ~ N(μ, σ²).

Optimal Stopping - 3

V* can be calculated from the optimality equation, where F is the CDF of the Gaussian distribution N(μ, σ²). It can thus be rewritten in terms of Φ(x), the standard Gaussian CDF.
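The optimality equation itself is not reproduced in this text, so the following sketch assumes the classical stopping formulation that matches the reward sequence above: stop at the first X_n >= V*, where V* solves V* = E[max(X, V*)] − c, equivalently E[max(X − V*, 0)] = c, with X ~ N(μ, σ²). The function names and example numbers are mine.

    import math

    def std_normal_pdf(z):
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

    def std_normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def stopping_threshold(mu, sigma, c, iters=200):
        """Solve V* = E[max(X, V*)] - c for X ~ N(mu, sigma^2) by fixed-point
        iteration (the map is a contraction since its derivative is Phi < 1)."""
        V = mu
        for _ in range(iters):
            z = (V - mu) / sigma
            e_max = (V * std_normal_cdf(z)
                     + mu * (1.0 - std_normal_cdf(z))
                     + sigma * std_normal_pdf(z))
            V = e_max - c
        return V

    # Example: PSR observations ~ N(0.7, 0.3^2), per-observation cost c = 0.05
    print(stopping_threshold(0.7, 0.3, 0.05))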

Multi-armed Bandit

- Only one machine is operated at each time
- Machines that are not operated remain frozen
- Machines are independent
- Frozen machines contribute no reward

Multi-armed Bandit - 2

We model the state of each MAC with a Markov chain. State i has reward K + 1 − i; a smaller state index represents a higher success rate and thus a higher reward.

Multi-armed Bandit - 3

By calculating the expected reward of each machine from its current state, the optimal choice is the machine (MAC) with the highest expected reward.

Extensions of the multiarmed bandit problem: The discounted case (1985), by P. Varaiya, J. Walrand, and C. Buyukkoc
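A simplified sketch of the selection rule described above: each MAC's Markov chain is rolled forward from its current state, and the MAC with the highest expected discounted reward is chosen. This is a truncated-horizon approximation for illustration, not the full Gittins-index computation of Varaiya, Walrand, and Buyukkoc; the transition matrices, discount factor, and horizon are assumed inputs.

    import numpy as np

    def expected_discounted_reward(P, rewards, state, beta=0.9, horizon=50):
        """Truncated-horizon expected discounted reward of one MAC modeled as
        a Markov chain.

        P        : transition matrix, P[i][j] = Pr(next state j | state i)
        rewards  : rewards[i] = K + 1 - i as on the slide (smaller state = better)
        state    : current state of this MAC
        beta     : discount factor
        """
        P = np.asarray(P, dtype=float)
        r = np.asarray(rewards, dtype=float)
        dist = np.zeros(len(r))
        dist[state] = 1.0
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            total += discount * float(dist @ r)
            dist = dist @ P
            discount *= beta
        return total

    def select_mac_bandit(chains, beta=0.9):
        """chains: list of (P, rewards, current_state) tuples, one per MAC."""
        values = [expected_discounted_reward(P, r, s, beta) for P, r, s in chains]
        return int(np.argmax(values))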

Proactive MAC Selection

Until now, we have only used the PSR estimate of the chosen MAC. What can we do if every MAC can update its PSR estimate at every time slot?

New Bounds

Parallel Saturating Counter

- Each MAC is modeled by a separate saturating counter
- All counters are updated at every time instance
- Choose the MAC whose counter had the best state in the previous slot
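A minimal sketch of the parallel-counter rule above, assuming a 2-bit saturating counter per MAC that counts up on a good PSR observation and down on a bad one; the counter width and the PSR threshold are assumptions, since the slides do not give the exact parameters.

    class SaturatingCounter:
        """k-bit saturating counter; a higher state means 'recently good'."""
        def __init__(self, bits=2):
            self.max_state = (1 << bits) - 1
            self.state = self.max_state // 2

        def update(self, good):
            if good:
                self.state = min(self.state + 1, self.max_state)
            else:
                self.state = max(self.state - 1, 0)

    def select_mac_counters(counters, psr_observations, psr_threshold=0.5):
        """Pick the MAC whose counter is in the best state from the previous
        slot, then update every counter with its new PSR observation."""
        choice = max(range(len(counters)), key=lambda i: counters[i].state)
        for ctr, psr in zip(counters, psr_observations):
            ctr.update(psr >= psr_threshold)
        return choice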

Kalman Filter

Model the system using an AR(p) process (AR: autoregressive), where u[n] ~ N(0, σ_u²) is the driving (state) noise and w[n] ~ N(0, σ_w²) is the observation noise.

Kalman Filter - 2

We can rewrite the state equation in state-space form; see the reference below.

Ref: Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory (Vol. 1, p. 426), by Steven M. Kay

Kalman Filter - 3

The updating rules follow the standard Kalman prediction/correction recursion; see the reference below.

Ref: Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory (Vol. 1, p. 436), by Steven M. Kay
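Since the update equations are not reproduced in this text, here is a sketch of the standard scalar recursions for the simplest case, an AR(1) state model for the PSR. The AR coefficient, the state-noise variance, and the initial values are assumptions; the observation-noise variance 0.09 comes from the emulation setting above.

    class ScalarKalmanPSR:
        """Scalar Kalman filter tracking the PSR under an AR(1) state model:
        x[n] = a * x[n-1] + u[n],   y[n] = x[n] + w[n]."""

        def __init__(self, a=1.0, var_u=0.01, var_w=0.09, x0=0.5, p0=1.0):
            self.a, self.var_u, self.var_w = a, var_u, var_w
            self.x, self.p = x0, p0      # state estimate and its MSE

        def step(self, y):
            # Prediction
            x_pred = self.a * self.x
            p_pred = self.a * self.a * self.p + self.var_u
            # Correction with the noisy PSR measurement y
            k = p_pred / (p_pred + self.var_w)        # Kalman gain
            self.x = x_pred + k * (y - x_pred)
            self.p = (1.0 - k) * p_pred
            return self.x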

Traffic Matters

So far the Kalman filter is the best, but this is only true for certain test traffic.

Kalman filter state-noise variance σ_u²: a small σ_u² filters out noise, while a large σ_u² tracks channel variation. The filter cannot do both well at the same time if σ_u² is fixed.

Example of traffic that defeats the Kalman filter

Success rate:
- Last Best, Perfect Knowledge: 82%
- Last Best: 75%
- Kalman Filter: 73%

New State Space Equation

The system is still modeled by the same state and observation equations, but now u[n] ~ N(0, σ_u²[n]) is the driving (state) noise with a time-varying variance, and w[n] ~ N(0, σ_w²) is the observation noise.

Tracking σ_u²

We model the state variance as

h[n] = h[n−1] + υ_u[n]
y[n] = h[n] + w[n]

where h[n] is the estimate of the state variance σ_u², and υ_u[n] ~ N(0, η_u²), where η_u² is the variance of the sample estimate of σ_u², given as

η_u² ≈ 2 σ_u⁴[n−1] / N

Ref: Estimation with Applications to Tracking and Navigation, by Yaakov Bar-Shalom, X. Rong Li, and Thiagalingam Kirubarajan, Chapter 2.6.3, "The variance of the sample mean and sample variance," page 106.

Estimation

The idea is to use a Kalman filter to track the variance σ_u²[n], and then use a particle (Kalman) filter to track the system state.

Ref: Improved Bayesian MIMO Channel Tracking for Wireless Communications: Incorporating a Dynamical Model, by Kris Huber and Simon Haykin
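The exact filter combination used for this two-stage scheme is not reproduced in this text, so the following is only a hedged sketch of the idea: a window-based estimate of the state-noise variance σ_u²[n] (whose own uncertainty is what the Bar-Shalom sample-variance result quantifies) is plugged into a simple random-walk Kalman update of the PSR. The window length, default variance, and class name are assumptions.

    import numpy as np

    class AdaptiveKalmanPSR:
        """Sketch of the two-stage idea: re-estimate the state-noise variance
        sigma_u^2[n] from a window of recent state increments, then run a
        scalar Kalman update of the PSR with that time-varying variance."""

        def __init__(self, var_w=0.09, window=20, x0=0.5, p0=1.0):
            self.var_w, self.window = var_w, window
            self.x, self.p = x0, p0
            self.increments = []      # recent x[n] - x[n-1] values

        def step(self, y):
            # Stage 1: sample variance of recent increments as sigma_u^2[n]
            var_u = float(np.var(self.increments)) if len(self.increments) >= 2 else 0.01
            # Stage 2: random-walk Kalman prediction and correction
            x_pred, p_pred = self.x, self.p + var_u
            k = p_pred / (p_pred + self.var_w)
            x_new = x_pred + k * (y - x_pred)
            self.increments = (self.increments + [x_new - self.x])[-self.window:]
            self.x, self.p = x_new, (1.0 - k) * p_pred
            return self.x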

Outline

- Motivation and Goal
- Emulation Environment
- MAC Selection Methods
- Conclusions

Conclusion

We investigated many MAC selection strategies for the cases where PSR estimates are (1) updated only for the chosen MAC, or (2) always updated for all MACs.

For (1), the multi-armed bandit method gave good performance if the right model is chosen. For (2), the Kalman filter with time-varying σ_u²[n] gave good performance.

Conclusion - 2

If computation power is limited, simply choosing the best MAC from the previous slot also gives very good results.

Future Work

- Run simulations in which multiple nodes can select their MAC
- Currently we only select the single best MAC; choose the best N MACs to increase throughput
- Send redundant (overlapped) data via N MACs to increase reliability
