Joint Physical Layer Coding and Network Coding for Bi-Directional
Relaying
Makesh Wilson, Krishna Narayanan,
Henry Pfister and Alex Sprintson
Department of Electrical and Computer Engineering
Texas A&M University, College Station, TX
Network Coding
Network coding is the idea of mixing packets at nodes
Characteristics of Wireless Systems
Superposition of signals – signals add at the PHY layer
Broadcast nature – a node can naturally broadcast to several nodes at once
Bi-Directional Relaying Problem
Half-duplex nodes and no direct path between Nodes A and B
Wireless Gaussian links with signal superposition
Same Tx power constraint P and receiver noise variance σ² at every node
Metric: exchange rate per channel use
[Figure: Node A and Node B exchange messages through Relay Node V.
 MAC phase at the relay: y = xA + xB + n
 Broadcast phase at the nodes: yA = xV + nA, yB = xV + nB]
A Naive Scheme
Rate A → B = (1/8) log(1 + snr),  Rate B → A = (1/8) log(1 + snr)
Rex = (1/4) log(1 + snr)
[Figure: total transmission time split into four equal slots –
 A → Relay, B → Relay, Relay → B, Relay → A]
Network Coding Solution
Ref : Katti et al, “XORs in the Air: Practical Wireless Network Coding”, ACM SIGCOMM 2006
Rate A → B = (1/6) log(1 + snr),  Rate B → A = (1/6) log(1 + snr)
Rex = (1/3) log(1 + snr)
[Figure: total transmission time split into three equal slots –
 A → Relay (xA), B → Relay (xB), Relay → A, B (broadcasts xV)]
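As a quick sanity check on the time-sharing arithmetic above, here is a minimal Python sketch (my own illustration; helper names are hypothetical) that evaluates the exchange rates of the four-phase naive scheme and the three-phase XOR scheme from the per-link capacity (1/2) log2(1 + snr).

```python
import math

def link_capacity(snr: float) -> float:
    """Point-to-point AWGN capacity in bits per (real) channel use."""
    return 0.5 * math.log2(1.0 + snr)

def exchange_rate_naive(snr: float) -> float:
    """Four equal slots: A->relay, B->relay, relay->B, relay->A.
    Each direction uses 1/4 of the time, so R_ex = 2 * (1/4) * C = (1/4) log2(1 + snr)."""
    return 2 * 0.25 * link_capacity(snr)

def exchange_rate_xor(snr: float) -> float:
    """Three equal slots: A->relay, B->relay, relay broadcasts the XOR.
    Each direction uses 1/3 of the time, so R_ex = 2 * (1/3) * C = (1/3) log2(1 + snr)."""
    return 2 * (1.0 / 3.0) * link_capacity(snr)

if __name__ == "__main__":
    for snr in (1.0, 10.0, 100.0):
        print(f"snr={snr:6.1f}  naive={exchange_rate_naive(snr):.3f}  "
              f"xor={exchange_rate_xor(snr):.3f} bits/channel use")
```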
Recent related work – Scale and Forward
Forward link y = xA + xB + n
Reverse link: scale y and broadcast
Can achieve
Katti et al, “Embracing Wireless Interference: Analog Network Coding” ACM SIGCOMM 2007
[Figure: MAC phase y = xA + xB + n at the relay; the relay broadcasts the scaled signal k·y to both nodes]
Two Phase Schemes
Forward link – MAC phase: y = xA + xB + n
Reverse link – broadcast phase: yA = xV + nA, yB = xV + nB
Same power P and noise variance σ² at all nodes
Exchange rate per channel use: Rex = RAB + RBA
[Figure: Nodes A and B transmit xA, xB simultaneously to Relay Node V (MAC phase);
 the relay broadcasts xV back to both nodes (broadcast phase)]
Main Results in this Talk – Two Phase Schemes
An upper bound on the exchange “capacity” is (1/2) log(1 + snr)
Coding Schemes
Lattice coding with lattice decoding
Lattice coding with minimum angle decoding
MAC channel decoding
Essentially optimal at high and low SNRs
Extends to other network coding problems and to asymmetric SNRs
Coding for the MAC Channel
Forward link – code for the MAC channel at rates (RA, RB)
Reverse link – code for the broadcast channel
Do we have to decode (xA, xB) at the relay?
[Figure: relay receives y = xA + xB + n and decodes (xA, xB);
 broadcast phase: yA = xV + nA, yB = xV + nB]
Motivation – BSC(p) Example, MAC Phase
xA, xB, xV ∈ {0,1}^n and the channel performs binary (mod-2) addition
Relay Node V receives y = xA ⊕ xB ⊕ e,  e ∈ {0,1}^n
Relay Node V transmits xV
yA = xV ⊕ eA,  yB = xV ⊕ eB,  with eA, eB ∈ {0,1}^n the error patterns of the BSC(p) links
[Figure: Nodes A and B transmit xA, xB; relay V observes y = xA ⊕ xB ⊕ e]
BSC(p) Channel – Upper Bound
Cut-set bound: each link is a BSC(p) with C = 1 – H(p)
Hence Rex is upper bounded by 1 – H(p)
[Figure: both the uplink and downlink cuts have capacity C = 1 – H(p)]
Coding in the MAC Phase
Coding scheme: A and B use the same linear code at rate R = 1 – H(p)
Relay Node V receives y = xA ⊕ xB ⊕ e
Relay Node V decodes xV = xA ⊕ xB, which is itself a codeword by linearity
The channel from xA ⊕ xB to y is just a BSC(p), since the channel performs binary addition
[Figure: xA and xB add modulo 2 over the channel: y = xA ⊕ xB ⊕ e]
Reverse Link – BSC Case
Relay broadcasts xV = xA ⊕ xB
Nodes A and B decode xV from xV ⊕ e'
Node A obtains xB = xV ⊕ xA and Node B obtains xA = xV ⊕ xB, each at rate R
In the BSC case, Rex = 1 – H(p) is the best achievable exchange rate
Main point
Linearity was important in the uplink
Structured codes outperform random codes
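To make the BSC example concrete, here is a minimal Python sketch (my own illustration, not the authors' code) in which a rate-1/3 repetition code stands in for the rate 1 − H(p) linear code on the slide: the relay decodes only the XOR xA ⊕ xB, which is itself a codeword by linearity, and each node removes its own message.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.05          # crossover probability of every BSC(p) link
k = 1000          # number of information bits per node

def encode(bits):
    """Rate-1/3 repetition code (a simple linear code): repeat each bit 3 times."""
    return np.repeat(bits, 3)

def decode(word):
    """Majority-vote decoding of the repetition code."""
    return (word.reshape(-1, 3).sum(axis=1) >= 2).astype(np.uint8)

def bsc(word, p):
    """Pass a binary word through a BSC(p)."""
    return word ^ (rng.random(word.size) < p).astype(np.uint8)

mA = rng.integers(0, 2, k, dtype=np.uint8)   # message at node A
mB = rng.integers(0, 2, k, dtype=np.uint8)   # message at node B
xA, xB = encode(mA), encode(mB)

# MAC phase: the relay sees the mod-2 sum of the codewords plus BSC noise.
y = bsc(xA ^ xB, p)
# Because the code is linear, xA ^ xB is the codeword of mA ^ mB,
# so the relay decodes the XOR directly without decoding mA and mB separately.
m_xor = decode(y)
xV = encode(m_xor)

# Broadcast phase: both nodes decode xV, then XOR out their own message.
mB_hat_at_A = decode(bsc(xV, p)) ^ mA
mA_hat_at_B = decode(bsc(xV, p)) ^ mB

print("bit errors A->B:", int(np.sum(mA_hat_at_B != mA)))
print("bit errors B->A:", int(np.sum(mB_hat_at_A != mB)))
```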
1-D Example – with uniform noise
Upper Bound on Rex is 1 bit
Can we get this 1 bit?
[Figure: receiver noise distribution uniform over [-1, +1]; peak Tx power constraint restricts transmissions to [-1, +1]]
1-D Example with Uniform Noise – Achievability
Nodes A and B each transmit +1 or -1
Relay receives +2, 0 or -2 plus noise and decodes the noiseless sum y'
Relay maps y' using a modulo operation and transmits xV = (y' mod 4) - 1 ∈ {-1, +1}
Each node decodes xV and, knowing its own symbol, recovers the other node's symbol
We can indeed achieve an exchange rate of 1 bit!
[Figure: y' ∈ {-2, 0, +2} is mapped to xV ∈ {-1, +1} via (y' mod 4) - 1]
Main point
Modulo operation is important to satisfy the power constraint at the relay
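A minimal Python sketch of this 1-D scheme (my own illustration, assuming the noise is uniform on [-1, +1] as the figure suggests): nodes send ±1, the relay decodes the noiseless sum and forwards (y' mod 4) − 1, and each node recovers the other's symbol using its own.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
noise = lambda size: rng.uniform(-1.0, 1.0, size)   # assumed uniform receiver noise

a = rng.choice([-1.0, 1.0], n)     # symbols at node A
b = rng.choice([-1.0, 1.0], n)     # symbols at node B

# MAC phase: relay observes the superposition plus noise and decodes the noiseless sum.
y = a + b + noise(n)
y_hat = np.select([y > 1.0, y < -1.0], [2.0, -2.0], default=0.0)   # in {-2, 0, +2}

# Modulo map keeps the relay's transmission inside the peak power constraint [-1, +1].
xV = np.mod(y_hat, 4.0) - 1.0      # {-2, +2} -> +1,  {0} -> -1

# Broadcast phase: each node decodes xV by its sign, then uses its own symbol.
xV_at_A = np.sign(xV + noise(n))
xV_at_B = np.sign(xV + noise(n))
# If xV = -1 the sum was 0, so the other symbol is the negative of mine;
# if xV = +1 the sum was +-2, so the other symbol equals mine.
b_hat = np.where(xV_at_A < 0, -a, a)
a_hat = np.where(xV_at_B < 0, -b, b)

print("errors A->B:", int(np.sum(a_hat != a)), " errors B->A:", int(np.sum(b_hat != b)))
```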
Gaussian Channel – Upper bound
Using the cut-set bound, each link has capacity C = (1/2) log(1 + snr)
Hence the upper bound on Rex is (1/2) log(1 + snr)
Structured Codes - Lattices
A lattice Λ is a subgroup of R^n under vector addition
QΛ(x) – the closest lattice point to x
x mod Λ = x – QΛ(x)
[Figure: lattice points λ1, λ2 and λ1 + λ2; x is quantized to QΛ(x); the (fundamental) Voronoi region around 0]
Nested lattices
A coarse lattice nested within a fine lattice
V, V1 – volumes of the Voronoi regions of the two lattices
There exist good nested lattices such that the coarse lattice is a good quantizer and the fine lattice is a good channel code
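As a concrete (if trivial) illustration of these definitions, here is a small Python sketch using the scaled integer lattice Λ = qZ^n, for which the nearest-point quantizer and the mod-Λ operation have closed forms. This is my own toy example; the construction in the talk uses high-dimensional nested lattices.

```python
import numpy as np

q = 4.0   # toy coarse lattice: Λ = q * Z^n

def quantize(x, q=q):
    """Q_Λ(x): nearest point of q*Z^n to x."""
    return q * np.round(x / q)

def mod_lattice(x, q=q):
    """x mod Λ = x - Q_Λ(x); result lies in the fundamental Voronoi region (about [-q/2, q/2])."""
    return x - quantize(x, q)

x = np.array([0.7, -3.1, 5.2])
lam1 = np.array([4.0, -4.0, 8.0])   # a lattice point
lam2 = np.array([0.0, 4.0, -4.0])   # another lattice point

print("Q_Λ(x)  =", quantize(x))
print("x mod Λ =", mod_lattice(x))
print("λ1 + λ2 =", lam1 + lam2, "(still a lattice point: Λ is closed under addition)")
print("(x + λ1) mod Λ == x mod Λ ?", np.allclose(mod_lattice(x + lam1), mod_lattice(x)))
```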
Structured coding - MAC Phase
Node A transmits xA = tA and Node B transmits xB = tB, where tA, tB are nested-lattice codewords
Relay receives y = xA + xB + n
Relay decodes to (xA + xB) mod Λ
Reverse link
t at the relay is a function of tA and tB:  t = (tA + tB) mod Λ
t is transmitted back and decoded at nodes A and B
Notice that t lies in the Voronoi region of Λ, so it satisfies the power constraint
Node B recovers tA = (t - tB) mod Λ and Node A recovers tB = (t - tA) mod Λ (a small sketch follows below)
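As referenced above, a two-line check (toy scalar lattice Λ = qZ, my own example) that broadcasting t = (tA + tB) mod Λ is enough: each node subtracts its own codeword and reduces mod Λ.

```python
import numpy as np

q = 4.0
mod_lattice = lambda x: x - q * np.round(x / q)          # x mod Λ for Λ = qZ

# Codewords live inside the Voronoi region of Λ; here they are just reals in [-q/2, q/2).
tA, tB = 1.5, -0.5
t = mod_lattice(tA + tB)                                  # what the relay broadcasts

tA_hat_at_B = mod_lattice(t - tB)                         # node B removes its own codeword
tB_hat_at_A = mod_lattice(t - tA)                         # node A removes its own codeword
print(tA_hat_at_B == tA, tB_hat_at_A == tB)               # True True
```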
Encoding with Dither
xA = (tA + uA) mod Λ,  xB = (tB + uB) mod Λ  (uA, uB are random dithers known to all nodes)
Relay receives y = xA + xB + n
We want to decode (tA + tB) mod Λ from y
Decoding – lattice decoding with MMSE
Relay computes (α y + uA + uB) mod Λ
This equals (tA + tB – (1 - α)(xA + xB) + α n) mod Λ
Define t = (tA + tB) mod Λ
Define Neq = – (1 - α)(xA + xB) + α n
The relay has thus formed (t + Neq) mod Λ – a modulo-lattice additive-noise channel
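A scalar Monte-Carlo sketch of the dithered scheme (my own toy illustration with Λ = qZ; the actual proof uses high-dimensional nested lattices). Here the relay removes the dithers by subtraction, which is the sign convention under which the identity above holds exactly; with uniform dithers the convention does not change the statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
q = 8.0                                    # toy coarse lattice Λ = qZ
P = q * q / 12.0                           # second moment of the Voronoi region [-q/2, q/2)
sigma2 = 1.0                               # receiver noise variance at the relay
alpha = 2 * P / (2 * P + sigma2)           # MMSE scaling coefficient
mod_lat = lambda x: x - q * np.round(x / q)

n = 200_000
tA = rng.uniform(-q / 2, q / 2, n)         # stand-ins for fine-lattice codewords
tB = rng.uniform(-q / 2, q / 2, n)
uA = rng.uniform(-q / 2, q / 2, n)         # dithers known at the relay
uB = rng.uniform(-q / 2, q / 2, n)

xA = mod_lat(tA + uA)                      # dithered transmissions, uniform over the Voronoi region
xB = mod_lat(tB + uB)
z = rng.normal(0.0, np.sqrt(sigma2), n)
y = xA + xB + z                            # MAC phase

# Relay processing (convention: dithers removed at the relay).
lhs = mod_lat(alpha * y - uA - uB)
Neq = -(1 - alpha) * (xA + xB) + alpha * z
rhs = mod_lat(tA + tB + Neq)               # (t + Neq) mod Λ
print("identity holds:", np.allclose(lhs, rhs))
print("E[Neq^2] =", np.mean(Neq ** 2),
      " target 2Pσ²/(2P+σ²) =", 2 * P * sigma2 / (2 * P + sigma2))
```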
Achievable rate
Theorem: Using nested lattices, an exchange rate of (1/2) log(0.5 + snr) is achievable
The second moment of Neq is (2P σ²)/(2P + σ²); the second moment of t is P
Hence the rate is (1/2) log(0.5 + snr) – this many fine lattice points fit inside the coarse Voronoi region
At high snr this is approximately equal to (1/2) log(1 + snr), the upper bound
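For completeness, a short derivation of the stated rate, using the MMSE coefficient α = 2P/(2P + σ²) (the standard choice, assumed here):

```latex
% MMSE coefficient and resulting equivalent noise power
\alpha = \frac{2P}{2P+\sigma^2}, \qquad
\mathbb{E}[N_{eq}^2] = (1-\alpha)^2 \cdot 2P + \alpha^2 \sigma^2
                     = \frac{2P\sigma^2}{2P+\sigma^2}.

% Nested-lattice rate: coarse second moment P over the equivalent noise power
R = \frac{1}{2}\log\!\left(\frac{P}{\mathbb{E}[N_{eq}^2]}\right)
  = \frac{1}{2}\log\!\left(\frac{2P+\sigma^2}{2\sigma^2}\right)
  = \frac{1}{2}\log\!\left(\frac{1}{2} + \mathrm{snr}\right),
\qquad \mathrm{snr} := \frac{P}{\sigma^2}.
```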
More Proof details
Follows Erez and Zamir results for Modulo Lattice Additive Noise (MLAN) Channel
The effective channel is (t mod Λ + Neq) mod Λ
Neq can be approximated by a Gaussian
t mod Λ is uniformly distributed over the Voronoi region (because of the dither)
Poltyrev exponents can be calculated
Low/medium SNR regime
Use ordinary Gaussian MAC decoding at the relay to achieve CAB + CBA = (1/4) log(1 + 2 snr)
Note that the relay can decode both xA and xB
Use Slepian-Wolf coding in the reverse link (each node has its own message as side information)
Achievable Rates
[Plot: exchange capacity vs. SNR – lattice coding scheme, upper bound, and amplify-and-forward]
[Figure: Rex vs. SNR – MAC decoding is better at low SNR, lattice coding at high SNR]
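A small numerical comparison (my own sketch, using the rate expressions above) showing why the talk combines the two schemes: the MAC-decoding rate (1/4) log(1 + 2 snr) wins at low snr, the lattice rate (1/2) log(0.5 + snr) wins at high snr, and both stay below the upper bound (1/2) log(1 + snr).

```python
import math

def upper_bound(snr):  return 0.5 * math.log2(1 + snr)
def lattice_rate(snr): return max(0.0, 0.5 * math.log2(0.5 + snr))   # clip negative values
def mac_rate(snr):     return 0.25 * math.log2(1 + 2 * snr)

print(f"{'snr':>8} {'upper':>8} {'lattice':>8} {'MAC':>8}")
for snr in (0.1, 0.5, 1.0, 2.0, 5.0, 20.0, 100.0):
    print(f"{snr:8.1f} {upper_bound(snr):8.3f} {lattice_rate(snr):8.3f} {mac_rate(snr):8.3f}")
```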
Where does the suboptimality come from?
Replace lattice decoding with minimum angle decoding
Still gives the same rate! No dither is used here
Where does the sub-optimality come from?
Not all lattice points within the sphere of power 2P (where xA + xB concentrates) are codewords at low rates!
The prior distribution of (xA + xB) is not uniform!
Does it Generalize?
Exchanging information between nodes in Multi-hop
Holds for general networks like the Butterfly network
Asymmetric channel gains
Multiple hops
Multi-hop: each node can communicate only with its two immediate neighbors
A node cannot broadcast and listen in the same transmission slot
Again, (1/2) log(0.5 + snr) can be achieved
[Figure: a multi-hop chain of relays between Node A and Node B]
Example 3 relays
At each stage only one packet in the received combination is unknown, hence the receiving node can always decode
[Figure: chain Node A – V1 – V2 – V3 – Node B. Node A holds packets a1, a2, a3, a4 and Node B holds b1, b2, b3, b4.
 Over successive slots the nodes exchange mod-Λ combinations such as a1 mod Λ and b1 mod Λ, (a1 + b1) mod Λ,
 (a2 + a1 + b1) mod Λ and (b2 + b1 + a1) mod Λ, (2a1 + a2 + 2b1 + b2) mod Λ,
 (2a1 + a2 + a3 + 2b1 + b2) mod Λ and (2a1 + a2 + 2b1 + b2 + b3) mod Λ,
 and (4a1 + 2a2 + a3 + 4b1 + 2b2 + b3) mod Λ.]
Fading with asymmetric channel gain
Transmission occurs in L coherence time intervals
Pai, Pbi, Pri are the powers at nodes A, B and V in the i-th coherence interval
The channel is symmetric and the gain vectors ha, hb (L × 1) are known to all nodes
Total sum power constraint across the nodes
[Figure: fading links with gains ha ∈ C^L between A and V, and hb ∈ C^L between V and B]
Upper Bound
Upper bound using cut-set arguments
C(x):= log(1 + x)
maximize    Σ_{i=1..L} [ min{ C(|hai|² Pai), C(|hbi|² Pri) } + min{ C(|hbi|² Pbi), C(|hai|² Pri) } ]
subject to  Σ_{i=1..L} Pai + Σ_{i=1..L} Pbi + Σ_{i=1..L} Pri ≤ 2LP
            Pai, Pbi, Pri ≥ 0,   i ∈ {1, 2, ..., L}
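A brute-force numerical sketch of this optimization for L = 1 (my own illustration; function and variable names are mine): it grid-searches the power split (Pa, Pb, Pr) under the sum power constraint and reports the best allocation for a given pair of channel gains.

```python
import numpy as np

def C(x):
    """Per-use capacity of a complex Gaussian link, as defined on the slide: C(x) = log2(1 + x)."""
    return np.log2(1.0 + x)

def best_split(ha2, hb2, P, steps=200):
    """Grid search over (Pa, Pb) with Pa + Pb + Pr <= 2P for L = 1;
    the objective is nondecreasing in Pr, so give the relay all remaining power."""
    grid = np.linspace(0.0, 2.0 * P, steps)
    best = (-1.0, None)
    for Pa in grid:
        for Pb in grid:
            Pr = 2.0 * P - Pa - Pb
            if Pr < 0.0:
                continue
            rate = min(C(ha2 * Pa), C(hb2 * Pr)) + min(C(hb2 * Pb), C(ha2 * Pr))
            if rate > best[0]:
                best = (rate, (Pa, Pb, Pr))
    return best

rate, (Pa, Pb, Pr) = best_split(ha2=1.0, hb2=0.25, P=10.0)
print(f"upper bound ≈ {rate:.3f} bits,  Pa={Pa:.2f}, Pb={Pb:.2f}, Pr={Pr:.2f}")
# With |ha|^2 > |hb|^2 the node on the weaker link gets more power, consistent with the
# balancing condition |ha|^2 Pa ≈ |hb|^2 Pb discussed on the next slide.
```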
Analysis for L = 1
Define κ² := |ha|²/|hb|²
Hence |ha|² Pa ≈ |hb|² Pb
Or equivalently κ² Pa ≈ Pb
The optimal split between Pa, Pb and Pr differs for κ² small and for κ² large
[Plots: optimal Pa, Pb and Pr as functions of κ²]
Achievable scheme using channel inversion
D(snr):= Rate using Lattice/MAC based scheme for given snr
maximize    Σ_{i=1..L} [ min{ D(|hai|² Pai), D(|hbi|² Pri) } + min{ D(|hbi|² Pbi), D(|hai|² Pri) } ]
subject to  Σ_{i=1..L} Pai + Σ_{i=1..L} Pbi + Σ_{i=1..L} Pri ≤ 2LP
            |hai|² Pai = |hbi|² Pbi
            Pri ≥ Pai,  Pri ≥ Pbi
            Pai, Pbi, Pri ≥ 0,   i ∈ {1, 2, ..., L}
Analysis for L = 1
Here κ² Pa = Pb, where κ² := |ha|²/|hb|²
[Plots: optimal Pa, Pb and Pr as functions of κ² under the channel inversion scheme]
Comparison of the Bounds for arbitrary L
Theorem: For this problem setup, for arbitrary L and Δ = 0.5, under the high-snr approximation, the channel inversion scheme with lattices is at most a constant (0.09 bits per complex channel use) away from the upper bound!
[Plot: average exchange rate in bits vs. average channel power – upper bound and channel inversion scheme with lattice coding]
Conclusion
Structured codes are advantageous in wireless networking
Results show the proposed schemes are nearly optimal at high and low snr
Extensions to multi-hop channels and asymmetric gains
Many challenges remain:
  Capacity is unknown – only 2-phase schemes were considered
  The channel has to be known
  Even if the channel is known, can we get (1/2) log(1 + snr)?
  Practical lattice codes to achieve these rates
Structured coding - MAC Phase
[Figure: Node A maps tA to xA = tA mod Λ, Node B maps tB to xB = tB mod Λ;
 the relay computes t = (tA + tB) mod Λ and broadcasts it as xV]
Bi-AWGN Channel ML decoding
Equivalent channel from b1 ⊕ b2 to y
Linear codes, so b1 ⊕ b2 is a codeword
[Figure: Node A and Node B hold b1, b2 ∈ {0,1}^n, BPSK-map them to x1, x2 ∈ {-1,1}^n;
 the relay observes x1 + x2 ∈ {-2,0,2}^n plus noise]
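A small sketch (my own, not from the talk) of the soft metric for this equivalent channel: with BPSK inputs, b1 ⊕ b2 = 0 corresponds to x1 + x2 ∈ {−2, +2} and b1 ⊕ b2 = 1 to x1 + x2 = 0, which gives the per-symbol LLR an ML or message-passing decoder for the XOR codeword would use.

```python
import numpy as np

def llr_xor(y, sigma2):
    """LLR of (b1 XOR b2) given y = x1 + x2 + n, xi = BPSK(bi), n ~ N(0, sigma2).
    XOR = 0 <-> x1 + x2 in {-2, +2} (each w.p. 1/2);  XOR = 1 <-> x1 + x2 = 0."""
    log_p0 = np.logaddexp(-(y - 2.0) ** 2 / (2 * sigma2),
                          -(y + 2.0) ** 2 / (2 * sigma2)) - np.log(2.0)
    log_p1 = -(y ** 2) / (2 * sigma2)
    return log_p0 - log_p1        # > 0 favours XOR = 0

rng = np.random.default_rng(3)
sigma2 = 0.5
b1 = rng.integers(0, 2, 10)
b2 = rng.integers(0, 2, 10)
x1, x2 = 1.0 - 2.0 * b1, 1.0 - 2.0 * b2            # BPSK mapping 0 -> +1, 1 -> -1
y = x1 + x2 + rng.normal(0.0, np.sqrt(sigma2), 10)

hard = (llr_xor(y, sigma2) < 0).astype(int)        # hard decision on b1 XOR b2
print("true XOR :", (b1 ^ b2))
print("decision :", hard)
```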
Conjecture ML decoding Lattices
Can't do better than (1/2) log(0.5 + snr) even with ML decoding
Minkowski-Hlawka theorem for the existence of good lattices
Blichfeldt's principle for concentration-of-codewords arguments
Assuming a uniform distribution yields good concentration
Concentration at power 2P: the sum xA + xB concentrates around power 2P
Use Blichfeldt's principle for the lattice equivalent
Projecting to the Inner Sphere
Project codewords to the inner sphere
Use Minkowski-Hlawka to show the existence of a suitable lattice
Calculate the probability of error
Connection to Lattices
Using Blichfeldt's theorem we can establish concentration for lattices also
The Minkowski-Hlawka theorem can be used in the ML decoding analysis
Again, it appears we can get only (1/2) log(0.5 + snr)