
Hindawi Publishing Corporation
Journal of Electrical and Computer Engineering
Volume 2012, Article ID 614384, 10 pages
doi:10.1155/2012/614384

    Review Article

    On Particle Swarm Optimization for MIMO Channel Estimation

    Christopher Knievel and Peter Adam Hoeher

    Information and Coding Theory Laboratory, University of Kiel, 24143 Kiel, Germany

    Correspondence should be addressed to Christopher Knievel, [email protected]

    Received 7 July 2011; Revised 30 November 2011; Accepted 14 December 2011

    Academic Editor: Lisimachos P. Kondi

Copyright © 2012 C. Knievel and P. A. Hoeher. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Evolutionary algorithms, in particular particle swarm optimization (PSO), have recently received much attention. PSO has successfully been applied to a wide range of technical optimization problems, including channel estimation. However, most publications in the area of digital communications ignore improvements developed by the PSO community. In this paper, an overview of the original PSO is given as well as improvements that are generally applicable. An extension of PSO termed cooperative PSO (CPSO) is applied for MIMO channel estimation, providing faster convergence and, thus, lower overall complexity. Instead of determining the average iterations needed empirically, a method to calculate the maximum number of iterations is developed, which enables the evaluation of the complexity for a wide range of parameters. Knowledge of the required number of iterations is essential for a practical receiver design. A detailed discussion about the complexity of the PSO algorithm and a comparison to a conventional minimum mean squared error (MMSE) estimator are given. Furthermore, Monte Carlo simulations are provided to illustrate the MSE performance compared to an MMSE estimator.

    1. Introduction

Multiple-input multiple-output (MIMO) transmission is considered as a key technology to reach the challenging goals of upcoming wireless standards, such as long-term evolution advanced (LTE-A) [1]. A wide range of MIMO detectors are known in the literature offering a performance close to the channel capacity. Precise channel state information (CSI) is required to obtain this performance. Correspondingly, the performance of the detection algorithms highly depends on the accuracy of the CSI. Minimum mean squared error- (MMSE-) based channel estimation reaches optimum performance with computational cost that may become infeasible for practical implementation.

Advanced iterative receivers, which jointly detect the data symbols and estimate the channel, offer a close-to-optimum performance at often reduced computational complexity. However, the majority of joint receivers need good initial channel estimates to reach their ultimate performance. The space alternating generalized expectation (SAGE) algorithm in [2] and the graph-based iterative receiver in [3] are examples of iterative receivers which need proper initialization.

Generally, channel estimation can be seen as an optimization problem, that is, to minimize the Euclidean distance between the estimated and the true channel coefficients. The straightforward solution to this problem incorporates matrix inversion and leads to the well-known least-squares (LS) and/or MMSE estimator.

Heuristic, nature-inspired algorithms, such as particle swarm optimization (PSO) [4, 5] or genetic algorithms (GA) [6, 7], are attractive low-complexity solutions to facilitate MIMO channel estimation. PSO is a population-based heuristic global optimization algorithm, which originated in modeling the social behavior of bird flocks and fish schools. It has been applied to a variety of technical optimization problems, including channel and parameter estimation [8–13] as well as data detection [14] and multiuser detection [15]. Unfortunately, a fair evaluation of PSO is rather difficult due to the wide range of available modifications and the fact that the algorithm is often tuned to optimum performance for a specific optimization problem by empirical measures.

Genetic algorithms are inspired by natural evolution. Accordingly, population members are termed chromosomes. Based on an optimization metric, a subset of chromosomes


is selected to breed a new generation, which is subsequently used to generate a new generation by means of crossover and/or mutation.

PSO and GA share many similarities, as both start with a randomly initialized population and both use a fitness value to evaluate their population members. The main difference lies within the selection of leaders (in terms of PSO) or parents (in terms of GA) as well as the update of position and/or generation of new members, respectively. Population members within PSO are updated iteratively and influence themselves directly by their personal best position. On the contrary, population members in GA pass characteristic information to their children.

It is difficult to compare the performance of PSO and GA in general, as both depend on the specific optimization problem. Additionally, a similar variety of possible implementations exists also for GA. However, several publications in the field of digital communications come to the conclusion that PSO is advantageous compared to GA in terms of computational complexity, convergence speed, and accuracy [16–18]. Additionally, fewer parameters need to be set for the PSO algorithm.

PSO is a viable alternative to replace the closed-form solution of standard LS/MMSE estimators if it can provide similar performance at lower complexity. Nevertheless, the overwhelming variety of implementations makes a performance/complexity analysis cumbersome. This paper evaluates the applicability of PSO for MIMO channel estimation with respect to mean squared error (MSE) performance and computational complexity. The paper comprises the following central points.

(i) An overview of the original PSO is given as well as general improvements which lead to a modified update function. Although PSO is widely adopted in many fields, mainly the original version of PSO is applied. Nevertheless, in many cases the performance can be improved and/or the required number of iterations can be reduced by more advanced versions. The focus is hereby on strategies which can be applied in general and do not have to be tuned for a specific optimization problem.

(ii) While PSO has already been applied to MIMO channel estimation in the literature (e.g., [9]), an extension known as cooperative PSO (CPSO) [19] is introduced in this paper. Although PSO is directly applicable for some cases, advanced strategies, such as CPSO, are necessary either to provide optimum performance and/or a lower complexity.

(iii) The performance of PSO and CPSO is compared with a conventional channel estimation algorithm, namely, the optimum MMSE estimator. Additionally, PSO/CPSO and an MMSE estimator are used to provide initial channel estimates for a graph-based iterative receiver. BER results illustrate the advantage of CPSO over PSO.

(iv) A general rule to determine the maximum number of iterations is developed on the basis of the generalized extreme value distribution. This approach allows the actual computation of the required number of iterations to reach a predefined target. To the authors' best knowledge, a similar method to predict the required number of iterations does not yet exist, although it may be essential for the evaluation of the complexity.

(v) The complexity of PSO/CPSO is discussed and compared to the conventional MMSE estimator. It is known that the complexity of PSO/CPSO per iteration is low. However, depending on the optimization problem, several hundred iterations are required until convergence is achieved. Utilizing the proposed criterion to determine the maximum number of iterations, the complexity of PSO/CPSO can be evaluated for a wide range of parameter settings, and an optimum tradeoff between iterations and complexity per iteration can be determined.

The remainder of this paper is organized as follows: PSO and the extension to cooperative PSO are elaborated in Section 2. The application to MIMO channel estimation is described in Section 3. A performance as well as a complexity comparison of PSO/CPSO with an MMSE estimator is given in Sections 4 and 5, respectively. The extension to multiple objectives is discussed in Section 6. Finally, Section 7 draws the conclusion.

Throughout the paper, the following notation conventions are adopted. Boldface capital and lowercase letters stand for matrices and vectors of appropriate dimensions, respectively. I_NT denotes an NT × NT identity matrix. Furthermore, (·)† represents the Hermitian operator.

    2. Particle Swarm Optimization

PSO is a population-based, heuristic, iterative optimization algorithm. Due to the heuristic approach, no gradient information is required to converge to the global optimum. Hence, it can easily be adopted to a wide range of technical optimization problems.

In the following, a general overview of PSO is given in Section 2.1. An extension termed cooperative PSO is introduced and applied for MIMO channel estimation in Section 2.2.

2.1. Standard PSO. The standard PSO is described by Algorithm 1. Initially, all Np particles of a swarm are randomly set throughout the feasible search region [Smin, Smax], where S ∈ R^D. The particles of a swarm “fly” through a D-dimensional search space, which is gradually explored by adjusting the trajectory of each particle at each iteration. Within each iteration, the current position of a particle p_i = [p_1, . . . , p_D] is used as a candidate solution for the optimization metric termed fitness function. The fitness value of a particle is distributed to all particles within the swarm. The previously best position of a particle is termed personal best p_i^IB, whereas the previous best position of the


Initialize swarm
Locate leader
i = 1
while i < i_max and not converged do
    for each particle do
        Update position using (1)/(2), (4)
        Evaluation using (5)
        Update pBest
    end for
    Update leader
    i++
end while

Algorithm 1: Standard PSO algorithm.

swarm is called global best p^GB. The velocity vector of a particle i is updated according to [5, 20]:

v'_i = ω v_i + c1 ε1 ◦ (p_i^IB − p_i) + c2 ε2 ◦ (p^GB − p_i), (1)

where ◦ denotes the entrywise product. The variables ε1 and ε2 denote random numbers in the range of [0, 1]. The inertia weight ω typically decreases from 0.9 to 0.4 over the course of iterations. The social and cognitive parameters c1 and c2 are acceleration coefficients towards the personal and/or global best position, respectively. The velocity vector of a particle is, similar to the search space, restricted within certain boundaries [vmin, vmax]. Particles which are beyond the boundaries of search space and velocity, respectively, are reset to the boundary limits.
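As an illustration, the velocity update (1) can be sketched as follows; the function name, the parameter defaults, and the clipping-based bound handling are assumptions of this sketch, not part of the original algorithm specification:

```python
import numpy as np

def pso_velocity_update(v, p, p_best, g_best, w=0.9, c1=2.0, c2=2.0,
                        v_min=-1.0, v_max=1.0, rng=None):
    """Velocity update per (1): v' = w*v + c1*eps1 o (p_best - p) + c2*eps2 o (g_best - p),
    where 'o' is the entrywise product and eps1, eps2 are uniform in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    eps1 = rng.random(v.shape)  # one random factor per dimension
    eps2 = rng.random(v.shape)
    v_new = w * v + c1 * eps1 * (p_best - p) + c2 * eps2 * (g_best - p)
    # particles beyond the velocity boundaries are reset to the limits
    return np.clip(v_new, v_min, v_max)
```

The position update (4) then simply adds the clipped velocity to the current position.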

The update function (1) was published in 1998 as part of an already revised version of the PSO algorithm. The original update function of PSO published in 1995 [4] did not include the inertia weight or the cognitive and social parameters. Since then, an overwhelming number of variations has been proposed. However, no standard algorithm or set of parameters has yet emerged which delivers optimum performance independent of the optimization problem. Hence, parameters are tuned for each specific problem, and settings tuned by means of empirical measures are often applied. The authors of [21] propose a so-called standardized version of PSO which incorporates several generally applicable improvements, that is, bound handling, swarm size, and an update equation replacing the inertia weight with a constriction factor. The standardized version improves the performance for most optimization problems compared to the original version. We restrict ourselves to generally applicable optimizations for PSO; adaptive versions [22] are also able to improve the performance of the standard PSO, but their parameters may have to be optimized for each optimization problem.

The update rule based on the constriction factor is given by

v'_i = χ { v_i + c1 ε1 ◦ (p_i^IB − p_i) + c2 ε2 ◦ (p^GB − p_i) }, (2)

with

χ = 2 / |2 − ϕ − √(ϕ² − 4ϕ)|, (3)

where ϕ = c1 + c2, ϕ > 4. The factors c1 and c2 are constraints on the velocity towards the global and the personal best position. According to [23], suitable values for a wide range of test functions are as follows: c1 = 2.8 and c2 = 1.3, which results in χ ≈ 0.7298. The standardized update function (2) as well as the above-mentioned parameters are applied throughout all simulations. The position of a particle is updated subsequently according to

p'_i = p_i + v'_i. (4)
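For reference, the constriction factor (3) can be evaluated numerically; the helper below is hypothetical, with the default parameter values c1 = 2.8 and c2 = 1.3 taken from the text:

```python
import math

def constriction_factor(c1=2.8, c2=1.3):
    """Constriction factor chi of (3): chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|,
    with phi = c1 + c2 under the constraint phi > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("the constriction factor requires phi = c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the recommended values, `constriction_factor()` evaluates to approximately 0.7298, matching the value quoted above.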

The updated velocity vector v'_i is added to the current position p_i of a particle. The new position p'_i is used as a candidate solution for the optimization metric. The optimization performed by PSO is described by

p^OPT = arg min_{p_i} f(p_i). (5)

The so-far emerged personal and/or global best p_i^IB and p^GB, respectively, are replaced by the updated position p'_i if the fitness value p^OPT is improved compared to the values of the personal and the global best position. This procedure is repeated until PSO is converged or the maximum number of iterations i_max is reached. i_max is chosen to be sufficiently large to prevent that the algorithm is stopped before the global optimum could be found. Frequently, the optimum solution is found with just a fraction of i_max. Therefore, a stopping criterion is necessary to reduce the average number of iterations needed for convergence. An overview of suitable stopping criteria is given in [24]. In this paper, PSO is said to be converged if p^OPT is below a certain threshold th for γ iterations.
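The threshold-based stopping rule described above (converged once the best fitness stays below a threshold th for γ consecutive iterations) can be sketched as a small helper; the class and attribute names are assumptions:

```python
class StoppingCriterion:
    """Convergence check: declare convergence once the best fitness value
    has stayed below the threshold th for gamma consecutive iterations."""

    def __init__(self, th, gamma):
        self.th = th        # fitness threshold
        self.gamma = gamma  # required consecutive iterations below th
        self.count = 0

    def update(self, best_fitness):
        # extend or reset the streak of iterations below the threshold
        self.count = self.count + 1 if best_fitness < self.th else 0
        return self.count >= self.gamma
```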

In case PSO converges, all particles p of the swarm are located at the same position minimizing (5). Without loss of generality, only minimization problems are considered.

2.2. Cooperative PSO. In general, population-based optimization algorithms are searching for a region of small, specified volume in a D-dimensional search space, surrounding the global optimum. In order to converge to the global optimum, an optimization algorithm needs to create a sample within this region. The probability of generating a sample within the region is the volume of the region divided by the volume of the search space [19]. This probability decreases exponentially with increasing dimensionality of the search space. This effect is often termed “curse of dimensionality.”

Separating the high-dimensional search space into sets of smaller dimension improves the performance significantly. PSO is known to perform rather poorly for high-dimensional problems. A large variety of solutions has been proposed to solve this problem. In [25], the update function (1) is adapted to take adaptive parameters into account. These parameters are changed over the course of iterations and improve the


convergence behavior of the PSO algorithm. However, the optimum set of parameters remains problem dependent. An alternative solution to improve the performance of the original PSO algorithm is given by a so-called cooperative approach to particle swarm optimization (CPSO) presented in [19]. The CPSO approach relies on the original update equation and is described in the following. The pseudocode describing CPSO is given by Algorithm 2. The Np particles of the PSO swarm are now separated into Ns swarms with N'p particles. The number of particles for both PSO and CPSO should be chosen within a certain range. Too few particles (Np, N'p < 5) lead to a deteriorated performance, while too many are not able to increase the performance (Np, N'p > 100). About 15 particles is a good tradeoff between complexity and performance [23].

Accordingly, the D-dimensional problem is split into Ns subsets and optimized separately by an individual swarm of particles s = [s_1, . . . , s_{D/δ}], where δ is the number of dimensions for each swarm. The position of a particle i of swarm s is given by ρ_{s,i} = [ρ_1, . . . , ρ_δ]. The separation of the dimensions mitigates a drawback of the standard PSO algorithm: since the standard PSO considers the full-dimensional vector in the update function, it allows that some dimensions move further away from the solution as long as the overall fitness value is improved. On the contrary, cooperative PSO evaluates subsets of the D-dimensional vector. The probability that single components are deteriorated in favor of other dimensions is thus reduced.

For Ns = 1 swarm, CPSO is equivalent to PSO since all dimensions are optimized by one swarm. In case of Ns > 1, the evaluation of the optimization metric is not directly possible since a particle represents only a subset of dimensions of the optimization problem. Consequently, a context vector φ_{s,i} is necessary. In order to construct a D-dimensional vector, the D − δ missing dimensions are replaced by the global best positions of the remaining swarms: φ_{s,i} = [ρ^GB_1 . . . ρ_{s,i} . . . ρ^GB_{D/δ}]. The optimization function (5) is changed accordingly:

p^OPT = arg min_{φ_{s,i}} f(φ_{s,i}). (6)
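Constructing the context vector φ_{s,i} amounts to concatenating the candidate subvector of the active swarm with the global best subvectors of all other swarms; the sketch below assumes each swarm optimizes a contiguous block of δ dimensions, and the names are hypothetical:

```python
import numpy as np

def context_vector(global_bests, s, candidate):
    """Context vector phi_{s,i} of (6): the candidate subvector of swarm s,
    with the D - delta missing dimensions filled by the global best
    subvectors of the remaining swarms.

    global_bests: list of Ns arrays (one delta-dimensional subvector per swarm)
    s:            index of the swarm whose particle is being evaluated
    candidate:    delta-dimensional position of particle i of swarm s
    """
    parts = [candidate if j == s else gb for j, gb in enumerate(global_bests)]
    return np.concatenate(parts)
```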

    3. Application to MIMO Channel Estimation

We consider a MIMO system with NT transmit and NR receive antennas. The received signal vector at time index k, y[k] ∈ C^{NR×1}, is modeled as

y[k] = Hx[k] + n[k], (7)

where x[k] ∈ C^{NT×1} is the transmitted signal vector at time index k. The entries of the channel matrix H ∈ C^{NR×NT} are assumed to be independent and identically distributed (i.i.d.) according to CN(0, 1). We consider a quasi-time-invariant channel for the numerical analysis of PSO and CPSO. The application to time-varying channels is given briefly by the description of a multiple-objective PSO in Section 6. Furthermore, only a memoryless channel is considered in the numerical results in order to simplify the optimization metric and the discussion of the results.

Initialize Ns swarms with N'p particles
Locate leader
i = 1
while i < i_max and not converged do
    for each swarm do
        for each particle do
            Update position using (1)/(2), (4)
            Evaluation using (6)
            Update pBest
        end for
    end for
    Update leader
    i++
end while

Algorithm 2: Cooperative PSO algorithm.

The complexity analysis is directly applicable to frequency-selective channels as well, since only the dimensionality of the problem is discussed here. The number of dimensions can be increased by the number of transmit and receive antennas and/or the channel memory length L. The MSE performance for a channel memory length L > 0 will follow the MSE performance of a least-squares estimator. For detailed information about the application of PSO to channels with memory, interested readers are referred to [12]. Furthermore, n[k] denotes the noise vector at the receiver, whose entries are i.i.d. modeled as CN(0, σ²_n).
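A minimal simulation of one use of the block-fading model (7) might look as follows; the BPSK pilot choice and the noise-variance normalization are illustrative assumptions, not specified in the paper:

```python
import numpy as np

def mimo_channel_use(nt=4, nr=4, snr_db=10.0, rng=None):
    """One use of the flat-fading MIMO model y[k] = H x[k] + n[k] of (7),
    with H i.i.d. CN(0, 1) and n i.i.d. CN(0, sigma_n^2)."""
    rng = np.random.default_rng() if rng is None else rng
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    x = (2.0 * rng.integers(0, 2, nt) - 1.0).astype(complex)  # BPSK symbols (assumed)
    sigma2 = nt * 10.0 ** (-snr_db / 10.0)  # noise variance (assumed normalization)
    n = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
    return H @ x + n, H, x, n
```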

Training symbols are transmitted to support pilot-aided channel estimation (PACE). Stacked in a matrix, the transmit vector x[k] can be written as X ∈ C^{NT×NT}. A minimum of NT training symbols are transmitted to ensure a full rank. The training matrix consists of orthogonal sequences subject to XX† = μI_NT, where μ is related to the signal power assigned to the training matrix [26].

In the following, we assume that the transmit vector x[k] of length NT consists of training symbols only.
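One admissible construction of a training matrix satisfying XX† = μ I_NT is a scaled DFT matrix; the paper only requires orthogonality, so this particular choice is an assumption of the sketch:

```python
import numpy as np

def dft_training_matrix(nt, mu=1.0):
    """Training matrix X with X X^H = mu * I_{NT}, built from the
    (unnormalized) NT-point DFT matrix, whose rows are orthogonal
    with squared norm NT."""
    k = np.arange(nt)
    F = np.exp(-2j * np.pi * np.outer(k, k) / nt)
    return np.sqrt(mu / nt) * F
```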

In case of a quasi-invariant (block-fading) channel, the maximum-likelihood metric (fitness function) for PSO can be written as follows:

f(p_i) = Σ_{k=1}^{NT} || y[k] − P_i x[k] ||². (8)

The position of the ith particle P_i is used as a potential solution for the metric. For a consistent notation in line with (7), the previously used vector notation of the position of the particle is changed here to a matrix notation with P_i ∈ C^{NR×NT}. Thus, a position of a particle represents a hypothesis of the channel matrix H̃. It is of importance to note that each dimension of a particle is real valued. As a particle needs to estimate NR × NT complex-valued channel coefficients, the dimension of the search space results in D = 2 · NT · NR.
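The fitness function (8) reduces to a single squared-distance computation when the NT training vectors x[k] and received vectors y[k] are stacked column-wise; a minimal sketch, with the column-wise layout as an assumption:

```python
import numpy as np

def pso_fitness(P, X, Y):
    """Fitness (8): sum over k of ||y[k] - P x[k]||^2, with the training
    vectors x[k] as columns of X and the received vectors y[k] as columns
    of Y. P is one particle reshaped to an NR x NT candidate channel matrix."""
    return float(np.sum(np.abs(Y - P @ X) ** 2))
```

In the noiseless case, a particle located exactly at the true channel matrix yields a fitness of zero.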

The maximum-likelihood metric for CPSO is very similar to the PSO metric. Instead of using the position of a particle, a context matrix φ_{s,i} is utilized, since a position


[Figure: diagram contrasting PSO, which uses one swarm for all coefficients h11, h12, h21, h22, with CPSO, which splits them between Swarm 1 and Swarm 2.]
Figure 1: Possible separation of an 8-dimensional problem into a set of lower-dimensional problems by the CPSO compared to PSO.

of a particle of a subswarm represents only a fraction of the channel matrix:

f(φ_{s,i}) = Σ_{k=1}^{NT} || y[k] − φ_{s,i} x[k] ||². (9)

In case of MIMO channel estimation, NR · NT channel coefficients are estimated assuming a flat-fading time-invariant channel. As mentioned before, the performance of PSO degrades with increasing dimensions. The dimensionality of the optimization problem is not only determined by the number of transmit and/or receive antennas but also by the channel memory length. Thus, channel estimation of a realistic MIMO system may result in a high-dimensional optimization problem.

Figure 1 illustrates the difference between PSO and CPSO for channel estimation of a 2 × 2 MIMO system. PSO optimizes all channel coefficients with one swarm. CPSO is able to separate the D = (2 · NR · NT)-dimensional problem into subsets and optimizes each subset with a separate swarm. In this example, two swarms are shown; however, the number of possible subswarms is in the range of [1, D]. In the case of D subswarms, a single swarm would optimize either the real or the imaginary part of one channel coefficient. While the number of subswarms Ns is directly related to the number of dimensions, there is no such relation for the number of particles. A minimum number of particles needs to be assigned to each subswarm in order to allow convergence. Additionally, the performance of CPSO cannot be improved by increasing the number of particles once a threshold is reached. The number of particles depends again on the optimization problem. A good tradeoff between complexity per iteration and performance is Np = N'p = 15 [23].

    4. Performance Comparison

A performance comparison of PSO and CPSO with an optimum MMSE channel estimator in terms of mean squared error (MSE) is given in Figure 2. The simulation setup consists of a MIMO system and a quasi-time-invariant Rayleigh fading channel. Pilot-aided channel estimation is conducted with orthogonal training sequences. The number of particles is fixed to Np = 60 particles for the PSO algorithm. The CPSO algorithm consists of Ns = 4 subswarms with N'p = 15 particles for each swarm. The overall complexity of CPSO compared to PSO for one iteration is thus the same.

[Figure: MSE plotted over Es/N0 (dB) for PSO, CPSO, and MMSE with D = 16, 32, 64.]
Figure 2: MSE performance of PSO and CPSO compared to MMSE channel estimation depending on the number of dimensions for a quasi-time-invariant Rayleigh fading channel.

The MMSE estimate of H for a time-invariant channel is given by

Ĥ = YX† [σ²_n I_NT + XX†]^{−1}. (10)

PSO exhibits an inferior performance compared to CPSO and/or MMSE estimation with increasing SNR and dimensions D, as illustrated in Figure 2. In this case, the dimensions are increased by increasing the number of transmit antennas with a constant number of receive antennas. The larger the dimensions, the earlier PSO converges to an error floor. Large dimensions (D > 32) can easily be reached with settings defined in upcoming wireless standards, that is, long-term evolution advanced (LTE-A) [1]. For example, for an 8 × 8-MIMO-OFDM system with a channel memory length of L = 0, the dimensions result in D = 8 · 8 · 2 = 128. Only the dimensions are studied to focus the reader on the problem of large dimensions for PSO. The performance of the original PSO can be further improved by adapting the variables in (1) and/or (2) over the course of iterations and/or by applying more advanced bound handling mechanisms. These optimization methods for PSO have to be tuned to the specific optimization metric. On the other hand, the performance of CPSO reaches the optimum MMSE performance with general settings.
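For comparison, the closed-form MMSE estimate (10) can be written directly; the argument layout (training vectors as columns of X, received vectors as columns of Y) is an assumption of this sketch:

```python
import numpy as np

def mmse_channel_estimate(Y, X, sigma2):
    """MMSE estimate of H per (10): H_hat = Y X^H (sigma_n^2 I_{NT} + X X^H)^{-1}.
    Y: NR x NT received training block, X: NT x NT training matrix,
    sigma2: noise variance sigma_n^2."""
    nt = X.shape[0]
    G = sigma2 * np.eye(nt) + X @ X.conj().T
    return Y @ X.conj().T @ np.linalg.inv(G)
```

In the noiseless case (sigma2 = 0) with a full-rank training matrix, the estimate recovers H exactly.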

It is worth mentioning that, although GA does not suffer from the curse of dimensionality and is thus able to converge in general, the required number of iterations increases, which renders a practical implementation unfeasible.

Besides reaching the optimum performance, another strength of the PSO algorithm is its fast convergence to a “reasonable” MSE. The fast converging nature of PSO/CPSO


[Figure: MSE plotted over iterations for PSO and CPSO at SNR = 0, 10, and 20 dB, with the optimum MMSE performance as reference.]
Figure 3: Convergence speed comparison of PSO and CPSO for different SNR values compared to the optimum MMSE performance with a 4 × 4 MIMO system.

can be seen in Figure 3, where a PSO algorithm with Np = 60 particles as well as a CPSO algorithm with Ns = 4 and N'p = 15 is used to estimate a 4 × 4 MIMO channel. The MSE improves significantly in the first 20, 60, and 120 iterations for CPSO, depending on the SNR. The subsequent iterations are needed for convergence to the optimum performance. Interestingly, PSO converges earlier to the optimum performance in case of an SNR of 20 dB. The CPSO algorithm is attracted to local optima which are created by the separation of the dimensions [19]. The convergence speed is slower in this case. The fast convergence is especially beneficial when PSO/CPSO is used as an initialization algorithm, where only a good starting value is needed instead of the optimum value. In order to further illustrate the advantages of CPSO over PSO, the graph-based soft iterative receiver (GSIR) of [3] is initialized by PSO, CPSO, and the MMSE estimator in the following. An 8 × 8 MIMO system is considered with QPSK modulation and a rate-1/2 repetition code. A fixed number of 10 iterations is used for the GSIR. A training preamble with orthogonal properties, as described in Section 3, is applied. A data sequence of length KD = 100 is transmitted subsequently. Thus, the complete transmitted sequence can be represented as X = [XT XD]. PSO, CPSO, and the MMSE estimator are applied once, utilizing only the training symbols given in XT, and provide the initial channel coefficient estimates used by the GSIR. The maximum number of iterations i_max for the PSO/CPSO is hereby restricted to keep the computational overhead of the initialization at a minimum. The BER results are shown in Figure 4. It is obvious that CPSO outperforms PSO in terms of convergence speed. A maximum of only 30 iterations is required for the CPSO to provide initial coefficients that are sufficiently good for the GSIR to converge to the same performance achieved


Figure 4: BER performance of the GSIR initialized by PSO, CPSO, and MMSE as a function of the maximum number of iterations.

with an MMSE initialization. As suggested by the results given in Figure 3, the performance of PSO/CPSO depends on the number of particles and subswarms. The relation between the number of particles/subswarms and the number of iterations is investigated in the following section. In order to keep the computational overhead at a minimum, the maximum number of iterations for the PSO/CPSO should be set to a minimum.

    5. Complexity Analysis

The complexity of PSO/CPSO is determined by the number of particles, subswarms, dimensions, and the required number of iterations for convergence. The number of particles and subswarms is a design parameter of the algorithm and is commonly chosen to achieve a good performance in terms of MSE for channel estimation. The number of dimensions is a fixed parameter depending on the optimization problem (e.g., the number of transmit and receive antennas and/or the channel memory length). In each iteration, all N′p particles of all Ns subswarms have to evaluate their current position and compare their current fitness value with their personal best as well as the global best, which results in a complexity of order

O(CPSO(it)) = O(N′p · Ns · D),   (11)

per iteration.

The overall number of particles influences the number of iterations needed to converge. In case of using only one particle, the required number of iterations until convergence is maximized and the computational complexity per iteration is minimized, while, on the other hand, using an infinite



Figure 5: Histogram of the minimum number of iterations required to converge in dependence of the dimensionality.

number of particles minimizes the number of iterations and maximizes the computational complexity per iteration. With an infinite number of particles, PSO is equivalent to an exhaustive search. Hence, a tradeoff between the overall size of PSO/CPSO and the number of iterations has to be found. Furthermore, the required minimum number of iterations depends on the optimization metric as well. In general, the more complex (higher dimensional) the optimization problem is, the more iterations are needed, and vice versa.

A strategy often used to determine the maximum number of iterations imax is to find the minimum value of iterations at which the optimum MSE performance is reached. This approach requires extensive simulations over a variety of parameters in order to determine the optimum tradeoff between complexity and iterations.

In the following, a general criterion to determine the maximum number of iterations based on the probability distribution function of the iterations required by PSO/CPSO for convergence is presented. The advantage of this strategy is that only a fraction of the parameters need to be simulated, while missing parameters can be reconstructed by means of interpolation. PSO/CPSO is said to reach convergence if the fitness value pOPT of (8)/(9) is below a certain threshold th for γ consecutive iterations. In this case, the threshold is set to th = 10−6 with γ = 10.
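This stopping criterion can be sketched in a few lines; the following is a minimal illustration (the class and variable names are our own, not from the paper), assuming the global-best fitness value is available once per iteration:

```python
# Sketch of the convergence test described above: PSO/CPSO is declared
# converged once the best fitness value has stayed below the threshold
# th for gamma consecutive iterations (th = 1e-6, gamma = 10 in the text).

class ConvergenceMonitor:
    def __init__(self, th=1e-6, gamma=10):
        self.th = th        # fitness threshold
        self.gamma = gamma  # required consecutive iterations below th
        self.count = 0      # current streak of sub-threshold iterations

    def update(self, best_fitness):
        """Feed the global-best fitness of one iteration; returns True
        once the stopping criterion is fulfilled."""
        if best_fitness < self.th:
            self.count += 1
        else:
            self.count = 0  # streak broken, start over
        return self.count >= self.gamma
```

The counter is reset whenever the fitness rises above the threshold again, so only an uninterrupted streak of γ sub-threshold iterations terminates the run.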

Monte Carlo simulations with a fixed parameter set for CPSO and a varying number of transmit antennas are conducted. The iteration at which the stopping criterion is fulfilled is recorded. A histogram of the iterations fulfilling the stopping criterion for different dimensions is shown in Figure 5. Each histogram can be approximated by a generalized extreme value distribution. The characteristic shape of the function is the steep slope once a certain value is exceeded and a slow decline after the maximum is reached. The probability density function (pdf) of the generalized extreme value distribution is given in (12). The

distribution is characterized by three parameters, namely, the shape parameter k, the scale parameter σ, and the location parameter μ:

p(x; k, μ, σ) = (1/σ) exp(−(1 + k(x − μ)/σ)^(−1/k)) · (1 + k(x − μ)/σ)^(−1−1/k).   (12)

Given the pdf for a certain parameter set, the maximum number of iterations imax can be defined to cover a certain percentage of the pdf. The amount to which the pdf is covered defines the tradeoff between performance and complexity. Setting the maximum number of iterations too low reduces the complexity of the algorithm, but also implies a performance loss due to a premature stop of the algorithm. Vice versa, setting the maximum number of iterations too large increases complexity without a gain in performance. In case of D = 4 (cf. Figure 5), the location parameter results in μ = 45, which corresponds to the most likely iteration at which the algorithm converges. In order to cover at least 90% of the required iterations, the maximum number of iterations should be set to imax ≥ 105.
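The choice of imax thus corresponds to a quantile of the fitted distribution, which can be obtained by inverting the GEV cumulative distribution function F(x) = exp(−(1 + k(x − μ)/σ)^(−1/k)). In the sketch below, the shape and scale values are hypothetical placeholders; only μ = 45 is taken from the text:

```python
import math

# Sketch: choosing i_max as the 90% quantile of a fitted generalized
# extreme value (GEV) distribution. Inverting the GEV cdf gives the
# quantile function below. k and sigma are assumed values, not from
# the paper; mu = 45 is quoted in the text for D = 4.

def gev_quantile(p, k, mu, sigma):
    """Quantile (inverse cdf) of the GEV distribution for shape k != 0."""
    return mu + sigma * ((-math.log(p)) ** (-k) - 1.0) / k

def gev_cdf(x, k, mu, sigma):
    """Cdf of the GEV distribution, valid where 1 + k(x - mu)/sigma > 0."""
    return math.exp(-(1.0 + k * (x - mu) / sigma) ** (-1.0 / k))

# Assumed shape/scale values for illustration:
k, mu, sigma = 0.1, 45.0, 25.0
i_max = math.ceil(gev_quantile(0.9, k, mu, sigma))  # covers 90% of the pdf
```

Note that library implementations of the GEV (e.g., SciPy's `genextreme`) often use a shape parameter with the opposite sign convention, so the fitted shape value must be translated accordingly.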

The aforementioned tradeoff between the number of particles/subswarms and the number of iterations is evaluated in the following. The maximum number of iterations is defined to cover 90% of the pdf.

The number of iterations required by PSO/CPSO until convergence depends on the number of dimensions of the optimization problem and the allocated number of swarms and, inherently, particles. In Figure 6, the required number of iterations depending on the dimensions of the optimization problem is given for different swarm sizes. With a constant swarm size, the iterations increase quadratically with the dimensions. On the contrary, with increasing swarm sizes, the required iterations are nearly constant over the dimensions, as can be seen from the similar starting points of the curves. The required number of iterations for PSO (Ns = 1) to converge exceeds 8000 at 20 dimensions. Since the three parameters of the extreme value distribution are correlated over the number of particles and subswarms, not all swarm sizes need to be simulated; missing values can be calculated by means of interpolation. The optimum tradeoff between swarm size and iterations can thus be determined with a minimum amount of simulations.
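The interpolation step can be illustrated as follows; all numerical values except μ = 45 are hypothetical, and a simple piecewise-linear rule stands in for whatever interpolation scheme is actually used:

```python
# Sketch of reconstructing missing distribution parameters by
# interpolation: a GEV parameter (here the location mu) is simulated
# for only a few swarm sizes and interpolated in between.
# The swarm sizes and mu values below are hypothetical placeholders,
# except mu = 45, quoted in the text for the Ns = 4 setup.

simulated_ns = [1.0, 4.0, 10.0]    # swarm sizes actually simulated
simulated_mu = [300.0, 45.0, 20.0] # fitted location parameters

def interp_mu(ns):
    """Piecewise-linear interpolation of mu over the swarm size."""
    pairs = list(zip(simulated_ns, simulated_mu))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if x0 <= ns <= x1:
            return y0 + (y1 - y0) * (ns - x0) / (x1 - x0)
    raise ValueError("swarm size outside the simulated range")
```

The same interpolation would be applied to the shape and scale parameters, after which the maximum number of iterations for any intermediate swarm size follows from the quantile of the reconstructed distribution.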

The overall complexity of CPSO depends on the complexity per iteration and the number of iterations:

O(CPSO(total)) = O(CPSO(it)) · O(I).   (13)

The number of iterations is taken into account here, as it is influenced by the dimensionality of the optimization problem. The complexity of the MMSE estimator is dominated by the matrix inversion, which has a complexity of order

O(MMSE) = O(NT³).   (14)

A fair comparison of the complexity is not straightforward when, for example, the number of complex multiplications



Figure 6: Required number of iterations for different swarm sizes as a function of the number of dimensions at an SNR of 10 dB.

is considered, as different implementations as well as optimization techniques for both algorithms will yield different results. On the basis of the O-notation, CPSO and MMSE are compared to give an insight into the influence of the different parameters of CPSO. The total complexity of CPSO is separable into two parts, as can be seen from (13), namely, the complexity per iteration and the complexity introduced by the number of iterations. The complexity of CPSO per iteration increases linearly with the number of transmit antennas. It is obvious that the complexity of the MMSE will eventually be larger than the complexity of CPSO per iteration with an increasing number of transmit antennas.

However, the total complexity of CPSO will only be lower for a given number of transmit antennas if the number of iterations required by CPSO for convergence is small, an assumption that is fulfilled when CPSO is used for initialization.
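The two complexity orders can be compared with simple operation counts. The following sketch mirrors (11), (13), and (14); the particle and subswarm values are taken from the CPSO setup used earlier (N′p = 15, Ns = 4), while the remaining numbers are illustrative:

```python
# Order-of-magnitude operation counts for the two estimators discussed
# above. These are not exact multiplication counts, only the O-notation
# terms evaluated for concrete (partly assumed) parameter values.

def cpso_ops_per_iter(n_p, n_s, dim):
    """Cost of one CPSO iteration, cf. (11): N'_p * N_s * D."""
    return n_p * n_s * dim

def cpso_total_ops(n_p, n_s, dim, iters):
    """Total CPSO cost: per-iteration cost times iterations, cf. (13)."""
    return cpso_ops_per_iter(n_p, n_s, dim) * iters

def mmse_ops(n_t):
    """Cost of the MMSE matrix inversion, cf. (14): N_T^3."""
    return n_t ** 3

# Hypothetical 8x8 MIMO setting (D = N_T = 8) with N'_p = 15, N_s = 4:
per_iter = cpso_ops_per_iter(15, 4, 8)  # grows linearly in N_T
total = cpso_total_ops(15, 4, 8, 30)    # 30 iterations (initialization case)
mmse = mmse_ops(8)                      # grows cubically in N_T
```

With these numbers, one CPSO iteration is already cheaper than the MMSE inversion, while the total CPSO cost over 30 iterations is not, illustrating why a small iteration count is essential for CPSO to pay off.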

With an increasing number of transmit antennas, a larger number of subswarms/particles can be supported with a lower complexity than the MMSE estimator. Using a criterion for optimum performance rather than fast convergence (initialization) further increases the complexity of CPSO. Hence, optimum performance with a complexity lower than MMSE is only reached for larger MIMO systems.

    The following conclusions can be drawn from this result.

(i) CPSO is suitable for medium- to large-sized MIMO systems when used for initialization.

(ii) Channel estimation with PSO/CPSO targeting optimum performance is computationally feasible for MIMO systems with several dozens of transmit antennas. Such scenarios are often referred to as large-MIMO systems [27].

    6. Multiobjective PSO

High-dimensional optimization problems can be solved efficiently by the aforementioned cooperative particle swarm optimization algorithm. However, PSO as well as CPSO is limited to solving single-objective problems, a constraint that is fulfilled for channel estimation given quasi-time-invariant channels. With time-varying channels, the need for multiobjective optimization arises, since the optimization metric (8) and/or (9) can no longer be minimized by a single constant matrix. Specifically, this means that the position of a particle, which is used as a candidate solution, may be exact for one time index but will not be for a second time index.

Considering the coefficients over time as additional dimensions is infeasible due to the inherent increase of iterations and complexity. However, due to the correlation of adjacent channel coefficients in the time domain, a multiobjective particle swarm optimization (MOPSO) can be applied. The fitness function is changed for MOPSO to minimize K objectives simultaneously:

f(pi[k]) = ‖y[k] − pi[k] x[k]‖²,   1 ≤ k ≤ K,   (15)

where K represents the number of training symbols in case of pilot-aided channel estimation of a time-varying channel. Obviously, the candidate solution pi[k] may minimize the kth objective but is not necessarily optimal for the (k + 1)th objective. This means that one objective cannot be optimized without neglecting the performance of at least one other objective. The previous concept of one global best position pGB is replaced by an archive F with the so-called dominant solutions for all objectives. A particle pi is said to dominate another particle p′i, denoted as pi ≻ p′i, if and only if

(1) ∀λ ∈ {1, . . . , Λ} : fλ(pi) ≤ fλ(p′i),
(2) ∃λ′ ∈ {1, . . . , Λ} : fλ′(pi) < fλ′(p′i).

In each iteration, an updated candidate solution is compared to the solutions stored in the external archive. A candidate solution is added to the archive if it improves at least one objective compared to the already existing solutions. A candidate solution replaces an existing solution if it improves all objectives of that solution. The set of solutions contained in the archive is termed the Pareto set:
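The dominance test and archive update can be sketched as follows (function names are illustrative; objective vectors stand in for the particles' fitness values (15)):

```python
# Sketch of Pareto dominance and archive maintenance as described above.
# A candidate dominates another if it is no worse in every objective and
# strictly better in at least one; the archive keeps only non-dominated
# objective vectors.

def dominates(f_a, f_b):
    """True if objective vector f_a dominates f_b."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def update_archive(archive, candidate):
    """archive: list of objective vectors. Insert the candidate if it is
    non-dominated, removing any archive members it dominates."""
    if any(dominates(member, candidate) for member in archive):
        return archive  # candidate is dominated by the archive: reject
    kept = [m for m in archive if not dominates(candidate, m)]
    return kept + [candidate]
```

Two mutually non-dominated candidates are both kept, which is exactly how the archive accumulates the Pareto set of (16).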

F := { pi ∈ R^D | ∄ p′i ∈ R^D : p′i ≻ pi }.   (16)

It is of importance that all objectives are optimized equally well by the particles of a swarm. Without an additional control mechanism to ensure a certain degree of diversity (quality) within the archive, all particles may concentrate on one objective, equivalent to the single-objective PSO. A diverse solution set can be found by applying the so-called sigma method [28]. The idea of the sigma method is that each point of the D-dimensional search space is assigned a sigma value. The Euclidean distance between the sigma value of a current particle and the sigma values of the archive members is calculated. The leader of the current particle is the archive member with the smallest Euclidean distance between its sigma value and the sigma value of the particle's current position.
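A minimal sketch of sigma-method leader selection, assuming two objectives; the two-objective sigma formula σ = (f1² − f2²)/(f1² + f2²) follows [28], and the function names are our own:

```python
# Hedged sketch of sigma-method leader selection [28] for two objectives.
# Each solution is mapped to a scalar sigma value computed from its
# objective vector; a particle follows the archive member whose sigma
# value is closest to its own, which spreads particles over the front.

def sigma_value(f1, f2):
    """Two-objective sigma value of a point with objectives (f1, f2)."""
    return (f1 ** 2 - f2 ** 2) / (f1 ** 2 + f2 ** 2)

def select_leader(particle_objs, archive_objs):
    """Return the index of the archive member with the nearest sigma value."""
    s = sigma_value(*particle_objs)
    dists = [abs(s - sigma_value(*a)) for a in archive_objs]
    return dists.index(min(dists))
```

A particle balancing both objectives (σ ≈ 0) is thus steered towards archive members in the middle of the front rather than towards its extremes.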


Initialize swarm
Locate leader in an external archive
i = 1
while i < imax or convergence do
    for each particle do
        Select leader from archive
        Update position using (1)/(2), (4)
        Mutation
        Evaluation using (15)
        Update pBest
    end for
    Update leaders in the external archive
    Quality (leaders)
    i++
end while

    Algorithm 3: Multiobjective PSO algorithm.

The principle of MOPSO is described by Algorithm 3. The update function is hereby unchanged compared to PSO/CPSO. An additional mutation operator is recommended, as the MOPSO algorithm occasionally converges prematurely. The mutation operator is used at certain iteration intervals and increases the velocity of a particle and/or reinitializes a particle. The archive maintenance and the additional mutation operator contribute to an increased complexity of the algorithm. Nevertheless, a multiobjective PSO (MOPSO) is successfully used in [29] to initialize a graph-based iterative receiver, with only 5 MOPSO iterations required to achieve the optimum performance with the subsequent graph-based receiver.

    7. Conclusion

An overview of particle swarm optimization for MIMO channel estimation has been given. Generally applicable solutions for MIMO channel estimation are presented, and the achievable performance of the algorithms is evaluated by means of Monte Carlo simulations. Furthermore, an analysis of the complexity based on the distribution of the required iterations until convergence is introduced. The proposed method allows the calculation of a maximum number of iterations with a minimum of simulation overhead, since missing parameters can be reconstructed by means of interpolation.

It has been shown that cooperative PSO is able to approach the optimum MMSE estimator. Thus, for a potential implementation, the required number of iterations is of utmost importance. The presented MSE and BER results further illustrate that CPSO is also able to converge quickly to a “reasonable” MSE, which allows an iterative receiver to converge to the same performance achieved with an MMSE-based initialization with just a minimum of iterations required for the CPSO. The advantage of CPSO over MMSE is the flexible tradeoff between complexity per iteration and required number of iterations, which makes it ideally suited for parallelization. Furthermore, the parameters for CPSO do not need to be tuned by empirical measures, so the algorithm can be directly applied for MIMO channel estimation.

Although PSO/CPSO can be used to estimate time-varying channels by utilizing the extension to multiple objectives, the strength of PSO/CPSO lies in the estimation of time-invariant channels.

    References

[1] 3GPP TS 36.300, version 10.0.0, 2010.
[2] J. Ylioinas and M. Juntti, “Iterative joint detection, decoding, and channel estimation in turbo-coded MIMO-OFDM,” IEEE Transactions on Vehicular Technology, vol. 58, no. 4, pp. 1784–1796, 2009.
[3] C. Knievel, Z. Shi, P. A. Hoeher, and G. Auer, “2D graph-based soft channel estimation for MIMO-OFDM,” in Proceedings of the IEEE International Conference on Communications (ICC ’10), Cape Town, South Africa, May 2010.
[4] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[5] J. Kennedy and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann, 2001.
[6] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
[7] M. Y. Alias, S. Chen, and L. Hanzo, “Multiple-antenna-aided OFDM employing genetic-algorithm-assisted minimum bit error rate multiuser detection,” IEEE Transactions on Vehicular Technology, vol. 54, no. 5, pp. 1713–1721, 2005.
[8] Y. Gao, Z. Li, X. Hu, and H. Liu, “A multi-population particle swarm optimizer and its application to blind multichannel estimation,” in Proceedings of the 3rd International Conference on Natural Computation (ICNC ’07), Haikou, China, August 2007.
[9] S. Tao, X. Jiadong, and Z. Kai, “Blind MIMO identification using particle swarm algorithm,” in Proceedings of the International Conference on Wireless Communications, Networking and Mobile Computing (WiCom ’07), Shanghai, China, September 2007.
[10] H. Bodur, C. A. Tunc, D. Aktas, V. B. Erturk, and A. Altintas, “Particle swarm optimization for SAGE maximization step in channel parameter estimation,” in Proceedings of the 2nd European Conference on Antennas and Propagation (EuCAP ’07), Ankara, Turkey, November 2007.
[11] W. Dong, J. Li, and Z. Lu, “Joint frequency offset and channel estimation for MIMO systems based on particle swarm optimization,” in Proceedings of the IEEE 67th Vehicular Technology Conference (VTC ’08-Spring), Singapore, May 2008.
[12] K. Schmeink, R. Block, C. Knievel, and P. A. Hoeher, “Joint channel and parameter estimation for combined communication and navigation using particle swarm optimization,” in Proceedings of the 7th Workshop on Positioning, Navigation and Communication (WPNC ’10), Dresden, Germany, March 2010.
[13] C.-H. Cheng, H.-C. Hsu, Y.-F. Huang, J.-H. Wen, and L.-C. Hsu, “Performance of an adaptive PSO parallel interference canceller for CDMA communication systems,” in Proceedings of the 5th Annual ICST Wireless Internet Conference (WICON ’10), Singapore, March 2010.
[14] H. Palally, S. Chen, W. Yao, and L. Hanzo, “Particle swarm optimisation aided semi-blind joint maximum likelihood channel estimation and data detection for MIMO systems,” in Proceedings of the IEEE/SP 15th Workshop on Statistical Signal Processing (SSP ’09), pp. 309–312, September 2009.
[15] K. K. Soo, Y. M. Siu, W. S. Chan, L. Yang, and R. S. Chen, “Particle-swarm-optimization-based multiuser detector for CDMA communications,” IEEE Transactions on Vehicular Technology, vol. 56, no. 5, pp. 3006–3013, 2007.
[16] L. D. Orazio, Study and Development of Novel Techniques for PHY-Layer Optimization of Smart Terminals in the Context of Next-Generation Mobile Communications, Ph.D. thesis, University of Trento, Trento, Italy, 2008.
[17] N. Liangfang and D. Sidan, “Evolutionary particle swarm algorithm based on higher-order cumulant fitting for blind channel identification,” in Proceedings of the International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM ’08), Dalian, China, October 2008.
[18] W. Qiang, Z. Jiashu, and Y. Jing, “Identification of nonlinear communication channel using a novel particle swarm optimization technique,” in Proceedings of the International Conference on Computer Science and Software Engineering (CSSE ’08), pp. 1162–1165, December 2008.
[19] F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
[20] Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE World Congress on Computational Intelligence, Anchorage, Alaska, USA, May 1998.
[21] D. Bratton and J. Kennedy, “Defining a standard for particle swarm optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS ’07), pp. 120–127, April 2007.
[22] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 6, pp. 227–234, 2009.
[23] A. Carlisle and G. Dozier, “An off-the-shelf PSO,” in Proceedings of the Particle Swarm Optimization Workshop, pp. 1–6, April 2001.
[24] K. Zielinski and R. Laur, “Stopping criteria for a constrained single-objective particle swarm optimization algorithm,” Informatica, vol. 31, no. 1, pp. 51–59, 2007.
[25] T. Hendtlass, “Particle swarm optimization and high dimensional problem spaces,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC ’09), Trondheim, Norway, May 2009.
[26] B. Hassibi and B. M. Hochwald, “How much training is needed in multiple-antenna wireless links?” IEEE Transactions on Information Theory, vol. 49, no. 4, pp. 951–963, 2003.
[27] S. K. Mohammed, A. Zaki, A. Chockalingam, and B. S. Rajan, “High-rate space-time coded large-MIMO systems: low-complexity detection and channel estimation,” IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 6, pp. 958–974, 2009.
[28] S. Mostaghim and J. Teich, “Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO),” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS ’03), Indianapolis, Ind, USA, April 2003.
[29] C. Knievel, P. A. Hoeher, G. Auer, and A. Tyrrell, “Particle swarm enhanced graph-based channel estimation for MIMO-OFDM,” in Proceedings of the IEEE Vehicular Technology Conference (VTC ’11-Spring), Budapest, Hungary, May 2011.
