Simulated Annealing: Basics and application examples


By Ricardo Alejos

I. Introduction

Finding the global minimum can be a hard optimization problem, since objective functions can have many local minima. A procedure for solving such problems should sample values of the objective function in such a way as to have a high probability of finding a near-optimal solution, and it should lend itself to efficient implementation. These criteria are met by the Simulated Annealing method, which was introduced by Kirkpatrick et al. and independently by Černý in the early 1980s.

    Simulated Annealing (SA) is a stochastic computational technique derived from statistical mechanics for finding

    near globally-minimum-cost solutions to large optimization problems [1].

Statistical mechanics is the study of the behavior of large systems of interacting components, such as atoms in a fluid, in thermal equilibrium at a finite temperature. If the system is in thermal equilibrium, its particles have a probability of changing from one energy state to another given by the Boltzmann distribution, which depends on the system temperature and the magnitude of the proposed energy change. High temperatures allow essentially random changes, while low temperatures tend to allow only changes that decrease the energy state.

In order to achieve a low-energy state, one must use an annealing process, which consists of elevating the system temperature and then gradually lowering it, spending enough time at each temperature to reach thermal equilibrium.

In contrast to many classical optimization methods, this one is not based on gradients and does not have deterministic convergence: the same seed and parametric configuration may make the algorithm converge to a different solution from one run to another. This is due to the random way in which it decides to make its steps towards the final candidate solution.

Such behavior may not produce the most precise or optimal solution; however, it has other exploitable advantages: it can get unstuck from local optima when the algorithm is in a high-energy state, it can deal with noisy objective functions, and it can be used for combinatorial and discrete optimization, among others. All this is achieved with a small number of iterations and function evaluations compared to other optimization methods. When applicable, the SA algorithm can be used in alternation with other methods to increase the accuracy of the final solution.

    In this document the basic theory of this algorithm is explained and some of its benefits are verified with practical

    examples.

II. Basic theory of Simulated Annealing

The name of this algorithm is inspired by metallurgy. In that discipline, annealing is a technique that involves heating a metal and cooling it down in a controlled manner so as to increase the size of its solid crystals and thereby reduce their defects. This notion of slow cooling is implemented in SA to decrease the probability of accepting local optima of the objective function.


Each state (solution candidate) can be considered a node interconnected with other nodes. The current node may change from one position to another depending on the change in magnitude of the objective function. This principle is implemented in such a way that worse candidates are less likely to be accepted than better ones, favoring the creation of a path towards the optimal solution.

Each next-node candidate is generated randomly. A popular variant of the original SA also considers a temperature-dependent step size, which balances the chances of escaping from local optima against the precision of the final candidate solution.
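As a minimal sketch of this variant (in MATLAB, matching the step-size rule used in the appendix implementation), the step size can be interpolated linearly between a coarse value used at the highest temperature and a fine value used at the lowest; the argument names below are illustrative:

% Linear temperature-dependent step size: coarse at high T, fine at low T
S = @(T, sMax, sMin, tMax, tMin) (sMax - sMin)/(tMax - tMin)*(T - tMin) + sMin;
S(1.0, 2, 0.01, 1, 0)   % highest temperature: coarse step (2)
S(0.0, 2, 0.01, 1, 0)   % lowest temperature: fine step (0.01)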

Let's say our current node is c and the next generated candidate is n. This step has a probability P(c,n) of being taken and 1-P(c,n) of being rejected. When a candidate is accepted, the next thing to do is simply make c=n; when a candidate is rejected, a new one is generated. After this, the process is repeated (all these details can be better understood by looking at the pseudo-code included in this document). The probability function is described by the mathematical expression in (1).

P(c,n) = \frac{1}{1 + e^{\Delta E / T}}    (1)

where ΔE is the change in the objective function value from c to n, and T is the current value of the algorithm's temperature parameter. Notice that this function tends to a value of 0.5 when T ≫ ΔE, and that it varies from ~0 to ~1 in other cases. Therefore, the higher the temperature the more random the algorithm becomes (which matches its natural model), since every candidate gets roughly the same chance of being accepted or rejected.

When T ≪ 1 the behavior of P(c,n) becomes very similar to a mirrored unit step function: its value goes rapidly to 0 when ΔE > 0 and to 1 when ΔE < 0.
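The limiting behavior of (1) can be checked numerically with a short MATLAB sketch (the ΔE and T values below are arbitrary examples):

P = @(DE, T) 1./(1 + exp(DE./T));   % acceptance probability from (1)
P( 1, 100)    % T >> DE: ~0.5, accepting or rejecting is almost a coin flip
P( 1, 0.1)    % low T, DE > 0: ~0, uphill steps are rejected
P(-1, 0.1)    % low T, DE < 0: ~1, downhill steps are accepted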

III. Practical examples

The following cases compare Simulated Annealing against other optimization methods through experiments that highlight the advantages and disadvantages of these methods and give an idea of how they complement each other.

Case 1: Bowl function

The Bowl function is a well-behaved, easy-to-optimize function that serves as a good first test for optimization algorithms. The mathematical expression that describes it is written in (2). Its minimum can be calculated analytically to be x = [6, 4.5]^T.

y = (x_1 - 6)^2 + \frac{1}{25}(x_2 - 4.5)^4    (2)
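As a minimal sketch, (2) can be coded as a MATLAB anonymous function (the name bowl is illustrative):

bowl = @(x) (x(1) - 6)^2 + (1/25)*(x(2) - 4.5)^4;   % the Bowl function (2)
bowl([6, 4.5])   % evaluates to 0 at the analytic minimum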

In order to assess the performance of the Simulated Annealing algorithm, it is compared with the Conjugate Gradients FR and Nelder-Mead algorithms. The results of applying them to the Bowl function are shown in TABLE 3. As a visual aid on how Simulated Annealing chooses its next steps, see Figure 3.

With the purpose of exploring the behavior far away from the optimum, let's start the algorithm using the point [-20, 60] as the seed value. The results are shown in TABLE 4, and the evolution of the Simulated Annealing algorithm can be observed in Figure 4.

Case 2: Bowl function with random noise

This test case consists of adding a relatively small amount of noise to the objective function: in this case, Gaussian noise centered on the function value at each point, with a standard deviation of 0.5. Just as before, a set of experiments was made using the same algorithms to assess the performance of each one compared to the others. The results are shown in TABLE 5, while the evolution of the Simulated Annealing algorithm is visually described in Figure 5.
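A minimal sketch of this noisy objective, reusing the bowl function above and MATLAB's randn for the Gaussian noise:

noisyBowl = @(x) bowl(x) + 0.5*randn();   % noise centered on the true value, sigma = 0.5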

Case 3: Multiple local minimum points

For this test, the function used is periodic and exponentially decreasing. So that it can still be illustrated with graphs, the exercise keeps the function two-dimensional. The mathematical expression that describes this function is (3).

y = \left( \sin\frac{4\pi x_1}{13} + \cos\frac{4\pi x_2}{13} \right) e^{(x_1 + x_2 - 22)/40}    (3)

Let's also limit the solution space to the range 0 to 12 for both x1 and x2. Within this scenario, the analytic solution occurs at [11.4285, 9.8035]. Such limits are incorporated into the problem with penalty (punishment) functions, which increase the value of the function as it leaves the range of interest.
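The exact penalty expressions are not given in the text; a minimal sketch, assuming quadratic penalties that grow outside [0, 12] (the weight 100 and the names f3, penalty and fPen are illustrative):

f3 = @(x) (sin(4*pi*x(1)/13) + cos(4*pi*x(2)/13)) * exp((x(1) + x(2) - 22)/40);  % objective (3)
penalty = @(x) 100*sum(max(-x, 0).^2 + max(x - 12, 0).^2);  % zero inside [0, 12], grows outside
fPen = @(x) f3(x) + penalty(x);   % penalized objective passed to the optimizer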

The results of these experiments, as well as the Simulated Annealing evolution, are shown in TABLE 6 and Figure 6, respectively.

Case 4: Low-pass filter on micro-strip technology

This exercise consists of finding the size parameters that allow the following low-pass filter to match its specification requirements (which are known to be strict). The requirements are shown below, and they have to be met by varying x = [W1, L1, S1]^T while keeping z = [H, εr, Wp, Lp, tan(δ), σ, t] fixed:

|S_{11}(f)| \le 0.9  for 0.2 GHz \le f \le 6 GHz
|S_{21}(f)| \le 0.1  for 8 GHz \le f \le 10 GHz


The value of z is [0.794 mm, 2.2, 2.45 mm, 12.25 mm, 0.01, 5.8×10⁷ S/m, 15.24 µm]. These dimensions and the filter geometry can be visualized in Figure 7.

This problem can be solved using a minimax formulation, where the objective function is the maximum error with respect to the spec requirements. In this kind of formulation, the maximum error has to become less than zero for all the specs to be met.
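A sketch of such a minimax objective, assuming a hypothetical simulator filterS(x, f) that returns the filter's |S11| and |S21| over a frequency vector (that helper and the grid sizes are illustrative, not the author's actual setup):

function err = maxSpecError(x)
    fPass = linspace(0.2e9, 6e9, 50);   % passband frequency grid (Hz)
    fStop = linspace(8e9, 10e9, 20);    % stopband frequency grid (Hz)
    [S11p, ~] = filterS(x, fPass);      % hypothetical circuit-simulation helper
    [~, S21s] = filterS(x, fStop);
    ePass = abs(S11p) - 0.9;            % spec: |S11| <= 0.9 in the passband
    eStop = abs(S21s) - 0.1;            % spec: |S21| <= 0.1 in the stopband
    err = max([ePass, eStop]);          % worst violation; err < 0 means all specs are met
end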

Like the previous example, this one has many local minima, which made other algorithms converge into them. For this problem the SA algorithm is fed with a periodic (re-heating) temperature profile. Such a profile makes the algorithm recover a large step size and randomness after it has nearly converged in the first cycle, and that is how SA can escape the local minima.
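A minimal sketch of such a profile: identical normalized cooling ramps concatenated into one temperature vector, in the spirit of the 3-cycle sawtooth profiles of TABLE 6 (the 30-point cycle length is illustrative):

t = repmat(linspace(1, 0.01, 30), 1, 3);   % 3-cycle sawtooth, 90 elements in total
% every time the ramp restarts at 1, the step size and randomness are recovered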

TABLE 7 shows the results for each of the three algorithms used in the comparison. Even though the Nelder-Mead algorithm was able to achieve a negative maximum error, it cost more than twice as many function evaluations as the SA algorithm needed to find its nearby solution. For such cases the designer has to decide on the trade-off between computation cost and solution quality when choosing an optimization algorithm. The Conjugate Gradients algorithm did not work as well as the other two.

Refer to Figure 8 to visualize the evolution of the algorithm in terms of the maximum design error.

Case 5: Noisy filter optimization

In this last case, the same problem as in Case 4 is solved, but with added complexity: random noise was added to the circuit response. This noise is centered on the target function value and has a standard deviation of 0.1. The experiment results are given in TABLE 8. Both seeds make the SA algorithm find near-to-zero solutions; the Nelder-Mead algorithm, however, performs acceptably only with the first one (with the second it diverges by almost 90% of the maximum error).

Another valuable application of the SA algorithm is therefore not just direct optimization of complex problems, but also finding good seeds for finer optimization algorithms. In this case, feeding the last SA solution to the Nelder-Mead algorithm produces a final objective function value of 0.0646 (though it still evaluates the function 601 times).
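A sketch of this two-stage use, chaining the appendix implementation SimulatedAnnealing2 (TABLE 2) with MATLAB's fminsearch (a Nelder-Mead implementation); the seed, limits and step sizes below are illustrative, and maxSpecError is the hypothetical objective sketched earlier:

t = linspace(1, 0.01, 100);             % 100-element cooling profile
x0 = [3.5, 5.6, 4.2];                   % coarse initial guess (mm)
lims = [0 0 0; 20 20 20];               % [min(x); max(x)] search box (mm)
[xSA, fSA] = SimulatedAnnealing2(@maxSpecError, x0, t, [2 0.01], lims);
xFine = fminsearch(@maxSpecError, xSA); % Nelder-Mead refinement from the SA seed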

IV. Conclusions

The SA algorithm is a cheap optimization method compared to gradient-based methods and the Nelder-Mead algorithm. Its ability to locate good-enough solutions in a very small number of iterations makes it a tool that can be used for initial exploration of the objective function. Once a set of good candidate solutions is obtained, the algorithm can be adapted to finer steps and less randomness in order to achieve more precise solutions. Alternatively, its inexact solutions can be used as seed values for other optimization algorithms that would otherwise get stuck in local minima of the objective function if they were started from the original seed value.

In order to increase the capability of the SA algorithm to escape local minima, periodic temperature profiles can be used so that the search recovers its mobility after settling on a candidate solution. If it happens to lose the global minimum due to this recovered randomness, it will still report the best point found along the search.

Having a variable step size (determined by the current algorithm temperature) also allows the algorithm to gradually change its search style from coarse to fine, benefiting its search for a global solution. Configuring the step is also helpful when the objective function is noisy, since such functions tend to make gradient-based algorithms diverge: they use neighboring function values to determine the search direction and step magnitude.


Noisy functions are a problem frequently observed when real-world measurements are involved, because all measurements have a range of uncertainty, which can be modeled as noise. Other methods used to solve this kind of problem are based on the idea of averaging a set of samples of the objective function evaluated at a fixed point. Such methods are effective at finding a good candidate solution; however, they multiply the function evaluation cost considerably (which does not happen with SA).
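For contrast, a minimal sketch of that averaging approach (the name avgObjective is illustrative): each call averages k noisy evaluations at a fixed point, multiplying the evaluation cost by k.

avgObjective = @(f, x, k) mean(arrayfun(@(i) f(x), 1:k));
% e.g. avgObjective(noisyBowl, [6, 4.5], 20) spends 20 evaluations on a single point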

V. References

[1] J. Gall, "Simulated Annealing," in Computer Vision: A Reference Guide, Tübingen, Germany: Springer, 2014, p. 898.

[2] P. Rossmanith, "Simulated Annealing," in Algorithms Unplugged, Springer, 2011, p. 406.

[3] E. Aarts et al., "Simulated Annealing," in Metaheuristic Procedures for Training Neural Networks, Springer, 2006.


    VI. Appendix

Tables

TABLE 1. Pseudo-code for the Simulated Annealing algorithm. Notice that the acceptance of a new point happens with a probability P given by the probability function (1).

Make the seed value the current node (c = x0)
Evaluate the objective function at the seed value (E0 = f(c))
For each temperature T in a decreasing normalized set ({1 → 0}):
    Generate a new step candidate (n)
    Evaluate the objective function at the new candidate (E1 = f(n))
    If P(c,n) > random(0,1):
        Accept the new candidate (c = n)
Output: final node value.

TABLE 2. MATLAB implementation of the Simulated Annealing algorithm with the improvements mentioned in the section "The current implementation".

function [x_opt, f_val, XN, FN] = SimulatedAnnealing2(f, x0, t, s, l)
%{
Simulated Annealing - Optimization Algorithm
Inputs
  f  - Function to be optimized
  x0 - Seed value of the independent variable of "f"
  t  - Vector with temperature values
  s  - Step size [Maximum Minimum]
  l  - x limits [min(x); max(x)]
Outputs
  x_opt - x value that minimizes "f" to the found minimum
  f_val - value of "f" at x_opt
  XN    - x value history during the algorithm run
  FN    - f value history during the algorithm run
The more costly "f" is, the shorter the t vector should be.
%}
N = 1:1:length(t);                % iterator
P = @(DE,T) 1/(1+exp(DE/T));      % acceptance probability (1): bigger when DE is more negative
S = @(T) (s(1)-s(2))/(max(t)-min(t))*(T-min(t))+s(2);  % linear step size: s(1) at max(t), s(2) at min(t)
xn = x0;
XN = zeros(length(t), length(x0));
FN = zeros(length(t), 1);
fn = feval(f, xn);
for n = N
    T = t(n);                     % update current temperature
    while (1)
        xt = rand(size(xn))-0.5;  % generate random direction
        xt = xt/norm(xt,2);       % make direction vector unitary
        xt = xt*S(T);             % scale step size
        xt = xn + xt;             % advance that step
        if (sum(xt>l(1,:))==length(xt) && sum(xt<l(2,:))==length(xt))
            break;                % only keep candidates inside the limits
        end
    end
    % --- remainder reconstructed from the pseudo-code in TABLE 1 ---
    ft = feval(f, xt);            % evaluate the candidate
    DE = ft - fn;                 % change in the objective function
    if P(DE,T) > rand()           % acceptance test
        xn = xt;                  % accept the new candidate
        fn = ft;
    end
    XN(n,:) = xn;                 % record the run history
    FN(n) = fn;
end
[f_val, i_opt] = min(FN);         % report the best point visited during the search
x_opt = XN(i_opt,:);
end


TABLE 4. Results of applying the Conjugate Gradients FR, Nelder-Mead and Simulated Annealing algorithms to the Bowl function from the far-away seed [-20, 60].

Algorithm                                 | Seed value | Found solution   | Function evaluations | Euclidean norm of error
Conjugate Gradients FR                    | [-20, 60]  | [6.0000, 4.4226] | 9688                 | 0.0774
Nelder-Mead                               | [-20, 60]  | [6.0000, 4.5000] | 180                  | 1.3098e-06
SA (70 elements in downward ramp profile) | [-20, 60]  | [5.8363, 4.8158] | 70                   | 0.3557
SA (30 elements in downward ramp profile) | [-20, 60]  | [6.0663, 4.0117] | 30                   | 0.4928

TABLE 5. Results from the experiments made with the noisy Bowl function, using the Conjugate Gradients FR, Nelder-Mead and Simulated Annealing (10 function evaluations only) algorithms. It now becomes obvious that Simulated Annealing got a much better result than the other algorithms, which simply diverge from the analytic solution.

Algorithm                              | Seed value | Found solution                                   | Function evaluations | Euclidean norm of error
Conjugate Gradients FR                 | [1, 1]     | [5.8477, -1.0434] (maxed out iterations)         | 37317                | 5.5455
Nelder-Mead                            | [1, 1]     | [1.0590, 0.9977] (maxed out function evaluations) | 401                 | 6.0563
SA (10 elements downward ramp profile) | [1, 1]     | [6.0581, 4.3855]                                 | 10                   | 0.1284

TABLE 6. Results of applying the Conjugate Gradients FR, Nelder-Mead and Simulated Annealing algorithms to minimize the function described by (3). Notice that the Simulated Annealing algorithm is now the one capable of getting a much better result than the other algorithms. It also needed more tries to get such solutions (each try throws a different result, as it is a stochastic process).

Algorithm                                        | Seed value | Found solution     | Function evaluations | Euclidean norm of error
Conjugate Gradients FR                           | [4, 4]     | [4.9285, 3.3035]   | 54                   | 9.1924
Nelder-Mead                                      | [4, 4]     | [4.9285, 3.3035]   | 80                   | 9.1924
SA (10 elements downward ramp profile)           | [4, 4]     | [5.2441, 7.0207]   | 10                   | 6.7817
SA (30 elements downward ramp profile), 5 tries  | [4, 4]     | [11.0842, 9.8603]  | 30*5 = 150           | 0.3490
SA (30 elements, 3-cycle sawtooth), 2 tries      | [4, 4]     | [11.3353, 10.0847] | 30*2 = 60            | 0.2963
SA (90 elements, 3-cycle sawtooth)               | [4, 4]     | [11.2248, 9.8939]  | 90                   | 0.2229
SA (90 elements, 3-cycle sawtooth)               | [4, 4]     | [11.6230, 9.8049]  | 90                   | 0.1945

TABLE 7. Results of minimizing the error with respect to the design specifications for the filter described in Case 4 (Low-pass filter on micro-strip technology), using the Conjugate Gradients FR, Nelder-Mead and Simulated Annealing algorithms. In this case Nelder-Mead performed best, followed by the Simulated Annealing algorithm. The Conjugate Gradients FR method did not converge to a solution even after more than 30000 function evaluations.

Algorithm              | Seed value (mm)  | Found solution (mm)      | Maximum error value at solution | Function evaluations
Conjugate Gradients FR | [3.5, 5.6, 4.2]  | [1.4002, 6.1634, 4.9724] | 0.1753                          | 37317
Nelder-Mead            | [3.5, 5.6, 4.2]  | [0.2730, 1.1980, 0.9154] | -0.0077                         | 163
Simulated Annealing    | [3.5, 5.6, 4.2]  | [0.5438, 0.9741, 1.6639] | 0.0837                          | 63

TABLE 8. Results of minimizing the maximum error for the design problem of Case 5 (Noisy filter optimization). The same filter as in Case 4 has been used, but this time a white noise component is added to the filter response, making the problem more difficult. Notice how the Nelder-Mead algorithm still reports good results (not negative, but the lowest in the table) by maxing out its function evaluations. The Simulated Annealing algorithm does not converge to the best solution, but can be used to generate good seeds near the region where the optimum resides.

Algorithm                     | Seed value (mm) | Found solution (mm)      | Maximum error value at solution | Function evaluations
Nelder-Mead                   | [3.5, 5.6, 4.2] | [0.2368, 1.1248, 0.8668] | 0.0115                          | 601
Simulated Annealing (1st try) | [3.5, 5.6, 4.2] | [0.3401, 0.8781, 2.0473] | 0.1593                          | 100
Simulated Annealing (2nd try) | [3.5, 5.6, 4.2] | [0.5284, 1.0444, 1.3042] | 0.1326                          | 100
Nelder-Mead                   | [7, 11.2, 8.4]  | [2.0137, 1.9943, 1.9928] | 0.8610                          | 601
Simulated Annealing (1st try) | [7, 11.2, 8.4]  | [1.4555, 0.4623, 2.4090] | 0.3735                          | 100
Simulated Annealing (2nd try) | [7, 11.2, 8.4]  | [0.9258, 0.8552, 2.1638] | 0.2072                          | 100


    Figures

Figure 1. Plot of the transition probability function (1). It describes how probable it is to accept or reject a step given the energy difference ΔE between the current point and the proposed one.


Figure 2. Different temperature profiles used in this implementation of the Simulated Annealing algorithm: (a) shows a soft-transition profile, which is used periodically in (c); (b) is a downward ramp profile, which is used periodically in (d).



Figure 3. Evolution of the Simulated Annealing algorithm on the Bowl function starting from the point [1, 1]. Accepted steps are marked with a circle; notice how they concentrate near the analytic minimum. In (a) the algorithm evaluates the objective function at 70 points, while in (b) it uses only 10. That number equals the number of points in the temperature profile being used.


Figure 4. Evolution of the Simulated Annealing algorithm on the Bowl function starting from the point [-20, 60]. This time, 70 temperature profile points were used to produce (a) and 30 for (b).

    Figure 5. Evolution of the Simulated Annealing algorithm within the noisy Bowl function surface starting at point [1,1].



Figure 6. Evolution of the Simulated Annealing algorithm applied to the function in (3). All these scenarios use a sawtooth profile with a different number of elements: (a) shows the case with 30 elements, while (b) and (c) show the behavior using 90 elements. Each element in the temperature profile translates to one function evaluation.


Figure 7. Dimensional description of the RF filter used for Case 4 (Low-pass filter on micro-strip technology) and Case 5 (Noisy filter optimization).


Figure 8. (a) shows the evolution of the maximum design error for the problem in Case 4 (Low-pass filter on micro-strip technology) as the Simulated Annealing algorithm progresses through the temperature profile shown in (b). Notice that (b) has 64 elements, so the function is evaluated 64 times, and that the best error value occurs between evaluations #20 and #30.
