Hindawi Publishing Corporation
Discrete Dynamics in Nature and Society
Volume 2013, Article ID 474852, 10 pages
http://dx.doi.org/10.1155/2013/474852

Research Article
A Memetic Lagrangian Heuristic for the 0-1 Multidimensional Knapsack Problem

Yourim Yoon (1) and Yong-Hyuk Kim (2)

(1) Future IT R&D Laboratory, LG Electronics Umyeon R&D Campus, 38 Baumoe-ro, Seocho-gu, Seoul 137-724, Republic of Korea
(2) Department of Computer Science and Engineering, Kwangwoon University, 20 Kwangwoon-ro, Nowon-gu, Seoul 139-701, Republic of Korea

Correspondence should be addressed to Yong-Hyuk Kim; yhdfl[email protected]

Received 9 January 2013; Accepted 23 April 2013

Academic Editor: Xiaohui Liu

Copyright © 2013 Y. Yoon and Y.-H. Kim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present a new evolutionary algorithm to solve the 0-1 multidimensional knapsack problem. We tackle the problem using the duality concept, differently from traditional approaches. Our method is based on Lagrangian relaxation. Lagrange multipliers transform the problem, keeping the optimality as well as decreasing the complexity. However, it is not easy to find Lagrange multipliers nearest to the capacity constraints of the problem. Through empirical investigation of the Lagrangian space, we can see the potential of using a memetic algorithm. So we use a memetic algorithm to find the optimal Lagrange multipliers. We show the efficiency of the proposed method by experiments on well-known benchmark data.

1. Introduction

The knapsack problems have a number of applications in various fields, for example, cryptography, economy, and networks. The 0-1 multidimensional knapsack problem (0-1MKP) is an NP-hard problem, but not strongly NP-hard [1]. It can be considered as an extended version of the well-known 0-1 knapsack problem (0-1KP).
In the 0-1KP, given a set of objects, each object that can go into the knapsack has a size and a profit. The knapsack has a certain capacity for size. The objective is to find an assignment that maximizes the total profit without exceeding the given capacity. In the case of the 0-1MKP, the number of capacity constraints is more than one. For example, the constraints can be a weight besides a size. Naturally, the 0-1MKP is a generalized version of the 0-1KP.

Let n and m be the numbers of objects and capacity constraints, respectively. Each object i has a profit v_i and, for each constraint j, a capacity consumption value w_ji. Each constraint j has a capacity b_j. Then, we formally define the 0-1MKP as follows:

    maximize  v^T x
    subject to  Wx ≤ b,  x ∈ {0, 1}^n,        (1)

where v = (v_i) and x = (x_i) are n-dimensional column vectors, W = (w_ji) is an m × n matrix, b = (b_j) is an m-dimensional column vector, and T means the transpose of a matrix or a column vector. W, b, and v are given, and each element of them is a nonnegative integer. In brief, the objective of the 0-1MKP is to find a binary vector x which maximizes the weighted sum v^T x satisfying the m linear constraints Wx ≤ b.

For the knapsack problem with only one constraint, there have been a number of studies on efficient approximation algorithms to find a near-optimal solution. In this paper, we are interested in the problem with more than one constraint, that is, the multidimensional knapsack problem. In [2, 3], among others, exact algorithms for the 0-1MKP have been introduced. Heuristic approaches for the 0-1MKP have also been extensively studied in the past [4–13]. Also, a number of evolutionary algorithms to solve the problem have been proposed [6, 14–19]. A number of methods for the 0-1 bi-knapsack problem, which is a particular case of the 0-1MKP, have also been proposed. The reader is referred to [20–22] for deep surveys of the 0-1MKP.

However, most studies directly deal with the discrete search space. In this paper, we transform the search space of



the problem into a real space instead of directly managing the original discrete space. The 0-1MKP is an optimization problem with multiple constraints. We transform the problem using multiple Lagrange multipliers. However, we have a lot of limitations since the domain is not continuous but discrete. Lagrangian heuristics have mainly been used to get good upper bounds of integer problems by duality. To get good upper bounds, a number of researchers have studied dual solvings using Lagrangian duality, surrogate duality, and composite duality, and there was a recent study using the cutting plane method for the 0-1MKP. However, in this paper, we focus on finding good lower bounds, that is, good feasible solutions.
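To fix ideas, formulation (1) can be checked by brute-force enumeration of all 2^n binary vectors on a tiny instance; the numbers below are invented for illustration and are not from the benchmark data.

```python
from itertools import product

# Hypothetical toy instance: n = 4 objects, m = 2 capacity constraints.
v = [10, 7, 4, 3]            # profits v_i
W = [[5, 4, 3, 1],           # consumption w_1i for constraint 1
     [2, 6, 4, 3]]           # consumption w_2i for constraint 2
b = [10, 9]                  # capacities b_j

best_profit, best_x = -1, None
for x in product([0, 1], repeat=len(v)):
    # Feasible iff sum_i w_ji * x_i <= b_j for every constraint j.
    if all(sum(w * xi for w, xi in zip(row, x)) <= cap
           for row, cap in zip(W, b)):
        profit = sum(p * xi for p, xi in zip(v, x))
        if profit > best_profit:
            best_profit, best_x = profit, x
```

Such enumeration is only viable for very small n; the heuristics discussed below avoid it entirely.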

There have been a number of studies about Lagrangian methods for discrete problems. There were also a few methods that used Lagrange multipliers for the 0-1MKP. Typically, most of them found just good upper bounds to be used in a branch-and-bound algorithm by dualizing constraints, and hence could not find a feasible solution directly. To the best of the authors' knowledge, there have been only two Lagrangian methods to find lower bounds (feasible solutions). One is the constructive heuristic (CONS) proposed by Magazine and Oguz [10], and the other is its improvement using randomization (R-CONS) [13, 19]. There was also a method called LM-GA that improved the performance by combining CONS with genetic algorithms [23]. LM-GA used real-valued weight-codings to make a variant of the original problem and then applied CONS (see Section 2.2 for details). LM-GA provided a new viewpoint to solve the 0-1MKP, but it just used CONS for fitness evaluation and did not make any contribution in the aspect of Lagrangian theory.

In this paper, we present a local improvement heuristic for optimizing Lagrange multipliers. However, it is not easy to obtain good solutions by just using the heuristic itself. Through empirical investigation of the Lagrangian space, we devise a novel memetic Lagrangian heuristic combined with genetic algorithms. The remainder of this paper is organized as follows. In Section 2, we present our memetic Lagrangian method together with a brief literature survey. In Section 3, we give our experimental results on well-known benchmark data, and we draw conclusions in Section 4.

2. Lagrangian Optimization

2.1. Preliminaries. The 0-1MKP is a maximization problem with constraints. It is possible to transform the original optimization problem into the following problem using Lagrange multipliers λ = (λ_1, λ_2, …, λ_m):

    maximize  v^T x − ⟨λ, Wx − b⟩
    subject to  x ∈ {0, 1}^n.        (2)

It is easy to find the maximum of the transformed problem using the following formula:

    v^T x − ⟨λ, Wx − b⟩ = Σ_{i=1}^{n} x_i (v_i − Σ_{j=1}^{m} λ_j w_ji) + ⟨λ, b⟩.        (3)

To maximize the above formula for fixed λ, we have to set x_i to be 1 only if v_i > Σ_{j=1}^{m} λ_j w_ji for each i. Since each v_i does not have an effect on the others, getting the maximum is fairly easy. Since this algorithm computes just Σ_{j=1}^{m} λ_j w_ji for each i, its time complexity becomes O(mn).

If we only find such a λ for the problem, we get the optimal solution of the 0-1MKP in polynomial time. We may have the problem that such a λ never exists or that it is difficult to find even though it exists. However, this method is not entirely useless. For arbitrary λ, let the vector x which achieves the maximum in the above formula be x*. Since λ is chosen arbitrarily, we do not guarantee that x* satisfies the constraints of the original problem. Nevertheless, letting the capacity be b* = Wx* instead of b makes x* the optimal solution by the following proposition [13, 19]. We call this procedure the Lagrangian method for the 0-1MKP (LMMKP).
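As a sketch (variable names are ours, not the authors'), the LMMKP step that maximizes (3) for a fixed λ follows directly from the selection rule above:

```python
def lmmkp(lmbda, v, W):
    """Maximize (3) for fixed multipliers λ: set x_i = 1 exactly when the
    reduced profit v_i - sum_j λ_j * w_ji is positive; also return
    b* = W x*. The whole pass costs O(mn)."""
    n, m = len(v), len(W)
    x = [1 if v[i] > sum(lmbda[j] * W[j][i] for j in range(m)) else 0
         for i in range(n)]
    b_star = [sum(W[j][i] * x[i] for i in range(n)) for j in range(m)]
    return x, b_star

# Toy data (invented): with λ = 0 every positive-profit object is taken.
x, b_star = lmmkp([0.0, 0.0], [10, 7, 4, 3], [[5, 4, 3, 1], [2, 6, 4, 3]])
```

With a sufficiently large λ, every reduced profit turns negative and the empty knapsack is returned instead.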

Proposition 1. The vector x* obtained by applying LMMKP with given λ is the maximizer of the following problem:

    maximize  v^T x
    subject to  Wx ≤ b*,  x ∈ {0, 1}^n.        (4)

In particular, in the case that λ_k is 0, the kth constraint is ignored. That is, x* is the maximizer of the problems which have capacities c such that c_k ≥ b*_k and c_i = b*_i for all i ≠ k. In general, the following proposition [19] holds.

Proposition 2. If LMMKP is applied with λ such that λ_{k_1} = 0, λ_{k_2} = 0, …, and λ_{k_r} = 0, replacing the capacity b* by c such that

    c_i = b*_i,  if i ≠ k_j ∀j,
    c_i ≥ b*_i,  otherwise,        (5)

in Proposition 1 makes the proposition still hold.

Instead of finding the optimal solution of the original 0-1MKP directly, we consider the problem of finding the λ corresponding to the given constraints. That is, we transform the problem of dealing with an n-dimensional binary vector x into one of dealing with an m-dimensional real vector λ. If there are Lagrange multipliers corresponding to the given constraints and we find them, we easily get the optimal solution of the 0-1MKP. Otherwise, we try to get a solution close to the optimum by devoting ourselves to finding Lagrange multipliers which satisfy the given constraints and are nearest to them.

2.2. Prior Work. In this subsection, we briefly examine existing Lagrangian heuristics for discrete optimization problems. Coping with the nondifferentiability of the Lagrangian led to the key technical development, the subgradient algorithm. The subgradient algorithm is a fundamentally simple procedure. Typically, it has been used as a technique for generating good upper bounds for branch-and-bound methods, where it is known as Lagrangian relaxation. The reader is referred to [24] for a deep survey of Lagrangian relaxation. At each iteration of the subgradient algorithm, one takes a step from the present Lagrange multiplier in the direction opposite to a subgradient, which is the direction of (b* − b), where b* is the capacity obtained by LMMKP and b is the original capacity.

The only previous attempt to find lower bounds using a Lagrangian method is CONS [10]; however, CONS without hybridization with other metaheuristics could not show satisfactory results. LM-GA by [23] obtained better results through the hybridization of a weight-coded genetic algorithm and CONS. In LM-GA, a candidate solution is represented by a vector (w_1, w_2, …, w_n) of weights. Weight w_i is associated with object i. Each profit p_i is modified by applying several biasing techniques with these weights; that is, we can obtain a modified problem instance P′ which has the same constraints as those of the original problem instance but has a different objective function. Then solutions for this modified problem instance are obtained by applying a decoding heuristic. In particular, LM-GA used CONS as a decoding heuristic. The feasible solutions for the modified problem instance are also feasible for the original problem instance since they satisfy the same constraints. So weight-coding does not need an explicit repairing algorithm.

The proposed heuristic is different from LM-GA in that it improves CONS itself by using properties of Lagrange multipliers, whereas LM-GA just uses CONS as an evaluation function. Lagrange multipliers in the proposed heuristic can move in more diverse directions than in CONS because of its random factor.

2.3. Randomized Constructive Heuristic. Yoon et al. [13, 19] proposed a randomized constructive heuristic (R-CONS) as an improved variant of CONS. First, λ is set to be 0. Consequently, x_i becomes 1 for each v_i > 0. It means that all positive-valued objects are put in the knapsack, and so almost all constraints are violated. If λ is increased, some objects become taken out. We increase λ adequately for only one object to be taken out. We change only one Lagrange multiplier at a time: we randomly choose one number k and change λ_k.

Reconsider (3). Making (v_i − Σ_{j=1}^{m} λ_j w_ji) negative by increasing λ_k lets x_i = 0 by LMMKP. For each object i such that x_i = 1, let α_i be the increment of λ_k needed to make x_i be 0. Then (v_i − Σ_{j=1}^{m} λ_j w_ji − α_i w_ki) has to be negative. That is, if we increase λ_k by α_i such that α_i > (v_i − Σ_{j=1}^{m} λ_j w_ji) / w_ki, the ith object is taken out. So if we just change λ_k to λ_k + min_i α_i, leave λ_i as it is for i ≠ k, and apply LMMKP again, exactly one object is taken out. We take out objects one by one in this way and stop this procedure when every constraint is satisfied.

Algorithm 1 shows the pseudocode of R-CONS. The number of operations to take out an object is at most n, and computing α_i for each object i takes O(m) time. Hence, the total time complexity becomes O(n²m).
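A runnable sketch of R-CONS along the lines of Algorithm 1 might look as follows. The LMMKP helper is repeated so the snippet is self-contained, and the eviction set is restricted to objects currently selected with w_ki > 0, a guard the pseudocode leaves implicit; the toy instance is invented.

```python
import random

def lmmkp(lmbda, v, W):
    """x_i = 1 iff v_i > sum_j λ_j w_ji; also return b* = W x*."""
    n, m = len(v), len(W)
    x = [1 if v[i] > sum(lmbda[j] * W[j][i] for j in range(m)) else 0
         for i in range(n)]
    b_star = [sum(W[j][i] * x[i] for i in range(n)) for j in range(m)]
    return x, b_star

def r_cons(v, W, b, rng=random):
    """Raise one randomly chosen λ_k just enough to evict one object,
    repeating until the LMMKP solution satisfies every constraint."""
    n, m = len(v), len(W)
    lmbda = [0.0] * m
    while True:
        x, b_star = lmmkp(lmbda, v, W)
        if all(bs <= cap for bs, cap in zip(b_star, b)):
            return lmbda, x, b_star
        k = rng.randrange(m)
        # α_i: smallest increase of λ_k driving object i's reduced
        # profit v_i − Σ_j λ_j w_ji down to zero (skip w_ki = 0).
        alpha = {i: (v[i] - sum(lmbda[j] * W[j][i] for j in range(m))) / W[k][i]
                 for i in range(n) if x[i] == 1 and W[k][i] > 0}
        if not alpha:
            continue  # this k cannot evict anything; redraw k
        i_min = min(alpha, key=alpha.get)
        lmbda[k] += alpha[i_min]

# Toy instance (invented); a seeded generator keeps the run reproducible.
v, W, b = [10, 7, 4, 3], [[5, 4, 3, 1], [2, 6, 4, 3]], [10, 9]
lmbda, x, b_star = r_cons(v, W, b, rng=random.Random(0))
```

Because the multiplier is raised by exactly min α_i and LMMKP uses a strict inequality, the arg-min object's reduced profit reaches zero and it drops out, matching the one-object-at-a-time behavior described above.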

2.4. Local Improvement Heuristic. Our goal is to improve a real vector λ obtained by R-CONS whose corresponding capacity is quite close to the capacity of the given problem instance. To devise a local improvement heuristic, we exploited the following proposition [13, 19].

Proposition 3. Suppose that λ and λ′ correspond to (x, b) and (x′, b′) by the LMMKP, respectively. Let λ = (λ_1, λ_2, …, λ_m) and λ′ = (λ′_1, λ′_2, …, λ′_m), where λ_i = λ′_i for i ≠ k and λ_k ≠ λ′_k. Then, if λ_k < λ′_k, then b_k ≥ b′_k, and if λ_k > λ′_k, then b_k ≤ b′_k.

Let b be the capacity of the given problem instance, and let b* be the capacity obtained by LMMKP with λ. By the above proposition, if b*_k > b_k, choosing λ′ = (λ_1, …, λ′_k, …, λ_m) such that λ′_k > λ_k and applying LMMKP with λ′ makes the value of b*_k smaller. It makes the kth constraint satisfied, or the exceeded amount for the kth capacity decreased. Of course, another constraint may become violated by this operation. Also, which λ_k to change is at issue in the case that several constraints are not satisfied. Hence, it is necessary to set efficient rules about which λ_k to change and how much to change it. If good rules are made, we can quickly find better Lagrange multipliers than randomly generated ones.

We selected the method that chooses a random number k (≤ m) and increases or decreases the value of λ_k by the above proposition iteratively. In each iteration, a capacity vector b* is obtained by applying LMMKP with λ. If b* ≤ b, all constraints are satisfied, and hence the best solution is updated. Since the possibility of finding a better capacity exists, the algorithm does not stop here. Instead, it chooses a random number k and decreases the value of λ_k. If b* ≰ b, we focus on satisfying constraints preferentially. For this, we choose a random number k among the numbers such that their constraints are not satisfied and increase λ_k, hoping that the kth value of the corresponding capacity decreases and then the kth constraint becomes satisfied. We set the amount of λ_k's change to the fixed value δ.

Most Lagrangian heuristics for discrete problems have focused on obtaining good upper bounds, but this algorithm is distinguished in that it primarily pursues finding feasible solutions. Algorithm 2 shows the pseudocode of this local improvement heuristic. It takes O(nmN) time, where N is the number of iterations.

The direction taken by our local improvement heuristic looks similar to that of the subgradient algorithm described in Section 2.2, but the main difference lies in that, in each iteration, the subgradient algorithm changes all coordinate values by the subgradient direction, whereas our local improvement heuristic changes only one coordinate value. Consequently, this could make our local improvement heuristic find lower bounds more easily than the subgradient algorithm, which usually produces upper bounds.
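The one-coordinate update rule can be sketched as below (again self-contained; clamping λ_k at zero after a decrease is our own guard, and the returned tuple layout is an assumption, not the authors' interface):

```python
import random

def lmmkp(lmbda, v, W):
    """x_i = 1 iff v_i > sum_j λ_j w_ji; also return b* = W x*."""
    n, m = len(v), len(W)
    x = [1 if v[i] > sum(lmbda[j] * W[j][i] for j in range(m)) else 0
         for i in range(n)]
    b_star = [sum(W[j][i] * x[i] for i in range(n)) for j in range(m)]
    return x, b_star

def local_improve(lmbda, v, W, b, n_iter=30000, delta=0.0002, rng=random):
    """Per iteration, move one λ_k by ±δ.
    Feasible (b* <= b): record the solution, then loosen a random λ_k.
    Infeasible: tighten a λ_k picked among the violated constraints."""
    lmbda = list(lmbda)
    m = len(W)
    best = None  # (profit, x, λ snapshot)
    for _ in range(n_iter):
        x, b_star = lmmkp(lmbda, v, W)
        violated = [j for j in range(m) if b_star[j] > b[j]]
        if not violated:
            profit = sum(p * xi for p, xi in zip(v, x))
            if best is None or profit > best[0]:
                best = (profit, x, list(lmbda))
            k = rng.randrange(m)
            lmbda[k] = max(0.0, lmbda[k] - delta)  # guard: keep λ_k >= 0
        else:
            k = rng.choice(violated)
            lmbda[k] += delta
    return best

# Toy instance (invented); starting from a large λ is immediately feasible.
v, W, b = [10, 7, 4, 3], [[5, 4, 3, 1], [2, 6, 4, 3]], [10, 9]
best = local_improve([2.0, 2.0], v, W, b, n_iter=2000, delta=0.05,
                     rng=random.Random(3))
```

Since the best solution is only recorded on feasible iterations, whatever the loop returns respects all capacity constraints.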

2.5. Investigation of the Lagrangian Space. The structure of the problem space is an important factor indicating problem difficulty, and analysis of the structure helps efficient search in the problem space [25–27]. Recently, Puchinger et al. [28] gave some insight into the solution structure of the 0-1MKP. In this subsection, we conduct some experiments and get some insight into the global structure of the 0-1MKP space.

In Section 2.1, we showed that there is a correspondence between the binary solution vector and the Lagrange multiplier


R-CONS(0-1MKP instance)
    λ ← 0
    I ← {1, 2, …, n}
    do
        k ← random integer in [1, m]
        for i ∈ I
            α_i ← (v_i − Σ_{j=1}^{m} λ_j w_ji) / w_ki
        λ_k ← λ_k + min_{i∈I} α_i
        I ← I \ arg min_{i∈I} α_i
        (x*, b*) = LMMKP(λ)
    until x* satisfies all the constraints
    return λ

Algorithm 1: Randomized constructive heuristic [13, 19].

LocalImprovementHeuristic(λ, 0-1MKP instance)
    for i ← 1 to N
        (x*, b*) ← LMMKP(λ)
        I ← {i : b*_i ≤ b_i} and J ← {i : b*_i > b_i}
        if I = {1, 2, …, m}
            update the best solution
            choose a random element k among I
            λ_k ← λ_k − δ
        else
            choose a random element k among J
            λ_k ← λ_k + δ
    return λ corresponding to the best solution

Algorithm 2: Local improvement heuristic.

vector. Strictly speaking, there cannot be a one-to-one correspondence in the technical sense of a bijection. The binary solution vector has only binary components, so there are only countably many such vectors. But the Lagrange multipliers are real numbers, so there are uncountably many Lagrange multiplier vectors. Several multiplier vectors may correspond to the same binary solution vector. Moreover, some multiplier vectors may have multiple binary solution vectors.

Instead of directly finding an optimal binary solution, we deal with Lagrange multipliers. In this subsection, we empirically investigate the relationship between the binary solution space and the Lagrangian space (i.e., the x_λ's and the λ's).

We made experiments on nine instances (m · n), changing the number of constraints (m) from 5 to 30 and the number of objects (n) from 100 to 500. We chose a thousand randomly generated Lagrange multipliers and plotted, for each pair of Lagrange multiplier vectors, the relation between the Hamming distance in the binary solution space and the Euclidean distance in the Lagrangian space. Figure 1 shows the plotting results. The smaller the number of constraints (m) is, the larger the Pearson correlation coefficient (ρ) is. We also made the same experiments on locally optimal Lagrange multipliers. Figure 2 shows the plotting results.

Locally optimal Lagrange multipliers show much stronger correlation than randomly generated ones. They show strong positive correlation (much greater than 0.5). It means that the binary solution space and the locally optimal Lagrangian space are roughly isometric. The results show that both spaces have similar neighborhood structures. So this hints that it is easy to find high-quality Lagrange multipliers satisfying all the capacity constraints by using memetic algorithms on the locally optimal Lagrangian space. That is, memetic algorithms can be a good choice for searching the Lagrangian space directly.
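The distance-correlation experiment can be mimicked in miniature (the toy instance and sample size below are ours; the paper uses the benchmark instances and a thousand multipliers):

```python
import math
import random

def lmmkp_x(lmbda, v, W):
    """Binary solution induced by multipliers λ under the LMMKP rule."""
    m = len(W)
    return [1 if v[i] > sum(lmbda[j] * W[j][i] for j in range(m)) else 0
            for i in range(len(v))]

def pearson(xs, ys):
    """Pearson correlation coefficient, 0.0 when either side is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

rng = random.Random(1)
v = [10, 7, 4, 3]                    # hypothetical profits
W = [[5, 4, 3, 1], [2, 6, 4, 3]]     # hypothetical consumptions

lambdas = [[rng.uniform(0, 2) for _ in range(len(W))] for _ in range(50)]
sols = [lmmkp_x(l, v, W) for l in lambdas]

eucl, hamm = [], []
for a in range(len(lambdas)):
    for c in range(a + 1, len(lambdas)):
        eucl.append(math.dist(lambdas[a], lambdas[c]))          # Lagrangian space
        hamm.append(sum(p != q for p, q in zip(sols[a], sols[c])))  # solution space

rho = pearson(eucl, hamm)
```

Plotting `hamm` against `eucl` for each pair reproduces the kind of scatter summarized by ρ in Figures 1 and 2.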

2.6. Proposed Memetic Algorithm. A genetic algorithm (GA) is a problem-solving technique motivated by Darwin's theory of natural selection in evolution. A GA starts with a set of initial solutions, which is called a population. Each solution in the population is called a chromosome, which is typically represented by a linear string. This population then evolves into different populations for a number of iterations (generations). At the end, the algorithm returns the best chromosome of the population as the solution to the problem. For each iteration, the evolution proceeds as follows. Two solutions of the population are chosen based on some probability distribution. These two solutions

[Figure 1: scatter plots omitted. x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors. Panels: (a) 5×100, ρ = 0.44; (b) 5×250, ρ = 0.50; (c) 5×500, ρ = 0.45; (d) 10×100, ρ = 0.35; (e) 10×250, ρ = 0.37; (f) 10×500, ρ = 0.37; (g) 30×100, ρ = 0.19; (h) 30×250, ρ = 0.19; (i) 30×500, ρ = 0.19.]

Figure 1: Relationship between distances on the solution space and those on the Lagrangian space (among randomly generated Lagrangian vectors). ρ: Pearson correlation coefficient.

are then combined through a crossover operator to produce an offspring. With low probability, this offspring is then modified by a mutation operator to introduce unexplored search space into the population, enhancing the diversity of the population. In this way, offspring are generated, and they replace part of, or the whole of, the population. The evolution process is repeated until a certain condition is satisfied, for example, after a fixed number of iterations. A GA that generates a considerable number of offspring per iteration is called a generational GA, as opposed to a steady-state GA, which generates only one offspring per iteration. If we apply a local improvement heuristic, typically after the mutation step, the GA is called a memetic algorithm (MA). Algorithm 3 shows a typical generational MA.

We propose an MA for optimizing Lagrange multipliers. It conducts search using an evaluation function with penalties for violated capacity constraints. Our MA provides an alternative search method to find a good solution by optimizing m Lagrange multipliers instead of directly dealing with binary vectors of length n (m ≪ n).

The general framework of an MA is used in our study. In the following, we describe each part of the MA.

Encoding. Each solution in the population is represented by a chromosome. Each chromosome consists of m genes corresponding to Lagrange multipliers. A real encoding is used for representing the chromosome λ.

Initialization. The MA first creates initial chromosomes using R-CONS, described in Section 2.3. We set the population size P to be 100.

Mating and Crossover. To select two parents, we use a random mating scheme. A crossover operator creates a new offspring by combining parts of the parents. We use the uniform crossover.

Mutation. After the crossover, a mutation operator is applied to the offspring. We use a gene-wise mutation. After generating a random number r from 1 to m, the value of each gene is divided by r.
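The two variation operators just described admit a direct sketch (function names are ours, not the authors'):

```python
import random

def uniform_crossover(p1, p2, rng=random):
    """Each gene of the offspring comes from a uniformly chosen parent."""
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

def genewise_mutation(chrom, rng=random):
    """Draw one random integer r in [1, m] and divide every gene by it."""
    r = rng.randint(1, len(chrom))
    return [g / r for g in chrom]

# Hypothetical parent multiplier vectors with m = 3 genes.
parents = ([0.4, 0.8, 0.1], [0.5, 0.2, 0.9])
child = genewise_mutation(uniform_crossover(*parents, rng=random.Random(7)),
                          rng=random.Random(7))
```

Dividing by r only shrinks multipliers (or leaves them unchanged when r = 1), which loosens constraints and pushes the search toward richer knapsacks.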

[Figure 2: scatter plots omitted. x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors. Panels: (a) 5×100, ρ = 0.91; (b) 5×250, ρ = 0.96; (c) 5×500, ρ = 0.97; (d) 10×100, ρ = 0.80; (e) 10×250, ρ = 0.92; (f) 10×500, ρ = 0.96; (g) 30×100, ρ = 0.60; (h) 30×250, ρ = 0.80; (i) 30×500, ρ = 0.91.]

Figure 2: Relationship between distances on the solution space and those on the Lagrangian space (among locally optimal Lagrangian vectors). ρ: Pearson correlation coefficient.

create an initial population of a fixed size P
do
    for i ← 1 to P/2
        choose parent1_i and parent2_i from population
        offspring_i ← crossover(parent1_i, parent2_i)
        mutation(offspring_i)
        apply a local improvement heuristic to offspring_i
        replace(population, offspring_i)
until (stopping condition)
return the best individual

Algorithm 3: Framework of our memetic algorithm.

Local Improvement We use a local improvement heuristicdescribed in Section 24 The number of iterations (119873) is setto be 30000 We set 120575 to 00002

Replacement and Stopping Condition After generating 1198752

offspring our MA chooses the best 119875 individual among thetotal 31198752 ones as the population of the next generation OurMA stops when the number of generations reaches 100

Evaluation Function. Our evaluation function is designed to find a Lagrange multiplier vector λ that has high fitness while satisfying the capacity constraints as much as possible. In our MA, the following is used as the objective function to maximize, obtained by subtracting a penalty from the objective function of the 0-1MKP:

v^T x* − γ Σ_{i : b*_i > b_i} (b*_i − b_i),   (6)

Discrete Dynamics in Nature and Society 7

Table 1: Results of local search heuristics on benchmark data.

Instance | CONS [10] Result(1) | R-CONS [13, 19] Best(2) / Ave(2) / CPU(3) | R-CONS-L Best(2) / Ave(2) / CPU(3)
5.100-0.25 | 13.70 | 9.71 / 22.32 / 2.20 | 2.47 / 3.20 / 2.86
5.100-0.50 | 7.25 | 5.73 / 14.00 / 1.57 | 1.10 / 1.35 / 2.92
5.100-0.75 | 5.14 | 3.70 / 7.98 / 0.87 | 0.72 / 0.83 / 2.78
Average (5.100-*) | 8.70 | 6.38 / 14.77 / 1.55 | 1.43 / 1.79 / 2.85
5.250-0.25 | 6.77 | 7.62 / 17.65 / 13.23 | 0.92 / 1.03 / 7.03
5.250-0.50 | 5.27 | 4.94 / 11.27 / 9.24 | 0.49 / 0.56 / 7.22
5.250-0.75 | 3.55 | 3.02 / 6.26 / 5.06 | 0.26 / 0.29 / 6.79
Average (5.250-*) | 5.20 | 5.19 / 11.73 / 9.18 | 0.56 / 0.63 / 7.01
5.500-0.25 | 4.93 | 7.80 / 15.27 / 53.00 | 0.48 / 0.50 / 14.14
5.500-0.50 | 2.65 | 4.18 / 9.45 / 36.56 | 0.20 / 0.22 / 14.64
5.500-0.75 | 2.22 | 2.63 / 5.43 / 19.75 | 0.14 / 0.15 / 13.67
Average (5.500-*) | 3.27 | 4.87 / 10.05 / 36.44 | 0.27 / 0.29 / 14.15
Average (5.*-*) | 5.72 | 5.48 / 12.18 / 15.72 | 0.75 / 0.91 / 8.00
10.100-0.25 | 15.88 | 11.49 / 22.93 / 3.66 | 2.75 / 6.30 / 5.36
10.100-0.50 | 10.01 | 7.75 / 14.76 / 2.63 | 1.47 / 3.68 / 5.43
10.100-0.75 | 6.57 | 3.61 / 8.05 / 1.42 | 0.91 / 2.02 / 5.32
Average (10.100-*) | 10.82 | 7.62 / 15.25 / 2.57 | 1.71 / 4.00 / 5.37
10.250-0.25 | 11.26 | 10.32 / 17.26 / 22.19 | 1.22 / 3.87 / 13.21
10.250-0.50 | 6.49 | 6.46 / 11.51 / 15.65 | 0.58 / 2.07 / 13.44
10.250-0.75 | 4.16 | 3.43 / 6.05 / 8.40 | 0.36 / 1.23 / 13.06
Average (10.250-*) | 7.30 | 6.74 / 11.61 / 15.41 | 0.72 / 2.39 / 13.23
10.500-0.25 | 8.27 | 9.59 / 14.91 / 88.30 | 0.64 / 2.89 / 26.41
10.500-0.50 | 5.26 | 5.56 / 9.28 / 67.33 | 0.27 / 1.39 / 26.86
10.500-0.75 | 3.49 | 3.28 / 5.24 / 32.50 | 0.22 / 0.90 / 26.01
Average (10.500-*) | 5.67 | 6.14 / 9.81 / 62.71 | 0.37 / 1.73 / 26.42
Average (10.*-*) | 7.93 | 6.83 / 12.22 / 26.90 | 0.93 / 2.71 / 15.01
30.100-0.25 | 16.63 | 11.75 / 22.02 / 9.55 | 5.06 / 9.17 / 13.65
30.100-0.50 | 10.31 | 7.51 / 14.57 / 6.80 | 2.80 / 5.52 / 13.74
30.100-0.75 | 6.60 | 4.20 / 7.97 / 3.79 | 1.64 / 3.22 / 13.63
Average (30.100-*) | 11.18 | 7.82 / 14.85 / 6.71 | 3.17 / 5.97 / 13.67
30.250-0.25 | 13.32 | 11.36 / 17.01 / 58.46 | 4.42 / 6.84 / 33.64
30.250-0.50 | 8.19 | 6.71 / 10.87 / 40.51 | 2.36 / 4.11 / 33.88
30.250-0.75 | 4.45 | 3.56 / 5.86 / 21.73 | 1.44 / 2.39 / 33.48
Average (30.250-*) | 8.65 | 7.21 / 11.25 / 40.23 | 2.74 / 4.45 / 33.66
30.500-0.25 | 10.34 | 9.39 / 14.02 / 228.56 | 4.00 / 5.94 / 67.26
30.500-0.50 | 6.80 | 6.17 / 9.20 / 157.49 | 2.23 / 3.69 / 67.55
30.500-0.75 | 3.82 | 3.34 / 4.90 / 83.43 | 1.27 / 2.14 / 66.59
Average (30.500-*) | 6.99 | 6.30 / 9.37 / 156.49 | 2.50 / 3.92 / 67.13
Average (30.*-*) | 8.94 | 7.11 / 11.82 / 67.81 | 2.80 / 4.78 / 38.15
Total average | 7.53 | 6.47 / 12.08 / 36.81 | 1.50 / 2.80 / 20.39

(1) Since CONS is a deterministic algorithm, each run always outputs the same result.
(2) Results from 1000 runs.
(3) Total CPU seconds on Pentium III 997 MHz.

where γ is a constant which indicates the degree of penalty; we used the fixed value 0.7.
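The penalized fitness of (6) can be written as a short function (a sketch; the names are illustrative):

```python
def penalized_objective(v, x_star, b_star, b, gamma=0.7):
    """Eq. (6): profit of the induced solution x* minus gamma times
    the total excess over the violated capacities."""
    profit = sum(vi * xi for vi, xi in zip(v, x_star))
    excess = sum(bs - bj for bs, bj in zip(b_star, b) if bs > bj)
    return profit - gamma * excess
```

For example, a solution with profit 16 that exceeds one capacity by 2 evaluates to 16 − 0.7 × 2 = 14.6.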

3. Experiments

We made experiments on well-known benchmark data publicly available from the OR-Library [29], which are the same as those used in [6]. They are composed of 270 instances with 5, 10, and 30 constraints and have different numbers of objects and different tightness ratios. The tightness ratio means α such that b_j = α Σ_{i=1}^{n} w_{ji} for each j = 1, 2, ..., m.

The classes of instances are briefly described below:

m.n-α: m constraints, n objects, and tightness ratio α. Each class has 10 instances.

The proposed algorithms were implemented with the gcc compiler on a Pentium III PC (997 MHz) running Linux. As the measure of performance, we


Table 2: Results of memetic Lagrangian heuristic on benchmark data.

Instance | Multistart R-CONS-L(1) Result / CPU(2) | Memetic Lagrangian heuristic Result / CPU(2) | Number improved/number equalled
5.100-0.25 | 2.40 / 14.35 | 2.32 / 14.66 | 0/0
5.100-0.50 | 1.08 / 14.71 | 1.08 / 15.00 | 0/0
5.100-0.75 | 0.72 / 13.94 | 0.72 / 14.31 | 0/0
Average (5.100-*) | 1.40 / 14.33 | 1.37 / 14.66 | Total 0/0
5.250-0.25 | 0.92 / 35.13 | 0.92 / 36.19 | 0/0
5.250-0.50 | 0.49 / 36.06 | 0.49 / 37.48 | 0/0
5.250-0.75 | 0.26 / 33.95 | 0.25 / 35.29 | 0/0
Average (5.250-*) | 0.56 / 35.05 | 0.56 / 36.32 | Total 0/0
5.500-0.25 | 0.48 / 70.72 | 0.48 / 71.24 | 0/0
5.500-0.50 | 0.20 / 73.18 | 0.20 / 75.46 | 0/0
5.500-0.75 | 0.14 / 68.46 | 0.14 / 69.85 | 0/0
Average (5.500-*) | 0.27 / 70.79 | 0.27 / 72.18 | Total 0/0
Average (5.*-*) | 0.74 / 40.06 | 0.73 / 41.05 | Total 0/0
10.100-0.25 | 2.45 / 26.79 | 2.27 / 26.30 | 0/0
10.100-0.50 | 1.46 / 27.17 | 1.14 / 26.50 | 0/1
10.100-0.75 | 0.89 / 26.57 | 0.69 / 25.98 | 0/0
Average (10.100-*) | 1.60 / 26.84 | 1.37 / 26.26 | Total 0/1
10.250-0.25 | 1.14 / 65.99 | 0.88 / 64.41 | 0/0
10.250-0.50 | 0.57 / 67.18 | 0.45 / 65.58 | 0/0
10.250-0.75 | 0.35 / 65.35 | 0.24 / 64.08 | 0/1
Average (10.250-*) | 0.69 / 66.17 | 0.52 / 64.69 | Total 0/1
10.500-0.25 | 0.58 / 131.96 | 0.50 / 128.11 | 0/0
10.500-0.50 | 0.27 / 134.25 | 0.24 / 130.57 | 0/0
10.500-0.75 | 0.21 / 130.06 | 0.14 / 127.50 | 0/0
Average (10.500-*) | 0.35 / 132.09 | 0.29 / 128.73 | Total 0/0
Average (10.*-*) | 0.88 / 75.04 | 0.73 / 73.23 | Total 0/2
30.100-0.25 | 4.92 / 68.24 | 3.19 / 67.96 | 0/3
30.100-0.50 | 2.67 / 68.70 | 1.43 / 70.52 | 0/3
30.100-0.75 | 1.56 / 68.17 | 0.89 / 67.80 | 0/2
Average (30.100-*) | 3.05 / 68.37 | 1.84 / 68.76 | Total 0/8
30.250-0.25 | 4.24 / 168.19 | 1.28 / 167.99 | 2/2
30.250-0.50 | 2.23 / 169.41 | 0.59 / 169.58 | 0/0
30.250-0.75 | 1.40 / 167.42 | 0.34 / 168.37 | 0/0
Average (30.250-*) | 2.62 / 168.34 | 0.74 / 168.65 | Total 2/2
30.500-0.25 | 3.81 / 336.30 | 0.68 / 334.14 | 2/0
30.500-0.50 | 2.13 / 338.02 | 0.29 / 337.37 | 1/0
30.500-0.75 | 1.24 / 333.18 | 0.19 / 334.40 | 0/0
Average (30.500-*) | 2.39 / 335.83 | 0.39 / 335.30 | Total 3/0
Average (30.*-*) | 2.69 / 190.85 | 0.99 / 190.90 | Total 5/10
Total average | 1.44 / 101.98 | 0.82 / 101.73 | Total 5/12

(1) Multistart R-CONS-L returns the best result from 5000 independent runs of R-CONS-L.
(2) Average CPU seconds on Pentium III 997 MHz.

used the percentage difference-ratio 100 × |LP_optimum − output| / output, which was used in [6], where LP_optimum is the optimal value of the linear programming relaxation over the reals. It has a value in the range [0, 100]; the smaller the value, the smaller the difference from the optimum.
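In code, the measure is a one-liner (a sketch; the helper name is hypothetical):

```python
def difference_ratio(lp_optimum, output):
    """Percentage difference-ratio: 100 * |LP_optimum - output| / output."""
    return 100.0 * abs(lp_optimum - output) / output
```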

First, we compared the constructive heuristics and the local improvement heuristic. Table 1 shows the results of CONS, R-CONS, and R-CONS-L, where R-CONS-L starts with a solution produced by R-CONS and locally improves it by the local improvement heuristic described in Section 2.4. We can see that R-CONS-L largely improves the results of R-CONS.

Next, to verify the effectiveness of the proposed MA, we compared its results with a multistart method using R-CONS-L. Multistart R-CONS-L returns the best result from


5000 independent runs of R-CONS-L. Table 2 shows the results of multistart R-CONS-L and our MA. The proposed MA outperformed multistart R-CONS-L.

The last column in Table 2 shows the numbers of improved and equalled results compared with the best results of [6]. Surprisingly, the proposed memetic algorithm could find better results than one of the state-of-the-art genetic algorithms [6] for some instances with 30 constraints (5 best over 90 instances).

4. Concluding Remarks

In this paper, we tried to find good feasible solutions by using a new memetic algorithm based on Lagrangian relaxation. To the best of the authors' knowledge, it is the first attempt to use a memetic algorithm for optimizing Lagrange multipliers.

The Lagrangian method guarantees optimality with respect to capacity constraints, which may differ from those of the original problem, but it is not easy to find Lagrange multipliers that accord with all the capacity constraints. Through empirical investigation of the Lagrangian space, we learned that the space of locally optimal Lagrangian vectors and that of their corresponding binary solutions are roughly isometric. Based on this fact, we applied a memetic algorithm, which searches only locally optimal Lagrangian vectors, to the transformed problem encoded by real values, that is, Lagrange multipliers. Then we could find high-quality Lagrange multipliers by the proposed memetic algorithm. We obtained a significant performance improvement over R-CONS and the multistart heuristic.

In recent years, there have been researches showing better performance than Chu and Beasley [6] on the benchmark data of the 0-1MKP [4, 5, 8, 9, 11-13]. Although the presented memetic algorithm did not dominate such state-of-the-art methods, this study showed the potential of memetic search on Lagrangian space. Because we used simple genetic operators, we believe that there is room for further improvement of our memetic search. Improvement using enhanced genetic operators tailored to real encoding, for example, [30], will be good future work.

Acknowledgments

The present research has been conducted by the research grant of Kwangwoon University in 2013. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012-0001855).

References

[1] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, San Francisco, Calif, USA, 1979.
[2] R. Mansini and M. G. Speranza, "CORAL: an exact algorithm for the multidimensional knapsack problem," INFORMS Journal on Computing, vol. 24, no. 3, pp. 399–415, 2012.
[3] S. Martello and P. Toth, "An exact algorithm for the two-constraint 0-1 knapsack problem," Operations Research, vol. 51, no. 5, pp. 826–835, 2003.
[4] S. Boussier, M. Vasquez, Y. Vimont, S. Hanafi, and P. Michelon, "A multi-level search strategy for the 0-1 multidimensional knapsack problem," Discrete Applied Mathematics, vol. 158, no. 2, pp. 97–109, 2010.
[5] V. Boyer, M. Elkihel, and D. El Baz, "Heuristics for the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 199, no. 3, pp. 658–664, 2009.
[6] P. C. Chu and J. E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, no. 1, pp. 63–86, 1998.
[7] F. Della Croce and A. Grosso, "Improved core problem based heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 39, no. 1, pp. 27–31, 2012.
[8] K. Fleszar and K. S. Hindi, "Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 36, no. 5, pp. 1602–1607, 2009.
[9] S. Hanafi and C. Wilbaut, "Improved convergent heuristics for the 0-1 multidimensional knapsack problem," Annals of Operations Research, vol. 183, no. 1, pp. 125–142, 2011.
[10] M. J. Magazine and O. Oguz, "A heuristic algorithm for the multidimensional zero-one knapsack problem," European Journal of Operational Research, vol. 16, no. 3, pp. 319–326, 1984.
[11] M. Vasquez and Y. Vimont, "Improved results on the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 165, no. 1, pp. 70–81, 2005.
[12] Y. Vimont, S. Boussier, and M. Vasquez, "Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem," Journal of Combinatorial Optimization, vol. 15, no. 2, pp. 165–178, 2008.
[13] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "A theoretical and empirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 218, no. 2, pp. 366–376, 2012.
[14] C. Cotta and J. M. Troya, "A hybrid genetic algorithm for the 0-1 multiple knapsack problem," in Artificial Neural Networks and Genetic Algorithms, G. D. Smith, N. C. Steele, and R. F. Albrecht, Eds., vol. 3, pp. 251–255, Springer, New York, NY, USA, 1997.
[15] J. Gottlieb, Evolutionary algorithms for constrained optimization problems [Ph.D. thesis], Department of Computer Science, Technical University of Clausthal, Clausthal, Germany, 1999.
[16] S. Khuri, T. Bäck, and J. Heitkötter, "The zero/one multiple knapsack problem and genetic algorithms," in Proceedings of the ACM Symposium on Applied Computing, pp. 188–193, ACM Press, 1994.
[17] G. R. Raidl, "An improved genetic algorithm for the multiconstrained 0-1 knapsack problem," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 207–211, May 1998.
[18] J. Thiel and S. Voss, "Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms," INFOR, vol. 32, no. 4, pp. 226–242, 1994.
[19] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "An evolutionary Lagrangian method for the 0-1 multiple knapsack problem," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 629–635, June 2005.
[20] A. Fréville, "The multidimensional 0-1 knapsack problem: an overview," European Journal of Operational Research, vol. 155, no. 1, pp. 1–21, 2004.
[21] A. Fréville and S. Hanafi, "The multidimensional 0-1 knapsack problem: bounds and computational aspects," Annals of Operations Research, vol. 139, no. 1, pp. 195–227, 2005.
[22] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.
[23] G. R. Raidl, "Weight-codings in a genetic algorithm for the multiconstraint knapsack problem," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 596–603, 1999.
[24] M. L. Fisher, "The Lagrangian relaxation method for solving integer programming problems," Management Science, vol. 27, no. 1, pp. 1–18, 1981.
[25] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Operations Research Letters, vol. 16, no. 2, pp. 101–113, 1994.
[26] T. Jones and S. Forrest, "Fitness distance correlation as a measure of problem difficulty for genetic algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 184–192, 1995.
[27] Y.-H. Kim and B.-R. Moon, "Investigation of the fitness landscapes in graph bipartitioning: an empirical study," Journal of Heuristics, vol. 10, no. 2, pp. 111–133, 2004.
[28] J. Puchinger, G. R. Raidl, and U. Pferschy, "The multidimensional knapsack problem: structure and algorithms," INFORMS Journal on Computing, vol. 22, no. 2, pp. 250–265, 2010.
[29] J. E. Beasley, "Obtaining test problems via Internet," Journal of Global Optimization, vol. 8, no. 4, pp. 429–433, 1996.
[30] Y. Yoon, Y.-H. Kim, A. Moraglio, and B.-R. Moon, "A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation," Information Sciences, vol. 183, no. 1, pp. 48–65, 2012.


the problem into a real space instead of directly managing the original discrete space. The 0-1MKP is an optimization problem with multiple constraints. We transform the problem using multiple Lagrange multipliers. However, we face many limitations since the domain is not continuous but discrete. Lagrangian heuristics have mainly been used to obtain good upper bounds of integer problems by duality. To get good upper bounds, a number of researchers have studied dual solving using Lagrangian duality, surrogate duality, and composite duality, and there was a recent study using a cutting-plane method for the 0-1MKP. However, in this paper, we focus on finding good lower bounds, that is, good feasible solutions.

There have been a number of studies about Lagrangian methods for discrete problems. There were also a few methods that used Lagrange multipliers for the 0-1MKP. Typically, most of them found just good upper bounds to be used in a branch-and-bound algorithm by dualizing constraints and hence could not find a feasible solution directly. To the best of the authors' knowledge, there have been only two Lagrangian methods to find lower bounds (feasible solutions). One is the constructive heuristic (CONS) proposed by Magazine and Oguz [10], and the other is its improvement using randomization (R-CONS) [13, 19]. There was also a method called LM-GA that improved the performance by combining CONS with genetic algorithms [23]. LM-GA used real-valued weight-codings to make a variant of the original problem and then applied CONS (see Section 2.2 for details). LM-GA provided a new viewpoint to solve the 0-1MKP, but it just used CONS for fitness evaluation and did not contribute to the aspect of Lagrangian theory.

In this paper, we present a local improvement heuristic for optimizing Lagrange multipliers. However, it is not easy to obtain good solutions by just using the heuristic itself. Through empirical investigation of the Lagrangian space, we devise a novel memetic Lagrangian heuristic combined with genetic algorithms. The remainder of this paper is organized as follows. In Section 2, we present our memetic Lagrangian method together with some literature survey. In Section 3, we give our experimental results on well-known benchmark data, and we make conclusions in Section 4.

2. Lagrangian Optimization

2.1. Preliminaries. The 0-1MKP is a maximization problem with constraints. It is possible to transform the original optimization problem into the following problem using Lagrange multipliers:

maximize v^T x − ⟨λ, Wx − b⟩
subject to x ∈ {0, 1}^n.   (2)

It is easy to find the maximum of the transformed problem using the following formula:

v^T x − ⟨λ, Wx − b⟩ = Σ_{i=1}^{n} x_i (v_i − Σ_{j=1}^{m} λ_j w_{ji}) + ⟨λ, b⟩.   (3)

To maximize the above formula for fixed λ, we have to set x_i to 1 only if v_i > Σ_{j=1}^{m} λ_j w_{ji} for each i. Since each v_i does not have an effect on the others, getting the maximum is fairly easy. Since this algorithm computes just Σ_{j=1}^{m} λ_j w_{ji} for each i, its time complexity becomes O(mn).

If we only find such a λ for the problem, we get the optimal solution of the 0-1MKP in polynomial time. We may have the problem that such a λ never exists, or that it is difficult to find although it exists. However, this method is not entirely useless. For arbitrary λ, let the vector x which achieves the maximum in the above formula be x*. Since λ is chosen arbitrarily, we do not guarantee that x* satisfies the constraints of the original problem. Nevertheless, letting the capacity be b* = Wx* instead of b makes x* the optimal solution by the following proposition [13, 19]. We call this procedure the Lagrangian method for the 0-1MKP (LMMKP).

Proposition 1. The vector x* obtained by applying LMMKP with given λ is the maximizer of the following problem:

maximize v^T x
subject to Wx ≤ b*, x ∈ {0, 1}^n.   (4)

In particular, in the case that λ_k is 0, the kth constraint is ignored. That is, x* is the maximizer of the problems which have capacities c such that c_k ≥ b*_k and c_i = b*_i for all i ≠ k. In general, the following proposition [19] holds.

Proposition 2. If LMMKP is applied with λ such that λ_{k_1} = 0, λ_{k_2} = 0, ..., and λ_{k_r} = 0, replacing the capacity b* by c such that

c_i = b*_i   if i ≠ k_j for all j,
c_i ≥ b*_i   otherwise,   (5)

in Proposition 1 makes the proposition still hold.

Instead of finding the optimal solution of the original 0-1MKP directly, we consider the problem of finding a λ corresponding to the given constraints. That is, we transform the problem of dealing with an n-dimensional binary vector x into one of dealing with an m-dimensional real vector λ. If there are Lagrange multipliers corresponding to the given constraints and we find them, we easily get the optimal solution of the 0-1MKP. Otherwise, we try to get a solution close to the optimum by devoting ourselves to finding Lagrange multipliers which satisfy the given constraints and are nearest to them.

2.2. Prior Work. In this subsection, we briefly examine existing Lagrangian heuristics for discrete optimization problems. Coping with nondifferentiability of the Lagrangian led to the key technical development: the subgradient algorithm. The subgradient algorithm is a fundamentally simple procedure. Typically, it has been used as a technique for generating good upper bounds for branch-and-bound methods, where it is known as Lagrangian relaxation. The reader is referred to [24] for a deep survey of Lagrangian relaxation. At each iteration of the subgradient algorithm, one takes a step from the present Lagrange multiplier in the direction opposite to a subgradient, which is the direction of (b* − b), where b* is the capacity obtained by LMMKP and b is the original capacity.

The only previous attempt to find lower bounds using a Lagrangian method is CONS [10]; however, CONS without hybridization with other metaheuristics could not show satisfactory results. LM-GA [23] obtained better results by hybridizing a weight-coded genetic algorithm with CONS. In LM-GA, a candidate solution is represented by a vector (w_1, w_2, ..., w_n) of weights, where weight w_i is associated with object i. Each profit p_i is modified by applying several biasing techniques with these weights; that is, we can obtain a modified problem instance P′ which has the same constraints as those of the original problem instance but a different objective function. Solutions for this modified problem instance are then obtained by applying a decoding heuristic; in particular, LM-GA used CONS as the decoding heuristic. The feasible solutions for the modified problem instance are also feasible for the original problem instance since they satisfy the same constraints, so weight-coding does not need an explicit repairing algorithm.

The proposed heuristic is different from LM-GA in that it improves CONS itself by using properties of Lagrange multipliers, whereas LM-GA just uses CONS as an evaluation function. Lagrange multipliers in the proposed heuristic can move in more diverse directions than in CONS because of its random factor.

2.3. Randomized Constructive Heuristic. Yoon et al. [13, 19] proposed a randomized constructive heuristic (R-CONS) as an improved variant of CONS. First, λ is set to 0. Consequently, x_i becomes 1 for each v_i > 0. It means that all positive-valued objects are put in the knapsack, and so almost all constraints are violated. If λ is increased, some objects are taken out. We increase λ just enough for only one object to be taken out, changing only one Lagrange multiplier at a time: we randomly choose one index k and change λ_k.

Reconsider (3). Making (v_i − Σ_{j=1}^{m} λ_j w_{ji}) negative by increasing λ_k lets x_i = 0 by LMMKP. For each object i such that x_i = 1, let α_i be the increment of λ_k needed to make x_i be 0. Then (v_i − Σ_{j=1}^{m} λ_j w_{ji} − α_i w_{ki}) has to be negative. That is, if we increase λ_k by α_i such that α_i > (v_i − Σ_{j=1}^{m} λ_j w_{ji}) / w_{ki}, the ith object is taken out. So if we just change λ_k to λ_k + min_i α_i, leave λ_i as it is for i ≠ k, and apply LMMKP again, exactly one object is taken out. We take out objects one by one in this way and stop this procedure once every constraint is satisfied.

Algorithm 1 shows the pseudocode of R-CONS. The number of operations to take out an object is at most n, and computing α_i for each object i takes O(m) time. Hence the total time complexity becomes O(n^2 m).

2.4. Local Improvement Heuristic. Our goal is to improve a real vector λ obtained by R-CONS whose corresponding capacity is quite close to the capacity of the given problem instance. To devise a local improvement heuristic, we exploited the following proposition [13, 19].

Proposition 3. Suppose that λ and λ′ correspond to (x, b) and (x′, b′) by the LMMKP, respectively. Let λ = (λ_1, λ_2, ..., λ_m) and λ′ = (λ′_1, λ′_2, ..., λ′_m), where λ_i = λ′_i for i ≠ k and λ_k ≠ λ′_k. Then, if λ_k < λ′_k, then b_k ≥ b′_k, and if λ_k > λ′_k, then b_k ≤ b′_k.

Let b be the capacity of the given problem instance, and let b* be the capacity obtained by LMMKP with λ. By the above proposition, if b*_k > b_k, choosing λ′ = (λ_1, ..., λ′_k, ..., λ_m) such that λ′_k > λ_k and applying LMMKP with λ′ makes the value of b*_k smaller. It makes the kth constraint satisfied, or the excess over the kth capacity decreased. Of course, another constraint may become violated by this operation. Also, which λ_k to change is at issue when several constraints are not satisfied. Hence, it is necessary to set efficient rules about which λ_k to change and by how much. With good rules, we can quickly find better Lagrange multipliers than randomly generated ones.

We selected the method that chooses a random index k (≤ m) and iteratively increases or decreases the value of λ_k according to the above proposition. In each iteration, a capacity vector b* is obtained by applying LMMKP with λ. If b* ≤ b, all constraints are satisfied, and hence the best solution is updated. Since the possibility of finding a better capacity exists, the algorithm does not stop here; instead, it chooses a random index k and decreases the value of λ_k. If b* ≰ b, we focus on satisfying the constraints first. For this, we choose a random index k among those whose constraints are not satisfied and increase λ_k, hoping that the kth value of the corresponding capacity decreases and then the kth constraint becomes satisfied. We set the amount of λ_k's change to the fixed value δ.

Most Lagrangian heuristics for discrete problems have focused on obtaining good upper bounds, but this algorithm is distinguished in that it primarily pursues finding feasible solutions. Algorithm 2 shows the pseudocode of this local improvement heuristic. It takes O(nmN) time, where N is the number of iterations.

The direction taken by our local improvement heuristic looks similar to that of the subgradient algorithm described in Section 2.2, but the main difference is that, in each iteration, the subgradient algorithm changes all coordinate values along the subgradient direction, while our local improvement heuristic changes only one coordinate value. Consequently, this could make our local improvement heuristic find lower bounds more easily than the subgradient algorithm, which usually produces upper bounds.

2.5. Investigation of the Lagrangian Space. The structure of the problem space is an important factor indicating problem difficulty, and analysis of the structure helps efficient search in the problem space [25-27]. Recently, Puchinger et al. [28] gave some insight into the solution structure of the 0-1MKP. In this subsection, we conduct some experiments and get some insight into the global structure of the 0-1MKP space.

In Section 2.1, we showed that there is a correspondence between a binary solution vector and a Lagrange multiplier


R-CONS(0-1MKP instance)
    λ ← 0
    I ← {1, 2, ..., n}
    do
        k ← random integer in [1, m]
        for i ∈ I
            α_i ← (v_i − Σ_{j=1}^{m} λ_j w_{ji}) / w_{ki}
        λ_k ← λ_k + min_{i∈I} α_i
        I ← I \ arg min_{i∈I} α_i
        (x*, b*) ← LMMKP(λ)
    until x* satisfies all the constraints
    return λ

Algorithm 1: Randomized constructive heuristic [13, 19].
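A runnable sketch of Algorithm 1 follows (Python; the tiny epsilon bump enforcing the strict inequality and the guard for zero weights are small practical additions not spelled out in the pseudocode, and all names are illustrative):

```python
import random

def lmmkp(v, W, lam):
    """x_i = 1 iff v_i > sum_j lam_j * w_ji; returns (x*, b* = W x*)."""
    m, n = len(W), len(v)
    x = [1 if v[i] > sum(lam[j] * W[j][i] for j in range(m)) else 0
         for i in range(n)]
    return x, [sum(W[j][i] * x[i] for i in range(n)) for j in range(m)]

def r_cons(v, W, b):
    """R-CONS: repeatedly raise one randomly chosen multiplier just
    enough to evict a single object, until x* is feasible."""
    m, n = len(W), len(v)
    lam = [0.0] * m
    I = set(range(n))  # objects not yet explicitly evicted
    while True:
        x, b_star = lmmkp(v, W, lam)
        if all(bs <= bj for bs, bj in zip(b_star, b)):
            return lam, x
        k = random.randrange(m)
        # increment of lam_k that forces object i out (defined for w_ki > 0)
        alpha = {i: (v[i] - sum(lam[j] * W[j][i] for j in range(m))) / W[k][i]
                 for i in I if W[k][i] > 0}
        if not alpha:
            continue  # constraint k touches no remaining object; retry
        i_min = min(alpha, key=alpha.get)
        lam[k] += max(alpha[i_min], 0.0) + 1e-12  # bump: strict '>' in LMMKP
        I.discard(i_min)
```

On a toy instance with one constraint the eviction order follows the profit/weight thresholds, removing the least profitable object per unit weight first.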

LocalImprovementHeuristic(λ, 0-1MKP instance)
    for i ← 1 to N
        (x*, b*) ← LMMKP(λ)
        I ← {i : b*_i ≤ b_i} and J ← {i : b*_i > b_i}
        if I = {1, 2, ..., m}
            update the best solution
            choose a random element k among I
            λ_k ← λ_k − δ
        else
            choose a random element k among J
            λ_k ← λ_k + δ
    return λ corresponding to the best solution

Algorithm 2: Local improvement heuristic.
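Algorithm 2 translates almost directly into code (a sketch; `lmmkp` and the other names are illustrative):

```python
import random

def lmmkp(v, W, lam):
    """x_i = 1 iff v_i > sum_j lam_j * w_ji; returns (x*, b* = W x*)."""
    m, n = len(W), len(v)
    x = [1 if v[i] > sum(lam[j] * W[j][i] for j in range(m)) else 0
         for i in range(n)]
    return x, [sum(W[j][i] * x[i] for i in range(n)) for j in range(m)]

def local_improvement(lam, v, W, b, N=30000, delta=0.0002):
    """Sketch of Algorithm 2: nudge one multiplier by the fixed step
    delta per iteration, remembering the best feasible solution."""
    lam = list(lam)
    best_profit, best_x, best_lam = None, None, list(lam)
    m = len(W)
    for _ in range(N):
        x, b_star = lmmkp(v, W, lam)
        J = [j for j in range(m) if b_star[j] > b[j]]  # violated constraints
        if not J:  # feasible: record it, then try loosening a multiplier
            profit = sum(vi * xi for vi, xi in zip(v, x))
            if best_profit is None or profit > best_profit:
                best_profit, best_x, best_lam = profit, x, list(lam)
            lam[random.randrange(m)] -= delta
        else:      # infeasible: tighten a randomly chosen violated constraint
            lam[random.choice(J)] += delta
    return best_lam, best_x, best_profit
```

The paper's settings correspond to N = 30,000 and δ = 0.0002.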

vector. Strictly speaking, there cannot be a one-to-one correspondence in the technical sense of a bijection. A binary solution vector has only binary components, so there are only countably many such vectors, but the Lagrange multipliers are real numbers, so there are uncountably many Lagrange multiplier vectors. Several multiplier vectors may correspond to the same binary solution vector. Moreover, some multiplier vectors may have multiple binary solution vectors.

Instead of directly finding an optimal binary solution, we deal with Lagrange multipliers. In this subsection, we empirically investigate the relationship between the binary solution space and the Lagrangian space (i.e., the x_λ's and the λ's).

We made experiments on nine instances (m.n), changing the number of constraints (m) from 5 to 30 and the number of objects (n) from 100 to 500. We chose a thousand randomly generated Lagrange multipliers and plotted, for each pair of Lagrange multiplier vectors, the relation between the Hamming distance in binary solution space and the Euclidean distance in Lagrangian space. Figure 1 shows the plotting results. The smaller the number of constraints (m) is, the larger the Pearson correlation coefficient (ρ) is. We also made the same experiments on locally optimal Lagrange multipliers. Figure 2 shows the plotting results.

Locally optimal Lagrange multipliers show much stronger correlation than randomly generated ones: a strong positive correlation (much greater than 0.5). It means that the binary solution space and the locally optimal Lagrangian space are roughly isometric. The results show that both spaces have similar neighborhood structures. So this hints that it is easy to find high-quality Lagrange multipliers satisfying all the capacity constraints by using memetic algorithms on the locally optimal Lagrangian space. That is, memetic algorithms can be a good choice for searching the Lagrangian space directly.
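The pairwise-distance comparison behind Figures 1 and 2 can be reproduced with a short script (a sketch; names are illustrative, and the binary solutions are assumed to be precomputed from the multiplier vectors via LMMKP):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

def distance_pairs(lams, xs):
    """For every pair of multiplier vectors: Euclidean distance in
    Lagrangian space vs. Hamming distance between induced solutions."""
    d_lam, d_x = [], []
    for a in range(len(lams)):
        for b in range(a + 1, len(lams)):
            d_lam.append(math.dist(lams[a], lams[b]))
            d_x.append(sum(u != w for u, w in zip(xs[a], xs[b])))
    return d_lam, d_x
```

Plotting `d_x` against `d_lam` and computing `pearson(d_lam, d_x)` gives the scatter plots and ρ values reported in the figures.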

2.6. Proposed Memetic Algorithm. A genetic algorithm (GA) is a problem-solving technique motivated by Darwin's theory of natural selection in evolution. A GA starts with a set of initial solutions, which is called a population. Each solution in the population is called a chromosome, which is typically represented by a linear string. This population then evolves into different populations for a number of iterations (generations). At the end, the algorithm returns the best chromosome of the population as the solution to the problem. For each iteration, the evolution proceeds as follows. Two solutions of the population are chosen based on some probability distribution. These two solutions

[Figure 1: nine scatter plots, omitted. Panels: (a) 5.100, ρ = 0.44; (b) 5.250, ρ = 0.50; (c) 5.500, ρ = 0.45; (d) 10.100, ρ = 0.35; (e) 10.250, ρ = 0.37; (f) 10.500, ρ = 0.37; (g) 30.100, ρ = 0.19; (h) 30.250, ρ = 0.19; (i) 30.500, ρ = 0.19.]

Figure 1: Relationship between distances on solution space and those on Lagrangian space (among randomly generated Lagrangian vectors). *x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors; ρ: Pearson correlation coefficient.

are then combined through a crossover operator to produce an offspring. With low probability, this offspring is then modified by a mutation operator to introduce unexplored search space into the population, enhancing the diversity of the population. In this way, offspring are generated, and they replace part of, or the whole, population. The evolution process is repeated until a certain condition is satisfied, for example, after a fixed number of iterations. A GA that generates a considerable number of offspring per iteration is called a generational GA, as opposed to a steady-state GA, which generates only one offspring per iteration. If we apply a local improvement heuristic, typically after the mutation step, the GA is called a memetic algorithm (MA). Algorithm 3 shows a typical generational MA.

We propose an MA for optimizing Lagrange multipliers. It conducts search using an evaluation function with penalties for violated capacity constraints. Our MA provides an alternative search method to find a good solution by optimizing m Lagrange multipliers instead of directly dealing with binary vectors of length n (m ≪ n).

The general framework of an MA is used in our study. In the following, we describe each part of the MA.

Encoding. Each solution in the population is represented by a chromosome. Each chromosome consists of m genes corresponding to Lagrange multipliers. A real encoding is used for representing the chromosome λ.

Initialization. The MA first creates initial chromosomes using R-CONS, described in Section 2.3. We set the population size P to be 100.

Mating and Crossover. To select two parents, we use a random mating scheme. A crossover operator creates a new offspring by combining parts of the parents. We use the uniform crossover.

Mutation. After the crossover, a mutation operator is applied to the offspring. We use a gene-wise mutation: after generating a random number r from 1 to m, the value of each gene is divided by r.
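The uniform crossover and the gene-wise mutation just described can be sketched as below. The function names are hypothetical, and the mutation follows one literal reading of the description: a single r is drawn in [1, m] and every gene is divided by it.

```python
import random

def uniform_crossover(p1, p2, rng):
    """Each gene of the offspring is copied from either parent with equal probability."""
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

def gene_wise_mutation(chrom, rng):
    """One literal reading of the operator: draw r in [1, m], divide every gene by r."""
    r = rng.randint(1, len(chrom))
    return [g / r for g in chrom]

rng = random.Random(0)
mom = [0.2, 0.5, 0.1, 0.9, 0.4]    # two parent chromosomes (m = 5 multipliers)
dad = [0.6, 0.3, 0.8, 0.2, 0.7]
child = gene_wise_mutation(uniform_crossover(mom, dad, rng), rng)
```

Because genes are divided by r ≥ 1, this mutation can only shrink multipliers, biasing the search toward less constrained decodings.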

[Figure 2: nine scatter plots, omitted. Panels: (a) 5.100, ρ = 0.91; (b) 5.250, ρ = 0.96; (c) 5.500, ρ = 0.97; (d) 10.100, ρ = 0.80; (e) 10.250, ρ = 0.92; (f) 10.500, ρ = 0.96; (g) 30.100, ρ = 0.60; (h) 30.250, ρ = 0.80; (i) 30.500, ρ = 0.91.]

Figure 2: Relationship between distances on solution space and those on Lagrangian space (among locally optimal Lagrangian vectors). *x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors; ρ: Pearson correlation coefficient.

create an initial population of a fixed size P
do
    for i ← 1 to P/2
        choose parent1_i and parent2_i from population
        offspring_i ← crossover(parent1_i, parent2_i)
        mutation(offspring_i)
        apply a local improvement heuristic to offspring_i
        replace(population, offspring_i)
until (stopping condition)
return the best individual

Algorithm 3: Framework of our memetic algorithm.

Local Improvement. We use the local improvement heuristic described in Section 2.4. The number of iterations (N) is set to be 30,000. We set δ to 0.0002.

Replacement and Stopping Condition. After generating P/2 offspring, our MA chooses the best P individuals among the total 3P/2 ones as the population of the next generation. Our MA stops when the number of generations reaches 100.
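A minimal sketch of this replacement scheme, assuming individuals and a fitness function are given (names hypothetical): the P parents and the P/2 offspring are pooled, and the best P survive.

```python
def next_generation(parents, offspring, fitness, P):
    """Generational replacement: pool the P parents with the P/2 offspring
    (3P/2 individuals in total) and keep the best P."""
    pool = parents + offspring
    return sorted(pool, key=fitness, reverse=True)[:P]

# toy usage: individuals are plain numbers and fitness is the identity
survivors = next_generation([5, 3, 8, 1], [9, 2], fitness=lambda z: z, P=4)
```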

Evaluation Function. Our evaluation function is to find a Lagrange multiplier vector λ that has a high fitness, satisfying the capacity constraints as much as possible. In our MA, the following is used as the objective function to maximize, which is the function obtained by subtracting a penalty from the objective function of the 0-1MKP:

k^T x* − γ Σ_{i : b*_i > b_i} (b*_i − b_i),   (6)


Table 1: Results of local search heuristics on benchmark data.

Instance | CONS [10] Result¹ | R-CONS [13, 19] Best², Ave², CPU³ | R-CONS-L Best², Ave², CPU³
5.100–0.25 | 13.70 | 9.71, 22.32, 2.20 | 2.47, 3.20, 2.86
5.100–0.50 | 7.25 | 5.73, 14.00, 1.57 | 1.10, 1.35, 2.92
5.100–0.75 | 5.14 | 3.70, 7.98, 0.87 | 0.72, 0.83, 2.78
Average (5.100–*) | 8.70 | 6.38, 14.77, 1.55 | 1.43, 1.79, 2.85
5.250–0.25 | 6.77 | 7.62, 17.65, 13.23 | 0.92, 1.03, 7.03
5.250–0.50 | 5.27 | 4.94, 11.27, 9.24 | 0.49, 0.56, 7.22
5.250–0.75 | 3.55 | 3.02, 6.26, 5.06 | 0.26, 0.29, 6.79
Average (5.250–*) | 5.20 | 5.19, 11.73, 9.18 | 0.56, 0.63, 7.01
5.500–0.25 | 4.93 | 7.80, 15.27, 53.00 | 0.48, 0.50, 14.14
5.500–0.50 | 2.65 | 4.18, 9.45, 36.56 | 0.20, 0.22, 14.64
5.500–0.75 | 2.22 | 2.63, 5.43, 19.75 | 0.14, 0.15, 13.67
Average (5.500–*) | 3.27 | 4.87, 10.05, 36.44 | 0.27, 0.29, 14.15
Average (5.*–*) | 5.72 | 5.48, 12.18, 15.72 | 0.75, 0.91, 8.00
10.100–0.25 | 15.88 | 11.49, 22.93, 3.66 | 2.75, 6.30, 5.36
10.100–0.50 | 10.01 | 7.75, 14.76, 2.63 | 1.47, 3.68, 5.43
10.100–0.75 | 6.57 | 3.61, 8.05, 1.42 | 0.91, 2.02, 5.32
Average (10.100–*) | 10.82 | 7.62, 15.25, 2.57 | 1.71, 4.00, 5.37
10.250–0.25 | 11.26 | 10.32, 17.26, 22.19 | 1.22, 3.87, 13.21
10.250–0.50 | 6.49 | 6.46, 11.51, 15.65 | 0.58, 2.07, 13.44
10.250–0.75 | 4.16 | 3.43, 6.05, 8.40 | 0.36, 1.23, 13.06
Average (10.250–*) | 7.30 | 6.74, 11.61, 15.41 | 0.72, 2.39, 13.23
10.500–0.25 | 8.27 | 9.59, 14.91, 88.30 | 0.64, 2.89, 26.41
10.500–0.50 | 5.26 | 5.56, 9.28, 67.33 | 0.27, 1.39, 26.86
10.500–0.75 | 3.49 | 3.28, 5.24, 32.50 | 0.22, 0.90, 26.01
Average (10.500–*) | 5.67 | 6.14, 9.81, 62.71 | 0.37, 1.73, 26.42
Average (10.*–*) | 7.93 | 6.83, 12.22, 26.90 | 0.93, 2.71, 15.01
30.100–0.25 | 16.63 | 11.75, 22.02, 9.55 | 5.06, 9.17, 13.65
30.100–0.50 | 10.31 | 7.51, 14.57, 6.80 | 2.80, 5.52, 13.74
30.100–0.75 | 6.60 | 4.20, 7.97, 3.79 | 1.64, 3.22, 13.63
Average (30.100–*) | 11.18 | 7.82, 14.85, 6.71 | 3.17, 5.97, 13.67
30.250–0.25 | 13.32 | 11.36, 17.01, 58.46 | 4.42, 6.84, 33.64
30.250–0.50 | 8.19 | 6.71, 10.87, 40.51 | 2.36, 4.11, 33.88
30.250–0.75 | 4.45 | 3.56, 5.86, 21.73 | 1.44, 2.39, 33.48
Average (30.250–*) | 8.65 | 7.21, 11.25, 40.23 | 2.74, 4.45, 33.66
30.500–0.25 | 10.34 | 9.39, 14.02, 228.56 | 4.00, 5.94, 67.26
30.500–0.50 | 6.80 | 6.17, 9.20, 157.49 | 2.23, 3.69, 67.55
30.500–0.75 | 3.82 | 3.34, 4.90, 83.43 | 1.27, 2.14, 66.59
Average (30.500–*) | 6.99 | 6.30, 9.37, 156.49 | 2.50, 3.92, 67.13
Average (30.*–*) | 8.94 | 7.11, 11.82, 67.81 | 2.80, 4.78, 38.15
Total average | 7.53 | 6.47, 12.08, 36.81 | 1.50, 2.80, 20.39

¹Since CONS is a deterministic algorithm, each run always outputs the same result.
²Results from 1,000 runs.
³Total CPU seconds on Pentium III 997 MHz.

where γ is a constant which indicates the degree of penalty; we used the fixed value 0.7.
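Equation (6) with γ = 0.7 can be sketched directly; the function name and toy numbers below are illustrative:

```python
def evaluate(v, x_star, b_star, b, gamma=0.7):
    """Eq. (6): profit of x* minus gamma times the total excess over the capacities."""
    profit = sum(vi * xi for vi, xi in zip(v, x_star))
    excess = sum(bs - bi for bs, bi in zip(b_star, b) if bs > bi)
    return profit - gamma * excess

# toy usage: profit is 30 and the second capacity is exceeded by 9 - 8 = 1
score = evaluate(v=[10, 20], x_star=[1, 1], b_star=[5, 9], b=[6, 8])
```

Only violated constraints contribute to the penalty, so a feasible x* is scored by its plain profit.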

3. Experiments

We made experiments on well-known benchmark data publicly available from the OR-Library [29], which are the same as those used in [6]. They are composed of 270 instances with 5, 10, and 30 constraints. They have different numbers of objects and different tightness ratios. The tightness ratio means α such that b_j = α Σ_{i=1}^n w_{ji} for each j = 1, 2, ..., m. The class of instances is briefly described below:

m.n–α: m constraints, n objects, and tightness ratio α. Each class has 10 instances.
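A toy generator following the stated tightness relation b_j = α Σ_{i=1}^n w_{ji} might look as follows; the value ranges are assumptions for illustration, not the OR-Library generator itself.

```python
import random

def toy_instance(m, n, alpha, rng):
    """Toy 0-1MKP instance whose capacities follow b_j = alpha * sum_i w_ji.
    (Value ranges are assumptions, not the OR-Library generator.)"""
    v = [rng.randint(1, 1000) for _ in range(n)]
    w = [[rng.randint(1, 1000) for _ in range(n)] for _ in range(m)]
    b = [int(alpha * sum(row)) for row in w]
    return v, w, b

v, w, b = toy_instance(m=5, n=100, alpha=0.25, rng=random.Random(1))
```

Smaller α gives tighter capacities, so the α = 0.25 classes are the most constrained ones in the tables.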

The proposed algorithms were implemented with the gcc compiler on a Pentium III PC (997 MHz) using the Linux operating system. As the measure of performance, we


Table 2: Results of memetic Lagrangian heuristic on benchmark data.

Instance | Multistart R-CONS-L¹ Result, CPU² | Memetic Lagrangian heuristic Result, CPU² | Number improved/number equalled
5.100–0.25 | 2.40, 14.35 | 2.32, 14.66 | 0/0
5.100–0.50 | 1.08, 14.71 | 1.08, 15.00 | 0/0
5.100–0.75 | 0.72, 13.94 | 0.72, 14.31 | 0/0
Average (5.100–*) | 1.40, 14.33 | 1.37, 14.66 | Total 0/0
5.250–0.25 | 0.92, 35.13 | 0.92, 36.19 | 0/0
5.250–0.50 | 0.49, 36.06 | 0.49, 37.48 | 0/0
5.250–0.75 | 0.26, 33.95 | 0.25, 35.29 | 0/0
Average (5.250–*) | 0.56, 35.05 | 0.56, 36.32 | Total 0/0
5.500–0.25 | 0.48, 70.72 | 0.48, 71.24 | 0/0
5.500–0.50 | 0.20, 73.18 | 0.20, 75.46 | 0/0
5.500–0.75 | 0.14, 68.46 | 0.14, 69.85 | 0/0
Average (5.500–*) | 0.27, 70.79 | 0.27, 72.18 | Total 0/0
Average (5.*–*) | 0.74, 40.06 | 0.73, 41.05 | Total 0/0
10.100–0.25 | 2.45, 26.79 | 2.27, 26.30 | 0/0
10.100–0.50 | 1.46, 27.17 | 1.14, 26.50 | 0/1
10.100–0.75 | 0.89, 26.57 | 0.69, 25.98 | 0/0
Average (10.100–*) | 1.60, 26.84 | 1.37, 26.26 | Total 0/1
10.250–0.25 | 1.14, 65.99 | 0.88, 64.41 | 0/0
10.250–0.50 | 0.57, 67.18 | 0.45, 65.58 | 0/0
10.250–0.75 | 0.35, 65.35 | 0.24, 64.08 | 0/1
Average (10.250–*) | 0.69, 66.17 | 0.52, 64.69 | Total 0/1
10.500–0.25 | 0.58, 131.96 | 0.50, 128.11 | 0/0
10.500–0.50 | 0.27, 134.25 | 0.24, 130.57 | 0/0
10.500–0.75 | 0.21, 130.06 | 0.14, 127.50 | 0/0
Average (10.500–*) | 0.35, 132.09 | 0.29, 128.73 | Total 0/0
Average (10.*–*) | 0.88, 75.04 | 0.73, 73.23 | Total 0/2
30.100–0.25 | 4.92, 68.24 | 3.19, 67.96 | 0/3
30.100–0.50 | 2.67, 68.70 | 1.43, 70.52 | 0/3
30.100–0.75 | 1.56, 68.17 | 0.89, 67.80 | 0/2
Average (30.100–*) | 3.05, 68.37 | 1.84, 68.76 | Total 0/8
30.250–0.25 | 4.24, 168.19 | 1.28, 167.99 | 2/2
30.250–0.50 | 2.23, 169.41 | 0.59, 169.58 | 0/0
30.250–0.75 | 1.40, 167.42 | 0.34, 168.37 | 0/0
Average (30.250–*) | 2.62, 168.34 | 0.74, 168.65 | Total 2/2
30.500–0.25 | 3.81, 336.30 | 0.68, 334.14 | 2/0
30.500–0.50 | 2.13, 338.02 | 0.29, 337.37 | 1/0
30.500–0.75 | 1.24, 333.18 | 0.19, 334.40 | 0/0
Average (30.500–*) | 2.39, 335.83 | 0.39, 335.30 | Total 3/0
Average (30.*–*) | 2.69, 190.85 | 0.99, 190.90 | Total 5/10
Total average | 1.44, 101.98 | 0.82, 101.73 | Total 5/12

¹Multistart R-CONS-L returns the best result from 5,000 independent runs of R-CONS-L.
²Average CPU seconds on Pentium III 997 MHz.

used the percentage difference-ratio 100 × |LP_optimum − output| / output, which was used in [6], where LP_optimum is the optimal solution of the linear programming relaxation over R. It has a value in the range of [0, 100]. The smaller the value is, the smaller the difference from the optimum is.
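The measure can be computed directly; the function name and numbers are illustrative:

```python
def pct_diff_ratio(lp_optimum, output):
    """100 * |LP_optimum - output| / output, as used for Tables 1 and 2."""
    return 100.0 * abs(lp_optimum - output) / output

score = pct_diff_ratio(lp_optimum=105.0, output=100.0)   # output is 5% below the LP bound
```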

First, we compared constructive heuristics and the local improvement heuristic. Table 1 shows the results of CONS, R-CONS, and R-CONS-L, where R-CONS-L starts with a solution produced by R-CONS and locally improves it by the local improvement heuristic described in Section 2.4. We can see that R-CONS-L largely improves the results of R-CONS.

Next, to verify the effectiveness of the proposed MA, we compared the results with a multistart method using R-CONS-L. Multistart R-CONS-L returns the best result from


5,000 independent runs of R-CONS-L. Table 2 shows the results of multistart R-CONS-L and our MA. The proposed MA outperformed multistart R-CONS-L.

The last column in Table 2 corresponds to the numbers of improved and equalled results compared with the best results of [6]. Surprisingly, the proposed memetic algorithm could find better results than one of the state-of-the-art genetic algorithms [6] for some instances with 30 constraints (5 best over 90 instances).

4. Concluding Remarks

In this paper, we tried to find good feasible solutions by using a new memetic algorithm based on Lagrangian relaxation. To the best of the authors' knowledge, it is the first trial to use a memetic algorithm when optimizing Lagrange multipliers.

The Lagrangian method guarantees optimality with the capacity constraints, which may be different from those of the original problem. But it is not easy to find Lagrange multipliers that accord with all the capacity constraints. Through empirical investigation of Lagrangian space, we found that the space of locally optimal Lagrangian vectors and that of their corresponding binary solutions are roughly isometric. Based on this fact, we applied a memetic algorithm, which searches only locally optimal Lagrangian vectors, to the transformed problem encoded by real values, that is, Lagrange multipliers. Then we could find high-quality Lagrange multipliers by the proposed memetic algorithm. We obtained a significant performance improvement over R-CONS and the multistart heuristic.

In recent years, there have been researches showing better performance than Chu and Beasley [6] on the benchmark data of 0-1MKP [4, 5, 8, 9, 11-13]. Although the presented memetic algorithm did not dominate such state-of-the-art methods, this study could show the potentiality of memetic search on Lagrangian space. Because we used simple genetic operators, we believe that there is room for further improvement of our memetic search. The improvement using enhanced genetic operators tailored to real encoding, for example, [30], will be a good future work.

Acknowledgments

The present research has been conducted by the research grant of Kwangwoon University in 2013. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012-0001855).

References

[1] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, San Francisco, Calif, USA, 1979.

[2] R. Mansini and M. G. Speranza, "CORAL: an exact algorithm for the multidimensional knapsack problem," INFORMS Journal on Computing, vol. 24, no. 3, pp. 399-415, 2012.

[3] S. Martello and P. Toth, "An exact algorithm for the two-constraint 0-1 knapsack problem," Operations Research, vol. 51, no. 5, pp. 826-835, 2003.

[4] S. Boussier, M. Vasquez, Y. Vimont, S. Hanafi, and P. Michelon, "A multi-level search strategy for the 0-1 multidimensional knapsack problem," Discrete Applied Mathematics, vol. 158, no. 2, pp. 97-109, 2010.

[5] V. Boyer, M. Elkihel, and D. El Baz, "Heuristics for the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 199, no. 3, pp. 658-664, 2009.

[6] P. C. Chu and J. E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, no. 1, pp. 63-86, 1998.

[7] F. Della Croce and A. Grosso, "Improved core problem based heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 39, no. 1, pp. 27-31, 2012.

[8] K. Fleszar and K. S. Hindi, "Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 36, no. 5, pp. 1602-1607, 2009.

[9] S. Hanafi and C. Wilbaut, "Improved convergent heuristics for the 0-1 multidimensional knapsack problem," Annals of Operations Research, vol. 183, no. 1, pp. 125-142, 2011.

[10] M. J. Magazine and O. Oguz, "A heuristic algorithm for the multidimensional zero-one knapsack problem," European Journal of Operational Research, vol. 16, no. 3, pp. 319-326, 1984.

[11] M. Vasquez and Y. Vimont, "Improved results on the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 165, no. 1, pp. 70-81, 2005.

[12] Y. Vimont, S. Boussier, and M. Vasquez, "Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem," Journal of Combinatorial Optimization, vol. 15, no. 2, pp. 165-178, 2008.

[13] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "A theoretical and empirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 218, no. 2, pp. 366-376, 2012.

[14] C. Cotta and J. M. Troya, "A hybrid genetic algorithm for the 0-1 multiple knapsack problem," in Artificial Neural Networks and Genetic Algorithms, G. D. Smith, N. C. Steele, and R. F. Albrecht, Eds., vol. 3, pp. 251-255, Springer, New York, NY, USA, 1997.

[15] J. Gottlieb, Evolutionary algorithms for constrained optimization problems [Ph.D. thesis], Department of Computer Science, Technical University of Clausthal, Clausthal, Germany, 1999.

[16] S. Khuri, T. Bäck, and J. Heitkötter, "The zero/one multiple knapsack problem and genetic algorithms," in Proceedings of the ACM Symposium on Applied Computing, pp. 188-193, ACM Press, 1994.

[17] G. R. Raidl, "Improved genetic algorithm for the multiconstrained 0-1 knapsack problem," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 207-211, May 1998.

[18] J. Thiel and S. Voss, "Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms," INFOR, vol. 32, no. 4, pp. 226-242, 1994.

[19] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "An evolutionary Lagrangian method for the 0-1 multiple knapsack problem," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 629-635, June 2005.

[20] A. Fréville, "The multidimensional 0-1 knapsack problem: an overview," European Journal of Operational Research, vol. 155, no. 1, pp. 1-21, 2004.

[21] A. Fréville and S. Hanafi, "The multidimensional 0-1 knapsack problem-bounds and computational aspects," Annals of Operations Research, vol. 139, no. 1, pp. 195-227, 2005.

[22] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.

[23] G. R. Raidl, "Weight-codings in a genetic algorithm for the multiconstraint knapsack problem," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 596-603, 1999.

[24] M. L. Fisher, "The Lagrangian relaxation method for solving integer programming problems," Management Science, vol. 27, no. 1, pp. 1-18, 1981.

[25] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Operations Research Letters, vol. 16, no. 2, pp. 101-113, 1994.

[26] T. Jones and S. Forrest, "Fitness distance correlation as a measure of problem difficulty for genetic algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 184-192, 1995.

[27] Y.-H. Kim and B.-R. Moon, "Investigation of the fitness landscapes in graph bipartitioning: an empirical study," Journal of Heuristics, vol. 10, no. 2, pp. 111-133, 2004.

[28] J. Puchinger, G. R. Raidl, and U. Pferschy, "The multidimensional knapsack problem: structure and algorithms," INFORMS Journal on Computing, vol. 22, no. 2, pp. 250-265, 2010.

[29] J. E. Beasley, "Obtaining test problems via Internet," Journal of Global Optimization, vol. 8, no. 4, pp. 429-433, 1996.

[30] Y. Yoon, Y.-H. Kim, A. Moraglio, and B.-R. Moon, "A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation," Information Sciences, vol. 183, no. 1, pp. 48-65, 2012.



(b* − b), where b* is the capacity obtained by LMMKP and b is the original capacity.

The only previous attempt to find lower bounds using the Lagrangian method is CONS [10]; however, CONS without hybridization with other metaheuristics could not show satisfactory results. LM-GA by [23] obtained better results by the hybridization of a weight-coded genetic algorithm and CONS. In LM-GA, a candidate solution is represented by a vector (w_1, w_2, ..., w_n) of weights. Weight w_i is associated with object i. Each profit p_i is modified by applying several biasing techniques with these weights; that is, we can obtain a modified problem instance P′ which has the same constraints as those of the original problem instance but has a different objective function. And then solutions for this modified problem instance are obtained by applying a decoding heuristic. In particular, LM-GA used CONS as a decoding heuristic. The feasible solutions for the modified problem instance are also feasible for the original problem instance since they satisfy the same constraints. So weight-coding does not need an explicit repairing algorithm.
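As a hedged illustration of weight-coding: each chromosome is a weight vector, the profits are biased by the weights, and the modified instance keeps the original constraints. The simple multiplicative bias below is an assumption for illustration, not necessarily Raidl's exact biasing technique.

```python
import random

def biased_profits(p, weights):
    """Illustrative multiplicative bias: scale each profit by its chromosome weight.
    (A simplification; not necessarily the exact biasing scheme of LM-GA.)"""
    return [pi * wi for pi, wi in zip(p, weights)]

rng = random.Random(2)
p = [60, 40, 75, 20]                            # original profits
weights = [rng.uniform(0.9, 1.1) for _ in p]    # a weight-coded chromosome
p_mod = biased_profits(p, weights)
# The modified instance keeps the original constraints, so any feasible decoding
# of p_mod (e.g., by CONS) is also feasible for the original instance.
```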

The proposed heuristic is different from LM-GA in that it improves CONS itself by using properties of Lagrange multipliers, whereas LM-GA just uses CONS as an evaluation function. Lagrange multipliers in the proposed heuristic can move in more diverse directions than in CONS because of its random factor.

2.3. Randomized Constructive Heuristic. Yoon et al. [13, 19] proposed a randomized constructive heuristic (R-CONS) as an improved variant of CONS. First, λ is set to be 0. Consequently, x_i becomes 1 for each v_i > 0. It means that all positive-valued objects are put in the knapsack, and so almost all constraints are violated. If λ is increased, some objects become taken out. We increase λ adequately for only one object to be taken out. We change only one Lagrange multiplier at a time. We randomly choose one number k and change λ_k.

Reconsider (3). Making (v_i − Σ_{j=1}^m λ_j w_{ji}) be negative by increasing λ_k lets x_i = 0 by LMMKP. For each object i such that x_i = 1, let α_i be the increment of λ_k to make x_i be 0. Then (v_i − Σ_{j=1}^m λ_j w_{ji} − α_i w_{ki}) has to be negative. That is, if we increase λ_k by α_i such that α_i > (v_i − Σ_{j=1}^m λ_j w_{ji}) / w_{ki}, the ith object is taken out. So if we just change λ_k to λ_k + min_i α_i, leave λ_i as it is for i ≠ k, and apply LMMKP again, exactly one object is taken out. We take out objects one by one in this way and stop this procedure if every constraint is satisfied.

Algorithm 1 shows the pseudocode of R-CONS. The number of operations to take out the object is at most n, and computing α_i for each object i takes O(m) time. Hence, the total time complexity becomes O(n²m).

2.4. Local Improvement Heuristic. Our goal is to improve a real vector λ obtained by R-CONS whose corresponding capacity is quite close to the capacity of the given problem instance. To devise a local improvement heuristic, we exploited the following proposition [13, 19].

Proposition 3. Suppose that λ and λ′ correspond to x, b and x′, b′ by the LMMKP, respectively. Let λ = (λ_1, λ_2, ..., λ_m) and λ′ = (λ′_1, λ′_2, ..., λ′_m), where λ_i = λ′_i for i ≠ k and λ_k ≠ λ′_k. Then, if λ_k < λ′_k, then b_k ≥ b′_k, and if λ_k > λ′_k, then b_k ≤ b′_k.
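Proposition 3 can be checked numerically: raising a single multiplier λ_k (all others fixed) should never increase the consumed capacity b*_k. The toy instance and helper name below are illustrative.

```python
import random

rng = random.Random(11)
m, n = 4, 40
v = [rng.randint(1, 100) for _ in range(n)]
w = [[rng.randint(1, 20) for _ in range(n)] for _ in range(m)]

def consumed_capacities(lam):
    """b* of the LMMKP solution: x_i = 1 iff v_i - sum_j lam_j * w_ji > 0."""
    x = [1 if v[i] - sum(lam[j] * w[j][i] for j in range(m)) > 0 else 0
         for i in range(n)]
    return [sum(w[j][i] * x[i] for i in range(n)) for j in range(m)]

k = 2
lam = [0.5] * m
caps = []
for _ in range(20):                 # raise lam_k only; the others stay fixed
    caps.append(consumed_capacities(lam)[k])
    lam[k] += 0.25

# Proposition 3 predicts b*_k never increases as lam_k grows
monotone = all(caps[t] >= caps[t + 1] for t in range(len(caps) - 1))
```

The monotonicity holds because increasing λ_k can only turn reduced profits negative, so objects can leave the knapsack but never reenter.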

Let b be the capacity of the given problem instance, and let b* be the capacity obtained by LMMKP with λ. By the above proposition, if b*_k > b_k, choosing λ′ = (λ_1, ..., λ′_k, ..., λ_m) such that λ′_k > λ_k and applying LMMKP with λ′ makes the value of b*_k smaller. It makes the kth constraint satisfied, or the exceeded amount for the kth capacity decreased. Of course, another constraint may become violated by this operation. Also, which λ_k to change is at issue in the case that several constraints are not satisfied. Hence, it is necessary to set efficient rules about which λ_k to change and how much to change it. If good rules are made, we can quickly find better Lagrange multipliers than randomly generated ones.

We selected the method that chooses a random number k (≤ m) and increases or decreases the value of λ_k by the above proposition iteratively. In each iteration, a capacity vector b* is obtained by applying LMMKP with λ. If b* ≤ b, all constraints are satisfied, and hence the best solution is updated. Since the possibility to find a better capacity exists, the algorithm does not stop here. Instead, it chooses a random number k and decreases the value of λ_k. If b* ≰ b, we focus on satisfying constraints preferentially. For this, we choose a random number k among the numbers such that their constraints are not satisfied and increase λ_k, hoping the kth value of the corresponding capacity to be decreased and then the kth constraint to be satisfied. We set the amount of λ_k's change to be the fixed value δ.

Most Lagrangian heuristics for discrete problems have focused on obtaining good upper bounds, but this algorithm is distinguished in that it primarily pursues finding feasible solutions. Algorithm 2 shows the pseudocode of this local improvement heuristic. It takes O(nmN) time, where N is the number of iterations.

The direction taken by our local improvement heuristic looks similar to that of the subgradient algorithm described in Section 2.2, but the main difference lies in that, in each iteration, the subgradient algorithm changes all coordinate values along the subgradient direction, whereas our local improvement heuristic changes only one coordinate value. Consequently, this could make our local improvement heuristic find lower bounds more easily than the subgradient algorithm, which usually produces upper bounds.

2.5. Investigation of the Lagrangian Space. The structure of the problem space is an important factor to indicate the problem difficulty, and the analysis of the structure helps efficient search in the problem space [25-27]. Recently, Puchinger et al. [28] gave some insight into the solution structure of 0-1MKP. In this subsection, we conduct some experiments and get some insight into the global structure of the 0-1MKP space.

In Section 2.1, we showed that there is a correspondence between binary solution vectors and Lagrange multiplier vectors.


R-CONS(0-1MKP instance)
    λ ← 0
    I ← {1, 2, ..., n}
    do
        k ← random integer in [1, m]
        for i ∈ I
            α_i ← (v_i − Σ_{j=1}^m λ_j w_{ji}) / w_{ki}
        λ_k ← λ_k + min_{i∈I} α_i
        I ← I \ {arg min_{i∈I} α_i}
        (x*, b*) = LMMKP(λ)
    until x* satisfies all the constraints
    return λ

Algorithm 1: Randomized constructive heuristic [13, 19].
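A Python sketch of Algorithm 1 under the description above; the helper names are ours, the toy instance is illustrative, and a tiny epsilon enforces the strict inequality on α_i.

```python
import random

def lmmkp(v, w, lam):
    """LMMKP: x_i = 1 iff v_i - sum_j lam_j * w_ji > 0; also return consumed capacities."""
    m, n = len(w), len(v)
    x = [1 if v[i] - sum(lam[j] * w[j][i] for j in range(m)) > 0 else 0
         for i in range(n)]
    b_star = [sum(w[j][i] * x[i] for i in range(n)) for j in range(m)]
    return x, b_star

def r_cons(v, w, b, rng):
    """Sketch of R-CONS: repeatedly raise one random multiplier lam_k just enough
    to take one more object out, until the decoded solution is feasible."""
    m, n = len(w), len(v)
    lam = [0.0] * m
    remaining = set(range(n))
    while True:
        x, b_star = lmmkp(v, w, lam)
        if all(bs <= bi for bs, bi in zip(b_star, b)):
            return lam, x
        k = rng.randrange(m)
        # alpha_i: increase of lam_k needed to make object i's reduced profit negative
        alpha = {i: (v[i] - sum(lam[j] * w[j][i] for j in range(m))) / w[k][i]
                 for i in remaining}
        i_out = min(alpha, key=alpha.get)
        lam[k] += max(alpha[i_out], 0.0) + 1e-9   # epsilon for strict negativity
        remaining.discard(i_out)

rng = random.Random(3)
m, n = 5, 30
v = [rng.randint(1, 100) for _ in range(n)]
w = [[rng.randint(1, 30) for _ in range(n)] for _ in range(m)]
b = [sum(row) // 2 for row in w]
lam, x = r_cons(v, w, b, rng)
```

Since multipliers only grow and weights are nonnegative, an object once taken out never reenters, so the loop ends after at most n removals.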

LocalImprovementHeuristic(λ, 0-1MKP instance)
    for i ← 1 to N
        (x*, b*) ← LMMKP(λ)
        I ← {i : b*_i ≤ b_i} and J ← {i : b*_i > b_i}
        if I = {1, 2, ..., m}
            update the best solution
            choose a random element k among I
            λ_k ← λ_k − δ
        else
            choose a random element k among J
            λ_k ← λ_k + δ
    return λ corresponding to the best solution

Algorithm 2: Local improvement heuristic.
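A Python sketch of Algorithm 2; the helper names are ours, and the toy instance and parameter values are illustrative stand-ins for the paper's N = 30,000 and δ = 0.0002.

```python
import random

def lmmkp(v, w, lam):
    """LMMKP: x_i = 1 iff v_i - sum_j lam_j * w_ji > 0; also return consumed capacities."""
    m, n = len(w), len(v)
    x = [1 if v[i] - sum(lam[j] * w[j][i] for j in range(m)) > 0 else 0
         for i in range(n)]
    b_star = [sum(w[j][i] * x[i] for i in range(n)) for j in range(m)]
    return x, b_star

def local_improve(lam, v, w, b, N, delta, rng):
    """Sketch of Algorithm 2: move one multiplier by a fixed step delta per iteration."""
    m = len(w)
    lam = list(lam)
    best_profit, best_lam = None, None
    for _ in range(N):
        x, b_star = lmmkp(v, w, lam)
        violated = [j for j in range(m) if b_star[j] > b[j]]
        if not violated:
            # feasible: record it, then decrease a random multiplier to explore
            profit = sum(vi * xi for vi, xi in zip(v, x))
            if best_profit is None or profit > best_profit:
                best_profit, best_lam = profit, list(lam)
            k = rng.randrange(m)
            lam[k] = max(lam[k] - delta, 0.0)
        else:
            # infeasible: increase a multiplier of a violated constraint
            k = rng.choice(violated)
            lam[k] += delta
    return best_profit, best_lam

rng = random.Random(5)
m, n = 3, 20
v = [rng.randint(1, 100) for _ in range(n)]
w = [[rng.randint(1, 30) for _ in range(n)] for _ in range(m)]
b = [sum(row) // 2 for row in w]
best_profit, best_lam = local_improve([5.0] * m, v, w, b, N=400, delta=0.05, rng=rng)
```

Unlike the subgradient method of Section 2.2, only one coordinate of λ moves per iteration, which is what steers the search toward feasible (lower-bound) solutions.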

vector Strictly speaking there cannot be a one-to-one cor-respondence in the technical sense of bijection The binarysolution vector has only binary components so there are onlycountably many such vectors But the Lagrange multipliersare real numbers so there are uncountably many Lagrangemultiplier vectors Several multiplier vectors may correspondto the same binary solution vectorMoreover somemultipliervectors may have multiple binary solution vectors

Instead of directly finding an optimal binary solution, we deal with Lagrange multipliers. In this subsection, we empirically investigate the relationship between the binary solution space and the Lagrangian space (i.e., between the x_λ's and the λ's).

We made experiments on nine instances (m · n), changing the number of constraints (m) from 5 to 30 and the number of objects (n) from 100 to 500. We chose a thousand randomly generated Lagrange multiplier vectors and plotted, for each pair of Lagrange multiplier vectors, the relation between the Hamming distance in binary solution space and the Euclidean distance in Lagrangian space. Figure 1 shows the plotting results. The smaller the number of constraints (m) is, the larger the Pearson correlation coefficient (ρ) is. We also made the same experiments on locally optimal Lagrange multipliers; Figure 2 shows the plotting results.

Locally optimal Lagrange multipliers show much stronger correlation than randomly generated ones: they show strong positive correlation (much greater than 0.5). This means that the binary solution space and the locally optimal Lagrangian space are roughly isometric, and that both spaces have similar neighborhood structures. This hints that high-quality Lagrange multipliers satisfying all the capacity constraints can be found by using memetic algorithms on the locally optimal Lagrangian space; that is, memetic algorithms can be a good choice for searching the Lagrangian space directly.
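The distance-correlation experiment just described (sample multiplier vectors, pair the Euclidean distance in Lagrangian space with the Hamming distance between the induced binary solutions, and measure Pearson's ρ) can be sketched as follows. All names are ours, and the inline relaxation rule stands in for LMMKP.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, returning 0.0 on degenerate input."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return 0.0 if sx == 0 or sy == 0 else cov / (sx * sy)

def distance_correlation_experiment(v, w, samples=100, rng=random):
    """For random multiplier vectors, correlate the Euclidean distance in
    Lagrangian space with the Hamming distance between the induced
    binary solutions, over all pairs of samples."""
    m, n = len(w), len(v)

    def solution(lam):
        # Lagrangian relaxation rule: take i iff its Lagrangian profit > 0.
        return tuple(1 if v[i] - sum(lam[j] * w[j][i] for j in range(m)) > 0
                     else 0 for i in range(n))

    lams = [[rng.random() for _ in range(m)] for _ in range(samples)]
    xs = [solution(lam) for lam in lams]
    d_lam, d_ham = [], []
    for a in range(samples):
        for b in range(a + 1, samples):
            d_lam.append(math.dist(lams[a], lams[b]))
            d_ham.append(sum(p != q for p, q in zip(xs[a], xs[b])))
    return pearson(d_lam, d_ham)
```

Running the same loop with locally improved multiplier vectors instead of raw random ones reproduces the Figure 2 variant of the experiment.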

2.6. Proposed Memetic Algorithm. A genetic algorithm (GA) is a problem-solving technique motivated by Darwin's theory of natural selection in evolution. A GA starts with a set of initial solutions, which is called a population. Each solution in the population is called a chromosome, which is typically represented by a linear string. This population then evolves into different populations for a number of iterations (generations). At the end, the algorithm returns the best chromosome of the population as the solution to the problem. In each iteration, the evolution proceeds as follows. Two solutions of the population are chosen based on some probability distribution. These two solutions

[Figure 1: nine scatter plots, one per instance class: (a) 5.100, ρ = 0.44; (b) 5.250, ρ = 0.50; (c) 5.500, ρ = 0.45; (d) 10.100, ρ = 0.35; (e) 10.250, ρ = 0.37; (f) 10.500, ρ = 0.37; (g) 30.100, ρ = 0.19; (h) 30.250, ρ = 0.19; (i) 30.500, ρ = 0.19.]

Figure 1: Relationship between distances on solution space and those on Lagrangian space (among randomly generated Lagrangian vectors). *x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors; ρ: Pearson correlation coefficient.

are then combined through a crossover operator to produce an offspring. With low probability, this offspring is then modified by a mutation operator, which introduces unexplored search space into the population and enhances its diversity. In this way, offspring are generated, and they replace part of, or the whole of, the population. The evolution process is repeated until a certain condition is satisfied, for example, after a fixed number of iterations. A GA that generates a considerable number of offspring per iteration is called a generational GA, as opposed to a steady-state GA, which generates only one offspring per iteration. If we apply a local improvement heuristic, typically after the mutation step, the GA is called a memetic algorithm (MA). Algorithm 3 shows a typical generational MA.

We propose an MA for optimizing Lagrange multipliers. It conducts its search using an evaluation function with penalties for violated capacity constraints. Our MA provides an alternative search method to find a good solution by optimizing m Lagrange multipliers instead of directly dealing with binary vectors of length n (m ≪ n).

The general framework of an MA is used in our study. In the following, we describe each part of the MA.

Encoding. Each solution in the population is represented by a chromosome. Each chromosome consists of m genes corresponding to the Lagrange multipliers. A real encoding is used for representing the chromosome λ.

Initialization. The MA first creates initial chromosomes using R-CONS, described in Section 2.3. We set the population size P to be 100.

Mating and Crossover. To select two parents, we use a random mating scheme. A crossover operator creates a new offspring by combining parts of the parents. We use the uniform crossover.

Mutation. After the crossover, a mutation operator is applied to the offspring. We use a gene-wise mutation: after generating a random integer r from 1 to m, the value of each gene is divided by r.
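The two operators above can be sketched in a few lines of Python. The helper names are ours; the equal-probability gene choice and the single integer divisor r drawn from [1, m] follow the descriptions in the text.

```python
import random

def uniform_crossover(p1, p2, rng=random):
    """Uniform crossover on real-coded chromosomes: each gene is
    copied from either parent with equal probability."""
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

def gene_wise_mutation(chrom, rng=random):
    """Gene-wise mutation as described: draw one random integer r in
    [1, m] (m = chromosome length) and divide every gene by it."""
    r = rng.randint(1, len(chrom))
    return [g / r for g in chrom]
```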

[Figure 2: nine scatter plots, one per instance class: (a) 5.100, ρ = 0.91; (b) 5.250, ρ = 0.96; (c) 5.500, ρ = 0.97; (d) 10.100, ρ = 0.80; (e) 10.250, ρ = 0.92; (f) 10.500, ρ = 0.96; (g) 30.100, ρ = 0.60; (h) 30.250, ρ = 0.80; (i) 30.500, ρ = 0.91.]

Figure 2: Relationship between distances on solution space and those on Lagrangian space (among locally optimal Lagrangian vectors). *x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors; ρ: Pearson correlation coefficient.

create an initial population of a fixed size P
do
    for i ← 1 to P/2
        choose parent1_i and parent2_i from population
        offspring_i ← crossover(parent1_i, parent2_i)
        mutation(offspring_i)
        apply a local improvement heuristic to offspring_i
        replace(population, offspring_i)
until (stopping condition)
return the best individual

Algorithm 3: Framework of our memetic algorithm.
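Algorithm 3 can be sketched as a generic generational loop with pluggable operators. All names here are ours; the replacement step keeps the best P individuals out of the parents plus the P/2 offspring, and random mating picks two distinct parents per offspring.

```python
import random

def memetic_search(init, crossover, mutate, improve, fitness,
                   pop_size=100, generations=100, rng=random):
    """Generational MA skeleton in the spirit of Algorithm 3: build
    P/2 offspring per generation via crossover + mutation + local
    improvement, then keep the best P of parents and offspring."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size // 2):
            p1, p2 = rng.sample(population, 2)   # random mating
            child = improve(mutate(crossover(p1, p2)))
            offspring.append(child)
        # Elitist replacement: best P of the combined 3P/2 individuals.
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]
    return max(population, key=fitness)
```

As a toy usage, plugging in averaging crossover and Gaussian mutation on one-gene chromosomes drives the search toward the maximizer of a simple fitness function.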

Local Improvement. We use the local improvement heuristic described in Section 2.4. The number of iterations (N) is set to 30,000, and we set δ to 0.0002.

Replacement and Stopping Condition. After generating P/2 offspring, our MA chooses the best P individuals among the total 3P/2 ones as the population of the next generation. Our MA stops when the number of generations reaches 100.

Evaluation Function. Our evaluation function is designed to find a Lagrange multiplier vector λ that has a high fitness while satisfying the capacity constraints as much as possible. In our MA, the following is used as the objective function to maximize, which is obtained by subtracting a penalty from the objective function of the 0-1MKP:

k^T x* − γ ∑_{b_i^* > b_i} (b_i^* − b_i), (6)


Table 1: Results of local search heuristics on benchmark data.

Instance             CONS [10]   R-CONS [13, 19]           R-CONS-L
                     Result^1    Best^2  Ave^2   CPU^3     Best^2  Ave^2  CPU^3
5.100–0.25           13.70       9.71    22.32   2.20      2.47    3.20   2.86
5.100–0.50           7.25        5.73    14.00   1.57      1.10    1.35   2.92
5.100–0.75           5.14        3.70    7.98    0.87      0.72    0.83   2.78
Average (5.100–*)    8.70        6.38    14.77   1.55      1.43    1.79   2.85
5.250–0.25           6.77        7.62    17.65   13.23     0.92    1.03   7.03
5.250–0.50           5.27        4.94    11.27   9.24      0.49    0.56   7.22
5.250–0.75           3.55        3.02    6.26    5.06      0.26    0.29   6.79
Average (5.250–*)    5.20        5.19    11.73   9.18      0.56    0.63   7.01
5.500–0.25           4.93        7.80    15.27   53.00     0.48    0.50   14.14
5.500–0.50           2.65        4.18    9.45    36.56     0.20    0.22   14.64
5.500–0.75           2.22        2.63    5.43    19.75     0.14    0.15   13.67
Average (5.500–*)    3.27        4.87    10.05   36.44     0.27    0.29   14.15
Average (5.*–*)      5.72        5.48    12.18   15.72     0.75    0.91   8.00
10.100–0.25          15.88       11.49   22.93   3.66      2.75    6.30   5.36
10.100–0.50          10.01       7.75    14.76   2.63      1.47    3.68   5.43
10.100–0.75          6.57        3.61    8.05    1.42      0.91    2.02   5.32
Average (10.100–*)   10.82       7.62    15.25   2.57      1.71    4.00   5.37
10.250–0.25          11.26       10.32   17.26   22.19     1.22    3.87   13.21
10.250–0.50          6.49        6.46    11.51   15.65     0.58    2.07   13.44
10.250–0.75          4.16        3.43    6.05    8.40      0.36    1.23   13.06
Average (10.250–*)   7.30        6.74    11.61   15.41     0.72    2.39   13.23
10.500–0.25          8.27        9.59    14.91   88.30     0.64    2.89   26.41
10.500–0.50          5.26        5.56    9.28    67.33     0.27    1.39   26.86
10.500–0.75          3.49        3.28    5.24    32.50     0.22    0.90   26.01
Average (10.500–*)   5.67        6.14    9.81    62.71     0.37    1.73   26.42
Average (10.*–*)     7.93        6.83    12.22   26.90     0.93    2.71   15.01
30.100–0.25          16.63       11.75   22.02   9.55      5.06    9.17   13.65
30.100–0.50          10.31       7.51    14.57   6.80      2.80    5.52   13.74
30.100–0.75          6.60        4.20    7.97    3.79      1.64    3.22   13.63
Average (30.100–*)   11.18       7.82    14.85   6.71      3.17    5.97   13.67
30.250–0.25          13.32       11.36   17.01   58.46     4.42    6.84   33.64
30.250–0.50          8.19        6.71    10.87   40.51     2.36    4.11   33.88
30.250–0.75          4.45        3.56    5.86    21.73     1.44    2.39   33.48
Average (30.250–*)   8.65        7.21    11.25   40.23     2.74    4.45   33.66
30.500–0.25          10.34       9.39    14.02   228.56    4.00    5.94   67.26
30.500–0.50          6.80        6.17    9.20    157.49    2.23    3.69   67.55
30.500–0.75          3.82        3.34    4.90    83.43     1.27    2.14   66.59
Average (30.500–*)   6.99        6.30    9.37    156.49    2.50    3.92   67.13
Average (30.*–*)     8.94        7.11    11.82   67.81     2.80    4.78   38.15
Total average        7.53        6.47    12.08   36.81     1.50    2.80   20.39

^1 Since CONS is a deterministic algorithm, each run always outputs the same result.
^2 Results from 1000 runs.
^3 Total CPU seconds on a Pentium III 997 MHz.

where γ is a constant that indicates the degree of penalty; we used a fixed value of 0.7.
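A minimal sketch of the penalized evaluation (6) follows, assuming the relaxation solver returns the pair (x*, b*) for a given multiplier vector; the function and argument names are illustrative.

```python
def evaluate(relaxed_solution, v, b, gamma=0.7):
    """Penalized fitness (6): the 0-1MKP objective value minus gamma
    times the total capacity violation.  `relaxed_solution` is the
    pair (x*, b*) produced by the relaxation solver."""
    x, bx = relaxed_solution
    profit = sum(vi * xi for vi, xi in zip(v, x))
    # Sum violations only over constraints with b*_i > b_i.
    penalty = sum(bi_star - bi for bi_star, bi in zip(bx, b) if bi_star > bi)
    return profit - gamma * penalty
```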

3. Experiments

We made experiments on well-known benchmark data publicly available from the OR-Library [29], which are the same as those used in [6]. They are composed of 270 instances with 5, 10, and 30 constraints. They have different numbers of objects and different tightness ratios. The tightness ratio means α such that b_j = α ∑_{i=1}^{n} w_ji for each j = 1, 2, ..., m. The classes of instances are briefly described below:

m.n–α: m constraints, n objects, and tightness ratio α. Each class has 10 instances.

The proposed algorithms were implemented with the gcc compiler on a Pentium III PC (997 MHz) using the Linux operating system. As the measure of performance, we


Table 2: Results of memetic Lagrangian heuristic on benchmark data.

Instance             Multistart R-CONS-L^1    Memetic Lagrangian heuristic   Number improved/
                     Result      CPU^2        Result      CPU^2              number equalled
5.100–0.25           2.40        14.35        2.32        14.66              0/0
5.100–0.50           1.08        14.71        1.08        15.00              0/0
5.100–0.75           0.72        13.94        0.72        14.31              0/0
Average (5.100–*)    1.40        14.33        1.37        14.66              Total 0/0
5.250–0.25           0.92        35.13        0.92        36.19              0/0
5.250–0.50           0.49        36.06        0.49        37.48              0/0
5.250–0.75           0.26        33.95        0.25        35.29              0/0
Average (5.250–*)    0.56        35.05        0.56        36.32              Total 0/0
5.500–0.25           0.48        70.72        0.48        71.24              0/0
5.500–0.50           0.20        73.18        0.20        75.46              0/0
5.500–0.75           0.14        68.46        0.14        69.85              0/0
Average (5.500–*)    0.27        70.79        0.27        72.18              Total 0/0
Average (5.*–*)      0.74        40.06        0.73        41.05              Total 0/0
10.100–0.25          2.45        26.79        2.27        26.30              0/0
10.100–0.50          1.46        27.17        1.14        26.50              0/1
10.100–0.75          0.89        26.57        0.69        25.98              0/0
Average (10.100–*)   1.60        26.84        1.37        26.26              Total 0/1
10.250–0.25          1.14        65.99        0.88        64.41              0/0
10.250–0.50          0.57        67.18        0.45        65.58              0/0
10.250–0.75          0.35        65.35        0.24        64.08              0/1
Average (10.250–*)   0.69        66.17        0.52        64.69              Total 0/1
10.500–0.25          0.58        131.96       0.50        128.11             0/0
10.500–0.50          0.27        134.25       0.24        130.57             0/0
10.500–0.75          0.21        130.06       0.14        127.50             0/0
Average (10.500–*)   0.35        132.09       0.29        128.73             Total 0/0
Average (10.*–*)     0.88        75.04        0.73        73.23              Total 0/2
30.100–0.25          4.92        68.24        3.19        67.96              0/3
30.100–0.50          2.67        68.70        1.43        70.52              0/3
30.100–0.75          1.56        68.17        0.89        67.80              0/2
Average (30.100–*)   3.05        68.37        1.84        68.76              Total 0/8
30.250–0.25          4.24        168.19       1.28        167.99             2/2
30.250–0.50          2.23        169.41       0.59        169.58             0/0
30.250–0.75          1.40        167.42       0.34        168.37             0/0
Average (30.250–*)   2.62        168.34       0.74        168.65             Total 2/2
30.500–0.25          3.81        336.30       0.68        334.14             2/0
30.500–0.50          2.13        338.02       0.29        337.37             1/0
30.500–0.75          1.24        333.18       0.19        334.40             0/0
Average (30.500–*)   2.39        335.83       0.39        335.30             Total 3/0
Average (30.*–*)     2.69        190.85       0.99        190.90             Total 5/10
Total average        1.44        101.98       0.82        101.73             Total 5/12

^1 Multistart R-CONS-L returns the best result from 5000 independent runs of R-CONS-L.
^2 Average CPU seconds on a Pentium III 997 MHz.

used the percentage difference-ratio 100 × |LP_optimum − output| / output, which was used in [6], where LP_optimum is the optimal solution of the linear programming relaxation over R. It has a value in the range [0, 100]; the smaller the value is, the smaller the difference from the optimum is.
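The measure above is a one-liner; a sketch with illustrative names:

```python
def diff_ratio(lp_optimum, output):
    """Percentage difference-ratio used as the performance measure:
    100 * |LP_optimum - output| / output (smaller is better)."""
    return 100.0 * abs(lp_optimum - output) / output
```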

First, we compared the constructive heuristics and the local improvement heuristic. Table 1 shows the results of CONS, R-CONS, and R-CONS-L, where R-CONS-L starts with a solution produced by R-CONS and locally improves it by the local improvement heuristic described in Section 2.4. We can see that R-CONS-L largely improves the results of R-CONS.

Next, to verify the effectiveness of the proposed MA, we compared its results with those of a multistart method using R-CONS-L. Multistart R-CONS-L returns the best result from


5000 independent runs of R-CONS-L. Table 2 shows the results of multistart R-CONS-L and our MA. The proposed MA outperformed multistart R-CONS-L.

The last column in Table 2 gives the numbers of improved and equalled results compared with the best results of [6]. Surprisingly, the proposed memetic algorithm could find better results than one of the state-of-the-art genetic algorithms [6] for some instances with 30 constraints (5 best over 90 instances).

4. Concluding Remarks

In this paper, we tried to find good feasible solutions by using a new memetic algorithm based on Lagrangian relaxation. To the best of the authors' knowledge, this is the first attempt to use a memetic algorithm for optimizing Lagrange multipliers.

The Lagrangian method guarantees optimality with respect to capacity constraints that may differ from those of the original problem, but it is not easy to find Lagrange multipliers that accord with all the capacity constraints. Through empirical investigation of the Lagrangian space, we found that the space of locally optimal Lagrangian vectors and that of their corresponding binary solutions are roughly isometric. Based on this fact, we applied a memetic algorithm, which searches only locally optimal Lagrangian vectors, to the transformed problem encoded by real values, that is, Lagrange multipliers. We could then find high-quality Lagrange multipliers by the proposed memetic algorithm. We obtained a significant performance improvement over R-CONS and the multistart heuristic.

In recent years, there have been studies showing better performance than Chu and Beasley [6] on the benchmark data of 0-1MKP [4, 5, 8, 9, 11–13]. Although the presented memetic algorithm did not dominate such state-of-the-art methods, this study shows the potential of memetic search on Lagrangian space. Because we used simple genetic operators, we believe that there is room for further improvement in our memetic search. Improvement using enhanced genetic operators tailored to real encoding, for example [30], will be good future work.

Acknowledgments

The present research was conducted with the research grant of Kwangwoon University in 2013. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012-0001855).

References

[1] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, San Francisco, Calif, USA, 1979.

[2] R. Mansini and M. G. Speranza, "CORAL: an exact algorithm for the multidimensional knapsack problem," INFORMS Journal on Computing, vol. 24, no. 3, pp. 399–415, 2012.

[3] S. Martello and P. Toth, "An exact algorithm for the two-constraint 0-1 knapsack problem," Operations Research, vol. 51, no. 5, pp. 826–835, 2003.

[4] S. Boussier, M. Vasquez, Y. Vimont, S. Hanafi, and P. Michelon, "A multi-level search strategy for the 0-1 multidimensional knapsack problem," Discrete Applied Mathematics, vol. 158, no. 2, pp. 97–109, 2010.

[5] V. Boyer, M. Elkihel, and D. El Baz, "Heuristics for the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 199, no. 3, pp. 658–664, 2009.

[6] P. C. Chu and J. E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, no. 1, pp. 63–86, 1998.

[7] F. Della Croce and A. Grosso, "Improved core problem based heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 39, no. 1, pp. 27–31, 2012.

[8] K. Fleszar and K. S. Hindi, "Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 36, no. 5, pp. 1602–1607, 2009.

[9] S. Hanafi and C. Wilbaut, "Improved convergent heuristics for the 0-1 multidimensional knapsack problem," Annals of Operations Research, vol. 183, no. 1, pp. 125–142, 2011.

[10] M. J. Magazine and O. Oguz, "A heuristic algorithm for the multidimensional zero-one knapsack problem," European Journal of Operational Research, vol. 16, no. 3, pp. 319–326, 1984.

[11] M. Vasquez and Y. Vimont, "Improved results on the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 165, no. 1, pp. 70–81, 2005.

[12] Y. Vimont, S. Boussier, and M. Vasquez, "Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem," Journal of Combinatorial Optimization, vol. 15, no. 2, pp. 165–178, 2008.

[13] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "A theoretical and empirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 218, no. 2, pp. 366–376, 2012.

[14] C. Cotta and J. M. Troya, "A hybrid genetic algorithm for the 0-1 multiple knapsack problem," in Artificial Neural Networks and Genetic Algorithms, G. D. Smith, N. C. Steele, and R. F. Albrecht, Eds., vol. 3, pp. 251–255, Springer, New York, NY, USA, 1997.

[15] J. Gottlieb, Evolutionary algorithms for constrained optimization problems [Ph.D. thesis], Department of Computer Science, Technical University of Clausthal, Clausthal, Germany, 1999.

[16] S. Khuri, T. Back, and J. Heitkotter, "The zero/one multiple knapsack problem and genetic algorithms," in Proceedings of the ACM Symposium on Applied Computing, pp. 188–193, ACM Press, 1994.

[17] G. R. Raidl, "Improved genetic algorithm for the multiconstrained 0-1 knapsack problem," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 207–211, May 1998.

[18] J. Thiel and S. Voss, "Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms," INFOR, vol. 32, no. 4, pp. 226–242, 1994.

[19] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "An evolutionary Lagrangian method for the 0-1 multiple knapsack problem," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 629–635, June 2005.

[20] A. Freville, "The multidimensional 0-1 knapsack problem: an overview," European Journal of Operational Research, vol. 155, no. 1, pp. 1–21, 2004.

[21] A. Freville and S. Hanafi, "The multidimensional 0-1 knapsack problem: bounds and computational aspects," Annals of Operations Research, vol. 139, no. 1, pp. 195–227, 2005.

[22] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.

[23] G. R. Raidl, "Weight-codings in a genetic algorithm for the multiconstraint knapsack problem," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 596–603, 1999.

[24] M. L. Fisher, "The Lagrangian relaxation method for solving integer programming problems," Management Science, vol. 27, no. 1, pp. 1–18, 1981.

[25] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Operations Research Letters, vol. 16, no. 2, pp. 101–113, 1994.

[26] T. Jones and S. Forrest, "Fitness distance correlation as a measure of problem difficulty for genetic algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 184–192, 1995.

[27] Y.-H. Kim and B.-R. Moon, "Investigation of the fitness landscapes in graph bipartitioning: an empirical study," Journal of Heuristics, vol. 10, no. 2, pp. 111–133, 2004.

[28] J. Puchinger, G. R. Raidl, and U. Pferschy, "The multidimensional knapsack problem: structure and algorithms," INFORMS Journal on Computing, vol. 22, no. 2, pp. 250–265, 2010.

[29] J. E. Beasley, "Obtaining test problems via Internet," Journal of Global Optimization, vol. 8, no. 4, pp. 429–433, 1996.

[30] Y. Yoon, Y.-H. Kim, A. Moraglio, and B.-R. Moon, "A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation," Information Sciences, vol. 183, no. 1, pp. 48–65, 2012.


Page 4: Research Article A Memetic Lagrangian Heuristic for the 0 ...downloads.hindawi.com/journals/ddns/2013/474852.pdf · A Memetic Lagrangian Heuristic for the 0-1 Multidimensional Knapsack

4 Discrete Dynamics in Nature and Society

R-CONS(0-1MKP instance)

120582larr 0119868 larr 1 2 119899do119896 larr random integer in [1 119898]for 119894 isin 119868120572119894larr (V119894minus sum119898

119895=1120582119895119908119895119894)119908119896119894

120582119896larr 120582119896+min

119894isin119868120572119894

119868 larr 119868 arg min119894isin119868120572119894

(xlowast blowast) = LMMKP(120582)until xlowast satisfies all the constraintsreturn 120582

Algorithm 1 Randomized constructive heuristic [13 19]

LocalImprovementHeuristic(120582 0-1MKP instance)

for 119894 larr 1 to 119873

(xlowast blowast) larr LMMKP(120582)119868 larr 119894 119887

lowast

119894le 119887119894 and 119869 larr 119894 119887

lowast

119894gt 119887119894

if 119868 = 1 2 119898

update the best solutionchoose a random element 119896 among 119868120582119896larr 120582119896minus 120575

elsechoose a random element 119896 among 119869120582119896larr 120582119896+ 120575

return 120582 corresponding to the best solution

Algorithm 2 Local improvement heuristic

vector Strictly speaking there cannot be a one-to-one cor-respondence in the technical sense of bijection The binarysolution vector has only binary components so there are onlycountably many such vectors But the Lagrange multipliersare real numbers so there are uncountably many Lagrangemultiplier vectors Several multiplier vectors may correspondto the same binary solution vectorMoreover somemultipliervectors may have multiple binary solution vectors

Instead of directly finding an optimal binary solution wedeal with Lagrange multipliers In this subsection we empir-ically investigate the relationship between binary solutionspace and Lagrangian space (ie x

120582s and 120582s)

We made experiments on nine instances (119898 sdot 119899) chang-ing the number of constraints (119898) from 5 to 30 and thenumber of objects from 100 to 500 We chose a thousandof randomly generated Lagrange multipliers and plottedfor each pair of Lagrange multiplier vectors the relationbetween the Hamming distance in binary solution space andthe Euclidean distance in Lagrangian space Figure 1 showsthe plotting results The smaller the number of constraints(119898) is the larger the Pearson correlation coefficient (120588) isWe also made the same experiments on locally optimalLagrange multipliers Figure 2 shows the plotting results

Locally optimal Lagrange multipliers show much strongercorrelation than randomly generated ones They show strongpositive correlation (much greater than 05) It means thatbinary solution space and locally optimal Lagrangian spaceare roughly isometricThe results show that both spaces havesimilar neighborhood structures So this hints that it is easyto find high-quality Lagrange multipliers satisfying all thecapacity constraints by using memetic algorithms on locallyoptimal Lagrangian space That is memetic algorithms canbe a good choice for searching Lagrangian space directly

26 Proposed Memetic Algorithm A genetic algorithm (GA)is a problem-solving technique motivated by Darwinrsquos the-ory of natural selection in evolution A GA starts with aset of initial solutions which is called a population Eachsolution in the population is called a chromosome whichis typically represented by a linear string This populationthen evolves into different populations for a number ofiterations (generations) At the end the algorithm returnsthe best chromosome of the population as the solution tothe problem For each iteration the evolution proceeds inthe following Two solutions of the population are chosenbased on some probability distribution These two solutions

Discrete Dynamics in Nature and Society 5

100908070605040302010

00 01 02 03 04 05 06 07 08 09 1

Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

Distance between Lagrangian vectors

(a) 5100 120588 = 044

Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

250

200

150

100

50

00 01 02 03 04 05 06 07 08 09 1

Distance between Lagrangian vectors

(b) 5250 120588 = 050

Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

0 01 02 03 04 05 06 07 08 09 1Distance between Lagrangian vectors

500450400350300250200150100

500

(c) 5500 120588 = 045

100908070605040302010

0Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

Distance between Lagrangian vectors0 01 02 03 04 05 06 07

(d) 10100 120588 = 035

Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

250

200

150

100

50

0

Distance between Lagrangian vectors0 01 02 03 04 05 06 07

(e) 10250 120588 = 037

Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

500450400350300250200150100

500

Distance between Lagrangian vectors0 01 02 03 04 05 06 07

(f) 10500 120588 = 037

100908070605040302010

0Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

Distance between Lagrangian vectors005 01 015 02 025 03 035

(g) 30100 120588 = 019

Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

250

200

150

100

50

0

Distance between Lagrangian vectors005 01 015 02 025 03 035

(h) 30250 120588 = 019

Dist

ance

bet

wee

n bi

nary

so

lutio

n ve

ctor

s

500450400350300250200150100

500

Distance between Lagrangian vectors005 01 015 02 025 03 035

(i) 30500 120588 = 019

Figure 1 Relationship between distances on solution space and those on Lagrangian space (among randomly generated Lagrangian vectors)lowast119909-axis distance between Lagrangian vectors 119910-axis distance between binary solution vectors and 120588 Pearson correlation coefficient

are then combined through a crossover operator to producean offspring With low probability this offspring is thenmodified by a mutation operator to introduce unexploredsearch space into the population enhancing the diversity ofthe population In this way offsprings are generated and theyreplace part of or the whole populationThe evolution processis repeated until a certain condition is satisfied for exampleafter a fixed number of iterations A GA that generates aconsiderable number of offsprings per iteration is called agenerational GA as opposed to a steady-state GA whichgenerates only one offspring per iteration If we apply a localimprovement heuristic typically after the mutation step theGA is called amemetic algorithm (MA) Algorithm 3 shows atypical generational MA

We propose an MA for optimizing Lagrange multipli-ers It conducts search using an evaluation function withpenalties for violated capacity constraints Our MA providesan alternative search method to find a good solution byoptimizing119898 Lagrangemultipliers instead of directly dealingwith binary vectors with length 119899 (119898 ≪ 119899)

The general framework of an MA is used in our study Inthe following we describe each part of the MA

Encoding Each solution in the population is representedby a chromosome Each chromosome consists of 119898 genescorresponding to Lagrange multipliers A real encoding isused for representing the chromosome 120582

Initialization. The MA first creates initial chromosomes using R-CONS, described in Section 2.3. We set the population size P to 100.

Mating and Crossover. To select two parents, we use a random mating scheme. A crossover operator creates a new offspring by combining parts of the parents. We use uniform crossover.

Mutation. After the crossover, the mutation operator is applied to the offspring. We use a gene-wise mutation: after generating a random number r from 1 to m, the value of each gene is divided by r.
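As a concrete illustration, the two operators above can be sketched as follows. This is a minimal sketch, not the authors' implementation; in particular, we read the gene-wise mutation as drawing a fresh divisor r for each gene, while drawing a single r for the whole chromosome is the other possible reading of the text.

```python
import random

def uniform_crossover(parent1, parent2):
    # Each gene of the offspring is copied from a randomly chosen parent.
    return [random.choice(pair) for pair in zip(parent1, parent2)]

def genewise_mutation(chromosome, m):
    # Each gene (a Lagrange multiplier) is divided by a random integer r in [1, m].
    return [gene / random.randint(1, m) for gene in chromosome]
```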

[Figure 2 scatter plots omitted in this extraction. Nine panels, one per instance class m.n, plot the distance between pairs of locally optimal Lagrangian vectors against the distance between their corresponding binary solution vectors. The reported Pearson correlation coefficients are (a) 5.100: ρ = 0.91, (b) 5.250: ρ = 0.96, (c) 5.500: ρ = 0.97, (d) 10.100: ρ = 0.80, (e) 10.250: ρ = 0.92, (f) 10.500: ρ = 0.96, (g) 30.100: ρ = 0.60, (h) 30.250: ρ = 0.80, and (i) 30.500: ρ = 0.91.]

Figure 2: Relationship between distances on solution space and those on Lagrangian space (among locally optimal Lagrangian vectors). *x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors; ρ: Pearson correlation coefficient.

create an initial population of a fixed size P
do
    for i ← 1 to P/2
        choose parent1_i and parent2_i from population
        offspring_i ← crossover(parent1_i, parent2_i)
        mutation(offspring_i)
        apply a local improvement heuristic to offspring_i
        replace(population, offspring_i)
until (stopping condition)
return the best individual

Algorithm 3: Framework of our memetic algorithm.
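Algorithm 3 can be rendered as a short runnable skeleton. This is an illustrative sketch only: the operators and fitness function are passed in as parameters rather than fixed to the paper's choices, and the replacement step assumed here keeps the best P of the 3P/2 parents and offspring.

```python
import random

def memetic_algorithm(init, crossover, mutate, local_improve, fitness,
                      pop_size=100, generations=100):
    # Generational MA: P/2 offspring per generation, elitist replacement.
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size // 2):
            p1, p2 = random.sample(population, 2)   # random mating
            child = mutate(crossover(p1, p2))
            offspring.append(local_improve(child))
        # keep the best P individuals among the 3P/2 parents + offspring
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)
```

With the crossover and mutation operators of the previous section plugged in, this loop searches the m-dimensional space of Lagrange multiplier vectors directly.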

Local Improvement. We use the local improvement heuristic described in Section 2.4. The number of iterations (N) is set to 30,000. We set δ to 0.0002.

Replacement and Stopping Condition. After generating P/2 offspring, our MA chooses the best P individuals among the total 3P/2 ones as the population of the next generation. Our MA stops when the number of generations reaches 100.

Evaluation Function. Our evaluation function is designed to find a Lagrange multiplier vector λ of high fitness that satisfies the capacity constraints as much as possible. In our MA, the following is used as the objective function to maximize; it is obtained by subtracting a penalty term from the objective function of the 0-1MKP:

k^T x* − γ Σ_{i : b*_i > b_i} (b*_i − b_i),  (6)

where b*_i denotes the amount of capacity i consumed by the solution x*.


Table 1: Results of local search heuristics on benchmark data.

Instance             CONS [10]   R-CONS [13, 19]           R-CONS-L
                     Result^1    Best^2   Ave^2   CPU^3    Best^2   Ave^2   CPU^3
5.100-0.25           13.70       9.71     22.32   2.20     2.47     3.20    2.86
5.100-0.50           7.25        5.73     14.00   1.57     1.10     1.35    2.92
5.100-0.75           5.14        3.70     7.98    0.87     0.72     0.83    2.78
Average (5.100-*)    8.70        6.38     14.77   1.55     1.43     1.79    2.85
5.250-0.25           6.77        7.62     17.65   13.23    0.92     1.03    7.03
5.250-0.50           5.27        4.94     11.27   9.24     0.49     0.56    7.22
5.250-0.75           3.55        3.02     6.26    5.06     0.26     0.29    6.79
Average (5.250-*)    5.20        5.19     11.73   9.18     0.56     0.63    7.01
5.500-0.25           4.93        7.80     15.27   53.00    0.48     0.50    14.14
5.500-0.50           2.65        4.18     9.45    36.56    0.20     0.22    14.64
5.500-0.75           2.22        2.63     5.43    19.75    0.14     0.15    13.67
Average (5.500-*)    3.27        4.87     10.05   36.44    0.27     0.29    14.15
Average (5.*-*)      5.72        5.48     12.18   15.72    0.75     0.91    8.00
10.100-0.25          15.88       11.49    22.93   3.66     2.75     6.30    5.36
10.100-0.50          10.01       7.75     14.76   2.63     1.47     3.68    5.43
10.100-0.75          6.57        3.61     8.05    1.42     0.91     2.02    5.32
Average (10.100-*)   10.82       7.62     15.25   2.57     1.71     4.00    5.37
10.250-0.25          11.26       10.32    17.26   22.19    1.22     3.87    13.21
10.250-0.50          6.49        6.46     11.51   15.65    0.58     2.07    13.44
10.250-0.75          4.16        3.43     6.05    8.40     0.36     1.23    13.06
Average (10.250-*)   7.30        6.74     11.61   15.41    0.72     2.39    13.23
10.500-0.25          8.27        9.59     14.91   88.30    0.64     2.89    26.41
10.500-0.50          5.26        5.56     9.28    67.33    0.27     1.39    26.86
10.500-0.75          3.49        3.28     5.24    32.50    0.22     0.90    26.01
Average (10.500-*)   5.67        6.14     9.81    62.71    0.37     1.73    26.42
Average (10.*-*)     7.93        6.83     12.22   26.90    0.93     2.71    15.01
30.100-0.25          16.63       11.75    22.02   9.55     5.06     9.17    13.65
30.100-0.50          10.31       7.51     14.57   6.80     2.80     5.52    13.74
30.100-0.75          6.60        4.20     7.97    3.79     1.64     3.22    13.63
Average (30.100-*)   11.18       7.82     14.85   6.71     3.17     5.97    13.67
30.250-0.25          13.32       11.36    17.01   58.46    4.42     6.84    33.64
30.250-0.50          8.19        6.71     10.87   40.51    2.36     4.11    33.88
30.250-0.75          4.45        3.56     5.86    21.73    1.44     2.39    33.48
Average (30.250-*)   8.65        7.21     11.25   40.23    2.74     4.45    33.66
30.500-0.25          10.34       9.39     14.02   228.56   4.00     5.94    67.26
30.500-0.50          6.80        6.17     9.20    157.49   2.23     3.69    67.55
30.500-0.75          3.82        3.34     4.90    83.43    1.27     2.14    66.59
Average (30.500-*)   6.99        6.30     9.37    156.49   2.50     3.92    67.13
Average (30.*-*)     8.94        7.11     11.82   67.81    2.80     4.78    38.15
Total average        7.53        6.47     12.08   36.81    1.50     2.80    20.39

^1 Since CONS is a deterministic algorithm, each run always outputs the same result.
^2 Results from 1,000 runs.
^3 Total CPU seconds on Pentium III 997 MHz.

where γ is a constant that indicates the degree of penalty; we used the fixed value 0.7.
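Under the reading that b*_i is the capacity of constraint i consumed by the Lagrangian solution x*, the evaluation function (6) can be sketched as follows. This is illustrative code, not the authors' implementation; v, x, A, and b follow the notation of the problem formulation (1).

```python
def penalized_objective(v, x, A, b, gamma=0.7):
    # Profit of the binary solution x, minus gamma times the total
    # amount by which x overshoots the violated capacity constraints.
    profit = sum(vj * xj for vj, xj in zip(v, x))
    excess = 0.0
    for row, cap in zip(A, b):
        used = sum(w * xj for w, xj in zip(row, x))  # b*_i for this constraint
        if used > cap:
            excess += used - cap
    return profit - gamma * excess
```

A feasible solution incurs no penalty, so the evaluation coincides with the plain 0-1MKP objective in that case.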

3. Experiments

We made experiments on the well-known benchmark data publicly available from the OR-Library [29], which are the same as those used in [6]. They are composed of 270 instances with 5, 10, and 30 constraints, with different numbers of objects and different tightness ratios. The tightness ratio is the value α such that b_j = α Σ_{i=1}^{n} w_{ji} for each j = 1, 2, ..., m.

The classes of instances are briefly described below:

m.n-α: m constraints, n objects, and tightness ratio α. Each class has 10 instances.
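The relation b_j = α Σ_i w_ji above can be used to generate instances of a given tightness. The sketch below follows that relation with uniformly random weights and profits; it is a simplification, since the actual OR-Library instances also correlate profits with weights, which is not reproduced here.

```python
import random

def make_instance(m, n, alpha, max_val=1000, seed=None):
    # Random 0-1MKP instance: m constraints, n objects, tightness ratio alpha.
    rng = random.Random(seed)
    weights = [[rng.randint(1, max_val) for _ in range(n)] for _ in range(m)]
    profits = [rng.randint(1, max_val) for _ in range(n)]
    capacities = [alpha * sum(row) for row in weights]  # b_j = alpha * sum_i w_ji
    return profits, weights, capacities
```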

The proposed algorithms were implemented with the gcc compiler on a Pentium III PC (997 MHz) running the Linux operating system. As the measure of performance, we


Table 2: Results of memetic Lagrangian heuristic on benchmark data.

Instance             Multistart R-CONS-L^1   Memetic Lagrangian heuristic   Number improved /
                     Result    CPU^2         Result    CPU^2                number equalled
5.100-0.25           2.40      14.35         2.32      14.66                0/0
5.100-0.50           1.08      14.71         1.08      15.00                0/0
5.100-0.75           0.72      13.94         0.72      14.31                0/0
Average (5.100-*)    1.40      14.33         1.37      14.66                Total 0/0
5.250-0.25           0.92      35.13         0.92      36.19                0/0
5.250-0.50           0.49      36.06         0.49      37.48                0/0
5.250-0.75           0.26      33.95         0.25      35.29                0/0
Average (5.250-*)    0.56      35.05         0.56      36.32                Total 0/0
5.500-0.25           0.48      70.72         0.48      71.24                0/0
5.500-0.50           0.20      73.18         0.20      75.46                0/0
5.500-0.75           0.14      68.46         0.14      69.85                0/0
Average (5.500-*)    0.27      70.79         0.27      72.18                Total 0/0
Average (5.*-*)      0.74      40.06         0.73      41.05                Total 0/0
10.100-0.25          2.45      26.79         2.27      26.30                0/0
10.100-0.50          1.46      27.17         1.14      26.50                0/1
10.100-0.75          0.89      26.57         0.69      25.98                0/0
Average (10.100-*)   1.60      26.84         1.37      26.26                Total 0/1
10.250-0.25          1.14      65.99         0.88      64.41                0/0
10.250-0.50          0.57      67.18         0.45      65.58                0/0
10.250-0.75          0.35      65.35         0.24      64.08                0/1
Average (10.250-*)   0.69      66.17         0.52      64.69                Total 0/1
10.500-0.25          0.58      131.96        0.50      128.11               0/0
10.500-0.50          0.27      134.25        0.24      130.57               0/0
10.500-0.75          0.21      130.06        0.14      127.50               0/0
Average (10.500-*)   0.35      132.09        0.29      128.73               Total 0/0
Average (10.*-*)     0.88      75.04         0.73      73.23                Total 0/2
30.100-0.25          4.92      68.24         3.19      67.96                0/3
30.100-0.50          2.67      68.70         1.43      70.52                0/3
30.100-0.75          1.56      68.17         0.89      67.80                0/2
Average (30.100-*)   3.05      68.37         1.84      68.76                Total 0/8
30.250-0.25          4.24      168.19        1.28      167.99               2/2
30.250-0.50          2.23      169.41        0.59      169.58               0/0
30.250-0.75          1.40      167.42        0.34      168.37               0/0
Average (30.250-*)   2.62      168.34        0.74      168.65               Total 2/2
30.500-0.25          3.81      336.30        0.68      334.14               2/0
30.500-0.50          2.13      338.02        0.29      337.37               1/0
30.500-0.75          1.24      333.18        0.19      334.40               0/0
Average (30.500-*)   2.39      335.83        0.39      335.30               Total 3/0
Average (30.*-*)     2.69      190.85        0.99      190.90               Total 5/10
Total average        1.44      101.98        0.82      101.73               Total 5/12

^1 Multistart R-CONS-L returns the best result from 5,000 independent runs of R-CONS-L.
^2 Average CPU seconds on Pentium III 997 MHz.

used the percentage difference ratio 100 × |LP_optimum − output| / output, which was used in [6], where LP_optimum is the optimal value of the linear programming relaxation over ℝ. It has a value in the range [0, 100]; the smaller the value, the smaller the difference from the optimum.
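For concreteness, the measure is just the following one-liner (a hypothetical helper name, shown only to pin down the formula):

```python
def pct_diff_ratio(lp_optimum, output):
    # 100 * |LP_optimum - output| / output, as used in [6].
    return 100.0 * abs(lp_optimum - output) / output
```

So an output of 100 against an LP-relaxation optimum of 105 scores 5.0, and matching the LP optimum scores 0.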

First, we compared the constructive heuristics and the local improvement heuristic. Table 1 shows the results of CONS, R-CONS, and R-CONS-L, where R-CONS-L starts with a solution produced by R-CONS and locally improves it by the local improvement heuristic described in Section 2.4. We can see that R-CONS-L largely improves the results of R-CONS.

Next, to verify the effectiveness of the proposed MA, we compared its results with a multistart method using R-CONS-L. Multistart R-CONS-L returns the best result from 5,000 independent runs of R-CONS-L. Table 2 shows the results of multistart R-CONS-L and our MA. The proposed MA outperformed multistart R-CONS-L.

The last column in Table 2 reports the numbers of improved and equalled results compared with the best results of [6]. Surprisingly, the proposed memetic algorithm could find better results than one of the state-of-the-art genetic algorithms [6] for some instances with 30 constraints (5 new best results over the 90 instances).

4. Concluding Remarks

In this paper, we tried to find good feasible solutions by using a new memetic algorithm based on Lagrangian relaxation. To the best of the authors' knowledge, this is the first attempt to use a memetic algorithm for optimizing Lagrange multipliers.

The Lagrangian method guarantees optimality with respect to capacity constraints that may differ from those of the original problem, but it is not easy to find Lagrange multipliers that accord with all the capacity constraints. Through empirical investigation of the Lagrangian space, we found that the space of locally optimal Lagrangian vectors and that of their corresponding binary solutions are roughly isometric. Based on this fact, we applied a memetic algorithm, which searches only locally optimal Lagrangian vectors, to the transformed problem encoded by real values, that is, Lagrange multipliers. We could then find high-quality Lagrange multipliers with the proposed memetic algorithm, obtaining a significant performance improvement over R-CONS and the multistart heuristic.

In recent years, several studies have reported better performance than Chu and Beasley [6] on the 0-1MKP benchmark data [4, 5, 8, 9, 11-13]. Although the presented memetic algorithm did not dominate such state-of-the-art methods, this study shows the potential of memetic search on Lagrangian space. Because we used simple genetic operators, we believe that there is room for further improvement in our memetic search. Improvement using enhanced genetic operators tailored to real encoding, for example [30], will be good future work.

Acknowledgments

The present research was conducted under a research grant of Kwangwoon University in 2013. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012-0001855).

References

[1] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, San Francisco, Calif, USA, 1979.

[2] R. Mansini and M. G. Speranza, "CORAL: an exact algorithm for the multidimensional knapsack problem," INFORMS Journal on Computing, vol. 24, no. 3, pp. 399-415, 2012.

[3] S. Martello and P. Toth, "An exact algorithm for the two-constraint 0-1 knapsack problem," Operations Research, vol. 51, no. 5, pp. 826-835, 2003.

[4] S. Boussier, M. Vasquez, Y. Vimont, S. Hanafi, and P. Michelon, "A multi-level search strategy for the 0-1 multidimensional knapsack problem," Discrete Applied Mathematics, vol. 158, no. 2, pp. 97-109, 2010.

[5] V. Boyer, M. Elkihel, and D. El Baz, "Heuristics for the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 199, no. 3, pp. 658-664, 2009.

[6] P. C. Chu and J. E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, no. 1, pp. 63-86, 1998.

[7] F. Della Croce and A. Grosso, "Improved core problem based heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 39, no. 1, pp. 27-31, 2012.

[8] K. Fleszar and K. S. Hindi, "Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 36, no. 5, pp. 1602-1607, 2009.

[9] S. Hanafi and C. Wilbaut, "Improved convergent heuristics for the 0-1 multidimensional knapsack problem," Annals of Operations Research, vol. 183, no. 1, pp. 125-142, 2011.

[10] M. J. Magazine and O. Oguz, "A heuristic algorithm for the multidimensional zero-one knapsack problem," European Journal of Operational Research, vol. 16, no. 3, pp. 319-326, 1984.

[11] M. Vasquez and Y. Vimont, "Improved results on the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 165, no. 1, pp. 70-81, 2005.

[12] Y. Vimont, S. Boussier, and M. Vasquez, "Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem," Journal of Combinatorial Optimization, vol. 15, no. 2, pp. 165-178, 2008.

[13] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "A theoretical and empirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 218, no. 2, pp. 366-376, 2012.

[14] C. Cotta and J. M. Troya, "A hybrid genetic algorithm for the 0-1 multiple knapsack problem," in Artificial Neural Networks and Genetic Algorithms, G. D. Smith, N. C. Steele, and R. F. Albrecht, Eds., vol. 3, pp. 251-255, Springer, New York, NY, USA, 1997.

[15] J. Gottlieb, Evolutionary algorithms for constrained optimization problems [Ph.D. thesis], Department of Computer Science, Technical University of Clausthal, Clausthal, Germany, 1999.

[16] S. Khuri, T. Bäck, and J. Heitkötter, "The zero/one multiple knapsack problem and genetic algorithms," in Proceedings of the ACM Symposium on Applied Computing, pp. 188-193, ACM Press, 1994.

[17] G. R. Raidl, "Improved genetic algorithm for the multiconstrained 0-1 knapsack problem," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 207-211, May 1998.

[18] J. Thiel and S. Voss, "Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms," INFOR, vol. 32, no. 4, pp. 226-242, 1994.

[19] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "An evolutionary Lagrangian method for the 0-1 multiple knapsack problem," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 629-635, June 2005.

[20] A. Fréville, "The multidimensional 0-1 knapsack problem: an overview," European Journal of Operational Research, vol. 155, no. 1, pp. 1-21, 2004.

[21] A. Fréville and S. Hanafi, "The multidimensional 0-1 knapsack problem—bounds and computational aspects," Annals of Operations Research, vol. 139, no. 1, pp. 195-227, 2005.

[22] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.

[23] G. R. Raidl, "Weight-codings in a genetic algorithm for the multiconstraint knapsack problem," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 596-603, 1999.

[24] M. L. Fisher, "The Lagrangian relaxation method for solving integer programming problems," Management Science, vol. 27, no. 1, pp. 1-18, 1981.

[25] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Operations Research Letters, vol. 16, no. 2, pp. 101-113, 1994.

[26] T. Jones and S. Forrest, "Fitness distance correlation as a measure of problem difficulty for genetic algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 184-192, 1995.

[27] Y.-H. Kim and B.-R. Moon, "Investigation of the fitness landscapes in graph bipartitioning: an empirical study," Journal of Heuristics, vol. 10, no. 2, pp. 111-133, 2004.

[28] J. Puchinger, G. R. Raidl, and U. Pferschy, "The multidimensional knapsack problem: structure and algorithms," INFORMS Journal on Computing, vol. 22, no. 2, pp. 250-265, 2010.

[29] J. E. Beasley, "Obtaining test problems via Internet," Journal of Global Optimization, vol. 8, no. 4, pp. 429-433, 1996.

[30] Y. Yoon, Y.-H. Kim, A. Moraglio, and B.-R. Moon, "A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation," Information Sciences, vol. 183, no. 1, pp. 48-65, 2012.


The last column in Table 2 corresponds to the numbers ofimproved and equalled results compared with the best resultsof [6] Surprisingly the proposed memetic algorithm couldfind better results than one of the state-of-the-art geneticalgorithms [6] for some instances with 30 constraints (5 bestover 90 instances)

4 Concluding Remarks

In this paper we tried to find good feasible solutions by usinga newmemetic algorithm based on Lagrangian relaxation Tothe best of the authorrsquos knowledge it is the first trial to use amemetic algorithm when optimizing Lagrange multipliers

The Lagrangian method guarantees optimality with thecapacity constraints which may be different from those ofthe original problem But it is not easy to find Lagrangemultipliers that accord with all the capacity constraintsThrough empirical investigation of Lagrangian space weknew that the space of locally optimal Lagrangian vectorsand that of their corresponding binary solutions are roughlyisometric Based on this fact we applied amemetic algorithmwhich searches only locally optimal Lagrangian vectorsto the transformed problem encoded by real values thatis Lagrange multipliers Then we could find high-qualityLagrange multipliers by the proposed memetic algorithmWe obtained a significant performance improvement over R-CONS and multistart heuristic

In recent years there have been researches showing betterperformance than Chu and Beasley [6] on the benchmarkdata of 0-1MKP [4 5 8 9 11ndash13] Although the presentedmemetic algorithm did not dominate such state-of-the-art methods this study could show the potentiality ofmemetic search on Lagrangian space Becausewe used simplegenetic operators we believe that there is room for furtherimprovement on our memetic search The improvementusing enhanced genetic operators tailored to real encodingfor example [30] will be a good future work

Acknowledgments

The present research has been conducted by the researchgrant of Kwangwoon University in 2013 This research wassupported by Basic Science Research Program through theNational Research Foundation of Korea (NRF) and fundedby the Ministry of Education Science and Technology (2012-0001855)

References

[1] M R Garey and D S Johnson Computers and IntractabilityA Guide to the Theory of NP-Completeness W H Freeman andCompany San Francisco Calif USA 1979

[2] R Mansini and M G Speranza ldquoCORAL an exact algorithmfor the multidimensional knapsack problemrdquo INFORMS Jour-nal on Computing vol 24 no 3 pp 399ndash415 2012

[3] S Martello and P Toth ldquoAn exact algorithm for the two-constraint 0-1 knapsack problemrdquo Operations Research vol 51no 5 pp 826ndash835 2003

[4] S Boussier M Vasquez Y Vimont S Hanafi and P MichelonldquoA multi-level search strategy for the 0-1 multidimensionalknapsack problemrdquo Discrete Applied Mathematics vol 158 no2 pp 97ndash109 2010

[5] V Boyer M Elkihel and D El Baz ldquoHeuristics for the 0-1 multidimensional knapsack problemrdquo European Journal ofOperational Research vol 199 no 3 pp 658ndash664 2009

[6] P C Chu and J E Beasley ldquoA genetic algorithm for themultidimensional Knapsack problemrdquo Journal ofHeuristics vol4 no 1 pp 63ndash86 1998

[7] F Della Croce and A Grosso ldquoImproved core problem basedheuristics for the 0-1 multi-dimensional knapsack problemrdquoComputers amp Operations Research vol 39 no 1 pp 27ndash31 2012

[8] K Fleszar and K S Hindi ldquoFast effective heuristics forthe 0-1 multi-dimensional knapsack problemrdquo Computers ampOperations Research vol 36 no 5 pp 1602ndash1607 2009

[9] S Hanafi and C Wilbaut ldquoImproved convergent heuristicsfor the 0-1 multidimensional knapsack problemrdquo Annals ofOperations Research vol 183 no 1 pp 125ndash142 2011

[10] M J Magazine and O Oguz ldquoA heuristic algorithm forthe multidimensional zero-one knapsack problemrdquo EuropeanJournal of Operational Research vol 16 no 3 pp 319ndash326 1984

[11] M Vasquez and Y Vimont ldquoImproved results on the 0-1 multidimensional knapsack problemrdquo European Journal ofOperational Research vol 165 no 1 pp 70ndash81 2005

[12] Y Vimont S Boussier and M Vasquez ldquoReduced costspropagation in an efficient implicit enumeration for the 01multidimensional knapsack problemrdquo Journal of CombinatorialOptimization vol 15 no 2 pp 165ndash178 2008

[13] Y Yoon Y-H Kim and B-R Moon ldquoA theoretical andempirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problemrdquo European Journal ofOperational Research vol 218 no 2 pp 366ndash376 2012

[14] C Cotta and J M Troya ldquoA hybrid genetic algorithm for the 0-1multiple knapsack problemrdquo in Artificial Neural Networks andGenetic Algorithms G D Smith N C Steele and R F AlbrechtEds vol 3 pp 251ndash255 Springer New york NY USA 1997

[15] J Gottlieb Evolutionary algorithms for constrained optimizationproblems [PhD thesis] Department of Computer ScienceTechnical University of Clausthal Clausthal Germany 1999

[16] S Khuri T Back and J Heitkotter ldquoThe zeroone multipleknapsack problem and genetic algorithmspagesrdquo in Proceedingsof the ACM Symposium on Applied Computing pp 188ndash193ACM Press 1994

[17] G R Raidl ldquoImproved genetic algorithm for the multicon-strained 0-1 knapsack problemrdquo in Proceedings of the IEEEInternational Conference on Evolutionary Computation (ICECrsquo98) pp 207ndash211 May 1998

[18] J Thiel and S Voss ldquoSome experiences on solving multicon-straint zero-one knapsack problems with genetic algorithmsrdquoINFOR vol 32 no 4 pp 226ndash242 1994

[19] Y Yoon Y-H Kim and B-R Moon ldquoAn evolutionaryLagrangian method for the 0-1 multiple knapsack problemrdquoin Proceedings of the Genetic and Evolutionary ComputationConference (GECCO rsquo05) pp 629ndash635 June 2005

[20] A Freville ldquoThe multidimensional 0-1 knapsack problem anoverviewrdquo European Journal of Operational Research vol 155no 1 pp 1ndash21 2004

10 Discrete Dynamics in Nature and Society

[21] A Freville and S Hanafi ldquoThe multidimensional 0-1 knapsackproblem-bounds and computational aspectsrdquo Annals of Opera-tions Research vol 139 no 1 pp 195ndash227 2005

[22] H Kellerer U Pferschy and D Pisinger Knapsack ProblemsSpringer Berlin Germany 2004

[23] G R Raidl ldquoWeight-codings in a genetic algorithm for themul-ticonstraint knapsack problemrdquo in Proceedings of the Congresson Evolutionary Computation vol 1 pp 596ndash603 1999

[24] M L Fisher ldquoThe Lagrangian relaxation method for solvinginteger programming problemsrdquo Management Science vol 27no 1 pp 1ndash18 1981

[25] K D Boese A B Kahng and S Muddu ldquoA new adaptivemulti-start technique for combinatorial global optimizationsrdquoOperations Research Letters vol 16 no 2 pp 101ndash113 1994

[26] T Jones and S Forrest ldquoFitness distance correlation as a mea-sure of problem difficulty for genetic algorithmsrdquo in Proceedingsof the 6th International Conference on Genetic Algorithms pp184ndash192 1995

[27] Y-H Kim and B-R Moon ldquoInvestigation of the fitness land-scapes in graph bipartitioning an empirical studyrdquo Journal ofHeuristics vol 10 no 2 pp 111ndash133 2004

[28] J Puchinger G R Raidl and U Pferschy ldquoThe multidimen-sional knapsack problem structure and algorithmsrdquo INFORMSJournal on Computing vol 22 no 2 pp 250ndash265 2010

[29] J E Beasley ldquoObtaining test problems via Internetrdquo Journal ofGlobal Optimization vol 8 no 4 pp 429ndash433 1996

[30] Y Yoon Y-H Kim AMoraglio and B-RMoon ldquoA theoreticaland empirical study on unbiased boundary-extended crossoverfor real-valued representationrdquo Information Sciences vol 183no 1 pp 48ndash65 2012

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 6: Research Article A Memetic Lagrangian Heuristic for the 0 ...downloads.hindawi.com/journals/ddns/2013/474852.pdf · A Memetic Lagrangian Heuristic for the 0-1 Multidimensional Knapsack

[Figure 2 appears here: nine scatter plots of distance between Lagrangian vectors (x-axis) versus distance between binary solution vectors (y-axis). Panels: (a) 5.100, ρ = 0.91; (b) 5.250, ρ = 0.96; (c) 5.500, ρ = 0.97; (d) 10.100, ρ = 0.80; (e) 10.250, ρ = 0.92; (f) 10.500, ρ = 0.96; (g) 30.100, ρ = 0.60; (h) 30.250, ρ = 0.80; (i) 30.500, ρ = 0.91.]

Figure 2: Relationship between distances on the solution space and those on the Lagrangian space (among locally optimal Lagrangian vectors). x-axis: distance between Lagrangian vectors; y-axis: distance between binary solution vectors; ρ: Pearson correlation coefficient.
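The correlations in Figure 2 can in principle be reproduced by sampling pairs of locally optimal Lagrangian vectors, measuring the distance between the vectors and between their decoded binary solutions, and computing Pearson's ρ over all pairs. A minimal sketch, assuming Euclidean distance on Lagrangian vectors and Hamming distance on binary solutions (the specific distance choices are our assumption, not stated in this section):

```python
import math

def euclidean(u, v):
    # Distance between two real-valued Lagrangian vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def hamming(x, y):
    # Distance between two binary solution vectors.
    return sum(a != b for a, b in zip(x, y))

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)
```

For each sampled pair one would collect (euclidean distance, hamming distance of the corresponding solutions) and feed the two resulting samples to `pearson`.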

create an initial population of a fixed size P
do
    for i ← 1 to P/2
        choose parent1_i and parent2_i from the population
        offspring_i ← crossover(parent1_i, parent2_i)
        mutation(offspring_i)
        apply a local improvement heuristic to offspring_i
        replace(population, offspring_i)
until (stopping condition)
return the best individual

Algorithm 3: Framework of our memetic algorithm.
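The framework of Algorithm 3 can be sketched as runnable Python with placeholder operators; the concrete crossover, mutation, and local improvement operators used in the paper are not reproduced here, so all operator arguments below are assumptions supplied by the caller:

```python
import random

def memetic_algorithm(fitness, random_solution, crossover, mutate,
                      local_improve, pop_size=100, generations=100):
    # Create an initial population of a fixed size P.
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size // 2):
            # Choose two parents from the population.
            p1, p2 = random.sample(population, 2)
            child = crossover(p1, p2)
            child = mutate(child)
            child = local_improve(child)  # the memetic (local search) step
            offspring.append(child)
        # Replacement: keep the best P of the 3P/2 candidates.
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)
```

As a toy usage, maximizing -(x - 3)^2 over one-dimensional real vectors with averaging crossover and Gaussian mutation converges near x = 3.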

Local Improvement. We use the local improvement heuristic described in Section 2.4. The number of iterations (N) is set to 30,000, and we set δ to 0.0002.

Replacement and Stopping Condition. After generating P/2 offspring, our MA chooses the best P individuals among the total 3P/2 as the population of the next generation. Our MA stops when the number of generations reaches 100.

Evaluation Function. Our evaluation function is to find a Lagrange multiplier vector λ that has a high fitness, satisfying the capacity constraints as much as possible. In our MA, the following is used as the objective function to maximize; it is obtained by subtracting a penalty from the objective function of the 0-1MKP:

    k^T x* − γ · Σ_{i : b*_i > b_i} (b*_i − b_i),    (6)


Table 1: Results of local search heuristics on benchmark data.

                      CONS [10]   R-CONS [13, 19]            R-CONS-L
Instance              Result(1)   Best(2)  Ave(2)   CPU(3)   Best(2)  Ave(2)  CPU(3)
5.100–0.25            13.70        9.71    22.32     2.20     2.47    3.20     2.86
5.100–0.50             7.25        5.73    14.00     1.57     1.10    1.35     2.92
5.100–0.75             5.14        3.70     7.98     0.87     0.72    0.83     2.78
Average (5.100–*)      8.70        6.38    14.77     1.55     1.43    1.79     2.85
5.250–0.25             6.77        7.62    17.65    13.23     0.92    1.03     7.03
5.250–0.50             5.27        4.94    11.27     9.24     0.49    0.56     7.22
5.250–0.75             3.55        3.02     6.26     5.06     0.26    0.29     6.79
Average (5.250–*)      5.20        5.19    11.73     9.18     0.56    0.63     7.01
5.500–0.25             4.93        7.80    15.27    53.00     0.48    0.50    14.14
5.500–0.50             2.65        4.18     9.45    36.56     0.20    0.22    14.64
5.500–0.75             2.22        2.63     5.43    19.75     0.14    0.15    13.67
Average (5.500–*)      3.27        4.87    10.05    36.44     0.27    0.29    14.15
Average (5.*–*)        5.72        5.48    12.18    15.72     0.75    0.91     8.00
10.100–0.25           15.88       11.49    22.93     3.66     2.75    6.30     5.36
10.100–0.50           10.01        7.75    14.76     2.63     1.47    3.68     5.43
10.100–0.75            6.57        3.61     8.05     1.42     0.91    2.02     5.32
Average (10.100–*)    10.82        7.62    15.25     2.57     1.71    4.00     5.37
10.250–0.25           11.26       10.32    17.26    22.19     1.22    3.87    13.21
10.250–0.50            6.49        6.46    11.51    15.65     0.58    2.07    13.44
10.250–0.75            4.16        3.43     6.05     8.40     0.36    1.23    13.06
Average (10.250–*)     7.30        6.74    11.61    15.41     0.72    2.39    13.23
10.500–0.25            8.27        9.59    14.91    88.30     0.64    2.89    26.41
10.500–0.50            5.26        5.56     9.28    67.33     0.27    1.39    26.86
10.500–0.75            3.49        3.28     5.24    32.50     0.22    0.90    26.01
Average (10.500–*)     5.67        6.14     9.81    62.71     0.37    1.73    26.42
Average (10.*–*)       7.93        6.83    12.22    26.90     0.93    2.71    15.01
30.100–0.25           16.63       11.75    22.02     9.55     5.06    9.17    13.65
30.100–0.50           10.31        7.51    14.57     6.80     2.80    5.52    13.74
30.100–0.75            6.60        4.20     7.97     3.79     1.64    3.22    13.63
Average (30.100–*)    11.18        7.82    14.85     6.71     3.17    5.97    13.67
30.250–0.25           13.32       11.36    17.01    58.46     4.42    6.84    33.64
30.250–0.50            8.19        6.71    10.87    40.51     2.36    4.11    33.88
30.250–0.75            4.45        3.56     5.86    21.73     1.44    2.39    33.48
Average (30.250–*)     8.65        7.21    11.25    40.23     2.74    4.45    33.66
30.500–0.25           10.34        9.39    14.02   228.56     4.00    5.94    67.26
30.500–0.50            6.80        6.17     9.20   157.49     2.23    3.69    67.55
30.500–0.75            3.82        3.34     4.90    83.43     1.27    2.14    66.59
Average (30.500–*)     6.99        6.30     9.37   156.49     2.50    3.92    67.13
Average (30.*–*)       8.94        7.11    11.82    67.81     2.80    4.78    38.15
Total average          7.53        6.47    12.08    36.81     1.50    2.80    20.39

(1) Since CONS is a deterministic algorithm, each run always outputs the same result.
(2) Results from 1,000 runs.
(3) Total CPU seconds on Pentium III 997 MHz.

where γ is a constant that indicates the degree of penalty; we used the fixed value 0.7.
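A sketch of this evaluation, under the assumption that x* is the binary solution decoded from λ and that b*_i is its consumption on constraint i; the function and variable names are illustrative, not taken from the authors' code:

```python
def evaluate(profits, A, b, x_star, gamma=0.7):
    # Objective value k^T x* of the decoded binary solution x*.
    value = sum(p * x for p, x in zip(profits, x_star))
    # Capacity consumption b*_i = sum_j A[i][j] * x*_j for each constraint i.
    b_star = [sum(a * x for a, x in zip(row, x_star)) for row in A]
    # Penalty: gamma times the total violation over violated constraints, as in (6).
    penalty = gamma * sum(bs - bi for bs, bi in zip(b_star, b) if bs > bi)
    return value - penalty
```

For a feasible solution the penalty term vanishes and the fitness equals the 0-1MKP objective value.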

3. Experiments

We made experiments on well-known benchmark data publicly available from the OR-Library [29], which are the same as those used in [6]. They are composed of 270 instances with 5, 10, and 30 constraints, with different numbers of objects and different tightness ratios. The tightness ratio is the value α such that b_j = α · Σ_{i=1}^{n} w_{ji} for each j = 1, 2, ..., m.

The classes of instances are briefly described as follows:

m.n–α: m constraints, n objects, and tightness ratio α. Each class has 10 instances.
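The tightness ratio thus fixes each capacity as a fraction α of the total consumption on that constraint. A small helper we add for illustration (not the OR-Library generator itself; `W` holds the constraint rows w_j):

```python
def capacities(W, alpha):
    # b_j = alpha * sum_{i=1}^{n} w_{j,i} for each constraint row j of W.
    return [alpha * sum(row) for row in W]
```

For example, with α = 0.25 a constraint row summing to 40 yields a capacity of 10, so smaller α gives tighter instances.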

The proposed algorithms were implemented with the gcc compiler on a Pentium III PC (997 MHz) running the Linux operating system. As the measure of performance, we


Table 2: Results of memetic Lagrangian heuristic on benchmark data.

                      Multistart R-CONS-L(1)   Memetic Lagrangian heuristic   No. improved/
Instance              Result    CPU(2)         Result    CPU(2)               no. equalled
5.100–0.25             2.40      14.35          2.32      14.66               0/0
5.100–0.50             1.08      14.71          1.08      15.00               0/0
5.100–0.75             0.72      13.94          0.72      14.31               0/0
Average (5.100–*)      1.40      14.33          1.37      14.66               Total 0/0
5.250–0.25             0.92      35.13          0.92      36.19               0/0
5.250–0.50             0.49      36.06          0.49      37.48               0/0
5.250–0.75             0.26      33.95          0.25      35.29               0/0
Average (5.250–*)      0.56      35.05          0.56      36.32               Total 0/0
5.500–0.25             0.48      70.72          0.48      71.24               0/0
5.500–0.50             0.20      73.18          0.20      75.46               0/0
5.500–0.75             0.14      68.46          0.14      69.85               0/0
Average (5.500–*)      0.27      70.79          0.27      72.18               Total 0/0
Average (5.*–*)        0.74      40.06          0.73      41.05               Total 0/0
10.100–0.25            2.45      26.79          2.27      26.30               0/0
10.100–0.50            1.46      27.17          1.14      26.50               0/1
10.100–0.75            0.89      26.57          0.69      25.98               0/0
Average (10.100–*)     1.60      26.84          1.37      26.26               Total 0/1
10.250–0.25            1.14      65.99          0.88      64.41               0/0
10.250–0.50            0.57      67.18          0.45      65.58               0/0
10.250–0.75            0.35      65.35          0.24      64.08               0/1
Average (10.250–*)     0.69      66.17          0.52      64.69               Total 0/1
10.500–0.25            0.58     131.96          0.50     128.11               0/0
10.500–0.50            0.27     134.25          0.24     130.57               0/0
10.500–0.75            0.21     130.06          0.14     127.50               0/0
Average (10.500–*)     0.35     132.09          0.29     128.73               Total 0/0
Average (10.*–*)       0.88      75.04          0.73      73.23               Total 0/2
30.100–0.25            4.92      68.24          3.19      67.96               0/3
30.100–0.50            2.67      68.70          1.43      70.52               0/3
30.100–0.75            1.56      68.17          0.89      67.80               0/2
Average (30.100–*)     3.05      68.37          1.84      68.76               Total 0/8
30.250–0.25            4.24     168.19          1.28     167.99               2/2
30.250–0.50            2.23     169.41          0.59     169.58               0/0
30.250–0.75            1.40     167.42          0.34     168.37               0/0
Average (30.250–*)     2.62     168.34          0.74     168.65               Total 2/2
30.500–0.25            3.81     336.30          0.68     334.14               2/0
30.500–0.50            2.13     338.02          0.29     337.37               1/0
30.500–0.75            1.24     333.18          0.19     334.40               0/0
Average (30.500–*)     2.39     335.83          0.39     335.30               Total 3/0
Average (30.*–*)       2.69     190.85          0.99     190.90               Total 5/10
Total average          1.44     101.98          0.82     101.73               Total 5/12

(1) Multistart R-CONS-L returns the best result from 5,000 independent runs of R-CONS-L.
(2) Average CPU seconds on Pentium III 997 MHz.

used the percentage difference-ratio 100 × |LP optimum − output| / output, which was used in [6], where LP optimum is the optimal value of the linear programming relaxation over ℝ. It takes a value in the range [0, 100]; the smaller the value, the smaller the difference from the optimum.
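This measure can be written directly from the formula above (a trivial helper, named by us):

```python
def percent_diff_ratio(lp_optimum, output):
    # 100 * |LP_optimum - output| / output; smaller means closer to the optimum.
    return 100.0 * abs(lp_optimum - output) / output
```

For instance, an LP bound of 105 against a heuristic value of 100 gives a ratio of 5.0.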

First, we compared the constructive heuristics and the local improvement heuristic. Table 1 shows the results of CONS, R-CONS, and R-CONS-L, where R-CONS-L starts with a solution produced by R-CONS and locally improves it by the local improvement heuristic described in Section 2.4. We can see that R-CONS-L largely improves the results of R-CONS.

Next, to verify the effectiveness of the proposed MA, we compared its results with a multistart method using R-CONS-L; multistart R-CONS-L returns the best result from 5,000 independent runs of R-CONS-L. Table 2 shows the results of multistart R-CONS-L and our MA. The proposed MA outperformed multistart R-CONS-L.

The last column of Table 2 gives the numbers of improved and equalled results compared with the best results of [6]. Surprisingly, the proposed memetic algorithm found better results than one of the state-of-the-art genetic algorithms [6] on some instances with 30 constraints (5 best over 90 instances).

4. Concluding Remarks

In this paper, we tried to find good feasible solutions by using a new memetic algorithm based on Lagrangian relaxation. To the best of the authors' knowledge, this is the first attempt to use a memetic algorithm for optimizing Lagrange multipliers.

The Lagrangian method guarantees optimality with respect to capacity constraints, which may be different from those of the original problem. But it is not easy to find Lagrange multipliers that accord with all the capacity constraints. Through empirical investigation of the Lagrangian space, we found that the space of locally optimal Lagrangian vectors and that of their corresponding binary solutions are roughly isometric. Based on this fact, we applied a memetic algorithm, which searches only locally optimal Lagrangian vectors, to the transformed problem encoded by real values, that is, Lagrange multipliers. We could then find high-quality Lagrange multipliers with the proposed memetic algorithm, and we obtained a significant performance improvement over R-CONS and the multistart heuristic.

In recent years, there have been studies showing better performance than Chu and Beasley [6] on the 0-1MKP benchmark data [4, 5, 8, 9, 11–13]. Although the presented memetic algorithm did not dominate such state-of-the-art methods, this study shows the potential of memetic search on the Lagrangian space. Because we used simple genetic operators, we believe there is room for further improvement of our memetic search. Improvement using enhanced genetic operators tailored to real encoding, for example [30], will be good future work.

Acknowledgments

The present research has been conducted by the research grant of Kwangwoon University in 2013. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012-0001855).

References

[1] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, San Francisco, Calif, USA, 1979.
[2] R. Mansini and M. G. Speranza, "CORAL: an exact algorithm for the multidimensional knapsack problem," INFORMS Journal on Computing, vol. 24, no. 3, pp. 399–415, 2012.
[3] S. Martello and P. Toth, "An exact algorithm for the two-constraint 0-1 knapsack problem," Operations Research, vol. 51, no. 5, pp. 826–835, 2003.
[4] S. Boussier, M. Vasquez, Y. Vimont, S. Hanafi, and P. Michelon, "A multi-level search strategy for the 0-1 multidimensional knapsack problem," Discrete Applied Mathematics, vol. 158, no. 2, pp. 97–109, 2010.
[5] V. Boyer, M. Elkihel, and D. El Baz, "Heuristics for the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 199, no. 3, pp. 658–664, 2009.
[6] P. C. Chu and J. E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, no. 1, pp. 63–86, 1998.
[7] F. Della Croce and A. Grosso, "Improved core problem based heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 39, no. 1, pp. 27–31, 2012.
[8] K. Fleszar and K. S. Hindi, "Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 36, no. 5, pp. 1602–1607, 2009.
[9] S. Hanafi and C. Wilbaut, "Improved convergent heuristics for the 0-1 multidimensional knapsack problem," Annals of Operations Research, vol. 183, no. 1, pp. 125–142, 2011.
[10] M. J. Magazine and O. Oguz, "A heuristic algorithm for the multidimensional zero-one knapsack problem," European Journal of Operational Research, vol. 16, no. 3, pp. 319–326, 1984.
[11] M. Vasquez and Y. Vimont, "Improved results on the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 165, no. 1, pp. 70–81, 2005.
[12] Y. Vimont, S. Boussier, and M. Vasquez, "Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem," Journal of Combinatorial Optimization, vol. 15, no. 2, pp. 165–178, 2008.
[13] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "A theoretical and empirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 218, no. 2, pp. 366–376, 2012.
[14] C. Cotta and J. M. Troya, "A hybrid genetic algorithm for the 0-1 multiple knapsack problem," in Artificial Neural Networks and Genetic Algorithms, G. D. Smith, N. C. Steele, and R. F. Albrecht, Eds., vol. 3, pp. 251–255, Springer, New York, NY, USA, 1997.
[15] J. Gottlieb, Evolutionary algorithms for constrained optimization problems [Ph.D. thesis], Department of Computer Science, Technical University of Clausthal, Clausthal, Germany, 1999.
[16] S. Khuri, T. Bäck, and J. Heitkötter, "The zero/one multiple knapsack problem and genetic algorithms," in Proceedings of the ACM Symposium on Applied Computing, pp. 188–193, ACM Press, 1994.
[17] G. R. Raidl, "An improved genetic algorithm for the multiconstrained 0-1 knapsack problem," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 207–211, May 1998.
[18] J. Thiel and S. Voss, "Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms," INFOR, vol. 32, no. 4, pp. 226–242, 1994.
[19] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "An evolutionary Lagrangian method for the 0-1 multiple knapsack problem," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 629–635, June 2005.
[20] A. Fréville, "The multidimensional 0-1 knapsack problem: an overview," European Journal of Operational Research, vol. 155, no. 1, pp. 1–21, 2004.
[21] A. Fréville and S. Hanafi, "The multidimensional 0-1 knapsack problem—bounds and computational aspects," Annals of Operations Research, vol. 139, no. 1, pp. 195–227, 2005.
[22] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.
[23] G. R. Raidl, "Weight-codings in a genetic algorithm for the multiconstraint knapsack problem," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 596–603, 1999.
[24] M. L. Fisher, "The Lagrangian relaxation method for solving integer programming problems," Management Science, vol. 27, no. 1, pp. 1–18, 1981.
[25] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Operations Research Letters, vol. 16, no. 2, pp. 101–113, 1994.
[26] T. Jones and S. Forrest, "Fitness distance correlation as a measure of problem difficulty for genetic algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 184–192, 1995.
[27] Y.-H. Kim and B.-R. Moon, "Investigation of the fitness landscapes in graph bipartitioning: an empirical study," Journal of Heuristics, vol. 10, no. 2, pp. 111–133, 2004.
[28] J. Puchinger, G. R. Raidl, and U. Pferschy, "The multidimensional knapsack problem: structure and algorithms," INFORMS Journal on Computing, vol. 22, no. 2, pp. 250–265, 2010.
[29] J. E. Beasley, "Obtaining test problems via Internet," Journal of Global Optimization, vol. 8, no. 4, pp. 429–433, 1996.
[30] Y. Yoon, Y.-H. Kim, A. Moraglio, and B.-R. Moon, "A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation," Information Sciences, vol. 183, no. 1, pp. 48–65, 2012.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 7: Research Article A Memetic Lagrangian Heuristic for the 0 ...downloads.hindawi.com/journals/ddns/2013/474852.pdf · A Memetic Lagrangian Heuristic for the 0-1 Multidimensional Knapsack

Discrete Dynamics in Nature and Society 7

Table 1 Results of local search heuristics on benchmark data

Instance CONS [10] R-CONS [13 19] R-CONS-LResult1 Best2 Ave2 CPU3 Best2 Ave2 CPU3

5100ndash025 1370 971 2232 220 247 320 2865100ndash050 725 573 1400 157 110 135 2925100ndash075 514 370 798 087 072 083 278Average (5100minuslowast) 870 638 1477 155 143 179 2855250ndash025 677 762 1765 1323 092 103 7035250ndash050 527 494 1127 924 049 056 7225250ndash075 355 302 626 506 026 029 679Average (5250minuslowast) 520 519 1173 918 056 063 7015500ndash025 493 780 1527 5300 048 050 14145500ndash050 265 418 945 3656 020 022 14645500ndash075 222 263 543 1975 014 015 1367Average (5500minuslowast) 327 487 1005 3644 027 029 1415Average (5lowastminuslowast) 572 548 1218 1572 075 091 80010100ndash025 1588 1149 2293 366 275 630 53610100ndash050 1001 775 1476 263 147 368 54310100ndash075 657 361 805 142 091 202 532Average (10100minuslowast) 1082 762 1525 257 171 400 53710250ndash025 1126 1032 1726 2219 122 387 132110250ndash050 649 646 1151 1565 058 207 134410250ndash075 416 343 605 840 036 123 1306Average (10250minuslowast) 730 674 1161 1541 072 239 132310500ndash025 827 959 1491 8830 064 289 264110500ndash050 526 556 928 6733 027 139 268610500ndash075 349 328 524 3250 022 090 2601Average (10500minuslowast) 567 614 981 6271 037 173 2642Average (10lowastminuslowast) 793 683 1222 2690 093 271 150130100ndash025 1663 1175 2202 955 506 917 136530100ndash050 1031 751 1457 680 280 552 137430100ndash075 660 420 797 379 164 322 1363Average (30100minuslowast) 1118 782 1485 671 317 597 136730250ndash025 1332 1136 1701 5846 442 684 336430250ndash050 819 671 1087 4051 236 411 338830250ndash075 445 356 586 2173 144 239 3348Average (30250minuslowast) 865 721 1125 4023 274 445 336630500ndash025 1034 939 1402 22856 400 594 672630500ndash050 680 617 920 15749 223 369 675530500ndash075 382 334 490 8343 127 214 6659Average (30500minuslowast) 699 630 937 15649 250 392 6713Average (30lowastminuslowast) 894 711 1182 6781 280 478 3815Total average 753 647 1208 3681 150 280 20391Since CONS is a deterministic algorithm each run always outputs the same result2Results from 1000 runs3Total CPU 
seconds on Pentium III 997MHz

where 120574 is a constant which indicates the degree of penaltyand we used a fixed value 07

3 Experiments

We made experiments on well-known benchmark data publicly available from the OR-Library [29], which are the same as those used in [6]. They are composed of 270 instances with 5, 10, and 30 constraints. They have different numbers

of objects and different tightness ratios. The tightness ratio means α such that b_j = α · Σ_{i=1}^{n} w_{ji} for each j = 1, 2, ..., m.

The classes of instances are briefly described below:

m.n–α: m constraints, n objects, and tightness ratio α. Each class has 10 instances.
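As an illustration, the capacity construction above can be sketched in a few lines (a toy instance of our own, not one of the benchmark instances; the function name is ours):

```python
def make_capacities(weights, alpha):
    """Compute capacities b_j = alpha * sum_i w_ji for a 0-1MKP instance.

    `weights` is an m x n matrix (list of rows), one row per constraint j.
    """
    return [alpha * sum(row) for row in weights]

# A toy instance with m = 2 constraints and n = 4 objects.
w = [[3, 5, 2, 6],
     [4, 1, 7, 2]]
b = make_capacities(w, 0.25)  # tightness ratio alpha = 0.25
print(b)  # [4.0, 3.5]
```

A smaller α gives tighter capacities, so the instances with α = 0.25 are the most constrained ones in each class.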

The proposed algorithms were implemented with the gcc compiler on a Pentium III PC (997 MHz) running the Linux operating system. As the measure of performance, we


Table 2: Results of the memetic Lagrangian heuristic on benchmark data.

Instance            | Multistart R-CONS-L(1) Result | CPU(2)  | Memetic Lagrangian heuristic Result | CPU(2)  | No. improved/no. equalled
5.100–0.25          | 2.40 | 14.35  | 2.32 | 14.66  | 0/0
5.100–0.50          | 1.08 | 14.71  | 1.08 | 15.00  | 0/0
5.100–0.75          | 0.72 | 13.94  | 0.72 | 14.31  | 0/0
Average (5.100–*)   | 1.40 | 14.33  | 1.37 | 14.66  | Total 0/0
5.250–0.25          | 0.92 | 35.13  | 0.92 | 36.19  | 0/0
5.250–0.50          | 0.49 | 36.06  | 0.49 | 37.48  | 0/0
5.250–0.75          | 0.26 | 33.95  | 0.25 | 35.29  | 0/0
Average (5.250–*)   | 0.56 | 35.05  | 0.56 | 36.32  | Total 0/0
5.500–0.25          | 0.48 | 70.72  | 0.48 | 71.24  | 0/0
5.500–0.50          | 0.20 | 73.18  | 0.20 | 75.46  | 0/0
5.500–0.75          | 0.14 | 68.46  | 0.14 | 69.85  | 0/0
Average (5.500–*)   | 0.27 | 70.79  | 0.27 | 72.18  | Total 0/0
Average (5.*–*)     | 0.74 | 40.06  | 0.73 | 41.05  | Total 0/0
10.100–0.25         | 2.45 | 26.79  | 2.27 | 26.30  | 0/0
10.100–0.50         | 1.46 | 27.17  | 1.14 | 26.50  | 0/1
10.100–0.75         | 0.89 | 26.57  | 0.69 | 25.98  | 0/0
Average (10.100–*)  | 1.60 | 26.84  | 1.37 | 26.26  | Total 0/1
10.250–0.25         | 1.14 | 65.99  | 0.88 | 64.41  | 0/0
10.250–0.50         | 0.57 | 67.18  | 0.45 | 65.58  | 0/0
10.250–0.75         | 0.35 | 65.35  | 0.24 | 64.08  | 0/1
Average (10.250–*)  | 0.69 | 66.17  | 0.52 | 64.69  | Total 0/1
10.500–0.25         | 0.58 | 131.96 | 0.50 | 128.11 | 0/0
10.500–0.50         | 0.27 | 134.25 | 0.24 | 130.57 | 0/0
10.500–0.75         | 0.21 | 130.06 | 0.14 | 127.50 | 0/0
Average (10.500–*)  | 0.35 | 132.09 | 0.29 | 128.73 | Total 0/0
Average (10.*–*)    | 0.88 | 75.04  | 0.73 | 73.23  | Total 0/2
30.100–0.25         | 4.92 | 68.24  | 3.19 | 67.96  | 0/3
30.100–0.50         | 2.67 | 68.70  | 1.43 | 70.52  | 0/3
30.100–0.75         | 1.56 | 68.17  | 0.89 | 67.80  | 0/2
Average (30.100–*)  | 3.05 | 68.37  | 1.84 | 68.76  | Total 0/8
30.250–0.25         | 4.24 | 168.19 | 1.28 | 167.99 | 2/2
30.250–0.50         | 2.23 | 169.41 | 0.59 | 169.58 | 0/0
30.250–0.75         | 1.40 | 167.42 | 0.34 | 168.37 | 0/0
Average (30.250–*)  | 2.62 | 168.34 | 0.74 | 168.65 | Total 2/2
30.500–0.25         | 3.81 | 336.30 | 0.68 | 334.14 | 2/0
30.500–0.50         | 2.13 | 338.02 | 0.29 | 337.37 | 1/0
30.500–0.75         | 1.24 | 333.18 | 0.19 | 334.40 | 0/0
Average (30.500–*)  | 2.39 | 335.83 | 0.39 | 335.30 | Total 3/0
Average (30.*–*)    | 2.69 | 190.85 | 0.99 | 190.90 | Total 5/10
Total average       | 1.44 | 101.98 | 0.82 | 101.73 | Total 5/12

1. Multistart R-CONS-L returns the best result from 5000 independent runs of R-CONS-L.
2. Average CPU seconds on a Pentium III 997 MHz.

used the percentage difference-ratio 100 × |LP_optimum − output| / output, which was used in [6], where LP_optimum is the optimal solution of the linear programming relaxation over R. It has a value in the range [0, 100]; the smaller the value, the smaller the difference from the optimum.
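As a sketch, the measure can be computed as follows (the helper name and the sample values are ours, not from the benchmark):

```python
def percent_diff_ratio(lp_optimum, output):
    """Percentage difference-ratio 100 * |LP_optimum - output| / output."""
    return 100.0 * abs(lp_optimum - output) / output

# Hypothetical values: LP bound 10500, heuristic result 10290.
print(round(percent_diff_ratio(10500.0, 10290.0), 2))  # 2.04
```

Because the LP relaxation's optimum is an upper bound on the 0-1MKP optimum, a heuristic with a small difference-ratio is provably close to optimal.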

First, we compared the constructive heuristics and the local improvement heuristic. Table 1 shows the results of CONS, R-CONS, and R-CONS-L, where R-CONS-L starts with a solution produced by R-CONS and locally improves it by the local improvement heuristic described in Section 2.4. We can see that R-CONS-L largely improves the results of R-CONS.
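A generic flavor of such a local improvement step is a greedy "fill" move, common for knapsack problems (this sketch is our own and is not necessarily the exact procedure of Section 2.4):

```python
def local_improve(x, v, w, b):
    """One greedy improvement pass: add any unselected object that still
    fits within every capacity constraint, trying the most profitable
    objects first. x is a 0/1 list, v the profits, w the m x n weight
    matrix, b the capacities."""
    m, n = len(w), len(x)
    used = [sum(w[j][i] * x[i] for i in range(n)) for j in range(m)]
    for i in sorted(range(n), key=lambda i: -v[i]):  # decreasing profit
        if x[i] == 0 and all(used[j] + w[j][i] <= b[j] for j in range(m)):
            x[i] = 1
            for j in range(m):
                used[j] += w[j][i]
    return x

# Toy data of our own: object 0 fits and is added; object 1 would overflow.
x = local_improve([0, 0, 1], [10, 6, 8], [[5, 4, 3], [4, 2, 5]], [9, 9])
print(x)  # [1, 0, 1]
```

Any move that adds a fitting object can only increase the profit, so a pass like this never makes a feasible solution worse.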

Next, to verify the effectiveness of the proposed MA, we compared the results with a multistart method using R-CONS-L. Multistart R-CONS-L returns the best result from


5000 independent runs of R-CONS-L. Table 2 shows the results of multistart R-CONS-L and our MA. The proposed MA outperformed multistart R-CONS-L.
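The multistart scheme itself is straightforward; a minimal sketch with a hypothetical stand-in heuristic (the experiments use R-CONS-L in this role, and the toy function below is ours):

```python
import random

def multistart(run_once, n_runs, rng):
    """Return the best objective value (to be maximized) over n_runs
    independent runs of a randomized heuristic; `run_once` is any
    callable taking an RNG."""
    return max(run_once(rng) for _ in range(n_runs))

def toy_heuristic(rng):
    # Hypothetical stand-in: a noisy objective value near the optimum 100.
    return 100.0 - abs(rng.gauss(0.0, 5.0))

rng = random.Random(0)  # seeded for reproducibility
best = multistart(toy_heuristic, 5000, rng)  # 5000 runs, as in the paper
```

A multistart baseline like this spends the same kind of budget as the MA (many heuristic runs) but shares no information between runs, which is exactly what the comparison in Table 2 probes.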

The last column in Table 2 corresponds to the numbers of improved and equalled results compared with the best results of [6]. Surprisingly, the proposed memetic algorithm could find better results than one of the state-of-the-art genetic algorithms [6] for some instances with 30 constraints (5 best over 90 instances).

4. Concluding Remarks

In this paper, we tried to find good feasible solutions by using a new memetic algorithm based on Lagrangian relaxation. To the best of the authors' knowledge, this is the first attempt to use a memetic algorithm for optimizing Lagrange multipliers.

The Lagrangian method guarantees optimality with respect to the capacity constraints, which may be different from those of the original problem. However, it is not easy to find Lagrange multipliers that accord with all the capacity constraints. Through empirical investigation of the Lagrangian space, we found that the space of locally optimal Lagrangian vectors and that of their corresponding binary solutions are roughly isometric. Based on this fact, we applied a memetic algorithm, which searches only locally optimal Lagrangian vectors, to the transformed problem encoded by real values, that is, Lagrange multipliers. The proposed memetic algorithm could then find high-quality Lagrange multipliers, and we obtained a significant performance improvement over R-CONS and the multistart heuristic.
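The key property making this search cheap is that, for a fixed multiplier vector λ, the Lagrangian relaxation of the 0-1MKP decomposes per object: the relaxed objective equals λ·b + Σ_i (v_i − Σ_j λ_j w_ji) x_i, so the relaxed optimum sets x_i = 1 exactly when the reduced profit is positive. A minimal sketch of this evaluation step, which a memetic search over Lagrangian vectors would call for every candidate λ (function name and toy data are ours):

```python
def lagrangian_solution(v, w, lam):
    """Optimal x for the relaxed problem max v.x - lam.(Wx - b):
    select object i iff its reduced profit v_i - sum_j lam_j * w_ji > 0."""
    n, m = len(v), len(w)
    x = []
    for i in range(n):
        reduced = v[i] - sum(lam[j] * w[j][i] for j in range(m))
        x.append(1 if reduced > 0 else 0)
    return x

# Toy data: with lam = (1, 1), only object 0 keeps a positive reduced profit.
v = [10, 6, 8]
w = [[5, 4, 3],
     [4, 2, 5]]
print(lagrangian_solution(v, w, [1.0, 1.0]))  # [1, 0, 0]
```

Each evaluation is O(mn), so the expensive part of the method is choosing λ well, which is exactly the task handed to the memetic algorithm.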

In recent years, there have been studies showing better performance than Chu and Beasley [6] on the 0-1MKP benchmark data [4, 5, 8, 9, 11–13]. Although the presented memetic algorithm did not dominate such state-of-the-art methods, this study shows the potential of memetic search in the Lagrangian space. Because we used simple genetic operators, we believe that there is room for further improvement of our memetic search. Improvement using enhanced genetic operators tailored to real encoding, for example [30], will be good future work.

Acknowledgments

The present research has been conducted by the research grant of Kwangwoon University in 2013. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) and funded by the Ministry of Education, Science and Technology (2012-0001855).

References

[1] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, San Francisco, Calif, USA, 1979.
[2] R. Mansini and M. G. Speranza, "CORAL: an exact algorithm for the multidimensional knapsack problem," INFORMS Journal on Computing, vol. 24, no. 3, pp. 399–415, 2012.
[3] S. Martello and P. Toth, "An exact algorithm for the two-constraint 0-1 knapsack problem," Operations Research, vol. 51, no. 5, pp. 826–835, 2003.
[4] S. Boussier, M. Vasquez, Y. Vimont, S. Hanafi, and P. Michelon, "A multi-level search strategy for the 0-1 multidimensional knapsack problem," Discrete Applied Mathematics, vol. 158, no. 2, pp. 97–109, 2010.
[5] V. Boyer, M. Elkihel, and D. El Baz, "Heuristics for the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 199, no. 3, pp. 658–664, 2009.
[6] P. C. Chu and J. E. Beasley, "A genetic algorithm for the multidimensional knapsack problem," Journal of Heuristics, vol. 4, no. 1, pp. 63–86, 1998.
[7] F. Della Croce and A. Grosso, "Improved core problem based heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 39, no. 1, pp. 27–31, 2012.
[8] K. Fleszar and K. S. Hindi, "Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 36, no. 5, pp. 1602–1607, 2009.
[9] S. Hanafi and C. Wilbaut, "Improved convergent heuristics for the 0-1 multidimensional knapsack problem," Annals of Operations Research, vol. 183, no. 1, pp. 125–142, 2011.
[10] M. J. Magazine and O. Oguz, "A heuristic algorithm for the multidimensional zero-one knapsack problem," European Journal of Operational Research, vol. 16, no. 3, pp. 319–326, 1984.
[11] M. Vasquez and Y. Vimont, "Improved results on the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 165, no. 1, pp. 70–81, 2005.
[12] Y. Vimont, S. Boussier, and M. Vasquez, "Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem," Journal of Combinatorial Optimization, vol. 15, no. 2, pp. 165–178, 2008.
[13] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "A theoretical and empirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 218, no. 2, pp. 366–376, 2012.
[14] C. Cotta and J. M. Troya, "A hybrid genetic algorithm for the 0-1 multiple knapsack problem," in Artificial Neural Networks and Genetic Algorithms, G. D. Smith, N. C. Steele, and R. F. Albrecht, Eds., vol. 3, pp. 251–255, Springer, New York, NY, USA, 1997.
[15] J. Gottlieb, Evolutionary algorithms for constrained optimization problems [Ph.D. thesis], Department of Computer Science, Technical University of Clausthal, Clausthal, Germany, 1999.
[16] S. Khuri, T. Bäck, and J. Heitkötter, "The zero/one multiple knapsack problem and genetic algorithms," in Proceedings of the ACM Symposium on Applied Computing, pp. 188–193, ACM Press, 1994.
[17] G. R. Raidl, "Improved genetic algorithm for the multiconstrained 0-1 knapsack problem," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 207–211, May 1998.
[18] J. Thiel and S. Voss, "Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms," INFOR, vol. 32, no. 4, pp. 226–242, 1994.
[19] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "An evolutionary Lagrangian method for the 0-1 multiple knapsack problem," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 629–635, June 2005.
[20] A. Fréville, "The multidimensional 0-1 knapsack problem: an overview," European Journal of Operational Research, vol. 155, no. 1, pp. 1–21, 2004.
[21] A. Fréville and S. Hanafi, "The multidimensional 0-1 knapsack problem—bounds and computational aspects," Annals of Operations Research, vol. 139, no. 1, pp. 195–227, 2005.
[22] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.
[23] G. R. Raidl, "Weight-codings in a genetic algorithm for the multiconstraint knapsack problem," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 596–603, 1999.
[24] M. L. Fisher, "The Lagrangian relaxation method for solving integer programming problems," Management Science, vol. 27, no. 1, pp. 1–18, 1981.
[25] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Operations Research Letters, vol. 16, no. 2, pp. 101–113, 1994.
[26] T. Jones and S. Forrest, "Fitness distance correlation as a measure of problem difficulty for genetic algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 184–192, 1995.
[27] Y.-H. Kim and B.-R. Moon, "Investigation of the fitness landscapes in graph bipartitioning: an empirical study," Journal of Heuristics, vol. 10, no. 2, pp. 111–133, 2004.
[28] J. Puchinger, G. R. Raidl, and U. Pferschy, "The multidimensional knapsack problem: structure and algorithms," INFORMS Journal on Computing, vol. 22, no. 2, pp. 250–265, 2010.
[29] J. E. Beasley, "Obtaining test problems via Internet," Journal of Global Optimization, vol. 8, no. 4, pp. 429–433, 1996.
[30] Y. Yoon, Y.-H. Kim, A. Moraglio, and B.-R. Moon, "A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation," Information Sciences, vol. 183, no. 1, pp. 48–65, 2012.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 8: Research Article A Memetic Lagrangian Heuristic for the 0 ...downloads.hindawi.com/journals/ddns/2013/474852.pdf · A Memetic Lagrangian Heuristic for the 0-1 Multidimensional Knapsack

8 Discrete Dynamics in Nature and Society

Table 2 Results of memetic Lagrangian heuristic on benchmark data

Instance Multistart R-CONS-L1 Memetic Lagrangian heuristic Number improvednumber equalledResult CPU2 Result CPU2

5100ndash025 240 1435 232 1466 005100ndash050 108 1471 108 1500 005100ndash075 072 1394 072 1431 00Average (5100minuslowast) 140 1433 137 1466 Total 005250ndash025 092 3513 092 3619 005250ndash050 049 3606 049 3748 005250ndash075 026 3395 025 3529 00Average (5250minuslowast) 056 3505 056 3632 Total 005500ndash025 048 7072 048 7124 005500ndash050 020 7318 020 7546 005500ndash075 014 6846 014 6985 00Average (5500minuslowast) 027 7079 027 7218 Total 00Average (5lowastminuslowast) 074 4006 073 4105 Total 0010100ndash025 245 2679 227 2630 0010100ndash050 146 2717 114 2650 0110100ndash075 089 2657 069 2598 00Average (10100minuslowast) 160 2684 137 2626 Total 0110250ndash025 114 6599 088 6441 0010250ndash050 057 6718 045 6558 0010250ndash075 035 6535 024 6408 0-1Average (10250minuslowast) 069 6617 052 6469 Total 0110500ndash025 058 13196 050 12811 0010500ndash050 027 13425 024 13057 0010500ndash075 021 13006 014 12750 00Average (10500minuslowast) 035 13209 029 12873 Total 00Average (10lowastminuslowast) 088 7504 073 7323 Total 0230100ndash025 492 6824 319 6796 0330100ndash050 267 6870 143 7052 0330100ndash075 156 6817 089 6780 02Average (30100minuslowast) 305 6837 184 6876 Total 0830250ndash025 424 16819 128 16799 2230250ndash050 223 16941 059 16958 0030250ndash075 140 16742 034 16837 00Average (30250minuslowast) 262 16834 074 16865 Total 2230500ndash025 381 33630 068 33414 2030500ndash050 213 33802 029 33737 1030500ndash075 124 33318 019 33440 00Average (30500minuslowast) 239 33583 039 33530 Total 30Average (30lowastminuslowast) 269 19085 099 19090 Total 510Total average 144 10198 082 10173 Total 5121Multistart R-CONS-L returns the best result from 5000 independent runs of R-CONS-L2Average CPU seconds on Pentium III 997MHz

used the percentage difference-ratio 100 times |119871119875 119900119901119905119894119898119906119898 minus

119900119906119905119901119906119905|119900119906119905119901119906119905 which was used in [6] where 119871119875 119900119901119905119894119898119906119898

is the optimal solution of the linear programming relaxationoverR It has a value in the range of [0 100] The smaller thevalue is the smaller the difference from the optimum is

First we compared constructive heuristics and localimprovement heuristic Table 1 shows the results of CONS

R-CONS and R-CONS-L where R-CONS-L starts with asolution produced by R-CONS and locally improves it by thelocal improvement heuristic described in Section 24We cansee that R-CONS-L largely improves the results of R-CONS

Next to verify the effectiveness of the proposed MAwe compared the results with a multistart method using R-CONS-L Multistart R-CONS-L returns the best result from

Discrete Dynamics in Nature and Society 9

5000 independent runs of R-CONS-L Table 2 shows theresults of multistart R-CONS-L and our MA The proposedMA outperformed multistart R-CONS-L

The last column in Table 2 corresponds to the numbers ofimproved and equalled results compared with the best resultsof [6] Surprisingly the proposed memetic algorithm couldfind better results than one of the state-of-the-art geneticalgorithms [6] for some instances with 30 constraints (5 bestover 90 instances)

4 Concluding Remarks

In this paper we tried to find good feasible solutions by usinga newmemetic algorithm based on Lagrangian relaxation Tothe best of the authorrsquos knowledge it is the first trial to use amemetic algorithm when optimizing Lagrange multipliers

The Lagrangian method guarantees optimality with thecapacity constraints which may be different from those ofthe original problem But it is not easy to find Lagrangemultipliers that accord with all the capacity constraintsThrough empirical investigation of Lagrangian space weknew that the space of locally optimal Lagrangian vectorsand that of their corresponding binary solutions are roughlyisometric Based on this fact we applied amemetic algorithmwhich searches only locally optimal Lagrangian vectorsto the transformed problem encoded by real values thatis Lagrange multipliers Then we could find high-qualityLagrange multipliers by the proposed memetic algorithmWe obtained a significant performance improvement over R-CONS and multistart heuristic

In recent years there have been researches showing betterperformance than Chu and Beasley [6] on the benchmarkdata of 0-1MKP [4 5 8 9 11ndash13] Although the presentedmemetic algorithm did not dominate such state-of-the-art methods this study could show the potentiality ofmemetic search on Lagrangian space Becausewe used simplegenetic operators we believe that there is room for furtherimprovement on our memetic search The improvementusing enhanced genetic operators tailored to real encodingfor example [30] will be a good future work

Acknowledgments

The present research has been conducted by the researchgrant of Kwangwoon University in 2013 This research wassupported by Basic Science Research Program through theNational Research Foundation of Korea (NRF) and fundedby the Ministry of Education Science and Technology (2012-0001855)

References

[1] M R Garey and D S Johnson Computers and IntractabilityA Guide to the Theory of NP-Completeness W H Freeman andCompany San Francisco Calif USA 1979

[2] R Mansini and M G Speranza ldquoCORAL an exact algorithmfor the multidimensional knapsack problemrdquo INFORMS Jour-nal on Computing vol 24 no 3 pp 399ndash415 2012

[3] S Martello and P Toth ldquoAn exact algorithm for the two-constraint 0-1 knapsack problemrdquo Operations Research vol 51no 5 pp 826ndash835 2003

[4] S Boussier M Vasquez Y Vimont S Hanafi and P MichelonldquoA multi-level search strategy for the 0-1 multidimensionalknapsack problemrdquo Discrete Applied Mathematics vol 158 no2 pp 97ndash109 2010

[5] V Boyer M Elkihel and D El Baz ldquoHeuristics for the 0-1 multidimensional knapsack problemrdquo European Journal ofOperational Research vol 199 no 3 pp 658ndash664 2009

[6] P C Chu and J E Beasley ldquoA genetic algorithm for themultidimensional Knapsack problemrdquo Journal ofHeuristics vol4 no 1 pp 63ndash86 1998

[7] F Della Croce and A Grosso ldquoImproved core problem basedheuristics for the 0-1 multi-dimensional knapsack problemrdquoComputers amp Operations Research vol 39 no 1 pp 27ndash31 2012

[8] K Fleszar and K S Hindi ldquoFast effective heuristics forthe 0-1 multi-dimensional knapsack problemrdquo Computers ampOperations Research vol 36 no 5 pp 1602ndash1607 2009

[9] S Hanafi and C Wilbaut ldquoImproved convergent heuristicsfor the 0-1 multidimensional knapsack problemrdquo Annals ofOperations Research vol 183 no 1 pp 125ndash142 2011

[10] M J Magazine and O Oguz ldquoA heuristic algorithm forthe multidimensional zero-one knapsack problemrdquo EuropeanJournal of Operational Research vol 16 no 3 pp 319ndash326 1984

[11] M Vasquez and Y Vimont ldquoImproved results on the 0-1 multidimensional knapsack problemrdquo European Journal ofOperational Research vol 165 no 1 pp 70ndash81 2005

[12] Y Vimont S Boussier and M Vasquez ldquoReduced costspropagation in an efficient implicit enumeration for the 01multidimensional knapsack problemrdquo Journal of CombinatorialOptimization vol 15 no 2 pp 165ndash178 2008

[13] Y Yoon Y-H Kim and B-R Moon ldquoA theoretical andempirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problemrdquo European Journal ofOperational Research vol 218 no 2 pp 366ndash376 2012

[14] C Cotta and J M Troya ldquoA hybrid genetic algorithm for the 0-1multiple knapsack problemrdquo in Artificial Neural Networks andGenetic Algorithms G D Smith N C Steele and R F AlbrechtEds vol 3 pp 251ndash255 Springer New york NY USA 1997

[15] J Gottlieb Evolutionary algorithms for constrained optimizationproblems [PhD thesis] Department of Computer ScienceTechnical University of Clausthal Clausthal Germany 1999

[16] S Khuri T Back and J Heitkotter ldquoThe zeroone multipleknapsack problem and genetic algorithmspagesrdquo in Proceedingsof the ACM Symposium on Applied Computing pp 188ndash193ACM Press 1994

[17] G R Raidl ldquoImproved genetic algorithm for the multicon-strained 0-1 knapsack problemrdquo in Proceedings of the IEEEInternational Conference on Evolutionary Computation (ICECrsquo98) pp 207ndash211 May 1998

[18] J Thiel and S Voss ldquoSome experiences on solving multicon-straint zero-one knapsack problems with genetic algorithmsrdquoINFOR vol 32 no 4 pp 226ndash242 1994

[19] Y Yoon Y-H Kim and B-R Moon ldquoAn evolutionaryLagrangian method for the 0-1 multiple knapsack problemrdquoin Proceedings of the Genetic and Evolutionary ComputationConference (GECCO rsquo05) pp 629ndash635 June 2005

[20] A Freville ldquoThe multidimensional 0-1 knapsack problem anoverviewrdquo European Journal of Operational Research vol 155no 1 pp 1ndash21 2004

10 Discrete Dynamics in Nature and Society

[21] A Freville and S Hanafi ldquoThe multidimensional 0-1 knapsackproblem-bounds and computational aspectsrdquo Annals of Opera-tions Research vol 139 no 1 pp 195ndash227 2005

[22] H Kellerer U Pferschy and D Pisinger Knapsack ProblemsSpringer Berlin Germany 2004

[23] G R Raidl ldquoWeight-codings in a genetic algorithm for themul-ticonstraint knapsack problemrdquo in Proceedings of the Congresson Evolutionary Computation vol 1 pp 596ndash603 1999

[24] M L Fisher ldquoThe Lagrangian relaxation method for solvinginteger programming problemsrdquo Management Science vol 27no 1 pp 1ndash18 1981

[25] K D Boese A B Kahng and S Muddu ldquoA new adaptivemulti-start technique for combinatorial global optimizationsrdquoOperations Research Letters vol 16 no 2 pp 101ndash113 1994

[26] T Jones and S Forrest ldquoFitness distance correlation as a mea-sure of problem difficulty for genetic algorithmsrdquo in Proceedingsof the 6th International Conference on Genetic Algorithms pp184ndash192 1995

[27] Y-H Kim and B-R Moon ldquoInvestigation of the fitness land-scapes in graph bipartitioning an empirical studyrdquo Journal ofHeuristics vol 10 no 2 pp 111ndash133 2004

[28] J Puchinger G R Raidl and U Pferschy ldquoThe multidimen-sional knapsack problem structure and algorithmsrdquo INFORMSJournal on Computing vol 22 no 2 pp 250ndash265 2010

[29] J E Beasley ldquoObtaining test problems via Internetrdquo Journal ofGlobal Optimization vol 8 no 4 pp 429ndash433 1996

[30] Y Yoon Y-H Kim AMoraglio and B-RMoon ldquoA theoreticaland empirical study on unbiased boundary-extended crossoverfor real-valued representationrdquo Information Sciences vol 183no 1 pp 48ndash65 2012

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 9: Research Article A Memetic Lagrangian Heuristic for the 0 ...downloads.hindawi.com/journals/ddns/2013/474852.pdf · A Memetic Lagrangian Heuristic for the 0-1 Multidimensional Knapsack

Discrete Dynamics in Nature and Society 9

5000 independent runs of R-CONS-L Table 2 shows theresults of multistart R-CONS-L and our MA The proposedMA outperformed multistart R-CONS-L

The last column in Table 2 corresponds to the numbers ofimproved and equalled results compared with the best resultsof [6] Surprisingly the proposed memetic algorithm couldfind better results than one of the state-of-the-art geneticalgorithms [6] for some instances with 30 constraints (5 bestover 90 instances)

4 Concluding Remarks

In this paper we tried to find good feasible solutions by usinga newmemetic algorithm based on Lagrangian relaxation Tothe best of the authorrsquos knowledge it is the first trial to use amemetic algorithm when optimizing Lagrange multipliers

The Lagrangian method guarantees optimality with thecapacity constraints which may be different from those ofthe original problem But it is not easy to find Lagrangemultipliers that accord with all the capacity constraintsThrough empirical investigation of Lagrangian space weknew that the space of locally optimal Lagrangian vectorsand that of their corresponding binary solutions are roughlyisometric Based on this fact we applied amemetic algorithmwhich searches only locally optimal Lagrangian vectorsto the transformed problem encoded by real values thatis Lagrange multipliers Then we could find high-qualityLagrange multipliers by the proposed memetic algorithmWe obtained a significant performance improvement over R-CONS and multistart heuristic

In recent years there have been researches showing betterperformance than Chu and Beasley [6] on the benchmarkdata of 0-1MKP [4 5 8 9 11ndash13] Although the presentedmemetic algorithm did not dominate such state-of-the-art methods this study could show the potentiality ofmemetic search on Lagrangian space Becausewe used simplegenetic operators we believe that there is room for furtherimprovement on our memetic search The improvementusing enhanced genetic operators tailored to real encodingfor example [30] will be a good future work

Acknowledgments

The present research has been conducted by the researchgrant of Kwangwoon University in 2013 This research wassupported by Basic Science Research Program through theNational Research Foundation of Korea (NRF) and fundedby the Ministry of Education Science and Technology (2012-0001855)

References

[1] M R Garey and D S Johnson Computers and IntractabilityA Guide to the Theory of NP-Completeness W H Freeman andCompany San Francisco Calif USA 1979

[2] R Mansini and M G Speranza ldquoCORAL an exact algorithmfor the multidimensional knapsack problemrdquo INFORMS Jour-nal on Computing vol 24 no 3 pp 399ndash415 2012

[3] S Martello and P Toth ldquoAn exact algorithm for the two-constraint 0-1 knapsack problemrdquo Operations Research vol 51no 5 pp 826ndash835 2003

[4] S Boussier M Vasquez Y Vimont S Hanafi and P MichelonldquoA multi-level search strategy for the 0-1 multidimensionalknapsack problemrdquo Discrete Applied Mathematics vol 158 no2 pp 97ndash109 2010

[5] V Boyer M Elkihel and D El Baz ldquoHeuristics for the 0-1 multidimensional knapsack problemrdquo European Journal ofOperational Research vol 199 no 3 pp 658ndash664 2009

[6] P C Chu and J E Beasley ldquoA genetic algorithm for themultidimensional Knapsack problemrdquo Journal ofHeuristics vol4 no 1 pp 63ndash86 1998

[7] F Della Croce and A Grosso ldquoImproved core problem basedheuristics for the 0-1 multi-dimensional knapsack problemrdquoComputers amp Operations Research vol 39 no 1 pp 27ndash31 2012

[8] K. Fleszar and K. S. Hindi, "Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem," Computers & Operations Research, vol. 36, no. 5, pp. 1602–1607, 2009.

[9] S. Hanafi and C. Wilbaut, "Improved convergent heuristics for the 0-1 multidimensional knapsack problem," Annals of Operations Research, vol. 183, no. 1, pp. 125–142, 2011.

[10] M. J. Magazine and O. Oguz, "A heuristic algorithm for the multidimensional zero-one knapsack problem," European Journal of Operational Research, vol. 16, no. 3, pp. 319–326, 1984.

[11] M. Vasquez and Y. Vimont, "Improved results on the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 165, no. 1, pp. 70–81, 2005.

[12] Y. Vimont, S. Boussier, and M. Vasquez, "Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem," Journal of Combinatorial Optimization, vol. 15, no. 2, pp. 165–178, 2008.

[13] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "A theoretical and empirical investigation on the Lagrangian capacities of the 0-1 multidimensional knapsack problem," European Journal of Operational Research, vol. 218, no. 2, pp. 366–376, 2012.

[14] C. Cotta and J. M. Troya, "A hybrid genetic algorithm for the 0-1 multiple knapsack problem," in Artificial Neural Networks and Genetic Algorithms, G. D. Smith, N. C. Steele, and R. F. Albrecht, Eds., vol. 3, pp. 251–255, Springer, New York, NY, USA, 1997.

[15] J. Gottlieb, Evolutionary Algorithms for Constrained Optimization Problems [Ph.D. thesis], Department of Computer Science, Technical University of Clausthal, Clausthal, Germany, 1999.

[16] S. Khuri, T. Bäck, and J. Heitkötter, "The zero/one multiple knapsack problem and genetic algorithms," in Proceedings of the ACM Symposium on Applied Computing, pp. 188–193, ACM Press, 1994.

[17] G. R. Raidl, "An improved genetic algorithm for the multiconstrained 0-1 knapsack problem," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 207–211, May 1998.

[18] J. Thiel and S. Voss, "Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms," INFOR, vol. 32, no. 4, pp. 226–242, 1994.

[19] Y. Yoon, Y.-H. Kim, and B.-R. Moon, "An evolutionary Lagrangian method for the 0-1 multiple knapsack problem," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '05), pp. 629–635, June 2005.

[20] A. Fréville, "The multidimensional 0-1 knapsack problem: an overview," European Journal of Operational Research, vol. 155, no. 1, pp. 1–21, 2004.


[21] A. Fréville and S. Hanafi, "The multidimensional 0-1 knapsack problem: bounds and computational aspects," Annals of Operations Research, vol. 139, no. 1, pp. 195–227, 2005.

[22] H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.

[23] G. R. Raidl, "Weight-codings in a genetic algorithm for the multiconstraint knapsack problem," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 596–603, 1999.

[24] M. L. Fisher, "The Lagrangian relaxation method for solving integer programming problems," Management Science, vol. 27, no. 1, pp. 1–18, 1981.

[25] K. D. Boese, A. B. Kahng, and S. Muddu, "A new adaptive multi-start technique for combinatorial global optimizations," Operations Research Letters, vol. 16, no. 2, pp. 101–113, 1994.

[26] T. Jones and S. Forrest, "Fitness distance correlation as a measure of problem difficulty for genetic algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 184–192, 1995.

[27] Y.-H. Kim and B.-R. Moon, "Investigation of the fitness landscapes in graph bipartitioning: an empirical study," Journal of Heuristics, vol. 10, no. 2, pp. 111–133, 2004.

[28] J. Puchinger, G. R. Raidl, and U. Pferschy, "The multidimensional knapsack problem: structure and algorithms," INFORMS Journal on Computing, vol. 22, no. 2, pp. 250–265, 2010.

[29] J. E. Beasley, "Obtaining test problems via Internet," Journal of Global Optimization, vol. 8, no. 4, pp. 429–433, 1996.

[30] Y. Yoon, Y.-H. Kim, A. Moraglio, and B.-R. Moon, "A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation," Information Sciences, vol. 183, no. 1, pp. 48–65, 2012.
