
Variable neighborhood search for the cost constrained minimum label spanning tree and label constrained minimum spanning tree problems



ARTICLE IN PRESS

Computers & Operations Research 37 (2010) 1952–1964

Contents lists available at ScienceDirect

Computers & Operations Research

0305-0548/$ - see front matter © 2009 Elsevier Ltd. All rights reserved.
doi:10.1016/j.cor.2009.12.013

Corresponding author. Tel.: +1 301 405 2232; fax: +1 301 405 8655.

E-mail address: [email protected] (B. Golden).

journal homepage: www.elsevier.com/locate/caor

Variable neighborhood search for the cost constrained minimum label spanning tree and label constrained minimum spanning tree problems

Zahra Naji-Azimi a, Majid Salari a, Bruce Golden b,*, S. Raghavan b, Paolo Toth a

a DEIS, University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy
b Robert H. Smith School of Business, University of Maryland, College Park, MD 20742-1815, USA

Article info

Available online 13 January 2010

Keywords:

Minimum spanning tree problem

Minimum label spanning tree problem

Heuristics

Mixed integer programming

Variable neighborhood search

Genetic algorithm


Abstract

Given an undirected graph whose edges are labeled or colored, edge weights indicating the cost of an edge, and a positive budget B, the goal of the cost constrained minimum label spanning tree (CCMLST) problem is to find a spanning tree that uses the minimum number of labels while ensuring its cost does not exceed B. The label constrained minimum spanning tree (LCMST) problem is closely related to the CCMLST problem. Here, we are given a threshold K on the number of labels. The goal is to find a minimum weight spanning tree that uses at most K distinct labels. Both of these problems are motivated from the design of telecommunication networks and are known to be NP-complete [15]. In this paper, we present a variable neighborhood search (VNS) algorithm for the CCMLST problem. The VNS algorithm uses neighborhoods defined on the labels. We also adapt the VNS algorithm to the LCMST problem. We then test the VNS algorithm on existing data sets as well as a large-scale dataset based on TSPLIB [12] instances ranging in size from 500 to 1000 nodes. For the LCMST problem, we compare the VNS procedure to a genetic algorithm (GA) and two local search procedures suggested in [15]. For the CCMLST problem, the procedures suggested in [15] can be applied by means of a binary search procedure. Consequently, we compare our VNS algorithm to the GA and two local search procedures suggested in [15]. The overall results demonstrate that the proposed VNS algorithm is of high quality and computes solutions rapidly. On our test datasets, it obtains the optimal solution in all instances for which the optimal solution is known. Further, it significantly outperforms the GA and two local search procedures described in [15].

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

The minimum label spanning tree (MLST) problem was introduced by Chang and Leu [2]. In this problem, we are given an undirected graph G = (V, E) with labeled edges; each edge has a single label from the set of labels L, and different edges can have the same label. The objective is to find a spanning tree with the minimum number of distinct labels. The MLST problem is motivated by applications in the communications sector. Since communication networks sometimes include numerous different media, such as fiber optics, cable, microwave, or telephone lines, and communication along each edge requires a specific media type, decreasing the number of different media types in the spanning tree reduces the complexity of the communication process. The MLST problem is known to be NP-complete [2]. Several researchers have studied the MLST problem, including Brüggemann et al. [1], Cerulli et al. [3], Consoli et al. [4], Krumke and Wirth [6], Wan et al. [14], and Xiong et al. [16–18].


Recently, Xiong et al. [15] introduced a more realistic version of the MLST problem called the label constrained minimum spanning tree (LCMST) problem. In contrast to the MLST problem, which completely ignores edge costs, the LCMST problem takes into account the cost or weight of the edges in the network (we use the terms cost and weight interchangeably in this paper). The objective of the LCMST problem is to find a minimum weight spanning tree that uses at most K labels (i.e., different types of communications media). Xiong et al. [15] describe two simple local search heuristics and a genetic algorithm for solving the LCMST problem. They also describe a mixed integer programming (MIP) model to solve the problem exactly. However, the MIP model was unable to find solutions for problems with more than 50 nodes due to excessive memory requirements.

The cost constrained minimum label spanning tree (CCMLST) problem is another realistic version of the MLST problem. The CCMLST problem was introduced by Xiong et al. [15]. In contrast to the LCMST problem, there is a threshold on the cost of the spanning tree, while the number of labels is minimized. Thus, given a graph G = (V, E), where each edge (i, j) has a label from the set L and an edge weight c_ij, and a positive budget B, the goal of the CCMLST problem is to find a spanning tree with the


fewest number of labels whose weight does not exceed the budget B. The notion is to design a tree with the fewest number of labels while ensuring that the budget for the network design is not exceeded. (Notice that the objective function here is not the cost of the spanning tree, but rather the number of labels in the spanning tree.) Xiong et al. [15] showed that both the LCMST and the CCMLST problems are NP-complete. Thus, the resolution of these problems requires heuristics.

In this paper, we first focus on the CCMLST problem. We propose a variable neighborhood search (VNS) method for the CCMLST problem. The VNS algorithm uses neighborhoods defined on the labels. We also adapt the VNS method that we develop to the LCMST problem. We then compare the VNS method to the heuristics described by Xiong et al. [15]. To do so, we consider existing data sets and also design a set of nine Euclidean large-scale datasets derived from TSPLIB instances [12]. On the LCMST problem, we directly compare our VNS method with the heuristic procedures of Xiong et al. [15] (which are the best known heuristics for the LCMST problem). On the CCMLST problem, we adapt the procedures of Xiong et al. [15] by embedding them in a binary search procedure. The VNS methods perform extremely well on both the CCMLST and LCMST problems, with respect to solution quality and computational running time.

The rest of this paper is organized as follows. Section 2 describes the mathematical formulation proposed for the LCMST and CCMLST problems. Section 3 describes the VNS method for the CCMLST problem. Section 4 describes the VNS method for the LCMST problem. Section 5 describes the procedures of Xiong et al. [15] for the LCMST problem and explains the binary search procedure. Section 6 reports on our computational experiments. Finally, Section 7 provides concluding remarks.

2. Mathematical formulation

In this section, we provide a mixed integer programming (MIP) model for the CCMLST problem. It is based on a multicommodity network flow formulation, and is similar to the MIP model described in Xiong et al. [15] (though our notation is somewhat more compact). Further, we significantly strengthen the model by improving upon one of the constraints in their model.

The main idea in the multicommodity network flow model is to direct the spanning tree away from an arbitrarily selected root node, and to use flow variables to model the connectivity requirement. To help define the multicommodity network flow model, we define a bidirected network obtained by replacing each undirected edge {i, j} by a pair of directed arcs (i, j) and (j, i). Let A denote the set of arcs, L = {1, 2, …, l} the set of labels, V = {1, 2, …, n} the set of nodes in the graph, and B the budget. Also, let A_k denote the set of all arcs with label k, and let c_ij be the cost of arc (i, j). Note that the cost and label of arcs (i, j) and (j, i) are identical to those of edge {i, j}. To model the fact that the spanning tree must be connected, we use the following well-known idea [8] and multicommodity network flow model. We pick node 1 as the root node (any node of the graph may be picked for this purpose). We then observe that the spanning tree on the nodes can be directed away from the root node. Consequently, we create a set of H commodities, where each commodity h has the root node as its origin and the destination (denoted by D(h)) is one of the nodes in {2, 3, …, n} (for a total of n−1 commodities). Each commodity has a supply of 1 unit of flow and a demand of 1 unit of flow. The variables in the multicommodity network flow formulation are defined as follows:

$$y_k = \begin{cases} 1 & \text{if label } k \text{ is selected} \\ 0 & \text{otherwise,} \end{cases} \qquad x_{ij} = \begin{cases} 1 & \text{if arc } (i,j) \text{ is used} \\ 0 & \text{otherwise,} \end{cases}$$

and

$$f^h_{ij} = \text{flow of commodity } h \text{ along arc } (i,j).$$

The MIP formulation based on the multicommodity flow (mcf) model is as follows:

$$(\mathrm{mcf}) \quad \min \sum_{k \in L} y_k \tag{1}$$

subject to

$$\sum_{(i,j) \in A} x_{ij} = n-1 \tag{2}$$

$$\sum_{i:(i,D(h)) \in A} f^h_{iD(h)} - \sum_{l:(D(h),l) \in A} f^h_{D(h)l} = 1 \quad \forall h \in H \tag{3}$$

$$\sum_{i:(i,1) \in A} f^h_{i1} - \sum_{l:(1,l) \in A} f^h_{1l} = -1 \quad \forall h \in H \tag{4}$$

$$\sum_{i:(i,j) \in A} f^h_{ij} - \sum_{l:(j,l) \in A} f^h_{jl} = 0 \quad \forall h \in H,\ \forall j \in V \setminus \{1, D(h)\} \tag{5}$$

$$f^h_{ij} \le x_{ij} \quad \forall (i,j) \in A,\ \forall h \in H \tag{6}$$

$$x_{ij} + x_{ji} \le 1 \quad \forall \{i,j\} \in E \tag{7}$$

$$\sum_{(i,j) \in A_k} x_{ij} \le (n-1) \cdot y_k \quad \forall k \in L \tag{8}$$

$$\sum_{(i,j) \in A} c_{ij} x_{ij} \le B \tag{9}$$

$$x_{ij}, y_k \in \{0,1\} \quad \forall (i,j) \in A,\ \forall k \in L \tag{10}$$

$$f^h_{ij} \ge 0 \quad \forall (i,j) \in A,\ \forall h \in H. \tag{11}$$

In the objective function (1), we minimize the total number of labels used in the solution. Constraint (2) ensures the tree has exactly n−1 arcs. Constraints (3)–(5) represent the flow balance constraints for the commodity flows. Constraint set (6) is a forcing constraint set; these constraints enforce the condition that if flow is sent along an arc, the arc must be included in the directed tree. Constraint set (7) ensures that either arc (i, j) or arc (j, i) can be in the solution, but not both (recall the tree must be directed away from the root node). Constraint set (8) is a forcing constraint set between arcs and labels: it says that if an arc with label k is used, then this label must be selected. Constraint (9) imposes the budget on the tree cost. Finally, constraint (10) defines the arc and label variables as binary, and constraint (11) defines the flow variables as non-negative.

Note that to model the LCMST problem: (i) we use the left-hand side of constraint (9) as the objective, and (ii) constraint (9) is replaced by the constraint

$$\sum_{k \in L} y_k \le K,$$

where K denotes the limit on the number of labels permitted. As such (with the change in the objective function and constraint (9)), the multicommodity flow formulation is virtually identical to that of Xiong et al. [15] (we have eliminated the edge variables in the Xiong et al. [15] model, and thus our notation is somewhat more compact). However, this formulation can be considerably strengthened by using the technique of constraint disaggregation (see page 185 of [5]) on constraint set (8). We replace this constraint with the stronger

$$x_{ij} \le y_k \quad \forall k \in L,\ \forall (i,j) \in A_k. \tag{8'}$$

In our computational work, we use the multicommodity flow model with constraint set (8′) to obtain lower bounds and optimal solutions on our test instances. We found that it is considerably


stronger than the multicommodity flow model proposed by Xiong et al. [15], which uses constraint (8).

3. Variable neighborhood search for the CCMLST problem

In this section, we develop our variable neighborhood search algorithm for the CCMLST problem. We also describe how it can be applied to the LCMST problem. Variable neighborhood search is a metaheuristic proposed by Mladenović and Hansen [9], which explicitly applies a strategy based on dynamically changing neighborhood structures. The algorithm is very general, and many degrees of freedom exist for designing variants.

The basic idea is to choose a set of neighborhood structures that vary in size. These neighborhoods can be arbitrarily chosen, but usually a sequence of neighborhoods with increasing cardinality is defined. In the VNS paradigm, an initial solution is generated, then the neighborhood index is initialized, and the algorithm iterates through the different neighborhood structures looking for improvements, until a stopping condition is met.

We consider VNS as a framework, and start by constructing an initial solution. We then improve upon this initial solution using local search. Then, the improvement of the incumbent solution (R) continues in a loop until the termination criterion is reached. This loop contains a shaking phase and a local search phase. The shaking phase follows the VNS paradigm. It considers a specially designed neighborhood and makes random changes to the current solution, which enables us to explore neighborhoods farther away from the current solution. The local search phase considers a more restricted neighborhood set and attempts to improve upon the quality of a given solution.

We now make an important observation regarding the relationship between the selected labels and the associated solution. Given a set of labels R ⊆ L, the minimum cost solution on the labels R is the minimum spanning tree computed on the graph induced by the labels in R. We denote the minimum spanning tree on the graph induced by the labels in R as MST(R), and its cost by MSTCOST(R). These two can be computed rapidly using any of the well-known

[Fig. 1. An example illustrating the selection of labels for the initial connected subgraph: Components(a) = 4, Components(b) = 3, Components(c) = 2; labels b and c together yield a connected graph.]

minimum spanning tree algorithms [7,11]. Consequently, our search for a solution focuses on selecting labels (as opposed to edges), and our neighborhoods as such are neighborhoods on labels. Our solutions are then described in terms of the labels they contain (as opposed to the edges they contain). Furthermore, without loss of generality, we assume MSTCOST(L) ≤ B, because if B < MSTCOST(L) the problem is infeasible.
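As a concrete illustration of this observation, MSTCOST(R) can be computed by running any standard MST algorithm on only those edges whose label lies in R. The Python sketch below (our own illustrative code and toy instance, not taken from the paper) uses Kruskal's algorithm with union-find and returns None when the label-induced subgraph is disconnected:

```python
def mst_cost(edges, n, labels):
    """Kruskal's algorithm restricted to the subgraph induced by `labels`.
    edges: list of (cost, u, v, label); n: number of nodes.
    Returns MSTCOST(labels), or None if the induced subgraph is disconnected."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, used = 0, 0
    for cost, u, v, lab in sorted(e for e in edges if e[3] in labels):
        ru, rv = find(u), find(v)
        if ru != rv:               # edge joins two components: keep it
            parent[ru] = rv
            total += cost
            used += 1
    return total if used == n - 1 else None

# Hypothetical 4-node instance with labels 'a', 'b', 'c'.
edges = [(1, 0, 1, 'a'), (2, 1, 2, 'a'), (1, 2, 3, 'b'),
         (3, 0, 2, 'c'), (2, 1, 3, 'c')]
print(mst_cost(edges, 4, {'a', 'b'}))  # 4
print(mst_cost(edges, 4, {'a'}))       # None (node 3 unreachable)
```

Prim's algorithm would serve equally well; Kruskal is used here only for brevity.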

3.1. Initial solution

Our procedure to construct an initial solution focuses on selecting a minimal set of labels that result in a connected graph. Let Components(R) denote the number of connected components in the graph induced by the labels in R. This can easily be computed using depth first search [13]. Our procedure adds labels to our solution in a greedy fashion. The label selected for addition to the current set of labels is the one (amongst all the labels that are not in the current set of labels) that, when added, results in the minimum number of connected components. Ties between labels are broken randomly. In other words, we choose a label for addition to the current set of labels R randomly from the set

$$S = \{\, t \in L \setminus R : \mathrm{Components}(R \cup \{t\}) \text{ is minimum} \,\}. \tag{12}$$

This continues until the selected labels result in a single component. In Fig. 1, an example illustrating the initialization method is shown. Suppose there are three labels, namely a, b, and c, in the label set. Since the number of connected components after adding label c is less than for the two other labels, we add this label to the solution. However, the graph is still not connected, so we go further by repeating this procedure with the remaining labels. Both labels a and b produce the same number of components, so we select one of them randomly (label b).
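The greedy rule of Eq. (12) can be sketched as follows (a hedged Python illustration; the helper names and the five-node instance are our own, not from the paper):

```python
import random

def components(edges, n, labels):
    """Number of connected components of the subgraph induced by `labels`.
    edges: list of (u, v, label); n: number of nodes. Uses union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    count = n
    for u, v, lab in edges:
        if lab in labels:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                count -= 1
    return count

def greedy_initial_labels(edges, n, all_labels):
    """Repeatedly add a label minimizing Components(R | {t}), ties broken
    at random, until the label-induced graph is connected."""
    R = set()
    while components(edges, n, R) > 1:
        scores = {t: components(edges, n, R | {t}) for t in all_labels - R}
        best = min(scores.values())
        R.add(random.choice([t for t, s in scores.items() if s == best]))
    return R

# Illustrative 5-node instance: label 'c' is the unique best first pick,
# after which only 'b' completes connectivity.
edges = [(0, 1, 'a'), (0, 1, 'c'), (1, 2, 'c'), (3, 4, 'c'), (2, 3, 'b')]
print(sorted(greedy_initial_labels(edges, 5, {'a', 'b', 'c'})))  # ['b', 'c']
```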

Before considering the cost constraint, which we need to satisfy, we have found it useful to try to improve the quality of the initial solution slightly. To this aim, we swap used and unused labels in a given solution in order to decrease the cost of a minimum spanning tree on the selected labels. We scan through the labels in the current solution. We iteratively consider all



unused labels and attempt to swap a given label in the current solution with an unused label if it results in an improvement (i.e., the cost of the minimum spanning tree on the graph induced by the labels decreases). As soon as an improvement is found, it is implemented, and the next used label in the current solution is examined. This is illustrated with an example in Fig. 2. Consider A = {b, c}; we have MSTCOST(A) = 12 and MSTCOST((A \ {b}) ∪ {a}) = 9. Therefore, we remove label b and add label a to the representation of our solution.

At this stage, it is possible that the set of labels in the current solution does not result in a tree that satisfies the budget constraint. To find a set of labels that does, we iteratively add labels to the current set of labels by choosing the label that, when added, results in the lowest cost minimum spanning tree. In other words, the label to be added is selected from

$$S = \{\, t \in L \setminus R : \mathrm{MSTCOST}(R \cup \{t\}) \text{ is minimum} \,\}, \tag{13}$$

and ties are broken randomly. We continue adding labels to the current solution in this fashion until we obtain a minimum spanning tree satisfying the cost constraint. (Recall that since B ≥ MSTCOST(L), a feasible solution exists and the initialization phase will find one.)
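The repair loop of Eq. (13) can be sketched as follows (Python; the `mst_cost` helper and the three-node instance are illustrative assumptions, with a disconnected label set treated as infinitely expensive; termination relies on B ≥ MSTCOST(L), as noted above):

```python
import random

def _find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def mst_cost(edges, n, labels):
    """MSTCOST on the label-induced subgraph; +inf if disconnected."""
    parent, total, used = list(range(n)), 0, 0
    for c, u, v, lab in sorted(e for e in edges if e[3] in labels):
        ru, rv = _find(parent, u), _find(parent, v)
        if ru != rv:
            parent[ru] = rv
            total += c
            used += 1
    return total if used == n - 1 else float('inf')

def repair_budget(edges, n, all_labels, R, B):
    """While MSTCOST(R) > B, add the unused label giving the cheapest MST
    (ties broken at random), per Eq. (13)."""
    R = set(R)
    while mst_cost(edges, n, R) > B:
        scores = {t: mst_cost(edges, n, R | {t}) for t in all_labels - R}
        best = min(scores.values())
        R.add(random.choice([t for t, s in scores.items() if s == best]))
    return R

# Toy instance: label 'a' alone costs 9; adding 'b' brings the MST to 4.
edges = [(4, 0, 1, 'a'), (5, 1, 2, 'a'), (2, 0, 1, 'b'), (2, 1, 2, 'b')]
print(sorted(repair_budget(edges, 3, {'a', 'b'}, {'a'}, 5)))  # ['a', 'b']
```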

3.2. Shaking phase

The shaking phase follows the VNS paradigm and dynamically expands the neighborhood search area. Suppose R denotes the current solution (it really denotes the labels in the current solution, but as explained earlier it suffices to focus on labels). In this step, we use randomization to select a solution that is in the size k neighborhood of the solution R, i.e., N_k(R). Specifically, N_k(R) is defined as the set of label sets that can be obtained from R by performing a sequence of exactly k additions and/or deletions of labels. So N_1(R) is the set of label sets obtained from R by either adding exactly one label to R, or deleting exactly one label from R. N_2(R) is the set of label sets obtained from R by either adding exactly two labels, or deleting exactly two labels, or adding exactly one label and deleting exactly one label.
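A minimal sketch of this shaking step follows (Python; the guards that prevent emptying the label set, or adding when every label is already used, are our own safety additions and are not stated in the paper):

```python
import random

def shaking_phase(all_labels, R, k):
    """Make exactly k random moves: with probability 1/2 delete a random
    label from R, otherwise add a random unused label."""
    R = set(R)
    for _ in range(k):
        if random.random() <= 0.5 and len(R) > 1:
            R.discard(random.choice(sorted(R)))      # deletion move
        elif all_labels - R:
            R.add(random.choice(sorted(all_labels - R)))  # addition move
    return R

result = shaking_phase({'a', 'b', 'c', 'd'}, {'a', 'b'}, 2)
print(result)  # a randomized label set within distance 2 of {'a', 'b'}
```

Any feasibility lost here (disconnectedness or a budget violation) is restored afterwards, as described in the next paragraph.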

The shaking phase may result in the selection of labels that do not result in a connected graph, or result in a minimum cost

[Fig. 2. An example for the swap of used and unused labels: MSTCOST({b, c}) = 12, MSTCOST({a, c}) = 9.]

Table 1. Parameter/algorithmic choices within the VNS procedure for the CCMLST problem.

Phase | Parameter varied | Values | Best value
Initial solution | Connectivity | Random, min components | Min components
Initial solution | Swap | Yes/no | Yes
Initial solution | Cost constraint | Random, min avg cost, min MST cost | Min MST cost
Local search applied to initial solution | — | Yes/no | Yes
Feasibility post-shaking phase | Connectivity | Random, min components | Random
Feasibility post-shaking phase | Cost constraint | Min avg cost, min MST cost | Min MST cost
Local search | Swap | Yes/no | Yes

spanning tree that does not meet the budget constraint. If the set of labels results in a graph that is not connected, we add labels that are not in the current solution one by one, at random, until the graph is connected. If the minimum spanning tree on the selected labels does not meet the budget constraint, we iteratively add labels to the current set of labels by choosing the label that, when added, results in the lowest cost minimum spanning tree.

3.3. Local search phase

The local search phase consists of two parts. In the first part, the algorithm tries to swap each of the labels in the current solution with an unused one if doing so results in a lower minimum spanning tree cost. To this aim, it iteratively considers the labels in the solution and tests all possible exchanges of a given label with unused labels until it finds an exchange resulting in a lower MST cost. If we find such an exchange, we implement it (i.e., we ignore the remaining unused labels) and proceed to the next label in our solution. Obviously, a label remains in the solution if the algorithm cannot find a label swap resulting in an improvement.

The second part of the local search phase tries to improve the quality of the solution by removing labels from the current set of labels. It iteratively tries to remove each label. If the resulting set of labels provides a minimum spanning tree whose cost does not exceed the budget, we permanently remove the label and continue. Otherwise, the label remains in the solution. Our variable neighborhood search algorithm is outlined in Algorithm 1.
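The two parts of the local search can be sketched as follows (a first-improvement Python illustration; the helpers, the toy data, and the use of an infinite cost for disconnected label sets are our own choices):

```python
def _find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def mst_cost(edges, n, labels):
    """MSTCOST via Kruskal on the label-induced subgraph; +inf if disconnected."""
    parent, total, used = list(range(n)), 0, 0
    for c, u, v, lab in sorted(e for e in edges if e[3] in labels):
        ru, rv = _find(parent, u), _find(parent, v)
        if ru != rv:
            parent[ru] = rv
            total += c
            used += 1
    return total if used == n - 1 else float('inf')

def local_search(edges, n, all_labels, R, B):
    R = set(R)
    # Part 1: first-improvement swap of each used label with an unused one.
    for i in sorted(R):
        current = mst_cost(edges, n, R)
        for t in sorted(all_labels - R):
            if mst_cost(edges, n, (R - {i}) | {t}) < current:
                R.remove(i)
                R.add(t)
                break
    # Part 2: drop any label whose removal keeps the tree within budget
    # (a disconnected set has infinite cost, so it is never accepted).
    for i in sorted(R):
        if len(R) > 1 and mst_cost(edges, n, R - {i}) <= B:
            R.remove(i)
    return R

# Toy instance: swapping 'b' for 'c' lowers the MST cost from 6 to 3.
edges = [(1, 0, 1, 'a'), (1, 1, 2, 'a'), (4, 2, 3, 'b'), (1, 2, 3, 'c')]
print(sorted(local_search(edges, 4, {'a', 'b', 'c'}, {'a', 'b'}, 10)))  # ['a', 'c']
```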

3.4. Discussion of algorithmic parameter choices

We now discuss the different components of our VNS algorithm and identify the rationale for the different choices made within the algorithm. In constructing the initial solution, we first choose labels to construct a connected graph. Our choice was to select the label that resulted in the greatest decrease in the number of connected components. An alternative is to simply add labels randomly. We conducted experiments with these two variants for connectivity of the initial solution, and found that choosing to add labels so that they result in the greatest decrease in the number of connected components provided significantly better solutions. Table 1 identifies the different parts of the VNS algorithm for the CCMLST problem, the parameter or algorithmic choices within each part, and the choice that resulted in the best solution. For example, when comparing the use of swapping against no swapping in the construction of the initial solution, swapping labels provided the best results. In Table 1, the parameter cost constraint refers to the scenario where the cost of the solution is strictly greater than B. Here, in order to reduce the cost of the solution, min avg cost considers the unused label with the smallest average cost to add to the solution. Feasibility post-shaking phase refers to the part of the algorithm where feasibility



of the solution obtained by the shaking phase is restored. As can be seen from Table 1, the best algorithmic choices within the different parts of the VNS algorithm are the ones used in our VNS algorithm described in Algorithm 1.

Algorithm 1. Variable neighborhood search algorithm for the CCMLST problem

VNS for CCMLST
  R = ∅;
  Initialization procedure(G, R);
  Local search(G, R);
  While termination criterion not met
    k = 1;
    While k ≤ 5
      R′ = Shaking_Phase(G, k, R);
      While Components(R′) > 1
        Select at random a label u ∈ L \ R′ and add it to R′;
      End
      While MSTCOST(R′) > B
        S = {t ∈ L \ R′ : MSTCOST(R′ ∪ {t}) is minimum};
        Select at random a label u ∈ S and add it to R′;
      End
      Local search(G, R′);
      If |R′| < |R| then R = R′ and k = 1 else k = k + 1;
    End
  End

Initialization procedure(G, R)
  While Components(R) > 1
    S = {t ∈ L \ R : Components(R ∪ {t}) is minimum};
    Select at random a label u ∈ S and add it to R;
  End
  Consider the labels i ∈ R one by one
    Swap the label i with the first unused label that strictly lowers the MST cost;
  End
  While MSTCOST(R) > B
    S = {t ∈ L \ R : MSTCOST(R ∪ {t}) is minimum};
    Select at random a label u ∈ S and add it to R;
  End

Shaking_Phase(G, k, R)
  For i = 1, …, k
    r = random(0, 1);
    If r ≤ 0.5 then delete at random a label from R else add at random a label to R;
  End
  Return(R);

Local search(G, R)
  Consider the labels i ∈ R one by one
    Swap the label i with the first unused label that strictly lowers the MST cost;
  End
  Consider the labels i ∈ R one by one
    Delete label i from R, i.e., R = R \ {i};
    If Components(R) > 1 or MSTCOST(R) > B then R = R ∪ {i};
  End

4. Variable neighborhood search for the LCMST problem

We now adapt our VNS procedure for the CCMLST problem to the LCMST problem. Recall that the objective of the LCMST problem is to find a minimum weight spanning tree that uses at most K labels. Given a choice of labels, the cost of the tree is simply the minimum spanning tree cost on the given choice of labels. Consequently, the LCMST problem also essentially involves selecting a set of labels for the tree, and heuristics for the LCMST problem likewise search for a set of K or fewer labels that minimize the cost of the spanning tree. Our framework for the VNS method for the LCMST problem is essentially the same as for the CCMLST problem, with a few minor changes that we now explain.

Initialization: As before, we first choose a set of labels in a greedy fashion to ensure that the resulting graph is connected. However, it is easy to observe that if R is a subset of labels, then the cost of an MST on any superset T of R is less than or equal to the cost of the MST on R. In other words, if R ⊆ T then MSTCOST(R) ≥ MSTCOST(T). Consequently, in order to try to minimize the cost, we iteratively add labels to the initial set of labels until we have K labels. The label to be added in any iteration is selected from the set S = {t ∈ L \ R : MSTCOST(R ∪ {t}) is minimum}, with ties broken randomly.
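This monotonicity is easy to check numerically: a superset of labels can only enlarge the set of candidate edges for the spanning tree. In the toy instance below (our own data, with a Kruskal-based helper as sketched earlier), the superset T = {a, b, c} admits a strictly cheaper tree than R = {a}:

```python
def mst_cost(edges, n, labels):
    """MSTCOST via Kruskal on the label-induced subgraph; +inf if disconnected."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, used = 0, 0
    for c, u, v, lab in sorted(e for e in edges if e[3] in labels):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += c
            used += 1
    return total if used == n - 1 else float('inf')

edges = [(3, 0, 1, 'a'), (3, 1, 2, 'a'), (1, 0, 1, 'b'), (1, 1, 2, 'c')]
R, T = {'a'}, {'a', 'b', 'c'}
print(mst_cost(edges, 3, R), mst_cost(edges, 3, T))  # 6 2
assert mst_cost(edges, 3, R) >= mst_cost(edges, 3, T)  # R ⊆ T
```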

Shaking Phase: The shaking phase is identical to the shaking phase for the VNS method for the CCMLST problem. It may result in the selection of labels that do not result in a connected graph, or result in a set with more than K labels. If the set of labels results in a graph that is not connected, we add labels that are not in the current solution one by one, at random, until the graph is connected. If the set of labels exceeds K, we iteratively delete labels from the current set of labels by choosing the label that, when deleted, results in the lowest cost minimum spanning tree.

Local Search: The local search procedure first adds labels to a given solution until it has K labels. The additional labels are selected iteratively, each time selecting a label that provides the greatest decrease in the cost of the minimum spanning tree, until we have K labels. The local search procedure then tries to swap each of the labels in the current solution with an unused one, if it results in a lower minimum spanning tree cost. This is done in an analogous fashion to the local search procedure in the VNS method for the CCMLST problem. The pseudocode for the VNS method for the LCMST problem is provided in Algorithm 2.

Before we turn our attention to our computational experiments, we must note the recent work by Consoli et al. [4] on the MLST problem, which includes a VNS method for the MLST problem. We briefly compare the two VNS methods (albeit on different problems). Both of the methods follow the main structure of VNS. Consequently, at a high level they are somewhat similar, but they differ in several details. For example, in the initialization phase, instead of generating a random solution as done in the paper by Consoli et al. [4], we use a more involved procedure to generate an initial solution by considering the set of labels that produces fewer components and a smaller MST cost. Additionally, we use local search to improve the solution found in the initialization phase prior to applying VNS. Finally, since the objectives of the LCMST and CCMLST problems are somewhat different from the MLST problem, our local search operator is quite different from that in Consoli et al. [4].

Algorithm 2. Variable neighborhood search algorithm for the LCMST problem

VNS for LCMST
  R = ∅;
  Initialization_Procedure(G, R);
  Local_Search(G, R);
  While termination criterion not met
    k = 1;
    While k ≤ 5
      R′ = Shaking_Phase(G, k, R);
      While Components(R′) > 1
        Select at random a label u ∈ L\R′ and add it to R′;
      End
      While |R′| > K
        S = {t ∈ R′ : MSTCOST(R′\{t}) is minimum};
        Select at random a label u ∈ S and delete it from R′;
      End
      Local_Search(G, R′);
      If MSTCOST(R′) < MSTCOST(R) then R = R′ and k = 1 else k = k + 1;
    End
  End

Initialization_Procedure(G, R)
  While Components(R) > 1
    S = {t ∈ L\R : Components(R ∪ {t}) is minimum};
    Select at random a label u ∈ S and add it to R;
  End
  While |R| < K
    S = {t ∈ L\R : MSTCOST(R ∪ {t}) is minimum};
    Select at random a label u ∈ S and add it to R;
  End

Shaking_Phase(G, k, R)
  For i = 1, ..., k
    r = random(0, 1);
    If r ≤ 0.5 then delete at random a label from R else add at random a label to R;
  End
  Return R;

Local_Search(G, R)
  While |R| < K
    S = {t ∈ L\R : MSTCOST(R ∪ {t}) is minimum};
    Select at random a label u ∈ S and add it to R;
  End
  Consider the labels i ∈ R one by one
    Swap label i with the first unused label that strictly lowers the MST cost;
  End

Z. Naji-Azimi et al. / Computers & Operations Research 37 (2010) 1952–1964

5. Applying heuristics for the LCMST problem to the CCMLST problem by means of binary search

We first describe the heuristics of Xiong et al. [15] for the LCMST problem. We then describe how any heuristic for the LCMST problem may be applied to the CCMLST problem by using binary search. This allows us to apply the heuristics of Xiong et al. [15] to the CCMLST problem.

The first heuristic proposed by Xiong et al. [15] for the LCMST problem, LS1, begins with an arbitrary feasible solution A = {a1, a2, ..., aK} of K labels. It then starts a replacement loop as follows. It first attempts to replace a1 by the label in L\A that gives the greatest reduction in MST cost. In other words, it checks each label in L\A and selects the label t ∈ L\A that results in the lowest value of MSTCOST({A\a1} ∪ t). If it finds an improvement (i.e., MSTCOST({A\a1} ∪ t) < MSTCOST(A)), it sets a1 = t; otherwise, it leaves a1 unchanged. Next, it repeats the procedure with label a2, attempting to replace it by the label in L\A that gives the greatest reduction in MST cost. It continues the replacement loop in this fashion until it considers replacing label aK by the label in L\A that gives the greatest reduction in MST cost. In one replacement loop, all K labels in A are considered for replacement. This continues until no cost improvement is made between two consecutive replacement loops [15].
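The replacement loop of LS1 might look as follows; `cost` is a hypothetical MST-cost oracle over label sets (returning infinity for infeasible ones), so only the search logic is shown:

```python
def ls1(cost, all_labels, A):
    """LS1 replacement loop (sketch): for each position in turn, try to
    replace that label by the best unused label; repeat whole loops until
    one completes with no improvement. `cost(S)` is the assumed MST-cost
    oracle (infinity for infeasible label sets)."""
    A = list(A)
    improved = True
    while improved:
        improved = False
        for i in range(len(A)):
            rest = set(A) - {A[i]}
            unused = all_labels - set(A)
            if not unused:
                continue
            best = min(unused, key=lambda t: cost(rest | {t}))
            if cost(rest | {best}) < cost(set(A)):   # strict improvement only
                A[i] = best
                improved = True
    return set(A)
```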

In the second heuristic, LS2, the algorithm starts with an arbitrary feasible solution A = {a1, a2, ..., aK} of K labels. It then attempts to find improvements as follows. It adds a label aK+1 ∈ L\A to A. This results in a solution with K+1 labels, which is not feasible. Consequently, it chooses amongst these K+1 labels the label whose deletion results in the smallest MST cost. To find the best possible improvement of this type, it searches among all possible additions of labels in L\A to A and selects the one that provides the greatest improvement in cost. The procedure continues until no further improvement is found [15].
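Under the same hypothetical `cost` oracle as in the LS1 sketch, LS2's add-then-delete move can be written as:

```python
def ls2(cost, all_labels, A):
    """LS2 (sketch): temporarily add an unused label (K+1 labels, so
    infeasible), then delete the label whose removal leaves the cheapest
    MST; take the add/delete pair with the greatest overall improvement
    and repeat until none improves. `cost(S)` is the assumed oracle."""
    A = set(A)
    while True:
        best_set, best_cost = A, cost(A)
        for t in all_labels - A:
            grown = A | {t}
            drop = min(grown, key=lambda d: cost(grown - {d}))
            if cost(grown - {drop}) < best_cost:
                best_set, best_cost = grown - {drop}, cost(grown - {drop})
        if best_set == A:
            return A
        A = best_set
```

Note that the deleted label may be the one just added, in which case the candidate equals the current solution and is not an improvement.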

In the GA proposed by Xiong et al. [15], a queen-bee crossover is applied. This approach has often been found to outperform more traditional crossover operators. In each generation, the best chromosome is declared the queen-bee and crossover is only allowed between the queen-bee and other chromosomes. For more details regarding crossover, mutation, etc., see [15].
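The generation step this describes is small enough to sketch; the function name, the minimization convention, and the `crossover` operator below are placeholders, not the authors' exact GA:

```python
def queen_bee_offspring(population, fitness, crossover):
    """One generation of queen-bee mating (sketch): the fittest
    chromosome (minimization assumed) is the queen, and every crossover
    pairs the queen with another population member."""
    queen = min(population, key=fitness)
    return queen, [crossover(queen, mate)
                   for mate in population if mate is not queen]
```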

Binary search is a popular algorithmic paradigm (see page 171 of [10]) that efficiently searches through an interval of integer values to find the smallest (or largest) integer that satisfies a specified property. It starts with an interval 1, ..., l. In each step, it checks whether the midpoint of the interval satisfies the specified property. If so, one may conclude that the smallest integer satisfying the property lies in the lower half of the interval; otherwise, it lies in the upper half. It recursively searches through the half-interval in which the solution lies until (after ⌈log2 l⌉ steps) the interval contains only one integer value and the procedure terminates. We now describe how to use the binary search method to apply any algorithm for the LCMST problem to the CCMLST problem. Let ALG denote any heuristic for the LCMST problem. It takes as input a graph G and a threshold K. ALG attempts to find a minimum cost spanning tree that uses at most K labels. Either it returns a feasible solution, i.e., a set of at most K labels, or it indicates that no feasible solution has been found by returning an empty set of labels. The cost of the solution can then be determined by MSTCOST(R), where R denotes the set of labels. Note that MSTCOST(R) is infinity if R is an empty set.

The details of the binary search method as applied to the CCMLST problem are provided in Algorithm 3. The lower value for the number of labels is set to 1, and the upper value is set to l (the total number of labels). As is customary in binary search, ALG is executed setting the threshold on the number of labels to (lower+upper)/2. Note that since the number of labels must be an integer, ALG rounds down any non-integral value of the threshold K. Essentially, anytime the cost of the tree found by ALG exceeds the budget B, we need more labels and increase the lower value to (lower+upper)/2. Anytime the cost of the tree found by ALG is within budget, we decrease the value of upper to (lower+upper)/2.

Algorithm 3. Binary search method for the CCMLST problem

Begin
  Set lower = 1 and upper = l;
  While (upper − lower) ≥ 1
    mid = ⌊(lower + upper)/2⌋;
    R = ALG(G, mid);
    If MSTCOST(R) > B then lower = mid else upper = mid;
  End
  Output R = ALG(G, upper);
End
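A runnable version of this driver follows, with `alg` standing for any LCMST heuristic that returns a (label set, MST cost) pair. We use the standard `lower = mid + 1` update as a termination safeguard (the printed pseudocode sets lower = mid), and the search assumes the heuristic's cost is non-increasing in the threshold:

```python
def binary_search_ccmlst(alg, n_labels, budget):
    """Find the smallest label threshold whose heuristic tree fits the
    budget, assuming the heuristic's tree cost does not increase as the
    threshold K grows. `alg(k)` returns (label_set, mst_cost), with
    cost == float('inf') when no feasible solution is found."""
    lower, upper = 1, n_labels
    while lower < upper:
        mid = (lower + upper) // 2          # thresholds are integral
        _, tree_cost = alg(mid)
        if tree_cost > budget:
            lower = mid + 1                 # over budget: need more labels
        else:
            upper = mid                     # within budget: try fewer labels
    return alg(upper)
```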

6. Computational results

In this section, we report on an extensive set of computational experiments on both the CCMLST and LCMST problems. All heuristics have been tested on a Pentium IV machine with a


2.61 GHz processor and 2 GB RAM, under the Windows operating system. We also use ILOG CPLEX 10.2 to solve the MIP formulation.

The two parameters that are adjustable within the VNS procedure are the value of k (the size of the largest neighborhood Nk(R) in the VNS method) and Iter, the number of iterations in which the algorithm is not able to improve the best known solution (which is the termination criterion). Increasing k increases the size of the neighborhood but also increases the running time. We found that setting k = 5 provides the best results without a significant increase in running time. Additionally, as the value of Iter is increased, the running time of the algorithm increases, though the quality of the solution improves. We found that setting Iter = 10 provides the best results in a reasonable amount of running time.

We now describe how we generated our datasets, and then discuss our computational experience on these datasets for both the CCMLST and LCMST problems.

6.1. Datasets

Xiong et al. [15] created a set of test instances for the LCMST problem. These include 37 small instances with 50 nodes or fewer, 11 medium-sized instances that range from 100 to 200 nodes, and one large instance with 500 nodes. All of these instances are complete graphs. We used Xiong et al.'s small and medium-sized instances for our tests on the LCMST problem. For these problems, the optimal solutions are known for the small instances, but not for any of the medium-sized instances. We also generated a set of 18 large instances that range from 500 to 1000 nodes. These were created from nine large TSP instances in TSPLIB and are Euclidean (since the problems arise in the telecommunications industry, the costs of edges are generally proportional to their length). To produce a labeled graph from a TSPLIB instance, we construct a complete graph using the coordinates of the nodes in the TSPLIB instance. The number of labels in the instance is one half of the total number of nodes, and the labels are randomly assigned. For the LCMST problem, we set the threshold on the maximum number of labels to 75 and 150, creating two instances for each labeled graph created from a TSPLIB instance.
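The labeled-graph construction for the large instances can be sketched as follows; the function name, edge encoding, and fixed seed are our own, and only the Euclidean costs and the n/2 random-label rule come from the text:

```python
import itertools
import math
import random

def label_tsplib_instance(coords, seed=0):
    """Build a labeled complete Euclidean graph from node coordinates:
    the cost of edge (i, j) is the distance between the two points, and
    each edge receives one of n/2 labels uniformly at random."""
    rng = random.Random(seed)
    n_labels = len(coords) // 2
    edges = [(i, j,
              math.hypot(coords[i][0] - coords[j][0],
                         coords[i][1] - coords[j][1]),
              rng.randrange(n_labels))
             for i, j in itertools.combinations(range(len(coords)), 2)]
    return edges, n_labels
```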

We adapted the instances created by Xiong et al. [15] to the CCMLST as follows. Essentially, for each labeled graph instance, we create a set of different budget values. These budget values are selected starting from slightly more than MSTCOST(L), with increments of approximately 500 units. The small instances in Xiong et al. were somewhat limited in the sense that the number of labels is equal to the number of nodes in the graph. Consequently, using Xiong et al.'s code, we generated additional labeled graphs where the number of labels was less than the

Table 2. VNS, GA, LS1, and LS2 for the CCMLST problem on 10 nodes.

Columns: #Labels, Cost restriction, Exact method (Labels, Time), LS1 (Labels, Gap from optimal (%), Time), LS2 (Labels, Gap from optimal (%), Time), GA (Labels, Gap from optimal (%), Time), VNS (Labels, Gap from optimal (%), Time).

10 7000 2 0.23 2 0 0.00 2 0 0.01 2 0 0.00 2 0 0.02
6500 2 0.02 2 0 0.00 2 0 0.00 2 0 0.02 2 0 0.00
6000 2 0.03 2 0 0.00 2 0 0.01 2 0 0.00 2 0 0.02
5500 2 0.02 2 0 0.01 2 0 0.01 2 0 0.02 2 0 0.00
5000 2 0.11 2 0 0.00 2 0 0.01 2 0 0.00 2 0 0.00
4500 2 0.09 2 0 0.00 2 0 0.00 2 0 0.00 2 0 0.00
4000 2 0.05 2 0 0.00 2 0 0.01 2 0 0.00 2 0 0.00
3500 2 0.06 2 0 0.00 2 0 0.00 2 0 0.02 2 0 0.00
3000 3 0.03 3 0 0.00 3 0 0.00 3 0 0.00 3 0 0.00
2500 4 0.13 4 0 0.01 4 0 0.01 4 0 0.02 4 0 0.00

number of nodes in the graph. The main aim of the small instances was to test the quality of the VNS method and the binary search method on instances where the optimal solution is known. Consequently, on these small instances, we restricted our attention to problems where we were able to compute the optimal solution using CPLEX on our MIP formulation. In this way, we created 104 small instances (with 10 to 50 nodes) where the optimal solution is known, and 60 medium-sized instances (with 100 to 200 nodes). We adapted the TSPLIB instances to create 27 large instances. We used the labeled graph generated from each TSPLIB instance and created three instances from each labeled graph by varying the budget value. Specifically, we used budget values of 1.3, 1.6, and 1.9 times MSTCOST(L).

6.2. Results for CCMLST instances

We first discuss our results on the 191 CCMLST instances. These results are described in Tables 2–6 for the small instances, Tables 7–9 for the medium-sized instances, and Table 10 for the large instances.

On the 104 small instances, the VNS method found the optimal solution in all cases (recall that the optimal solution is known for all of these instances), while LS1, LS2, and GA generated the optimal solution 100, 100, and 102 times, respectively, out of the 104 instances. The average running time of the VNS method was 0.05 s, while LS1, LS2, and GA took 0.06, 0.07, and 0.18 s, respectively. For the small and medium-sized instances, the termination criterion used was 10 iterations without an improvement. On the 60 medium-sized instances, the VNS method generated the best solution in 59 out of the 60 instances, while LS1, LS2, and GA generated the best solution 46, 50, and 50 times, respectively. The average running time of the VNS method was 20.59 s, while LS1, LS2, and GA took 63.52, 68.75, and 62.19 s, respectively. This indicates that the VNS method finds better solutions in a greater number of instances, and much more rapidly, than any of the three comparative procedures. For the large instances, the termination criterion used was a specified running time, which is shown in the tables with the computational results. On the 27 large instances, the VNS method generated the best solution in 26 out of the 27 instances, while LS1, LS2, and GA generated the best solution 19, 18, and 14 times, respectively. The average running time of the VNS method was 667 s, while LS1, LS2, and GA took 22,816, 25,814, and 11,477 s, respectively. This clearly shows the superiority of the VNS method, especially as the instances get larger. It finds the best solution in a greater number of instances (and for almost all instances) an order of magnitude faster than any of the three comparative procedures.



Table 3. VNS, GA, LS1, and LS2 for the CCMLST problem on 20 nodes.

Columns: #Labels, Cost restriction, Exact method (Labels, Time), LS1 (Labels, Gap from optimal (%), Time), LS2 (Labels, Gap from optimal (%), Time), GA (Labels, Gap from optimal (%), Time), VNS (Labels, Gap from optimal (%), Time).

10 7500 2 3.33 2 0 0.01 2 0 0.00 2 0 0.02 2 0 0.02

7000 2 2.05 2 0 0.01 2 0 0.01 2 0 0.02 2 0 0.00

6500 2 3.78 2 0 0.01 2 0 0.01 2 0 0.00 2 0 0.02

6000 2 1.72 2 0 0.01 2 0 0.00 2 0 0.00 2 0 0.02

5500 2 2.70 2 0 0.01 2 0 0.01 2 0 0.00 2 0 0.00

5000 3 2.80 3 0 0.01 3 0 0.01 3 0 0.02 3 0 0.00

4500 3 6.72 3 0 0.01 3 0 0.01 3 0 0.00 3 0 0.02

4000 4 29.02 4 0 0.01 4 0 0.01 4 0 0.02 4 0 0.02

3500 5 7.08 5 0 0.01 5 0 0.01 5 0 0.02 5 0 0.02

3050 8 0.84 8 0 0.01 8 0 0.01 8 0 0.03 8 0 0.02

20 7500 2 0.69 2 0 0.02 2 0 0.02 2 0 0.02 2 0 0.02

7000 2 0.34 2 0 0.02 2 0 0.02 2 0 0.02 2 0 0.00

6500 2 1.14 2 0 0.02 2 0 0.02 2 0 0.00 2 0 0.00

6000 3 2.98 3 0 0.02 3 0 0.02 3 0 0.02 3 0 0.00

5500 3 15.98 3 0 0.00 3 0 0.02 3 0 0.02 3 0 0.02

5000 4 4.30 4 0 0.02 4 0 0.02 4 0 0.03 4 0 0.02

4500 5 18.16 5 0 0.02 5 0 0.02 5 0 0.03 5 0 0.02

4000 6 12.28 6 0 0.02 6 0 0.02 6 0 0.02 6 0 0.02

3500 8 3.83 8 0 0.02 8 0 0.02 8 0 0.03 8 0 0.02

3050 11 0.88 11 0 0.02 11 0 0.02 11 0 0.06 11 0 0.02

Table 4. VNS, GA, LS1, and LS2 for the CCMLST problem on 30 nodes.

Columns: #Labels, Cost restriction, Exact method (Labels, Time), LS1 (Labels, Gap from optimal (%), Time), LS2 (Labels, Gap from optimal (%), Time), GA (Labels, Gap from optimal (%), Time), VNS (Labels, Gap from optimal (%), Time).

10 8000 2 31.61 2 0 0.01 2 0 0.01 2 0 0.03 2 0 0.02

7500 2 77.45 2 0 0.01 2 0 0.01 2 0 0.02 2 0 0.02

7000 2 20.59 2 0 0.01 2 0 0.01 2 0 0.02 2 0 0.02

6500 2 41.84 2 0 0.01 2 0 0.01 2 0 0.03 2 0 0.02

6000 3 45.59 3 0 0.01 3 0 0.01 3 0 0.03 3 0 0.02

5500 3 134.80 3 0 0.01 3 0 0.01 3 0 0.03 3 0 0.02

5000 3 56.22 3 0 0.01 3 0 0.01 3 0 0.03 3 0 0.02

4500 4 33.39 4 0 0.02 4 0 0.01 4 0 0.03 4 0 0.02

4000 5 15.42 5 0 0.01 5 0 0.01 5 0 0.03 5 0 0.03

3500 8 26.23 8 0 0.02 8 0 0.01 8 0 0.06 8 0 0.02

20 8000 3 69.92 3 0 0.02 3 0 0.02 3 0 0.08 3 0 0.02

7500 3 132.47 3 0 0.02 3 0 0.02 3 0 0.08 3 0 0.02

7000 3 9.69 3 0 0.02 3 0 0.03 3 0 0.08 3 0 0.02

6500 4 31.55 4 0 0.03 4 0 0.02 4 0 0.09 4 0 0.02

6000 4 76.02 4 0 0.02 4 0 0.02 4 0 0.08 4 0 0.03

5500 5 111.28 5 0 0.03 5 0 0.03 5 0 0.09 5 0 0.03

5000 5 74.59 5 0 0.03 5 0 0.03 5 0 0.09 5 0 0.04

4500 6 46.27 6 0 0.03 6 0 0.03 6 0 0.09 6 0 0.03

4000 8 22.90 8 0 0.03 8 0 0.03 8 0 0.13 8 0 0.04

3500 14 177.91 14 0 0.04 14 0 0.04 14 0 0.17 14 0 0.03

30 8000 3 19.12 3 0 0.03 3 0 0.03 3 0 0.06 3 0 0.05

7500 4 33.04 4 0 0.03 4 0 0.03 4 0 0.08 4 0 0.03

7000 4 62.60 4 0 0.05 4 0 0.03 4 0 0.08 4 0 0.05

6500 4 16.01 4 0 0.03 4 0 0.05 4 0 0.08 4 0 0.05

6000 5 4.56 5 0 0.03 5 0 0.05 5 0 0.08 5 0 0.05

5500 6 45.34 6 0 0.06 6 0 0.06 6 0 0.13 6 0 0.05

5000 7 82.30 7 0 0.05 7 0 0.05 7 0 0.13 7 0 0.06

4500 8 95.27 8 0 0.06 8 0 0.08 8 0 0.13 8 0 0.06

4000 10 55.84 11 10 0.09 11 10 0.08 11 10 0.13 10 0 0.06

3500 15 385.09 15 0 0.09 15 0 0.09 15 0 0.30 15 0 0.09


6.3. Results for LCMST instances

We now discuss our results on the 66 LCMST instances. These results are described in Table 11 for the small instances, Table 12 for the medium-sized instances, and Table 13 for the large instances.

On the 37 small instances, the VNS method found the optimal solution in all 37 instances (recall that the optimal solution is known for all of these instances), while LS1, LS2, and GA generated the best solution 34, 33, and 29 times, respectively, out of the 37 instances. The average running time of the VNS method was 0.13 s while LS1,


Table 5. VNS, GA, LS1, and LS2 for the CCMLST problem on 40 nodes.

Columns: #Labels, Cost restriction, Exact method (Labels, Time), LS1 (Labels, Gap from optimal (%), Time), LS2 (Labels, Gap from optimal (%), Time), GA (Labels, Gap from optimal (%), Time), VNS (Labels, Gap from optimal (%), Time).

10 9000 2 6125.77 2 0 0.01 2 0 0.01 2 0 0.03 2 0 0.03

8500 2 24,107.41 2 0 0.01 2 0 0.01 2 0 0.03 2 0 0.02

8000 2 74,551.36 2 0 0.01 2 0 0.01 2 0 0.03 2 0 0.03

7500 3 12,550.08 3 0 0.01 3 0 0.01 3 0 0.06 3 0 0.02

7000 3 1734.30 3 0 0.01 3 0 0.01 3 0 0.05 3 0 0.03

6500 3 1049.59 3 0 0.02 3 0 0.01 3 0 0.05 3 0 0.03

20 9000 3 3344.61 3 0 0.03 3 0 0.04 3 0 0.09 3 0 0.03

8500 3 3709.23 3 0 0.03 3 0 0.04 3 0 0.09 3 0 0.03

8000 4 1064.82 4 0 0.04 4 0 0.04 4 0 0.09 4 0 0.05

7500 4 1475.95 4 0 0.04 4 0 0.04 4 0 0.09 4 0 0.05

40 9000 5 1514.57 5 0 0.12 5 0 0.11 5 0 0.19 5 0 0.09

8500 5 2223.00 5 0 0.14 5 0 0.11 5 0 0.20 5 0 0.09

8000 6 17,008.94 6 0 0.14 6 0 0.12 6 0 0.20 6 0 0.09

7500 6 541.14 7 17 0.16 6 0 0.12 6 0 0.20 6 0 0.11

7000 7 424.28 7 0 0.16 7 0 0.16 7 0 0.27 7 0 0.13

6500 8 1713.71 8 0 0.20 8 0 0.17 8 0 0.31 8 0 0.14

6000 9 2519.92 9 0 0.20 9 0 0.20 9 0 0.31 9 0 0.16

5500 11 1437.73 11 0 0.22 11 0 0.22 11 0 0.41 11 0 0.16

5000 14 12,402.04 14 0 0.33 14 0 0.27 14 0 0.39 14 0 0.19

Table 6. VNS, GA, LS1, and LS2 for the CCMLST problem on 50 nodes.

Columns: #Labels, Cost restriction, Exact method (Labels, Time), LS1 (Labels, Gap from optimal (%), Time), LS2 (Labels, Gap from optimal (%), Time), GA (Labels, Gap from optimal (%), Time), VNS (Labels, Gap from optimal (%), Time).

10 9500 2 9798.88 2 0 0.02 2 0 0.02 2 0 0.6 2 0 0.02

9000 2 8972.91 2 0 0.02 2 0 0.02 2 0 0.6 2 0 0.02

8500 2 8984.60 2 0 0.02 2 0 0.02 2 0 0.6 2 0 0.02

8000 3 1443.76 3 0 0.02 3 0 0.02 3 0 0.6 3 0 0.03

7500 3 24,729.75 3 0 0.02 3 0 0.02 3 0 0.6 3 0 0.03

7000 3 28,404.61 3 0 0.02 3 0 0.02 3 0 0.8 3 0 0.03

6500 4 46,791.06 4 0 0.02 4 0 0.02 4 0 0.8 4 0 0.05

6000 4 6459.06 4 0 0.02 4 0 0.02 4 0 0.9 4 0 0.05

20 9500 3 9216.76 3 0 0.07 3 0 0.06 3 0 0.16 3 0 0.05

9000 3 1880.70 3 0 0.05 3 0 0.06 3 0 0.17 3 0 0.06

8500 4 7873.89 4 0 0.06 4 0 0.07 4 0 0.19 4 0 0.08

8000 4 12,730.25 4 0 0.06 4 0 0.07 4 0 0.19 4 0 0.08

7500 5 7572.67 5 0 0.12 5 0 0.07 5 0 0.19 5 0 0.08

7000 5 49,912.15 5 0 0.08 6 20 0.07 5 0 0.22 5 0 0.08

6500 6 22,651.00 6 0 0.07 6 0 0.09 6 0 0.28 6 0 0.09

30 9500 5 11,111.32 5 0 0.12 5 0 0.14 5 0 0.31 5 0 0.09

9000 5 12,638.70 5 0 0.12 5 0 0.14 5 0 0.30 5 0 0.09

8500 5 15,358.56 5 0 0.12 5 0 0.14 5 0 0.30 5 0 0.13

8000 6 36,336.06 6 0 0.13 6 0 0.15 6 0 0.30 6 0 0.13

50 9000 7 1322.60 7 0 0.33 7 0 0.37 7 0 0.52 7 0 0.22

8500 8 10,455.06 8 0 0.31 8 0 0.39 8 0 0.59 8 0 0.22

8000 8 62,948.11 9 13 0.31 9 13 0.44 8 0 0.61 8 0 0.27

7500 9 30,317.90 9 0 0.33 9 0 0.42 9 0 0.55 9 0 0.25

7000 11 97,384.13 11 0 0.50 11 0 0.53 11 0 0.98 11 0 0.27

6500 12 119,300.76 13 8 0.53 13 8 0.58 13 8 0.97 12 0 0.31


LS2, and GA took 0.11, 0.11, and 0.05 s, respectively. For the small and medium-sized instances, the termination criterion used was 10 iterations without an improvement. On the 11 medium-sized instances, the VNS method generated the best solution in all 11 instances, while LS1, LS2, and GA generated the best solution 7, 8, and 3 times, respectively, out of the 11 instances. The average running time of the VNS method was 9.82 s while LS1, LS2, and GA took 35.51, 38.05, and 8.03 s, respectively. This indicates that the VNS method finds better solutions in a much greater number of instances than any of the three comparative procedures. This advantage is clear on the


Table 7. VNS, GA, LS1, and LS2 for the CCMLST problem on 100 nodes.

Columns: #Labels, Cost restriction, LS1 (Labels, Time), LS2 (Labels, Time), GA (Labels, Time), VNS (Labels, Time).

50 11,500 11 2.13 11 2.33 11 3.61 11 1.05

11,000 12 2.28 12 2.39 12 3.61 12 1.09

10,500 13 3.32 13 3.43 13 4.53 13 1.22

10,000 14 3.48 14 3.29 14 4.31 14 1.25

9500 16 3.60 16 3.98 15 4.17 15 1.36

9000 18 3.41 17 3.84 17 5.22 17 1.47

8500 19 4.23 20 4.56 19 5.88 19 1.61

8000 22 3.82 22 4.35 23 6.89 22 2.70

7500 26 3.63 27 4.71 27 9.17 26 2.61

7000 36 3.78 36 3.65 36 9.13 36 1.36

100 11,500 16 7.22 16 8.94 16 7.95 16 2.73

11,000 17 8.28 17 9.98 17 9.95 17 3.20

10,500 20 8.58 19 10.33 19 9.86 19 3.13

10,000 21 10.41 21 11.01 21 10.31 21 3.13

9500 23 9.67 23 11.36 23 12.09 23 3.59

9000 25 12.68 25 11.68 25 11.86 25 3.83

8500 28 14.09 28 12.01 28 11.42 28 4.39

8000 32 15.07 32 12.95 33 15.81 32 4.84

7500 39 17.44 38 17.66 39 16.09 38 5.23

7000 50 18.72 50 18.94 50 17.42 50 5.86

The best solutions are in bold.

Table 8. VNS, GA, LS1, and LS2 for the CCMLST problem on 150 nodes.

Columns: #Labels, Cost restriction, LS1 (Labels, Time), LS2 (Labels, Time), GA (Labels, Time), VNS (Labels, Time).

75 13,000 17 14.96 18 12.83 17 17.00 17 4.78

12,500 18 14.58 18 12.51 19 21.66 19 4.91

12,000 20 23.81 20 17.57 20 21.30 20 5.44

11,500 22 24.91 22 18.00 22 22.02 22 6.06

11,000 25 22.48 25 19.98 24 22.64 24 10.36

10,500 27 22.62 27 20.55 27 35.56 27 6.55

10,000 31 18.92 31 20.08 30 35.53 30 7.84

9500 35 25.39 35 28.55 35 39.41 35 7.38

9000 41 25.09 41 22.48 41 35.78 41 7.61

8500 55 18.10 55 19.56 55 59.47 55 5.94

150 13,000 28 48.99 28 50.86 28 36.38 28 12.80

12,500 30 52.69 30 61.85 30 36.41 30 13.92

12,000 32 59.57 32 51.53 32 36.17 32 16.52

11,500 35 62.60 35 53.18 35 38.17 35 16.92

11,000 38 62.58 38 66.05 38 42.80 38 18.20

10,500 42 75.69 42 85.19 43 43.36 41 37.38

10,000 47 73.85 46 86.46 46 45.80 46 33.22

9500 53 75.80 53 90.01 53 51.80 53 23.42

9000 62 78.06 62 80.87 62 58.53 62 25.27

8500 79 93.13 79 98.56 79 79.47 79 26.19

The best solutions are in bold.


medium-sized instances. For the large instances, the termination criterion used was a specified running time, which is shown in the tables with the computational results. On the 18 large instances, the VNS method generated the best solution in 12 out of the 18 instances, while LS1, LS2, and GA generated the best solution 2, 4, and 2 times, respectively. The average running time of the VNS method was 1089 s, while LS1, LS2, and GA took 3418, 3371, and 1617 s, respectively. This suggests that the VNS method is the best method, in that it rapidly finds the best solutions for the LCMST problem most often. However, there seem to be a fair number of instances (about a third) where an alternate heuristic like LS1, LS2, or GA obtains a superior solution. On the whole, the VNS method is clearly the best amongst these four heuristics.

In order to demonstrate in a systematic way that the VNS method outperforms LS1, LS2, and GA, we applied the Wilcoxon signed rank test, a well-known nonparametric statistical test, to compare VNS against each of the other three heuristics (alternatively, we might have applied the Friedman test). In each comparison, we reject the null hypothesis that both heuristics are equivalent in performance (the alternative hypothesis is that VNS is superior).

Based on this analysis, we infer that VNS outperforms LS1 at a significance level of αLS1 = 0.00003167, VNS outperforms LS2


Table 9. VNS, GA, LS1, and LS2 for the CCMLST problem on 200 nodes.

Columns: #Labels, Cost restriction, LS1 (Labels, Time), LS2 (Labels, Time), GA (Labels, Time), VNS (Labels, Time).

100 14,000 22 37.67 22 46.45 22 50.78 22 13.05

13,500 24 39.91 24 46.96 24 45.77 24 14.41

13,000 26 55.61 26 63.99 26 59.70 26 15.33

12,500 29 59.08 29 60.53 29 64.97 29 16.34

12,000 32 66.69 32 69.67 32 62.81 32 17.47

11,500 36 54.23 35 60.70 35 61.25 35 19.44

11,000 40 70.93 40 80.11 40 83.45 40 20.30

10,500 46 72.70 46 75.86 46 123.86 46 39.14

10,000 55 81.17 55 87.80 55 138.72 55 22.73

9500 70 73.51 70 65.46 70 194.38 70 18.88

200 14,000 35 188.07 35 179.56 35 136.59 35 37.94

13,500 38 174.69 37 198.79 37 134.84 37 41.97

13,000 40 184.59 40 201.66 41 134.27 40 64.90

12,500 44 171.44 44 193.66 43 130.80 43 47.70

12,000 48 172.10 48 211.57 49 163.52 48 51.02

11,500 54 230.51 53 246.54 54 169.02 53 56.44

11,000 60 215.70 60 236.61 60 178.59 59 133.78

10,500 68 248.04 68 312.48 67 214.44 67 69.19

10,000 80 280.02 80 300.20 80 290.39 80 73.61

9500 101 284.59 101 308.48 101 334.92 101 115.42

The best solutions are in bold.

Table 10. VNS, GA, LS1, and LS2 for the CCMLST problem on large datasets.

Columns: #Nodes, #Labels, Cost restriction, LS1 (Labels, Time), LS2 (Labels, Time), GA (Labels, Time), VNS (Labels, Time), Max time.

532, 266 98,800 70 5182 70 4890 70 3131 70 111 300

121,600 44 3252 44 3505 44 2500 44 103 300

144,400 32 2468 32 3013 32 2157 32 81 300

574, 287 42,250 86 6830 86 7103 86 4234 85 278 300

52,000 57 4999 57 51,333 57 2945 56 190 300

61,750 42 3796 42 3753 42 2600 42 86 300

575, 287 8190 79 6697 79 6575 79 4664 78 233 300

10,080 47 4371 47 4585 47 2971 47 133 300

11,970 33 3272 33 3897 33 2695 33 70 300

654, 327 39,000 54 6839 55 7332 55 4279 54 203 700

48,000 36 5422 36 5622 36 3494 35 257 700

57,000 28 4373 28 4542 28 3117 27 199 700

657, 328 55,900 95 15,237 95 16,660 95 8763 95 603 700

68,800 61 10,451 61 10,294 61 6158 61 249 700

81,700 44 8139 44 8458 44 4789 44 183 700

666, 333 3510 75 9763 75 10,329 76 6466 75 477 700

4320 49 6815 49 6894 49 4089 48 647 700

5130 35 5208 35 5340 35 3515 35 232 700

724, 362 49,400 102 18,755 102 24,650 102 13,053 101 693 1000

60,800 65 15,802 65 17,318 66 8995 65 478 1000

72,200 47 12,550 47 12,856 47 7522 47 259 1000

783, 391 10,660 111 26,056 111 30,572 111 16,231 111 975 1000

13,120 68 19,425 69 19,754 69 10,605 69 492 1000

15,580 48 17,509 48 16,581 49 9539 48 358 1000

1000, 500 20,800,000 141 156,482 140 188,704 141 76,294 140 2923 4000

25,600,000 86 134,765 86 119,253 86 49,080 86 3920 4000

30,400,000 62 101,576 62 103,167 62 45,996 62 3587 4000

The best solutions are in bold.


at a significance level of αLS2 = 0.0001078, and VNS outperforms GA at a significance level of αGA = 0.00001335. Thus, VNS outperforms LS1, LS2, and GA at a significance level of 1 − (1 − αLS1)(1 − αLS2)(1 − αGA) = 0.00015.
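For illustration, the signed-rank statistic underlying the test can be computed as below. This toy routine is ours, not the authors' computation; in practice a statistics package such as `scipy.stats.wilcoxon` would be used to obtain the p-values:

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistics (sketch): drop zero differences,
    rank the remaining |differences| (average ranks for ties), and sum
    the ranks of the positive and of the negative differences. A very
    lopsided (W+, W-) pair signals a systematic gap between the methods."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and \
                abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average 1-based rank of a tie group
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus
```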

7. Conclusion

In this paper, we considered both the CCMLST and LCMST problems. We developed a VNS method for both the CCMLST


Table 11. VNS, GA, LS1, and LS2 for the LCMST problem on small datasets.

Columns: #Nodes, #Labels, K, MIP value, LS1 (Gap, Time), LS2 (Gap, Time), GA (Gap, Time), VNS (Gap, Time).

20, 20 2 6491.35 0.00 0.02 0.00 0.01 0.00 0.00 0.00 0.01

3 5013.51 0.00 0.04 0.00 0.04 0.00 0.00 0.00 0.02

4 4534.67 0.00 0.05 0.00 0.05 0.00 0.01 0.00 0.02

5 4142.57 0.00 0.05 0.00 0.04 0.00 0.01 0.00 0.02

6 3846.50 0.00 0.05 0.00 0.04 0.00 0.02 0.00 0.02

7 3598.05 0.00 0.05 0.00 0.05 0.00 0.02 0.00 0.03

8 3436.57 0.00 0.05 0.00 0.05 0.00 0.02 0.00 0.03

9 3281.05 0.00 0.04 0.00 0.05 0.00 0.02 0.00 0.03

10 3152.05 0.00 0.05 0.00 0.04 0.00 0.02 0.00 0.03

11 3034.01 0.00 0.05 0.00 0.04 0.00 0.03 0.00 0.03

30, 30 3 7901.81 0.00 0.04 0.00 0.05 0.00 0.01 0.00 0.05

4 6431.58 0.00 0.06 0.00 0.05 0.00 0.02 0.00 0.05

5 5597.36 0.00 0.07 0.00 0.06 0.00 0.03 0.00 0.06

6 5106.94 0.00 0.11 0.00 0.07 0.00 0.04 0.00 0.06

7 4751.00 0.00 0.12 0.46 0.07 0.00 0.05 0.00 0.08

8 4473.11 0.00 0.07 0.00 0.07 0.00 0.05 0.00 0.09

9 4196.71 0.00 0.08 0.00 0.11 0.00 0.05 0.00 0.10

10 3980.99 0.52 0.09 0.52 0.16 0.00 0.06 0.00 0.12

11 3827.23 0.00 0.09 0.00 0.13 1.41 0.07 0.00 0.14

12 3702.08 0.00 0.09 0.23 0.09 0.00 0.07 0.00 0.13

13 3585.42 0.00 0.09 0.00 0.10 0.00 0.07 0.00 0.14

40, 40 3 11,578.61 0.00 0.05 0.00 0.06 0.00 0.02 0.00 0.11

4 9265.42 1.72 0.06 2.56 0.07 1.72 0.03 0.00 0.09

5 8091.45 0.00 0.09 0.00 0.10 0.75 0.03 0.00 0.15

6 7167.27 0.00 0.11 0.00 0.11 2.53 0.07 0.00 0.20

7 6653.23 0.13 0.12 0.00 0.13 0.13 0.05 0.00 0.23

8 6221.63 0.00 0.29 0.00 0.20 0.00 0.09 0.00 0.28

9 5833.39 0.00 0.26 0.00 0.22 0.48 0.10 0.00 0.25

10 5547.08 0.00 0.16 0.00 0.24 0.00 0.10 0.00 0.33

11 5315.92 0.00 0.18 0.00 0.20 0.00 0.15 0.00 0.31

12 5164.14 0.00 0.35 0.00 0.21 0.00 0.09 0.00 0.31

50, 50 3 14,857.09 0.00 0.07 0.00 0.08 3.08 0.02 0.00 0.14

4 12,040.89 0.00 0.15 0.00 0.12 0.00 0.04 0.00 0.14

5 10,183.95 0.00 0.21 0.00 0.26 0.00 0.12 0.00 0.28

6 9343.69 0.00 0.25 0.00 0.34 0.00 0.12 0.00 0.25

7 8594.36 0.00 0.30 0.00 0.31 1.51 0.12 0.00 0.25

8 7965.52 0.00 0.24 0.00 0.34 0.00 0.20 0.00 0.30

Table 12. VNS, GA, LS1, and LS2 for the LCMST problem on medium-sized datasets.

Columns: #Nodes, #Labels, Labels, LS1 (Cost, Time), LS2 (Cost, Time), GA (Cost, Time), VNS (Cost, Time).

100, 50 20 8308.68 3.92 8308.68 3.26 8335.75 2.94 8308.68 1.54

100, 100 20 10,055.85 5.89 10,055.85 6.35 10,138.27 1.70 10,055.85 1.24

100, 100 40 7344.72 11.36 7335.61 12.70 7335.61 2.68 7335.61 2.44

150, 75 20 11,882.62 7.47 11,846.80 13.95 11,854.17 4.49 11,846.80 12.61

150, 75 40 9046.71 17.42 9046.71 18.57 9047.22 12.63 9046.71 2.83

150, 150 20 15,427.54 25.33 15,398.42 19.88 15,688.78 3.82 15,398.42 17.73

150, 150 40 10,618.58 41.67 10,627.36 55.38 10,728.93 7.04 10,618.58 18.45

200, 100 20 14,365.95 27.19 14,365.95 17.25 14,382.65 10.64 14,365.95 14.21

200, 100 40 10,970.94 46.23 10,970.94 49.26 10,970.94 12.84 10,970.93 5.80

200, 200 20 18,951.05 44.61 18,959.37 89.58 18,900.25 13.06 18,900.25 9.50

200, 200 40 12,931.46 159.56 12,941.85 132.40 12,987.29 16.49 12,931.46 21.64

The best solutions are in bold.


and LCMST problems. We compared the solutions obtained by the VNS method to optimal solutions for small instances and to solutions obtained by three heuristics, LS1, LS2, and GA, that were previously proposed for the LCMST problem (but can be easily adapted to the CCMLST problem as well). We generated small and medium-sized instances in a similar fashion to Xiong

et al. [15], and generated a set of large instances from the TSPLIB dataset.

The VNS method was clearly the best heuristic for the CCMLST instances. Of the 191 instances, it provided the best solution in 189 instances. Furthermore, for the large instances, its running time is an order of magnitude faster than those of the three other heuristics. For


Table 13. VNS, GA, LS1, and LS2 for the LCMST problem on large datasets.

Columns: #Nodes, #Labels, Labels, LS1 (Cost, Time), LS2 (Cost, Time), GA (Cost, Time), VNS (Cost, Time), Max time.

532, 266 75 95,671.13 397 95,617.19 558 95,808.91 383 95,562.78 236 500

150 78,418.55 703 78,392.49 1011 78,400.90 627 78,400.90 363 500

574, 287 75 44,799.25 451 44,650.40 485 44,682.80 415 44,633.91 377 500

150 34,521.44 850 34,501.63 944 34,502.26 753 34,457.48 442 500

575, 287 75 8319.41 654 8309.18 706 8335.96 501 8307.45 335 500

150 6683.95 892 6683.48 996 6683.95 822 6683.48 370 500

654, 327 75 34,751.11 861 34,809.59 1117 34,795.62 551 34,722.55 451 1200

150 30,107.18 1325 30,092.77 1901 30,103.64 1004 30,090.78 1108 1200

657, 328 75 61,970.46 2001 61,978.58 1290 61,977.86 684 61,941.07 502 1200

150 47,355.97 2430 47,351.50 2624 47,413.26 1443 47,397.51 1185 1200

666, 333 75 3509.17 706 3500.22 981 3505.82 663 3496.59 414 1200

150 2821.43 1441 2821.70 2043 2820.87 1920 2820.63 773 1200

724, 362 75 56,600.13 1374 56,377.24 1435 56,245.65 973 56,510.83 700 1200

150 42,874.35 4869 42,855.57 3694 42,847.26 2273 42,835.26 1060 1200

783, 391 75 12,560.76 2839 12,539.51 2466 12,529.96 1039 12,542.81 875 1200

150 9509.07 2766 9495.57 3748 9502.96 2024 9495.57 1180 1200

1000, 500 75 27,246,908.00 9100 27,280,794.00 13,092 27,428,732.00 4955 27,257,080.00 3333 7200

150 20,196,378.00 27,869 20,216,576.00 21,587 20,258,258.00 8085 20,229,668.00 5903 7200

The best solutions are in bold.


all the 104 instances where the optimal solution was known, the VNS method obtained the optimal solution. On the LCMST instances, the VNS method was also by far the best heuristic. It provided the best solution in all of the 48 small and medium-sized instances. On the large instances, it provided the best solution in 12 out of the 18 instances. For all of the 37 instances where the optimal solution was known, the VNS method obtained the optimal solution. It also ran significantly faster than the three heuristics LS1, LS2, and GA. Furthermore, our statistical analysis revealed that VNS clearly outperforms the three other heuristics at a significance level of 0.00015.
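As a hedged illustration of how such a significance level can arise from paired per-instance comparisons, the sketch below uses a one-sided sign test over paired costs (ties discarded). This is not necessarily the test used in the analysis above; it is a minimal, standard-library-only example of the general approach.

```python
from math import comb

def sign_test_p(costs_a, costs_b):
    """One-sided sign test: p-value for 'method A beats method B'.

    Each instance is a paired observation; ties are discarded.
    Under the null hypothesis, A wins each non-tied pair with
    probability 1/2, so the win count is Binomial(n, 0.5).
    """
    wins = sum(a < b for a, b in zip(costs_a, costs_b))
    losses = sum(a > b for a, b in zip(costs_a, costs_b))
    n = wins + losses
    # Upper tail: P(X >= wins) for X ~ Binomial(n, 0.5)
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# Hypothetical data: method A wins 14 of 15 non-tied instances.
p = sign_test_p([1.0] * 14 + [3.0], [2.0] * 15)  # p ~ 0.00049
```

Even a modest number of near-unanimous per-instance wins drives the p-value well below conventional thresholds, which is consistent with the very small significance level reported above.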

Acknowledgements

This work was conducted while the first two authors were visiting the University of Maryland, and we thank DEIS, University of Bologna for the financial support of their stay at this university. We thank Dr. Yupei Xiong for the use of his software for the LCMST problem. We also thank the referees.

References

[1] Brüggemann T, Monnot J, Woeginger GJ. Local search for the minimum label spanning tree problem with bounded color classes. Operations Research Letters 2003;31:195–201.

[2] Chang R-S, Leu S-J. The minimum labeling spanning trees. Information Processing Letters 1997;63(5):277–82.

[3] Cerulli R, Fink A, Gentili M, Voß S. Metaheuristics comparison for the minimum labelling spanning tree problem. In: Golden B, Raghavan S, Wasil E, editors. The Next Wave in Computing, Optimization, and Decision Technologies. Berlin: Springer-Verlag; 2008. p. 93–106.

[4] Consoli S, Darby-Dowman K, Mladenovic N, Moreno-Perez JA. Greedy randomized adaptive search and variable neighbourhood search for the minimum labelling spanning tree problem. European Journal of Operational Research 2009;196(2):440–9.

[5] Eiselt HA, Sandblom C-L. Integer Programming and Network Models. Springer; 2000.

[6] Krumke SO, Wirth HC. On the minimum label spanning tree problem. Information Processing Letters 1998;66(2):81–5.

[7] Kruskal JB. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society 1956;7(1):48–50.

[8] Magnanti T, Wolsey L. Optimal trees. In: Ball M, Magnanti T, Monma C, Nemhauser G, editors. Network Models. Handbooks in Operations Research and Management Science, Vol. 7. Amsterdam: North-Holland; 1996. p. 503–615.

[9] Mladenovic N, Hansen P. Variable neighborhood search. Computers & Operations Research 1997;24:1097–100.

[10] Papadimitriou CH, Steiglitz K. Combinatorial Optimization: Algorithms and Complexity. New Jersey: Prentice-Hall; 1982.

[11] Prim RC. Shortest connection networks and some generalizations. Bell System Technical Journal 1957;36:1389–401.

[12] Reinelt G. TSPLIB, a traveling salesman problem library. ORSA Journal on Computing 1991;3:376–84.

[13] Tarjan R. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1972;1(2):215–25.

[14] Wan Y, Chen G, Xu Y. A note on the minimum label spanning tree. Information Processing Letters 2002;84:99–101.

[15] Xiong Y, Golden B, Wasil E, Chen S. The label-constrained minimum spanning tree problem. In: Raghavan S, Golden B, Wasil E, editors. Telecommunications Modeling, Policy, and Technology. Berlin: Springer; 2008. p. 39–58.

[16] Xiong Y, Golden B, Wasil E. A one-parameter genetic algorithm for the minimum labeling spanning tree problem. IEEE Transactions on Evolutionary Computation 2005;9(1):55–60.

[17] Xiong Y, Golden B, Wasil E. Worst-case behavior of the MVCA heuristic for the minimum labeling spanning tree problem. Operations Research Letters 2005;33(1):77–80.

[18] Xiong Y, Golden B, Wasil E. Improved heuristics for the minimum label spanning tree problem. IEEE Transactions on Evolutionary Computation 2006;10(6):700–3.