
A Real Coded Genetic Algorithm for Data

Partitioning and Scheduling in Networks with Arbitrary Processor Release Time

S. Suresh1, V. Mani1, S.N. Omkar1, and H.J. Kim2

1 Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India

{suresh99, mani, omkar}@aero.iisc.ernet.in
2 Department of Control and Instrumentation Engineering, Kangwon National University, Chunchon 200-701, Korea

{khj}@kangwon.ac.kr

Abstract. The problem of scheduling divisible loads in distributed computing systems, in the presence of processor release time, is considered. The objective is to find the optimal sequence of load distribution and the optimal load fractions assigned to each processor in the system such that the processing time of the entire processing load is a minimum. This is a difficult combinatorial optimization problem, and hence a genetic algorithm approach is presented for its solution.

1 Introduction

One of the primary issues in the area of parallel and distributed computing is how to partition and schedule a divisible processing load among the available processors in the network/system such that the processing time of the entire load is a minimum. In the case of computation-intensive tasks, the divisible processing load consists of a large number of data points that must be processed by the same algorithms/programs resident in all the processors in the network. Partitioning and scheduling computation-intensive tasks, incorporating the communication delays (in sending the load fractions of the data to the processors), is commonly referred to as divisible load scheduling. The objective is to find the optimal sequence of load distribution and the optimal load fractions assigned to each processor in the system such that the processing time of the entire processing load is a minimum. Research on the problem of scheduling divisible loads in distributed computing systems started in 1988 [1] and has generated considerable interest among researchers; many results are available in [2,3]. Recent results in this area are available in [4].

The divisible load scheduling problem becomes more difficult when practical issues such as processor release time, finite buffer conditions and start-up time are considered. It is shown in [5] and [6] that this problem is NP-hard when buffer constraints and start-up delays in communication are included. A study on the computational complexity of the divisible load scheduling problem is presented

T. Srikanthan et al. (Eds.): ACSAC 2005, LNCS 3740, pp. 529-539, 2005.
© Springer-Verlag Berlin Heidelberg 2005


in [6]. The effect of communication latencies (start-up time delays in communication) is studied in [7,8,9,10,11] using single-round and multi-round load distribution strategies. The problem of scheduling divisible loads in the presence of processor release times is considered in [12,13]. In these studies, it is assumed that the processors in the network are busy with some other computation process. The time at which a processor is ready to start the computation of its load fraction (of the divisible load) is called the "release time" of the processor.

The release times of the processors in the network affect the partitioning and scheduling of the divisible load. In [12], scheduling divisible loads in a bus network (homogeneous processors) with identical and non-identical release times is considered. Heuristic scheduling algorithms for identical and non-identical release times are derived there, based on a multi-installment scheduling technique that uses the release times and the communication time of the complete load. In [13], heuristic strategies for identical and non-identical release times are presented for divisible load scheduling in a linear network. In these studies, the problem of obtaining the optimal sequence of load distribution by the root processor is not considered.

In this paper, the divisible load scheduling problem with arbitrary processor release times in a single-level tree network is considered. When the processors in the network have arbitrary release times, it is difficult to obtain a closed-form expression for the optimal sizes of the load fractions. Hence, for a network with arbitrary processor release times, there are two important problems: (i) for a given sequence of load distribution, how to obtain the load fractions assigned to the processors such that the processing time of the entire processing load is a minimum, and (ii) for a given network, how to obtain the optimal sequence of load distribution.

In this paper, Problem (i), obtaining the processing time and the load fractions assigned to the processors for a given sequence of load distribution, is solved using a real coded hybrid genetic algorithm. Problem (ii), obtaining the optimal sequence of load distribution, is a combinatorial optimization problem. For a single-level tree network with m child processors, m! sequences of load distribution are possible. The optimal sequence of load distribution is the sequence for which the processing time is a minimum. We use a genetic algorithm to obtain the optimal sequence of load distribution; it uses the results of the real-coded genetic algorithm used to solve Problem (i). To the best of our knowledge, this is the first attempt to solve this problem of scheduling divisible loads with processor release times.

2 Definitions and Problem Formulation

Consider a single-level tree network with (m + 1) processors as shown in Figure 1. The child processors in the network, denoted p1, p2, · · ·, pm, are connected to the root processor (p0) via communication links l1, l2, · · ·, lm. The divisible load originates at the root processor (p0), and the root processor divides the


Fig. 1. Single-level tree network

load into (m + 1) fractions (α0, α1, · · ·, αm), keeps the part α0 for itself to process/compute, and distributes the load fractions (α1, α2, · · ·, αm) to the other m processors in the sequence (p1, p2, · · ·, pm), one after another.

A child processor in the network may not be available for the computation process immediately after its load fraction is assigned to it. This introduces a delay in starting the computation of the load fraction, commonly referred to as the processor release time. The release time is different for different child processors. The objective here is to obtain the processing time and the load fractions assigned to the processors. In this paper, we follow the standard notations and definitions used in the divisible load scheduling literature.
Definitions:

– Load distribution: This is denoted as α, and is defined as an (m + 1)-tuple (α0, α1, α2, · · ·, αm) such that 0 < αi ≤ 1 and ∑_{i=0}^{m} αi = 1. The equation ∑_{i=0}^{m} αi = 1 is the normalization equation, and the space of all possible load distributions is denoted as Γ.

– Finish time: This is denoted as Ti and is defined as the time difference between the instant at which the ith processor stops computing and the instant at which the root processor initiates the load distribution process.

– Processing time: This is the time at which the entire load is processed. It is given by the maximum of the finish times of all processors; i.e., T = max{Ti}, i = 0, 1, · · ·, m, where Ti is the finish time of processor pi.

– Release time: The release time of a child processor pi is the time instant at which the child processor is available to start the computation of its load fraction of the divisible load.

Notations:

– αi: The load fraction assigned to the processor pi.
– wi: The ratio of the time taken by processor pi, to compute a given load, to the time taken by a standard processor, to compute the same load.


– zi: The ratio of the time taken by communication link li, to communicate a given load, to the time taken by a standard link, to communicate the same load.
– Tcp: The time taken by a standard processor to process a unit load.
– Tcm: The time taken by a standard link to communicate a unit load.
– bi: The release time of processor pi.

Based on these notations, αiwiTcp is the time taken by processor pi to process the load fraction αi of the total processing load. In the same way, αiziTcm is the time taken to communicate the load fraction αi of the total processing load over the link li to the processor pi. Both αiwiTcp and αiziTcm are in units of time. In the divisible load scheduling literature, a timing diagram is the usual way of representing the load distribution process. In this diagram the communication process is shown above the time axis, the computation process is shown below the time axis, and the release time is shown as a shaded region below the time axis. The timing diagram for the single-level tree network with arbitrary processor release time is shown in Figure 2. From the timing diagram, we can write the finish time (T0) for the root processor (p0) as

T0 = α0w0Tcp (1)

Now, we find the finish time for any child processor pi. The time taken by the root processor to finish distributing the load fraction αi to the child processor pi is ∑_{j=1}^{i} αjzjTcm. Let bi be the release time of the child processor pi; then the child processor starts the computation process at max(bi, ∑_{j=1}^{i} αjzjTcm). Hence, the

Fig. 2. Timing diagram for single-level tree network with release time


finish time (Ti) of any child processor pi is

Ti = max(bi, ∑_{j=1}^{i} αjzjTcm) + αiwiTcp,   i = 1, 2, · · ·, m   (2)

The above equation is valid if the load fraction assigned to the child processor is greater than zero (i.e., the child processor participates in the load distribution process); otherwise Ti = 0.

From the timing diagram, the processing time (T) of the entire processing load is

T = max (Ti, ∀ i = 0, 1, · · · , m) (3)
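As an illustration, equations (1)-(3) can be evaluated directly for a given load distribution. The helper below is not part of the paper's algorithm; it is a sketch that computes T from given fractions, speed parameters and release times (the function name is an assumption).

```python
def processing_time(alpha, w, z, b, Tcp=1.0, Tcm=1.0):
    """Processing time T (eq. 3) for load fractions alpha[0..m], speed
    parameters w, link parameters z and release times b (z[0], b[0]
    are unused for the root)."""
    m = len(alpha) - 1
    finish = [alpha[0] * w[0] * Tcp]           # eq. (1): root starts at once
    arrival = 0.0                              # cumulative communication time
    for i in range(1, m + 1):
        if alpha[i] == 0.0:                    # excluded processor: Ti = 0
            finish.append(0.0)
            continue
        arrival += alpha[i] * z[i] * Tcm       # fractions sent one after another
        start = max(b[i], arrival)             # eq. (2): wait for release and data
        finish.append(start + alpha[i] * w[i] * Tcp)
    return max(finish)                         # eq. (3)
```

For instance, with two identical child processors, no release delays and the split (0.5, 0.25, 0.25), the second child finishes last because its data arrives only after the first child's fraction has been sent.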

Obtaining the processing time (T) and the load fractions (αi) for the divisible load scheduling problem with arbitrary processor release times is difficult. Hence, in this paper, we propose a real coded genetic algorithm approach to solve the above scheduling problem.

3 Real-Coded Genetic Algorithm

The Genetic Algorithm (GA) is perhaps the best known of all evolution-based search techniques. The genetic algorithm is a probabilistic technique that uses a population of solutions rather than a single solution at a time [14,15,16]. In these studies, the solutions in the search space are coded using a binary alphabet. For optimization problems, floating-point representations of solutions in the search space outperform binary representations because they are more consistent, more precise and lead to faster convergence; this fact is discussed in [17]. Genetic algorithms using real-number representations for solutions are called real-coded genetic algorithms. More details about how genetic algorithms work for a given problem can be found in the literature [14,15,16,17].

A good representation scheme for solutions is important in a GA; it should admit meaningful crossover, mutation and other problem-specific operators such that minimal computational effort is involved in these procedures. To meet these requirements, we propose a real coded hybrid genetic algorithm approach to solve the divisible load scheduling problem with arbitrary processor release time. The real coded approach is well suited to problems with variables in a continuous domain [18,19]. A detailed study on the effect of hybrid crossovers for real coded genetic algorithms is presented in [20,21].

3.1 Real-Coded Genetic Algorithm for the Divisible Load Scheduling Problem

In this paper, a hybrid real coded genetic algorithm is presented to solve the divisible load scheduling problem with arbitrary processor release time. In a real coded GA, a solution (chromosome) is represented as an array of real numbers. The chromosome representation and genetic operators are defined such that every solution satisfies the normalization equation.


String Representation. String representation is the process of encoding a solution to the problem. Each string in a population represents a possible solution to the scheduling problem. For our problem, the string is an array of real numbers whose values represent the load fractions assigned to the processors in the network. The length of the string is m + 1, the number of processors in the system. A valid string is one in which the total of the load fractions (the sum of all the real numbers) is equal to one.

In the case of a four-child (m = 4) processor system, a string represents the load fractions {α0, α1, α2, α3, α4} assigned to the processors p0, p1, p2, p3 and p4, respectively. For example, the valid string {0.2, 0.2, 0.2, 0.2, 0.2} represents the load fractions α0 = 0.2, α1 = 0.2, α2 = 0.2, α3 = 0.2 and α4 = 0.2 assigned to the processors in the network; the sum of the load fractions is equal to 1. In general, for a system with m child processors, the length of the string is m + 1.
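A validity check of this kind might be sketched as follows (illustrative; the function name and the tolerance are assumptions, and a small tolerance is needed because the fractions are floating-point numbers):

```python
def is_valid_string(alpha, m, tol=1e-9):
    """A chromosome is an array of m + 1 load fractions summing to 1."""
    return (len(alpha) == m + 1
            and all(a >= 0.0 for a in alpha)
            and abs(sum(alpha) - 1.0) < tol)
```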

Population Initialization. Genetic algorithms search from a population of solution points instead of a single solution point. The initial population size and the method of population initialization affect the rate of convergence. For our problem, the initial population of solutions is selected in the following manner.

– Equal allocation: The value of each load fraction αj in the solution is 1/(m + 1), for j = 0, 1, . . ., m.
– Random allocation: Generate m + 1 random numbers and normalize them so that their sum is equal to one. The normalized values are the load fractions αj.
– Zero allocation: Select a solution using equal or random allocation. One element of the selected solution is set to zero and its value is equally distributed among the other elements.
– Proportional allocation: The value of load fraction αj in the solution is αj = wj / ∑_{i=0}^{m} wi.
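The four initialization methods above can be sketched as follows. The paper does not specify in what proportions the methods are mixed within one population, so the mix below (and the function name) is an assumption for illustration.

```python
import random

def init_population(m, w, size, rng=None):
    """Build `size` chromosomes of length m + 1 using equal, proportional,
    random and (occasionally) zero allocation; every chromosome sums to 1."""
    rng = rng or random.Random(0)
    pop = [[1.0 / (m + 1)] * (m + 1)]              # equal allocation
    pop.append([wi / sum(w) for wi in w])          # proportional allocation
    while len(pop) < size:
        r = [rng.random() for _ in range(m + 1)]   # random allocation:
        s = sum(r)                                 # normalize so the sum is 1
        sol = [x / s for x in r]
        if rng.random() < 0.25:                    # zero allocation: one element
            k = rng.randrange(m + 1)               # is zeroed and its value is
            share = sol[k] / m                     # spread equally over the rest
            sol = [0.0 if i == k else sol[i] + share for i in range(m + 1)]
        pop.append(sol)
    return pop
```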

Selection Function. Normalized Geometric Ranking Method: The solutions in the population are arranged in descending order of their fitness values. Let q be the probability of selecting the best solution and rj be the rank of the jth solution in the partially ordered set. The probability of solution j being selected using the normalized geometric ranking method is

sj = q′(1 − q)^(rj − 1)   (4)

where q′ = q / (1 − (1 − q)^N) and N is the population size. The details of the normalized geometric ranking method can be found in [22].
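Equation (4) can be sketched directly; the helper below (an illustrative function, not from the paper) assigns each solution its selection probability from its fitness rank. By construction the probabilities sum to one, since q′ normalizes the geometric series.

```python
def geometric_ranking_probs(fitness, q):
    """Selection probabilities s_j = q'(1 - q)^(r_j - 1) from eq. (4),
    with q' = q / (1 - (1 - q)^N); rank 1 is the fittest solution."""
    N = len(fitness)
    q_prime = q / (1.0 - (1.0 - q) ** N)
    ranked = sorted(range(N), key=lambda j: fitness[j], reverse=True)
    probs = [0.0] * N
    for rank, j in enumerate(ranked, start=1):
        probs[j] = q_prime * (1.0 - q) ** (rank - 1)
    return probs
```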

Genetic Operators. The genetic operators used in genetic algorithms are analogous to those which occur in the natural world: reproduction (crossover, or recombination) and mutation.


Crossover Operator: This is a primary operator in a GA. The role of the crossover operator is to recombine information from two selected solutions to produce better solutions; it improves the diversity of the solution vectors. Four different crossover operators are used in our divisible load scheduling problem: Two-point crossover (TPX), Simple crossover (SCX), Uniform crossover (UCX) and Averaging crossover (ACX). These crossover operators are described in [23].
Hybrid Crossover: We have used four types of crossover operators. The performance of these operators, in terms of convergence to the optimal solution, depends on the problem: a crossover operator which performs well for one problem may not perform well for another. Hence, many research works study the effect of combining crossover operators in a genetic algorithm for a given problem [24,25,26,27]. Hybrid crossovers are a simple way of combining different crossover operators: they use different kinds of crossover operators to produce diverse offspring from the same parents. The hybrid crossover operator presented in this study generates eight offspring for each pair of parents by the SCX, TPX, UCX and ACX crossover operators. The most promising of the eight offspring replace their parents in the population.
Mutation Operator: The mutation operator alters one solution to produce a new solution. It is needed to ensure diversity in the population, and to overcome premature convergence and local minima. The mutation operators used in this study, Swap mutation (SM) and Random Zero Mutation (RZM), are described in [23].
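Of the operators above, the averaging crossover and the swap mutation are the simplest to sketch; both preserve the normalization equation, which is why no repair step is needed afterwards. The function names below are assumptions, and the full operator set is described in [23].

```python
import random

def averaging_crossover(p1, p2):
    """ACX: the offspring is the element-wise average of two parents;
    two normalized parents therefore yield a normalized offspring."""
    return [(a + b) / 2.0 for a, b in zip(p1, p2)]

def swap_mutation(sol, rng=None):
    """SM: exchange two randomly chosen load fractions; the sum (and
    hence the normalization equation) is unchanged."""
    rng = rng or random.Random(0)
    s = list(sol)
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s
```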

Fitness Function. The objective in our scheduling problem is to determine the load fractions assigned to the processors such that the processing time of the entire processing load is a minimum. The calculation of the fitness function is straightforward. The string gives the load fractions α0, α1, · · ·, αm assigned to the processors in the network. Once the load fractions are given, the finish times of all processors are easily obtained. For example, the finish time Ti of processor pi is max(bi, ∑_{j=1}^{i} αjzjTcm) + αiwiTcp. If the value of αi is zero for any processor pi, then the finish time Ti of that processor is zero. The processing time of the entire processing load is T = max(Ti, ∀ i = 0, 1, · · ·, m). Since the genetic algorithm maximizes the fitness function, the fitness is defined as the negative of the processing time (T):

F = −T (5)

Termination Criteria. In a genetic algorithm, the evolution process continues until a termination criterion is satisfied. The maximum number of generations is the most widely used termination criterion and is the one used in our simulation studies.


3.2 Genetic Algorithm

Step 1. Select a population of size N using the initialization methods described earlier.
Step 2. Calculate the fitness of the solutions in the population using equations (3) and (5).
Step 3. Select solutions from the population, using the normalized geometric ranking method, for the genetic operations.
Step 4. Perform the different types of crossover and mutation on the selected solutions (parents). Select the N best solutions using the elitist model.
Step 5. Repeat Steps 2-4 until the termination criterion is satisfied.
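Steps 1-5 can be condensed into a minimal, self-contained sketch. For brevity this version uses only random initialization, truncation selection, averaging crossover and swap mutation with elitism, rather than the full operator set and ranking method described above; it is an illustration under those simplifying assumptions, not the authors' implementation.

```python
import random

def finish_time_T(alpha, w, z, b):
    """Processing time per equations (1)-(3), with Tcp = Tcm = 1."""
    T = alpha[0] * w[0]
    arrival = 0.0
    for i in range(1, len(alpha)):
        if alpha[i] > 0.0:
            arrival += alpha[i] * z[i]
            T = max(T, max(b[i], arrival) + alpha[i] * w[i])
    return T

def ga_load_fractions(w, z, b, pop_size=30, gens=200, pm=0.2, seed=1):
    rng = random.Random(seed)
    m = len(w) - 1
    def rand_sol():                               # Step 1: random allocation
        r = [rng.random() for _ in range(m + 1)]
        s = sum(r)
        return [x / s for x in r]
    cost = lambda a: finish_time_T(a, w, z, b)    # fitness F = -T (eq. 5)
    pop = [rand_sol() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                        # Step 2: evaluate
        parents = pop[:pop_size // 2]             # Step 3: rank-biased pick
        children = []
        while len(children) < pop_size:           # Step 4: crossover, mutation
            p1, p2 = rng.sample(parents, 2)
            c = [(x + y) / 2.0 for x, y in zip(p1, p2)]  # averaging crossover
            if rng.random() < pm:                        # swap mutation
                i, j = rng.sample(range(m + 1), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = sorted(pop + children, key=cost)[:pop_size]  # elitism
    return pop[0], cost(pop[0])                   # Step 5: stop after `gens`
```

Both operators preserve the normalization equation, so the returned chromosome still sums to one without any repair step.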

4 Numerical Example

We have implemented and tested the real coded hybrid genetic algorithm approach for divisible load scheduling problems in tree networks with arbitrary processor release time. The convergence of the genetic algorithm depends on the population size (N), the selection probability (q), the crossover rate (pc) and the mutation rate (pm). In our simulations the numerical values used are N = 30, q = 0.08, pc = 0.8, pm = 0.2, and Tcm = Tcp = 1.0.

Let us consider a single-level tree network with eight child processors (m = 8) attached to the root processor p0. The computation and communication speed parameters, and the processor release times, are given in Table 1. The root processor (p0) distributes the load fractions to the child processors in the sequence p1, p2, · · ·, p8. The result obtained from the real coded hybrid genetic algorithm is presented in Table 1. The processing time of the entire processing load is 0.29717.

In this numerical example, the processor-link pair (p3, l3) is removed from the network by assigning it a zero load fraction. The processors p1, p2, p4, p5, p6, p7 and p8 receive their load fractions at times 0.0392, 0.0815, 0.0896, 0.1397, 0.1436, 0.1509 and 0.1631, respectively. Thus, the processors p1, p2, p4 and p6 start the computation process at their release times, whereas the processors p5, p7 and p8 are idle until their load fractions are received.

Table 1. System Parameters and Results for Numerical Example 1

Sequence  Comp. speed parameter  Comm. speed parameter  Release time  Load fraction
p0        w0 = 1.1               -                      -             α0 = 0.2702
p1        w1 = 1.5               z1 = 0.4               b1 = 0.15     α1 = 0.0981
p2        w2 = 1.4               z2 = 0.3               b2 = 0.1      α2 = 0.1408
p3        w3 = 1.3               z3 = 0.2               b3 = 0.50     α3 = 0
p4        w4 = 1.2               z4 = 0.1               b4 = 0.2      α4 = 0.0810
p5        w5 = 1.1               z5 = 0.35              b5 = 0.05     α5 = 0.1431
p6        w6 = 1.2               z6 = 0.1               b6 = 0.25     α6 = 0.0393
p7        w7 = 1.0               z7 = 0.05              b7 = 0.1      α7 = 0.1462
p8        w8 = 1.65              z8 = 0.15              b8 = 0.12     α8 = 0.0812
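The tabulated values can be checked against equations (1)-(3): the sketch below recomputes the processing time from the Table 1 parameters (with Tcm = Tcp = 1). The result agrees with the reported 0.29717 up to the rounding of the printed fractions.

```python
w = [1.1, 1.5, 1.4, 1.3, 1.2, 1.1, 1.2, 1.0, 1.65]
z = [None, 0.4, 0.3, 0.2, 0.1, 0.35, 0.1, 0.05, 0.15]
b = [None, 0.15, 0.1, 0.50, 0.2, 0.05, 0.25, 0.1, 0.12]
alpha = [0.2702, 0.0981, 0.1408, 0.0, 0.0810, 0.1431, 0.0393, 0.1462, 0.0812]

T = alpha[0] * w[0]                 # root finish time, eq. (1)
arrival = 0.0
for i in range(1, 9):
    if alpha[i] > 0.0:              # p3 is skipped: zero load fraction
        arrival += alpha[i] * z[i]  # time the i-th fraction finishes arriving
        T = max(T, max(b[i], arrival) + alpha[i] * w[i])  # eqs. (2)-(3)

print(round(T, 4))                  # 0.2972 with these rounded fractions
```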


5 Optimal Sequence of Load Distribution

The sequence of load distribution is the order in which the child processors are activated in the divisible load scheduling problem. In the earlier section, the sequence (activation order) of load distribution by the root processor is {p1, p2, · · ·, pm}. For a system with m child processors, m! sequences of load distribution are possible. The optimal sequence of load distribution is the sequence for which the processing time is a minimum.

First we show that the problem of finding the optimal sequence of load distribution is similar to the well-known Travelling Salesman Problem (TSP) studied in the operations research literature. The TSP is easy to state: a travelling salesman must visit every city in a given area exactly once and return to the starting city. The costs of travel between all cities are known. The travelling salesman has to plan a sequence (or order) of visits to the cities such that the cost of travel is a minimum. Let the number of cities be m. Any permutation of the m cities is a solution to the problem, so m! sequences are possible. A genetic algorithm approach to the TSP is well discussed in [17]. We now show the similarities between finding the optimal sequence of load distribution and the TSP.

In the TSP, a solution is represented as a string of integers giving the sequence in which the cities are visited. For example, the string {4 3 5 1 2} represents the tour c4 −→ c3 −→ c5 −→ c1 −→ c2, where ci is city i. In our problem, the string {4 3 5 1 2} represents the sequence {p4 p3 p5 p1 p2} in which the root processor distributes the load to the child processors. So the solution representation in our problem is the same as the solution representation in the TSP [17]. Hence, for our problem we have used the genetic algorithm approach given for the TSP in [17]. From the genetic algorithm point of view, the only difference between the TSP and the divisible load scheduling problem is the fitness function: in the TSP it is the cost of travel for the given sequence, whereas in our problem it is the processing time for a given sequence of load distribution.

Fitness Function: The fitness function is based on the processing time of the entire processing load. For a given sequence, the processing time is obtained from the real-coded genetic algorithm described in the earlier section. The objective in our problem is processing time minimization; hence, in our case the fitness function is (−T):

F = −T (6)

To determine the fitness (F) for any given sequence, the processing time (T) is obtained by solving Problem (i) using the real-coded genetic algorithm methodology given in the earlier section. The optimal or best sequence of load distribution can then be found by using the genetic algorithm approach given for the TSP in [17], with the above fitness function.
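For a very small network, the m! sequences can simply be enumerated. In the sketch below the inner load-partitioning problem is deliberately replaced by a fixed equal split, only to illustrate that the activation order changes T; in the paper, the fractions for each candidate sequence come from the real-coded GA. The network parameters are hypothetical.

```python
from itertools import permutations

def T_for_sequence(seq, w, z, b, alpha):
    """Processing time when the children are activated in order `seq`;
    alpha[0] stays with the root, alpha[k] goes to the k-th activated child."""
    T = alpha[0] * w[0]
    arrival = 0.0
    for k, p in enumerate(seq, start=1):
        arrival += alpha[k] * z[p]                    # fractions sent one by one
        T = max(T, max(b[p], arrival) + alpha[k] * w[p])
    return T

# Hypothetical 3-child network: p3 computes slowly, p1 has a slow link.
w = [1.0, 1.0, 1.0, 2.0]
z = [None, 1.0, 0.2, 0.1]
b = [None, 0.0, 0.25, 0.0]
alpha = [0.25, 0.25, 0.25, 0.25]

best = min(permutations([1, 2, 3]),
           key=lambda s: T_for_sequence(s, w, z, b, alpha))
# Serving the slow-computing child early beats the natural order (1, 2, 3).
```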

Optimal Sequence for Numerical Example 1: The optimal sequence of load distribution (by the root processor p0) obtained using the above genetic algorithm is {p7 p5 p2 p8 p1 p6 p4}. The processing time for this sequence is T = 0.2781. The load fractions assigned to the processors are: α0 = 0.25286, α7 = 0.17814, α5 = 0.18568, α2 = 0.12015, α8 = 0.093447, α1 = 0.081151, α6 = 0.023453, α4 = 0.06512. The processor p3 is assigned a zero load fraction because its release time (b3 = 0.5) is high.

6 Conclusions

The problem of scheduling divisible loads in a distributed computing system, in the presence of processor release times, is considered. In this situation, there are two important problems: (i) for a given sequence of load distribution, how to obtain the load fractions assigned to the processors such that the processing time of the entire processing load is a minimum, and (ii) for a given network, how to obtain the optimal sequence of load distribution. A real-coded genetic algorithm is presented for the solution of Problem (i). It is shown that Problem (ii), obtaining the optimal sequence of load distribution, is a combinatorial optimization problem similar to the Travelling Salesman Problem. For a single-level tree network with m child processors, m! sequences of load distribution are possible; the optimal sequence is the one for which the processing time is a minimum. We use another genetic algorithm to obtain the optimal sequence of load distribution, and it uses the real-coded genetic algorithm that solves Problem (i) to evaluate each candidate sequence.

Acknowledgement. This work was supported in part by MSRC-ITRC under the auspices of the Ministry of Information and Communication, Korea.

References

1. Cheng, Y.C., Robertazzi, T.G.: Distributed Computation with Communication Delay. IEEE Trans. Aerospace and Electronic Systems 24(6) (1988) 700-712
2. Bharadwaj, V., Ghose, D., Mani, V., Robertazzi, T.G.: Scheduling Divisible Loads in Parallel and Distributed Systems. IEEE CS Press, Los Alamitos, California (1996)
3. Special Issue on Divisible Load Scheduling. Cluster Computing 6 (2003)
4. http://www.ee.sunysb.edu/tom/dlt.html
5. Drozdowski, M., Wolniewicz, M.: On the Complexity of Divisible Job Scheduling with Limited Memory Buffers. Technical Report R-001/2001, Institute of Computing Science, Poznan University of Technology (2001)
6. Beaumont, O., Legrand, A., Marchal, L., Robert, Y.: Independent and Divisible Task Scheduling on Heterogeneous Star-Shaped Platforms with Limited Memory. Research Report 2004-22, LIP, ENS Lyon, France (2004)
7. Beaumont, O., Legrand, A., Robert, Y.: Optimal Algorithms for Scheduling Divisible Workloads on Heterogeneous Systems. In: Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS'03), IEEE Computer Society Press (2003)
8. Banini, C., Beaumont, O., Carter, L., Ferrante, J., Legrand, A., Robert, Y.: Scheduling Strategies for Master-Slave Tasking on Heterogeneous Processor Platforms. IEEE Trans. on Parallel and Distributed Systems 15(4) (2004) 319-330
9. Veeravalli, B., Li, X., Ko, C.C.: On the Influence of Start-up Costs in Scheduling Divisible Loads on Bus Networks. IEEE Trans. on Parallel and Distributed Systems 11(12) (2000) 1288-1305
10. Suresh, S., Mani, V., Omkar, S.N.: The Effect of Start-up Delays in Scheduling Divisible Loads on Bus Networks: An Alternate Approach. Computers and Mathematics with Applications 40 (2003) 1545-1557
11. Drozdowski, M., Wolniewicz, P.: Multi-Installment Divisible Job Processing with Communication Start-up Cost. Foundations of Computing and Decision Sciences 27(1) (2002) 43-57
12. Bharadwaj, V., Li, H.F., Radhakrishnan, T.: Scheduling Divisible Loads in Bus Networks with Arbitrary Release Time. Computers and Mathematics with Applications 32(7) (1996) 57-77
13. Bharadwaj, V., Wong Han Min: Scheduling Divisible Loads on Heterogeneous Linear Daisy Chain Networks with Arbitrary Processor Release Times. IEEE Trans. on Parallel and Distributed Systems 15(3) (2004) 273-288
14. Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor (1975)
15. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, New York (1989)
16. Davis, L.: Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York (1991)
17. Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs. AI Series, Springer-Verlag, New York (1994)
18. Herrera, F., Lozano, M., Verdegay, J.L.: Tackling Real-Coded Genetic Algorithms: Operators and Tools for Behavioural Analysis. Artificial Intelligence Review 12(4) (1998) 265-319
19. Deb, K.: Multi-Objective Optimization using Evolutionary Algorithms. Wiley, Chichester (2001)
20. Herrera, F., Lozano, M., Sanchez, A.M.: Hybrid Crossover Operators for Real-Coded Genetic Algorithms: An Experimental Study. Soft Computing - A Fusion of Foundations, Methodologies and Applications (2002)
21. Herrera, F., Lozano, M., Sanchez, A.M.: A Taxonomy for the Crossover Operator for Real-Coded Genetic Algorithms: An Experimental Study. International Journal of Intelligent Systems 18 (2003) 309-338
22. Houck, C.R., Joines, J.A., Kay, M.G.: A Genetic Algorithm for Function Optimization: A Matlab Implementation. ACM Transactions on Mathematical Software 22 (1996) 1-14
23. Mani, V., Suresh, S., Kim, H.J.: Real-Coded Genetic Algorithms for Optimal Static Load Balancing in Distributed Computing System with Communication Delays. Lecture Notes in Computer Science, LNCS 3483 (2005) 269-279
24. Herrera, F., Lozano, M.: Gradual Distributed Real-Coded Genetic Algorithms. IEEE Transactions on Evolutionary Computation 4(1) (2000) 43-63
25. Hong, T.P., Wang, H.S.: Automatically Adjusting Crossover Ratios of Multiple Crossover Operators. Journal of Information Science and Engineering 14(2) (1998) 369-390
26. Hong, T.P., Wang, H.S., Lin, W.Y., Lee, W.Y.: Evolution of Appropriate Crossover and Mutation Operators in a Genetic Process. Applied Intelligence 16 (2002) 7-17
27. Yoon, H.S., Moon, B.R.: An Empirical Study on the Synergy of Multiple Crossover Operators. IEEE Transactions on Evolutionary Computation 6(2) (2002) 212-223