


TABLE OF CONTENTS

CERTIFICATE
ABSTRACT
ACKNOWLEDGEMENT
CHAPTER 1  INTRODUCTION
  1.1 Inspiration from Nature
    1.1.1 Social Insects and Stigmergy
    1.1.2 Natural Navigation
    1.1.3 Natural Clustering
  1.2 General Principles
    1.2.1 Proximity Principle
    1.2.2 Quality Principle
    1.2.3 Principle of Diverse Response
    1.2.4 Principle of Stability and Adaptability
  1.3 Particle Swarm Optimization
CHAPTER 2  LITERATURE SURVEY
  2.1 Problem of Load Balancing
  2.2 Particle Swarm Optimization Algorithm
CHAPTER 3  PAST WORK
CHAPTER 4  CONCLUSION AND FUTURE WORK
REFERENCES

Chapter 1: Introduction

People learn a great deal from Mother Nature. By drawing an analogy with biological systems composed of many individuals, or swarms, we can tackle challenges in algorithm design and application with optimization techniques. In this paper, we give an overview of several popular swarm intelligence algorithms, describe their underlying concepts, and discuss some enhancements of these algorithms.

Nature provides inspiration to computer scientists in many ways. One source of such inspiration is the way in which natural organisms behave when they are in groups. Consider a swarm of ants, a swarm of bees, a colony of bacteria, or a flock of starlings. In these cases and many more, biologists have told us (and we have often seen for ourselves) that the group exhibits behaviour that its individual members do not, or cannot. In other words, if we consider the group itself as an individual, the swarm seems, in some ways at least, to be more intelligent than any of the individuals within it. This observation is the seed for a cloud of concepts and algorithms, some of which have become associated with swarm intelligence. Indeed, it turns out that swarm intelligence is closely associated with only a small portion of this cloud. If we search nature for scenarios in which a collection of agents exhibits behaviour that the individuals do not (or cannot), it is easy to find entire and vast sub-areas of science, especially the bio-sciences. For example, any biological organism seems to exemplify this concept when we consider the individual organism as the swarm and its cellular components as the agents. We might consider brains, and nervous systems in general, as a supreme exemplar of this concept, with individual neurons as the agents. Or we might zoom in on certain inhomogeneous sets of bio-molecules as our agents, and herald gene transcription, say, as an example of swarm behaviour.
Fortunately, for the sake of this chapter's brevity and depth, it turns out that the swarm intelligence literature has come to refer to a small and rather specific set of observations and associated algorithms. This is not to say that computer scientists are uninspired by the totality of nature's wonders that exhibit such "more than the sum of the parts" behaviour; much of this volume makes it clear that this is not so at all. However, if we focus on the specific concept of swarm intelligence and attempt to define it precisely, the result might be thus:

1. Useful behaviour that emerges from the cooperative efforts of a group of individual agents;
2. in which the individual agents are largely homogeneous;
3. in which the individual agents act asynchronously in parallel;
4. in which there is little or no centralized control;
5. in which communication between agents is largely effected by some form of stigmergy;
6. in which the 'useful behaviour' is relatively simple (finding a good place for food, or building a nest; not writing a symphony, or surviving for many years in a dynamic environment).

So, swarm intelligence is not about how collections of cells yield brains (which falls foul of at least items 2, 5, and 6), nor about how individuals form civilizations (violating mainly items 3, 5 and 6), nor about such things as the lifecycle of the slime mould (item 6). However, it is about individuals cooperating (knowingly or not) to achieve a definite goal, such as ants finding the shortest path between their nest and a good source of food, or bees finding the best sources of nectar within the range of their hive. These and similar natural processes have led directly to families of algorithms that have proved to be very substantial contributions to the sciences of computational optimization and machine learning.
So, originally inspired, respectively, by certain natural behaviours of swarms of ants and flocks of birds, the backbone of swarm intelligence research is built mainly upon two families of algorithms: ant colony optimization and particle swarm optimization. Seminal works on ant colony optimization were Dorigo et al. (1991) and Colorni et al. (1992a; 1992b), and particle swarm optimization harks back to Kennedy & Eberhart (1995). More recently, alternative inspirations have led to new algorithms that are becoming accepted under the swarm intelligence umbrella; among these are search strategies inspired by bee swarm behaviour, bacterial foraging, and the way that ants manage to cluster and sort items. Notably, this latter behaviour is explored algorithmically in a subfield known as swarm robotics. Meanwhile, the way in which insect colonies collectively build complex and functional constructions is a very intriguing study that continues to be carried out in the swarm intelligence arena. Finally, another problem that is often considered in the swarm intelligence community is the synchronized movement of swarms; in particular, the problem of defining simple rules for individual behaviour that lead to realistic and natural behaviour in a simulated swarm. Reynolds' rules provided a general solution to this problem in 1987; this can be considered an early triumph for swarm intelligence, and it has been exploited heavily in the film and entertainment industries.

To put it simply, swarm intelligence can be described as the collective behaviour that emerges from social insects working under very few rules. Self-organization is the main theme, with limited restrictions from interactions among agents. Many famous examples of swarm intelligence come from the world of animals, such as bird flocks, fish schools and insect swarms. The social interactions among individual agents help them to adapt to the environment more efficiently, since more information is gathered from the whole swarm.
This paper aims to introduce several well-known and interesting algorithms based on metaheuristics derived from nature, and their applications in problem solving.

1.1 Inspiration from Nature

1.1.1 Social Insects and Stigmergy

Ants, termites and bees, among many other insect species, are known to have a complex social structure. Swarm behaviour is one of several emergent properties of colonies of such so-called social insects. A ubiquitous characteristic that we see again and again in such scenarios is stigmergy. Stigmergy is the name for the indirect communication that seems to underpin cooperation among social insects (as well as between cells, or between arbitrary entities, as long as the communication is indirect). The term was introduced by Pierre-Paul Grassé in the late 1950s. Quite simply, stigmergy means communication via signs or cues placed in the environment by one entity, which affect the behaviour of other entities who encounter them. Stigmergy was originally defined by Grassé in his research on the construction of termite nests, of which a simplified schematic can be given. We will say more about termite nests in section 2.1.3, but for now it suffices to point out that these can be huge structures, several metres high, constructed largely from mud and from the saliva of termite workers. Naturally, the complexity and functionality of the structure is quite astounding, given what we understand to be the cognitive capabilities of a single termite.

Now consider the case of a particularly close food source (or, alternatively, a faster or safer route to a food source). The first ant to find this source will return relatively quickly. Other ants that take this route will also return relatively quickly, so that the best routes will enjoy a greater frequency of pheromone laying over time, becoming strongly favoured by other ants.
The overall collective behaviour amounts to finding the best path to a nearby food source, and there is enough stochasticity in the process to avoid convergence to poor local optima: trail evaporation ensures that suboptimal paths discovered early are not converged upon too quickly, while individual ants maintain stochasticity in their choices, being influenced by, but not enslaved to, the strongest pheromone trail they sense. Figure 2 shows a simple illustration, indicating how ants will converge via stigmergy towards a safer and faster way to cross a flow of water between their nest and a food source. One aspect of this behaviour in particular is that it is arguably not exemplary of swarm behaviour; i.e. it is perhaps not collective intelligence. The explanatory model that seems intuitively correct, and is confirmed by observation (Theraulaz et al., 2002), is one in which each individual operates only on local information.

1.1.2 Natural Navigation

Navigation to food sources seems to depend on the deposition of pheromone by individual ants. In the natural environment, the initial behaviour of a colony of ants seeking a new food source is for individual ants to wander randomly. When an ant happens to find a suitable food source, it will return to its colony; throughout, the individual ants have been laying pheromone trails. Subsequent ants setting out to seek food will sense the pheromone laid down by their precursors, and this will influence the path they take. Over time, of course, pheromone trails evaporate.
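The interplay of pheromone deposition, reinforcement of shorter routes, and evaporation described above can be illustrated with a minimal simulation. This is only a sketch under simplifying assumptions: two candidate paths of different lengths, a deposit rate inversely proportional to path length (modelling the higher laying frequency on shorter routes), and an evaporation rate; all numeric values are illustrative, not taken from the text.

```python
import random

def simulate_trails(steps=2000, evaporation=0.02, seed=1):
    rng = random.Random(seed)
    lengths = {"short": 1.0, "long": 2.0}    # shorter path returns ants sooner
    pheromone = {"short": 1.0, "long": 1.0}  # start unbiased

    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        # stochastic choice: influenced by, but not enslaved to, the trails
        path = "short" if rng.random() < pheromone["short"] / total else "long"
        # shorter paths are reinforced more often per unit time, modelled
        # here as a deposit inversely proportional to path length
        pheromone[path] += 1.0 / lengths[path]
        # evaporation prevents early suboptimal trails from locking in
        for p in pheromone:
            pheromone[p] *= 1.0 - evaporation

    return pheromone

trails = simulate_trails()
print(trails)
```

After a few thousand steps the shorter path carries clearly more pheromone than the longer one, while evaporation keeps the longer trail from vanishing entirely, preserving some exploration.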

1.1.3 Natural Clustering

Turning now to a different style of swarm behaviour, it is well known that ant species (as well as other insect species) exhibit emergent sorting and clustering of objects. Two of the best-known examples are the clustering of corpses of dead ants into cemeteries (achieved by the species Lasius niger, Pheidole pallidula, and Messor sancta (Depickere et al., 2004)), and the arrangement of larvae into similar-age groups (so-called brood sorting, achieved by Leptothorax unifasciatus (Franks & Sendova-Franks, 1992)). For example, in Leptothorax unifasciatus ant colonies, the ants' young are organized into concentric rings called annuli. Each ring comprises young of a different type. The youngest (eggs and micro-larvae) are clustered together in the central cluster. As we move outwards from the centre, progressively older groups of larvae are encountered, and the spaces between the rings increase. In cemetery formation, certain ant species are known to cluster their dead into piles, with individual piles maintained at least a minimal distance from each other. In this way, corpses are removed from the living surroundings and cease to be a hindrance to the colony.

1.2 General Principles

To model the broad behaviours arising from a swarm, we introduce several general principles for swarm intelligence [2]:

1.2.1 Proximity principle: The basic units of a swarm should be capable of simple computation related to their surrounding environment. Here computation is regarded as a direct behavioural response to environmental variance, such as responses triggered by interactions among agents. Depending on the complexity of the agents involved, responses may vary greatly. However, some fundamental behaviours are shared, such as searching for living resources and nest building.

1.2.2 Quality principle: Apart from basic computational ability, a swarm should be able to respond to quality factors, such as food and safety.

1.2.3 Principle of diverse response: Resources should not be concentrated in a narrow region. The distribution should be designed so that each agent is maximally protected against environmental fluctuations.

1.2.4 Principle of stability and adaptability: Swarms are expected to adapt to environmental fluctuations without rapidly changing modes, since mode changing costs energy.

1.3 Particle Swarm Optimization Algorithms

Bird flocking and fish schooling are the inspirations from nature behind particle swarm optimization algorithms, first proposed by Eberhart and Kennedy [9]. Mimicking physical quantities such as velocity and position in bird flocking, artificial particles are constructed to "fly" inside the search space of optimization problems.

Figure : Bird flocks

However, unlike algorithms that use pheromone or other feedback to weed out undesired solutions, particle swarm optimization algorithms update the current solution directly. As the following description of the framework of PSO algorithms shows, with fewer parameters, PSO algorithms are easy to implement and achieve globally optimal solutions with high probability. Initially, a population of particles is distributed uniformly in the multidimensional space of the objective function of the optimization problem. The social connection topology is called the swarm's population topology, and there are different types. In the gbest topology, particles are all attracted to the global best solution, so it represents a fully connected social network. The lbest topology, by contrast, connects each particle with only a few neighbours; this kind of topology slows convergence compared to gbest, but it also makes PSO algorithms more capable of avoiding local minima. The wheel topology connects neighbouring particles to only one particle, the focal particle; this type is the one with the smallest number of connections [11].
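The difference between the fully connected topology and the restricted-neighbourhood one can be made concrete with a neighbourhood function. This is a minimal sketch, assuming particles are indexed 0..n-1 and that the second topology described above is the common ring-style lbest arrangement with k immediate neighbours; the function names and the parameter k are illustrative.

```python
def gbest_neighbours(i, n):
    """gbest topology: every other particle is a neighbour (fully connected)."""
    return [j for j in range(n) if j != i]

def lbest_neighbours(i, n, k=2):
    """Ring-style lbest topology: k nearest indices on a ring (k/2 per side)."""
    half = k // 2
    return [(i + d) % n for d in range(-half, half + 1) if d != 0]

print(gbest_neighbours(0, 5))       # [1, 2, 3, 4]
print(lbest_neighbours(0, 5, k=2))  # [4, 1]
```

With gbest, information about a good solution reaches every particle in one step, which is why convergence is fast; with lbest, it must propagate around the ring, which slows convergence but preserves diversity.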

Chapter 2: Literature Survey

The literature survey for our research comprised the study of various research papers and websites concerning the problem of load balancing and the various solutions to this problem. The various algorithms related to swarm intelligence were studied. Other topics relevant to our study were other evolutionary techniques, the concept of optimization (local optimum and global optimum solutions, and how to reach these solutions in a given search space), and the hybridization of the particle swarm optimization algorithm with a local optimization algorithm.

2.1 Problem of Load Balancing with Swarm Intelligence

Load balancing algorithms are designed essentially to spread the load equally across processors and maximize their utilization while minimizing the total task execution time. In order to achieve these goals, the load-balancing mechanism should be fair in distributing the load across the processors. The objective is to minimize the total execution and communication cost incurred by the task assignment, subject to the resource constraints. A particle is evaluated by calculating its fitness function, which indicates the goodness of the schedule. The objective function calculates the total execution time of the set of tasks allocated to each processor; the fitness function calculates the average of these per-processor execution times.

2.2 Particle Swarm Optimization Algorithm

Kennedy and Eberhart introduced the concept of function optimization by means of a particle swarm. Suppose the global optimum of an n-dimensional function is to be located. The function may be mathematically represented as

f(x1, x2, x3, ..., xn) = f(X),

where X is the search-variable vector, which represents the set of independent variables of the given function. The task is to find an X such that the function value f(X) is either a minimum or a maximum, denoted by f*, in the search range.
If the components of X assume real values, then the task is to locate a particular point in n-dimensional hyperspace. The PSO algorithm works by simultaneously maintaining several candidate solutions in the search space. During each iteration of the algorithm, each candidate solution is evaluated by the objective function being optimized, determining the fitness of that solution. Each candidate solution can be thought of as a particle flying through the fitness landscape towards the maximum or minimum of the objective function.
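The objective and fitness evaluation described in section 2.1 can be sketched as follows. This is only an illustration under assumptions not spelled out in the text: a schedule is a list mapping each task to a processor, and processors are heterogeneous, so each task's execution time depends on where it runs (a cost[t][p] table); all names and numbers are illustrative.

```python
def processor_loads(schedule, cost, n_procs):
    """Objective: total execution time of the tasks assigned to each processor."""
    loads = [0.0] * n_procs
    for task, proc in enumerate(schedule):
        loads[proc] += cost[task][proc]
    return loads

def fitness(schedule, cost, n_procs):
    """Fitness: average of the per-processor total execution times."""
    loads = processor_loads(schedule, cost, n_procs)
    return sum(loads) / n_procs

# cost[t][p]: execution time of task t on processor p (illustrative data)
cost = [[4.0, 6.0],
        [2.0, 1.0],
        [3.0, 5.0]]
schedule = [0, 1, 0]  # task -> processor assignment
print(processor_loads(schedule, cost, 2))  # [7.0, 1.0]
print(fitness(schedule, cost, 2))          # 4.0
```

In a PSO-based load balancer, each particle would encode one such schedule, and this fitness value is what the swarm minimizes.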

Figure : Initial state of four particles in PSO

PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. In each iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) it has achieved so far; this value is called pbest. (The fitness value is also stored.) The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this is a global best and is called gbest.

Generation of the Population

When a particle takes part of the population as its topological neighbours, the best value is a local best and is called lbest. After finding the two best values, the particle updates its velocity and position with the following equations (a) and (b) (shown in figure 2.2):

v[k+1] = v[k] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[])   (a)

present[k+1] = present[k] + v[k+1]   (b)

Here v[] is the particle velocity and present[] is the current particle (solution); pbest[] and gbest[] are defined as stated before. rand() is a random number in (0,1), and c1, c2 are learning factors; usually c1 = c2 = 2.
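The loop described by equations (a) and (b) can be sketched as a short program, here minimizing the sphere function f(x) = sum of x_i squared. The equations above include no inertia weight, so this sketch clamps velocities to a Vmax, as early PSO formulations did to keep velocities bounded; swarm size, search bounds, Vmax and iteration count are illustrative choices, with c1 = c2 = 2 as stated above.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, c1=2.0, c2=2.0,
        vmax=0.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # (a) velocity update, clamped to [-vmax, vmax]
                v = (vel[i][d]
                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                     + c2 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))
                # (b) position update
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:         # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:        # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(xi * xi for xi in x))
print("best value found:", round(best_val, 4))
```

Note that only the pbest/gbest memory guarantees monotone improvement of the reported solution; the particles themselves keep oscillating around the attractors, which is what preserves exploration.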


Figure : Concept of Modification of a searching point by PSO

History teaches us that the behaviour of large masses of people can sometimes be very stupid. But crowds can also display surprisingly smart behaviour, and recently attempts have been made to mimic this collective intelligence of crowds in computers. For a general introduction, see (Wolpert and Lawson, 2002).

Amazon.com uses a seemingly simple form of collective intelligence. When displaying a book, the site also shows books that people who bought the first book have bought. Clicking on one of these books leads to a page showing yet more books bought by persons who bought the second book. In this way, a browser can utilize the collective intelligence of all other book-buyers and find new, interesting books. The search site Google can be seen as another form of collective intelligence: its page-ranking system increases a page's rating if many people link or surf to it. Another example is Alexa, which was a system for finding related sites. (A swarming approach to a similar but more restricted problem is reported in (Semet et al., 2003).)

Chapter 3: Past Work

Tremendous research work has been done in the field of swarm intelligence over the years. Many research papers have been published focusing on particle swarm optimization and hybrid particle swarm optimization algorithms to solve the problem of load balancing. In previous research papers, the researchers showed that particle swarm optimization and hybrid particle swarm optimization are the best techniques for solving the problem of load balancing. Until recently, load balancing was done via traditional and static methods, such as round-robin DNS or virtual servers, but many evolutionary optimization techniques for load balancing have now been introduced. These evolutionary techniques give us a dynamic approach to load balancing. The evolutionary methods are the Ant Colony Optimization algorithm, the Genetic algorithm, the Particle Swarm Optimization algorithm, the Differential Evolution algorithm, and the Hybrid Particle Swarm Optimization algorithm. [2][3][6]

Particle Swarm Optimization (PSO), introduced by James Kennedy and Russell Eberhart in 1995, is a recent algorithm for evolutionary optimization. The algorithm is inspired by biological and sociological motivations and can take care of optimality on rough, discontinuous and multimodal surfaces. PSO is an evolutionary computation technique like genetic algorithms. Since PSO has many advantages, such as comparative simplicity, rapid convergence and few parameters to be adjusted, it has been used in many fields such as function optimization, neural network training and pattern identification. Since the introduction of the PSO method in 1995, a considerable amount of work has been done on developing the original version of PSO through empirical simulations, including two significant early developments that serve as both a basis for, and a performance gauge of, later work. [4][9]

Chapter 4: Conclusion and Future Work

One of the major challenges in swarm intelligence research is to design agent-level behaviors in order to obtain a certain desired behavior at the collective level. Since a general methodology for achieving this goal has been elusive, most researchers in the field concentrate their efforts on specific applications. In doing so, a number of assumptions are made. One of these assumptions is that the size of a swarm of agents remains constant over time. In many cases, this assumption may not be well justified.

The framework proposed in this dissertation challenges the constant-population-size assumption. In the ISL framework, the population size changes over time, and we have demonstrated that some benefits can be obtained with such an approach. We are not the only ones to realize that an incremental deployment of agents (robots) can bring benefits and can even simplify the design of the agent-level behaviors. In fact, in many practical applications of swarm intelligence systems, in particular in swarm robotics and related fields such as sensor networks, it is actually more difficult to deploy hundreds of robots at once than to deploy a few robots at different points in time. For example, consider the deployment of the 30 Galileo satellites (European Space Agency, 2010). It is not reasonable to assume that tens of satellites can be deployed at once. Rather, the deployment is painfully slow, with one or two satellites being deployed at a time. If these satellites were part of a swarm with specific tasks, such as maintaining a formation in space, the rules needed to fulfil that task would be quite complex. Instead, with an incremental deployment, each satellite could take its position without disturbing the behavior of the other satellites. In other words, the interference between satellites would be greatly reduced.
With our proposal, we hope that researchers in the swarm intelligence field will consider the possibility of an incremental deployment of agents in the design of new swarm intelligence systems. The other aspect of our proposal, the use of some form of social learning, can potentially have a bigger impact in the field.

References

[1] Om Prakash Verma, Deepti Aggarwal and Tejna Patodi (2015). Opposition and Dimensional Based Modified Firefly Algorithm. Expert Systems with Applications, accepted for publication.
[2] M. Millonas. Swarms, Phase Transitions, and Collective Intelligence. Addison-Wesley Publishing Company, Reading (1994).
[3] M. Dorigo and T. Stützle. Ant Colony Optimization. A Bradford Book (2004).
[4] M. Dorigo, T. Stützle and M. Birattari (2006). Ant colony optimization. Computational Intelligence Magazine, 1(4), 28-39.
[5] C. P. Lim, L. C. Jain and S. Dehuri. Innovations in Swarm Intelligence. Springer (2009).
[6] X.-S. Yang. Introduction to Computational Mathematics. World Scientific Publishing Company (2008).
[7] Inès Alaya (2007). Ant Colony Optimization for Multi-objective Optimization Problems. In Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence, pages 450-457.
[6] Chunming Yang and D. Simon. A new particle swarm optimization technique. Proceedings of the International Conference on Systems Engineering, pp. 164-169 (2005).
[8] R. C. Eberhart and J. Kennedy. Recent advances in particle swarm (2004). http://www.swarmintelligence.org
[9] K. E. Parsopoulos and M. N. Vrahatis. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, Vol. 1, pp. 235-306 (2002).
[10] Asanga Ratnaweera, Saman K. Halgamuge and Harry C. Watson. Self-Organizing Hierarchical Particle Swarm Optimizer With Time-Varying Acceleration Coefficients. IEEE Transactions on Evolutionary Computation, Vol. 8, No. 3, June 2004.
[11] D. A. Pelta, N. Krasnogor, D. Dumitrescu, C. Chira and R. Lung. Nature Inspired Cooperative Strategies for Optimization. Springer (2011).
