Multimodal functions have more than one optimum, and may have either a single global optimum or several. In the first case, the problem an optimization algorithm must deal with is avoiding premature convergence to a local optimum. In the second, the presence of several solutions with equally optimal fitness raises the issue of how an algorithm can locate all the global optima.
One of the first applications of PSO to multimodal problems was performed in . In that paper, however, the problem is to locate the global optimum in a fitness landscape with multiple local optima but a single global optimum. Such problems can be challenging for a standard GA because crossover can be destructive when performed on individuals located near different optima. The results of several versions of GA were compared with those of PSO, which performs well independently of the degree of modality (number of optima).
In , a hybrid PSO algorithm is tested on multimodal problems with a single global optimum. The authors include a nonuniform mutation operator proposed by Michalewicz in 1996; this modification improves performance over classical PSO, which is itself already competitive with other techniques on the test functions analyzed.
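As an illustration of the kind of operator involved, the following is a minimal sketch of a Michalewicz-style nonuniform mutation of a single coordinate; the parameter `b` and the coin-flip direction choice are assumptions following the usual formulation, not details taken from the cited paper.

```python
import random

def nonuniform_mutation(x, lb, ub, t, t_max, b=5.0):
    """Michalewicz-style nonuniform mutation of one coordinate in [lb, ub].

    Early in the run (small t) the perturbation can span the whole range;
    as t approaches t_max it shrinks toward zero, shifting the search from
    exploration to fine local tuning.
    """
    # delta(y) returns a value in [0, y] that tends to 0 as t -> t_max.
    def delta(y):
        r = random.random()
        return y * (1.0 - r ** ((1.0 - t / t_max) ** b))

    if random.random() < 0.5:
        return x + delta(ub - x)   # perturb toward the upper bound
    return x - delta(x - lb)       # perturb toward the lower bound
```

The annealing exponent `(1 - t/t_max) ** b` is what makes the operator nonuniform: the same random draw produces large steps early in the run and near-zero steps late in it.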
When the fitness landscape has more than one global optimum, the swarm will probably fall into one of the following behaviors [3,4]:
The general procedure proposed to avoid this problem is to transform the fitness function once a (possibly local) optimum is found. That is,
Two different techniques are proposed to transform the fitness function:
In this proposal, the swarm is stopped once it has converged to a solution whose fitness is ``good enough'' (that is, numerically close to the optimum fitness for the problem). The best solution found is recorded as a potential global optimum, and a new swarm performs a local search around that point, while the original swarm continues searching for new optima using the ``deflated'' or ``stretched'' fitness function.
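The following is a minimal sketch of the deflation idea for a minimization problem. It uses a simple additive Gaussian penalty around each already-found optimum, which is one possible transform; the actual ``stretching'' formula in the cited work differs in detail, and `height` and `width` are assumed illustrative parameters.

```python
import math

def deflate(f, found_optima, height=10.0, width=0.25):
    """Return a 'deflated' objective for minimization.

    Each already-found optimum x_star is covered with a Gaussian bump,
    raising the landscape around it so the swarm is pushed away from
    known solutions and toward unexplored regions.
    """
    def f_deflated(x):
        penalty = 0.0
        for x_star in found_optima:
            d2 = sum((a - b) ** 2 for a, b in zip(x, x_star))
            penalty += height * math.exp(-d2 / (2.0 * width ** 2))
        return f(x) + penalty
    return f_deflated
```

Because the penalty decays quickly with distance, points far from the recorded optima keep (approximately) their original fitness, so the transformed landscape still exposes the remaining optima.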
In , a niching PSO (NichePSO) is proposed. In this algorithm, multiple subswarms are created from the initial swarm to locate each of the optima in the test functions. In a way similar to the one proposed in , a particle is selected to start a search in its neighborhood, but a measure of the particle's fitness variance over time is used as the criterion for this selection. Particles from one subswarm can be absorbed into a different subswarm, and whole subswarms can be merged when they are likely to converge to the same solution.
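The variance criterion can be sketched as follows: each particle's recent fitness values are tracked, and a particle whose fitness variance over a short window drops below a threshold is assumed to be converging on an optimum and becomes a candidate to seed a subswarm. The window size and threshold here are assumed values for illustration, not figures from the cited paper.

```python
from collections import deque
from statistics import pvariance

class NicheMonitor:
    """Track per-particle fitness over a sliding window and flag
    particles whose fitness variance has fallen below a threshold."""

    def __init__(self, window=3, threshold=1e-4):
        self.window = window
        self.threshold = threshold
        self.history = {}  # particle id -> recent fitness values

    def update(self, particle_id, fitness):
        """Record a fitness value; return True if this particle now
        qualifies as a subswarm-seed candidate."""
        h = self.history.setdefault(particle_id, deque(maxlen=self.window))
        h.append(fitness)
        # Only judge particles with a full window of observations.
        return len(h) == self.window and pvariance(h) < self.threshold
```

A low variance means the particle's fitness has stopped improving, which is taken as a sign that it is circling an optimum rather than still exploring.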
In , a species-based PSO is used to find all the optima simultaneously, regardless of whether they are local or global. The swarm is partitioned into species, each led by a given individual (called the ``seed''). Partitioning and species seed selection are performed at each iteration of the algorithm: species are formed based on the Euclidean distance between particles, and the seed is the best particle in each species. The seed serves as lbest for the other particles in the same species. The results are better than those of NichePSO on the same test functions.
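The partitioning step described above can be sketched as follows, for a minimization problem: particles are ranked best-first, and each particle within the species radius of an existing seed joins that seed's species, otherwise it founds a new species as its seed. The species `radius` is assumed given; its choice is problem-dependent.

```python
def select_species_seeds(particles, fitness, radius):
    """Partition particles into species and pick their seeds.

    particles: list of coordinate tuples; fitness: callable to minimize;
    radius: species radius in Euclidean distance.
    Returns (seeds, species) where species maps seed index -> members.
    """
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    # Minimization: lower fitness is better, so sort ascending.
    ranked = sorted(particles, key=fitness)
    seeds = []
    species = {}  # seed index -> member particles (seed included)
    for p in ranked:
        for i, s in enumerate(seeds):
            if dist(p, s) <= radius:
                species[i].append(p)  # joins an existing species
                break
        else:
            seeds.append(p)           # founds a new species as its seed
            species[len(seeds) - 1] = [p]
    return seeds, species
```

Because seeds are claimed best-first, each species leader is guaranteed to be the fittest particle within its radius, which is what lets it act as the lbest for its members.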
Copyright ©2005, University CARLOS III of Madrid