This section details the variations that have been studied on the basic PSO equations with the purpose of improving some aspects of its performance. This review is based on [1] and [2].
The velocity clamping effect was introduced to avoid the phenomenon of "swarm explosion". With no restriction on the maximum velocity of the particles, a simple one-dimensional analysis of the swarm dynamics shows that a particle's velocity can grow unbounded while the particle oscillates around an optimum, increasing its distance to the optimum on each iteration.
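Velocity clamping can be sketched as a simple component-wise clip of the velocity vector; the function and variable names below (`clamp_velocity`, `v_max`) are illustrative, not taken from the original text:

```python
import numpy as np

def clamp_velocity(v, v_max):
    """Clip each velocity component to [-v_max, v_max] to prevent swarm explosion."""
    return np.clip(v, -v_max, v_max)

# A velocity vector whose components have grown beyond the allowed range is clamped.
v = np.array([3.5, -7.2, 0.4])
v_clamped = clamp_velocity(v, v_max=2.0)  # components become [2.0, -2.0, 0.4]
```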
In several studies, the velocity clamping parameter v_max was set to the maximum absolute values of the positions in the search space (called x_max).
Further studies demonstrated that this mechanism was not enough to properly control the particles' velocities. When compared to EC techniques [4], PSO quickly identified the region where the optimum was located, but had trouble lowering its velocities to perform a fine-grained search of that area.
In later versions of the PSO algorithm [5, 6], the following modified velocity equation is used, where the velocity clamping technique is no longer needed:

    v_i(t+1) = χ [ w·v_i(t) + c1·r1·(p_i − x_i(t)) + c2·r2·(p_g − x_i(t)) ]

where r1 and r2 are uniform random numbers in [0, 1]. The term w is called the "inertia weight", and the term χ is the "constriction factor". The term c1 is the "cognitive parameter" and the term c2 is the "social parameter". The combination of these parameters determines the convergence properties of the algorithm.
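A minimal sketch of the inertia-weight velocity update follows; the default values w ≈ 0.729 and c1 = c2 ≈ 1.494 are common choices from the literature, used here only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_velocity_update(v, x, p_i, p_g, w=0.729, c1=1.494, c2=1.494):
    """One velocity update with inertia weight w, cognitive factor c1, social factor c2.

    r1 and r2 are fresh uniform random numbers in [0, 1) on every call.
    """
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (p_i - x) + c2 * r2 * (p_g - x)

x = np.array([1.0, 2.0])    # current position
v = np.array([0.1, -0.1])   # current velocity
p_i = np.array([0.5, 1.5])  # particle's best known position
p_g = np.array([0.0, 0.0])  # swarm's best known position
v_new = pso_velocity_update(v, x, p_i, p_g)
x_new = x + v_new           # position update
```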
For the initial version of the PSO, the values for v_max, c1, and c2 have to be selected. This selection has an impact on the convergence speed and on the ability of the algorithm to find the optimum, but different values may be better for different problems. Much work has been done to select a combination of values that works well over a wide range of problems.
For the constricted version of PSO, the following restrictions are proposed in [5]:

    χ = 2 / | 2 − φ − sqrt(φ² − 4φ) |,   with φ = c1 + c2,  φ > 4
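Assuming the usual Clerc-Kennedy form χ = 2 / |2 − φ − sqrt(φ² − 4φ)| with φ = c1 + c2 > 4, the constriction factor can be computed as follows (a sketch; c1 = c2 = 2.05 is the commonly cited choice):

```python
from math import sqrt

def constriction_factor(c1, c2):
    """Clerc-Kennedy constriction factor; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - sqrt(phi * phi - 4.0 * phi))

# The usual choice c1 = c2 = 2.05 yields chi close to 0.7298.
chi = constriction_factor(2.05, 2.05)
```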
Regarding the inertia weight, it determines how the previous velocity of the particle influences the velocity in the next iteration:
- If w = 0, the velocity of the particle is determined only by the p_i
and p_g positions; this means that the particle may change its
velocity instantly if it is moving away from the best positions in its
knowledge. Thus, low inertia weights favor exploitation (local search).
- If w is high, the rate at which the particle may change its
velocity is lower (it has an "inertia" that makes it follow its original
path), even when better fitness values are known. Thus, high inertia weights
favor exploration (global search).
In [6,7] a decaying inertia weight is proposed and tested, with the aim of favoring global search at the start of the algorithm and local search later. If the inertia weight is not reduced with time, they suggest selecting a value in the range [0.9, 1.2].
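A decaying inertia weight can be sketched as a linear schedule; the endpoint values w_start = 0.9 and w_end = 0.4 below are typical values reported in the literature, not prescribed by this text:

```python
def inertia_schedule(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decrease the inertia weight from w_start to w_end over t_max iterations,
    favoring exploration early in the run and exploitation near the end."""
    return w_start - (w_start - w_end) * t / t_max

# Early iterations use a high w (global search), late iterations a low w (local search).
w_first = inertia_schedule(0, 100)    # 0.9
w_last = inertia_schedule(100, 100)   # 0.4
```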
The values of the cognitive and social factors are not critical for the algorithm, but proper values may result in better performance, both in terms of convergence speed and avoidance of local minima. Their values must also be taken into account when choosing the inertia weight.
Several studies propose different values for these parameters that are considered adequate for some of the usual benchmark functions.
In [5,8], theoretical studies of convergence were performed; in the latter, a heuristic procedure for parameter selection and tuning was proposed, based on the convergence speed of the different parameter combinations.
In [9], the best results were achieved when the constriction factor and the velocity clamping mechanisms were combined, using v_max = x_max.
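Combining the constriction factor with velocity clamping at v_max = x_max can be sketched as follows (an illustrative composition of the two mechanisms, with the common values chi ≈ 0.7298 and c1 = c2 = 2.05 as defaults):

```python
import numpy as np

def constricted_clamped_step(v, x, p_i, p_g, x_max,
                             chi=0.7298, c1=2.05, c2=2.05, rng=None):
    """One constricted velocity update, then clamping with v_max = x_max."""
    rng = np.random.default_rng(1) if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = chi * (v + c1 * r1 * (p_i - x) + c2 * r2 * (p_g - x))
    v_new = np.clip(v_new, -x_max, x_max)  # velocity clamping at v_max = x_max
    return v_new, x + v_new

v, x = np.array([0.0, 0.0]), np.array([4.0, -3.0])
p_i, p_g = np.array([1.0, 1.0]), np.array([0.0, 0.0])
v_next, x_next = constricted_clamped_step(v, x, p_i, p_g, x_max=5.0)
```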
[1] K. E. Parsopoulos and M. N. Vrahatis. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3):235-306, 2002.
[2] F. van den Bergh. An analysis of particle swarm optimizers. PhD thesis, University of Pretoria, South Africa, 2002.
[3] J. Kennedy, R. C. Eberhart, and Y. Shi. Swarm Intelligence. Morgan Kaufmann Publishers, San Francisco, 2001.
[4] P. J. Angeline. Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In Proceedings of the VII Conference on Evolutionary Programming, pages 601-610. Springer-Verlag GmbH, 1998.
[5] M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58-73, 2002.
[6] Y. Shi and R. C. Eberhart. Parameter selection in particle swarm optimization. In Proceedings of the Seventh Annual Conference on Evolutionary Programming, pages 591-600, 1998.
[7] Y. Shi and R. C. Eberhart. Empirical study of particle swarm optimization. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pages 1945-1950, 1999.
[8] I. C. Trelea. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, 85(6):317-325, 2003.
[9] R. C. Eberhart and Y. Shi. Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the Congress on Evolutionary Computation, 2000.