Particle Swarm Optimization Algorithm
The particle swarm algorithm begins by creating the initial particles, and assigning them initial velocities.
It evaluates the objective function at each particle location, and determines the best (lowest) function value and the best location.
It chooses new velocities, based on the current velocity, the particles’ individual best locations, and the best locations of their neighbors.
It then iteratively updates the particle locations (the new location is the old one plus the velocity, modified to keep particles within bounds), velocities, and neighbors.
Iterations proceed until the algorithm reaches a stopping criterion.
Here are the details of the steps.
particleswarm creates the particles at random uniformly within bounds. If there is an unbounded component, particleswarm creates particles with a random uniform distribution from –1000 to 1000. If you have only one bound, particleswarm shifts the creation to have the bound as an endpoint, and a creation interval 2000 wide. Particle i has position x(i), which is a row vector with nvars elements. Control the span of the initial swarm using the InitialSwarmSpan option.
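The initialization rules above can be sketched in Python. This is an illustrative sketch, not the particleswarm source; the function name and the list-based representation of bounds (None for a missing bound) are assumptions of this sketch, and span plays the role of InitialSwarmSpan (default 2000).

```python
import random

def init_positions(nvars, swarm_size, lb, ub, span):
    """Uniform random initial positions; lb/ub entries are None where
    unbounded, and span stands in for InitialSwarmSpan (default 2000)."""
    positions = []
    for _ in range(swarm_size):
        x = []
        for k in range(nvars):
            lo, hi = lb[k], ub[k]
            if lo is None and hi is None:
                # unbounded component: uniform on [-span/2, span/2],
                # i.e. [-1000, 1000] for the default span of 2000
                x.append(random.uniform(-span[k] / 2, span[k] / 2))
            elif lo is None:
                # only an upper bound: creation interval span wide,
                # with the bound as an endpoint
                x.append(random.uniform(hi - span[k], hi))
            elif hi is None:
                # only a lower bound: same idea from the other side
                x.append(random.uniform(lo, lo + span[k]))
            else:
                # both bounds present: uniform within the bounds
                x.append(random.uniform(lo, hi))
        positions.append(x)
    return positions
```

Each particle's position is a length-nvars vector, mirroring the row vector described above.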
particleswarm creates initial particle velocities v at random uniformly within the range [-r,r], where r is the vector of initial ranges. The range of component k is min(ub(k) - lb(k),InitialSwarmSpan(k)).
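The velocity initialization can be sketched the same way (again an illustrative Python sketch with hypothetical names, not MATLAB code):

```python
import random

def init_velocities(nvars, swarm_size, lb, ub, span):
    # r(k) = min(ub(k) - lb(k), InitialSwarmSpan(k)); treat a missing bound
    # as infinite width so the swarm span wins for unbounded components
    r = [min(float('inf') if lb[k] is None or ub[k] is None else ub[k] - lb[k],
             span[k])
         for k in range(nvars)]
    # each velocity component is uniform on [-r(k), r(k)]
    return [[random.uniform(-r[k], r[k]) for k in range(nvars)]
            for _ in range(swarm_size)]
```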
particleswarm evaluates the objective function at all particles. It records the current position p(i) of each particle i. In subsequent iterations, p(i) will be the location of the best objective function that particle i has found. And b is the best over all particles: b = min(fun(p(i))). d is the location such that b = fun(d).
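The bookkeeping in this step amounts to the following sketch (hypothetical helper, not the particleswarm implementation):

```python
def init_best(positions, fun):
    # p(i): each particle's best-seen position starts at its initial position
    pbest = [list(x) for x in positions]
    fbest = [fun(x) for x in pbest]
    # b is the best objective value over all particles, and d is the
    # location satisfying b = fun(d)
    b = min(fbest)
    d = list(pbest[fbest.index(b)])
    return pbest, fbest, b, d
```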
particleswarm initializes the neighborhood size N to minNeighborhoodSize = max(1,floor(SwarmSize*MinNeighborsFraction)).
particleswarm initializes the inertia W = max(InertiaRange), or if InertiaRange is negative, it sets W = min(InertiaRange).
particleswarm initializes the stall counter c = 0.
For convenience of notation, set the variables y1 = SelfAdjustmentWeight and y2 = SocialAdjustmentWeight, where SelfAdjustmentWeight and SocialAdjustmentWeight are options.
The algorithm updates the swarm as follows. For particle i, which is at position x(i):
Choose a random subset S of N particles other than i.
Find fbest(S), the best objective function among the neighbors, and g(S), the position of the neighbor with the best objective function.
For u1 and u2 uniformly (0,1) distributed random vectors of length nvars, update the velocity
v = W*v + y1*u1.*(p-x) + y2*u2.*(g-x).
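The velocity formula translates directly into a componentwise update; the sketch below (illustrative Python, not MATLAB code) draws a fresh U(0,1) number per component, matching the elementwise .* products:

```python
import random

def update_velocity(v, x, p, g, W, y1, y2):
    # v = W*v + y1*u1.*(p-x) + y2*u2.*(g-x), applied componentwise,
    # where each component of u1 and u2 is an independent U(0,1) draw
    return [W * v[k]
            + y1 * random.random() * (p[k] - x[k])
            + y2 * random.random() * (g[k] - x[k])
            for k in range(len(v))]
```

When p, x, and g coincide, only the inertia term W*v survives, which makes the role of W easy to see.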
This update uses a weighted sum of:
The previous velocity
The difference between the current position and the best position the particle has seen
The difference between the current position and the best position in the current neighborhood
Update the position
x = x + v.
Enforce the bounds. If any component of x is outside a bound, set it equal to that bound. For those components that were just set to a bound, if the velocity v of that component points outside the bound, set that velocity component to zero.
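The bound-enforcement rule can be sketched as follows (illustrative Python, with None marking a missing bound; not the particleswarm source):

```python
def enforce_bounds(x, v, lb, ub):
    # Clamp out-of-bound components to the bound; if the velocity of a
    # just-clamped component still points outside the bound, zero it
    for k in range(len(x)):
        if lb[k] is not None and x[k] < lb[k]:
            x[k] = lb[k]
            if v[k] < 0:
                v[k] = 0.0
        elif ub[k] is not None and x[k] > ub[k]:
            x[k] = ub[k]
            if v[k] > 0:
                v[k] = 0.0
    return x, v
```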
Evaluate the objective function
f = fun(x).
If f < fun(p), then set p = x. This step ensures p has the best position the particle has seen.
The next steps of the algorithm apply to parameters of the entire swarm, not the individual particles. Consider the smallest f = min(f(j)) among the particles j in the swarm.
If f < b, then set b = f and d = x. This step ensures b has the best objective function in the swarm, and d has the best location.
If, in the previous step, the best function value was lowered, then set flag = true. Otherwise, set flag = false. The value of flag is used in the next step.
Update the neighborhood. If flag = true:
Set c = max(0,c-1).
Set the neighborhood size N to minNeighborhoodSize.
If c < 2, then set W = 2*W.
If c > 5, then set W = W/2.
Ensure that W is in the bounds of the InertiaRange option.
If flag = false:
Set c = c+1.
Set N = min(N + minNeighborhoodSize,SwarmSize).
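The adaptive update of the stall counter, neighborhood size, and inertia can be sketched in Python. This is an illustrative sketch under the assumptions that flag means "the best objective value improved this iteration" and that an improvement resets N to the minimum neighborhood size; the function and argument names are hypothetical.

```python
def adapt_swarm_parameters(flag, c, N, W, min_nbr, swarm_size, inertia_range):
    # flag: whether the best objective function value was lowered this iteration
    lo, hi = inertia_range
    if flag:
        c = max(0, c - 1)
        N = min_nbr                  # assumed reset to the minimum neighborhood size
        if c < 2:
            W = 2 * W                # few recent stalls: explore more
        if c > 5:
            W = W / 2                # many recent stalls: exploit more
        W = min(max(W, lo), hi)      # keep W within InertiaRange
    else:
        c = c + 1
        N = min(N + min_nbr, swarm_size)
    return c, N, W
```

The two branches pull in opposite directions: improvement shrinks the neighborhood and can raise the inertia, while stalling grows the neighborhood so particles see more of the swarm.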
particleswarm iterates until it reaches
a stopping criterion.
|Stopping Option|Stopping Test|Exit Flag|
|FunctionTolerance and MaxStallIterations|Relative change in the best objective function value b over the last MaxStallIterations iterations is less than FunctionTolerance|1|
|MaxIterations|Number of iterations reaches MaxIterations|0|
|ObjectiveLimit|Best objective function value b is less than or equal to ObjectiveLimit|-3|
|MaxStallTime|Best objective function value b did not change in the last MaxStallTime seconds|-4|
|MaxTime|Function run time exceeds MaxTime seconds|-5|
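As a rough illustration of the first stopping test, here is one plausible form of the relative-change check; the exact formula particleswarm uses is not spelled out here, so treat this as an assumption:

```python
def stall_stop(best_history, max_stall_iterations, function_tolerance):
    # Compare the best value now with the best value max_stall_iterations
    # ago; stop when the relative change falls below the tolerance
    if len(best_history) <= max_stall_iterations:
        return False
    old = best_history[-(max_stall_iterations + 1)]
    new = best_history[-1]
    return abs(old - new) <= function_tolerance * max(1.0, abs(new))
```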
When particleswarm stops, it returns the corresponding exit flag, and it optionally calls a hybrid function after it exits.
References

[1] Kennedy, J., and R. Eberhart. "Particle Swarm Optimization." Proceedings of the IEEE International Conference on Neural Networks. Perth, Australia, 1995, pp. 1942–1945.

[2] Mezura-Montes, E., and C. A. Coello Coello. "Constraint-handling in nature-inspired numerical optimization: Past, present and future." Swarm and Evolutionary Computation. 2011, pp. 173–194.

[3] Pedersen, M. E. "Good Parameters for Particle Swarm Optimization." Luxembourg: Hvass Laboratories, 2010.