Clever Algorithms: Nature-Inspired Programming Recipes

A book by Jason Brownlee

Particle Swarm Optimization

Particle Swarm Optimization, PSO.

Taxonomy

Particle Swarm Optimization belongs to the fields of Swarm Intelligence and Collective Intelligence, which are themselves sub-fields of Computational Intelligence. Particle Swarm Optimization is related to other Swarm Intelligence algorithms such as Ant Colony Optimization, and it is the baseline for many variations, too numerous to list.

Inspiration

Particle Swarm Optimization is inspired by the social foraging behavior of some animals, such as the flocking behavior of birds and the schooling behavior of fish.

Metaphor

Particles in the swarm fly through an environment following the fitter members of the swarm and generally biasing their movement toward historically good areas of their environment.

Strategy

The goal of the algorithm is to have all the particles locate the optima in a multi-dimensional hyper-volume. This is achieved by assigning initially random positions to all particles in the space and small initial random velocities. The algorithm is executed like a simulation, advancing the position of each particle in turn based on its velocity, the best known global position in the problem space, and the best position known to the particle itself. The objective function is sampled after each position update. Over time, through a combination of exploration and exploitation of known good positions in the search space, the particles cluster or converge together around an optimum, or several optima.

Procedure

The Particle Swarm Optimization algorithm comprises a collection of particles that move around the search space influenced by their own best past location and the best past location of the whole swarm or a close neighbor. Each iteration, a particle's velocity is updated using:

$v_{i}(t+1) = v_{i}(t) + \big( c_1 \times rand() \times (p_{i}^{best} - p_{i}(t)) \big) + \big( c_2 \times rand() \times (p_{gbest} - p_{i}(t)) \big)$

where $v_{i}(t+1)$ is the new velocity for the $i^{th}$ particle, $c_1$ and $c_2$ are the weighting coefficients for the personal best and global best positions respectively, $p_{i}(t)$ is the $i^{th}$ particle's position at time $t$, $p_{i}^{best}$ is the $i^{th}$ particle's best known position, and $p_{gbest}$ is the best position known to the swarm. The $rand()$ function generates a uniformly distributed random number $\in [0,1]$. Variants on this update equation consider best positions within a particle's local neighborhood at time $t$.

A particle's position is then updated using the new velocity:

$p_{i}(t+1) = p_{i}(t) + v_{i}(t+1)$
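
As a concrete illustration, the following snippet (a minimal sketch, separate from the full listing later in this section; all values are illustrative) applies both update rules to a single one-dimensional particle:

c1, c2 = 2.0, 2.0            # weights for the personal and global best
position, velocity = 3.0, 0.5
p_best, g_best = 1.0, -2.0   # illustrative personal and global best positions

# Velocity update: pull toward the personal best and the global best,
# each scaled by a uniformly random number in [0,1].
velocity += (c1 * rand() * (p_best - position)) +
            (c2 * rand() * (g_best - position))

# Position update: move the particle using its new velocity.
position += velocity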

Algorithm (below) provides a pseudocode listing of the Particle Swarm Optimization algorithm for minimizing a cost function.

Input: ProblemSize, $Population_{size}$
Output: $P_{g\_best}$
Population $\leftarrow$ $\emptyset$
$P_{g\_best}$ $\leftarrow$ $\emptyset$
For ($i=1$ To $Population_{size}$)
    $P_{velocity}$ $\leftarrow$ RandomVelocity()
    $P_{position}$ $\leftarrow$ RandomPosition(ProblemSize)
    $P_{p\_best}$ $\leftarrow$ $P_{position}$
    If (Cost($P_{p\_best}$) $\leq$ Cost($P_{g\_best}$))
        $P_{g\_best}$ $\leftarrow$ $P_{p\_best}$
    End
End
While ($\neg$StopCondition())
    For ($P$ $\in$ Population)
        $P_{velocity}$ $\leftarrow$ UpdateVelocity($P_{velocity}$, $P_{g\_best}$, $P_{p\_best}$)
        $P_{position}$ $\leftarrow$ UpdatePosition($P_{position}$, $P_{velocity}$)
        If (Cost($P_{position}$) $\leq$ Cost($P_{p\_best}$))
            $P_{p\_best}$ $\leftarrow$ $P_{position}$
            If (Cost($P_{p\_best}$) $\leq$ Cost($P_{g\_best}$))
                $P_{g\_best}$ $\leftarrow$ $P_{p\_best}$
            End
        End
    End
End
Return ($P_{g\_best}$)
Pseudocode for PSO.

Heuristics

- The swarm size is typically a few tens of particles (the listing below uses 50).
- The maximum velocity per dimension is commonly clamped so particles cannot make arbitrarily large moves; it is often set relative to the size of the domain.
- The learning coefficients $c_1$ and $c_2$ (the biases toward the personal and global best positions) are commonly set to 2.0 for both.
- Particles that exceed the bounds of the search space may be reflected back into the domain (as in the listing below), penalized, or wrapped around to the opposite edge.
- An inertia (momentum) coefficient can be introduced to dampen the change in velocity over time.

Code Listing

Listing (below) provides an example of the Particle Swarm Optimization algorithm implemented in the Ruby Programming Language. The demonstration problem is an instance of continuous function optimization that seeks $\min f(x)$ where $f=\sum_{i=1}^n x_{i}^2$, $-5.0\leq x_i \leq 5.0$ and $n=2$. The optimal solution for this basin function is $(x_1,\ldots,x_n)=(0.0,\ldots,0.0)$. The algorithm is a conservative version of Particle Swarm Optimization based on the seminal papers. The implementation limits the velocity to a pre-defined maximum, and bounds particles to the search space, reflecting their movement and reversing their velocity if the bounds of the space are exceeded. Particles are influenced by the best position found by the swarm as well as their own personal best position. Natural extensions may consider limiting velocity with an inertia coefficient and including a neighborhood function for the particles.

# Cost: sum of squares of the vector components (the basin function).
def objective_function(vector)
  return vector.inject(0.0) {|sum, x| sum + (x ** 2.0)}
end

# Draw a uniformly random vector within the given per-dimension bounds.
def random_vector(minmax)
  return Array.new(minmax.size) do |i|
    minmax[i][0] + ((minmax[i][1] - minmax[i][0]) * rand())
  end
end

# Create a particle with a random position and velocity; its personal
# best starts as a copy of its initial position.
def create_particle(search_space, vel_space)
  particle = {}
  particle[:position] = random_vector(search_space)
  particle[:cost] = objective_function(particle[:position])
  particle[:b_position] = Array.new(particle[:position])
  particle[:b_cost] = particle[:cost]
  particle[:velocity] = random_vector(vel_space)
  return particle
end

# Return the lowest-cost particle as the global best, keeping the
# previous global best if it is still superior.
def get_global_best(population, current_best=nil)
  population.sort!{|x,y| x[:cost] <=> y[:cost]}
  best = population.first
  if current_best.nil? or best[:cost] <= current_best[:cost]
    current_best = {}
    current_best[:position] = Array.new(best[:position])
    current_best[:cost] = best[:cost]
  end
  return current_best
end

# Apply the velocity update equation, clamping each component to the
# range [-max_v, max_v].
def update_velocity(particle, gbest, max_v, c1, c2)
  particle[:velocity].each_with_index do |v,i|
    v1 = c1 * rand() * (particle[:b_position][i] - particle[:position][i])
    v2 = c2 * rand() * (gbest[:position][i] - particle[:position][i])
    particle[:velocity][i] = v + v1 + v2
    particle[:velocity][i] = max_v if particle[:velocity][i] > max_v
    particle[:velocity][i] = -max_v if particle[:velocity][i] < -max_v
  end
end

# Move the particle by its velocity, reflecting its position back into
# the search space and reversing its velocity if a bound is exceeded.
def update_position(part, bounds)
  part[:position].each_with_index do |v,i|
    part[:position][i] = v + part[:velocity][i]
    if part[:position][i] > bounds[i][1]
      part[:position][i]=bounds[i][1]-(part[:position][i]-bounds[i][1]).abs
      part[:velocity][i] *= -1.0
    elsif part[:position][i] < bounds[i][0]
      part[:position][i]=bounds[i][0]+(part[:position][i]-bounds[i][0]).abs
      part[:velocity][i] *= -1.0
    end
  end
end

# Update the particle's personal best if the current cost is no worse.
def update_best_position(particle)
  return if particle[:cost] > particle[:b_cost]
  particle[:b_cost] = particle[:cost]
  particle[:b_position] = Array.new(particle[:position])
end

# Run the PSO simulation for max_gens generations, returning the best
# solution found by the swarm.
def search(max_gens, search_space, vel_space, pop_size, max_vel, c1, c2)
  pop = Array.new(pop_size) {create_particle(search_space, vel_space)}
  gbest = get_global_best(pop)
  max_gens.times do |gen|
    pop.each do |particle|
      update_velocity(particle, gbest, max_vel, c1, c2)
      update_position(particle, search_space)
      particle[:cost] = objective_function(particle[:position])
      update_best_position(particle)
    end
    gbest = get_global_best(pop, gbest)
    puts " > gen #{gen+1}, best cost=#{gbest[:cost]}"
  end
  return gbest
end

if __FILE__ == $0
  # problem configuration
  problem_size = 2
  search_space = Array.new(problem_size) { [-5, 5] }
  # algorithm configuration
  vel_space = Array.new(problem_size) { [-1, 1] }
  max_gens = 100
  pop_size = 50
  max_vel = 100.0  # velocity clamp; loose relative to the [-5,5] domain
  c1, c2 = 2.0, 2.0
  # execute the algorithm
  best = search(max_gens, search_space, vel_space, pop_size, max_vel, c1, c2)
  puts "done! Solution: f=#{best[:cost]}, s=#{best[:position].inspect}"
end
Particle Swarm Optimization in Ruby
Download: pso.rb.
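
The listing is self-contained: saved as pso.rb, it can be run directly with ruby pso.rb, printing the best cost found at each generation. The search function can also be reused on other instances; the sketch below (a minimal example with illustrative, untuned parameter values) assumes the listing has been saved as pso.rb in the same directory.

require_relative 'pso'  # load the functions from the listing above

# Reuse the search function on a 5-dimensional instance of the same
# basin function: 200 generations, 50 particles, velocity clamp 1.0,
# c1 = c2 = 2.0.
search_space = Array.new(5) { [-5, 5] }
vel_space    = Array.new(5) { [-1, 1] }
best = search(200, search_space, vel_space, 50, 1.0, 2.0, 2.0)
puts "f=#{best[:cost]}"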

References

Primary Sources

Particle Swarm Optimization was described as a stochastic global optimization method for continuous functions in 1995 by Eberhart and Kennedy [Eberhart1995] [Kennedy1995]. The method was motivated as an optimization technique loosely based on the flocking behavioral models of Reynolds [Reynolds1987]. Early extensions included the introduction of an inertia coefficient [Shi1998] and Kennedy's study of social topologies in the swarm [Kennedy1999].

Learn More

Poli, Kennedy, and Blackwell provide a modern overview of the field of PSO with detailed coverage of extensions to the baseline technique [Poli2007]. Poli provides a meta-analysis of PSO publications that focus on the application of the technique, giving a systematic breakdown of application areas [Poli2008a]. An excellent book on Swarm Intelligence in general, with detailed coverage of Particle Swarm Optimization, is "Swarm Intelligence" by Kennedy, Eberhart, and Shi [Kennedy2001].

Bibliography

[Eberhart1995] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory", in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995.
[Kennedy1995] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", in Proceedings of the IEEE International Conference on Neural Networks, 1995.
[Kennedy1999] J. Kennedy, "Small Worlds and Mega-Minds: Effects of Neighborhood Topology on Particle Swarm Performance", in Proceedings of the 1999 Congress on Evolutionary Computation, 1999.
[Kennedy2001] J. Kennedy, R. C. Eberhart, and Y. Shi, "Swarm Intelligence", Morgan Kaufmann, 2001.
[Poli2007] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization: An overview", Swarm Intelligence, 2007.
[Poli2008a] R. Poli, "Analysis of the publications on the applications of particle swarm optimisation", Journal of Artificial Evolution and Applications, 2008.
[Reynolds1987] C. W. Reynolds, "Flocks, herds and schools: A distributed behavioral model", in Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, 1987.
[Shi1998] Y. Shi and R. C. Eberhart, "A Modified Particle Swarm Optimizer", in Proceedings of the IEEE International Conference on Evolutionary Computation, 1998.


