Clever Algorithms: Nature-Inspired Programming Recipes

A book by Jason Brownlee

Evolution Strategies

Evolution Strategies, Evolution Strategy, Evolutionary Strategies, ES.

Taxonomy

Evolution Strategies is a global optimization algorithm and is an instance of an Evolutionary Algorithm from the field of Evolutionary Computation. Evolution Strategies is a sibling technique to other Evolutionary Algorithms such as Genetic Algorithms, Genetic Programming, Learning Classifier Systems, and Evolutionary Programming. A popular descendant of the Evolution Strategies algorithm is the Covariance Matrix Adaptation Evolution Strategy (CMA-ES).

Inspiration

Evolution Strategies is inspired by the theory of evolution by means of natural selection. Specifically, the technique is inspired by the macro-level or species-level process of evolution (phenotype, heredity, variation) and is not concerned with the genetic mechanisms of evolution (genome, chromosomes, genes, alleles).

Strategy

The objective of the Evolution Strategies algorithm is to maximize the suitability of a collection of candidate solutions in the context of an objective function from a domain. This objective was classically achieved through the adoption of dynamic variation, a surrogate for descent with modification, where the amount of variation was adapted dynamically with performance-based heuristics. Contemporary approaches co-adapt parameters that control the amount and bias of variation with the candidate solutions.
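A classical example of such a performance-based heuristic is the 1/5th success rule: if more than one fifth of recent mutations improve upon the parent, the mutation step size is increased, otherwise it is decreased. The following Ruby sketch illustrates the rule in a simple single-parent loop. It reuses the objective_function, random_vector, and random_gaussian helpers from the listing in the Code Listing section; the 0.85 adjustment factor and the 20-iteration window are illustrative choices rather than prescribed constants, and bound handling is omitted for brevity.

# Sketch of the 1/5th success rule in a single-parent (1+1)-style loop.
# Assumes objective_function, random_vector and random_gaussian as defined
# in the full listing below; 0.85 and the 20-iteration window are
# illustrative choices.
def one_plus_one_es(search_space, max_iters=500, sigma=1.0, window=20)
  parent = random_vector(search_space)
  parent_cost = objective_function(parent)
  successes = 0
  max_iters.times do |iter|
    # mutate every variable with the current global step size
    child = Array.new(parent.size) {|i| parent[i] + sigma * random_gaussian()}
    child_cost = objective_function(child)
    if child_cost < parent_cost
      parent, parent_cost = child, child_cost
      successes += 1
    end
    # every 'window' iterations, adapt sigma based on the success rate
    if (iter+1) % window == 0
      rate = successes.to_f / window
      sigma = (rate > 1.0/5.0) ? sigma / 0.85 : sigma * 0.85
      successes = 0
    end
  end
  return {:vector=>parent, :fitness=>parent_cost}
end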

Procedure

Instances of Evolution Strategies algorithms may be concisely described with a custom terminology in the form $(\mu,\lambda)$-ES, where $\mu$ is the number of candidate solutions in the parent generation, and $\lambda$ is the number of candidate solutions generated from the parent generation. In this configuration, the best $\mu$ of the children are kept, which requires $\lambda$ to be greater than or equal to $\mu$. In addition to the so-called comma-selection Evolution Strategies algorithm, a plus-selection variation may be defined, $(\mu+\lambda)$-ES, where the best members of the union of the $\mu$ parents and $\lambda$ children compete based on objective fitness for a position in the next generation. The simplest configuration is the $(1+1)$-ES, which is a type of greedy hill climbing algorithm. Algorithm (below) provides a pseudocode listing of the $(\mu,\lambda)$-ES algorithm for minimizing a cost function. The algorithm shows candidate solutions that co-adapt their own strategy parameters, which influence the amount of mutation applied to a candidate solution's descendants.

Input: $\mu$, $\lambda$, ProblemSize
Output: $S_{best}$
Population $\leftarrow$ InitializePopulation{$\mu$, ProblemSize}
EvaluatePopulation{Population}
$S_{best}$ $\leftarrow$ GetBest{Population, 1}
While ($\neg$StopCondition())
    Children $\leftarrow \emptyset$
    For ($i=0$ To $\lambda$)
        $Parent_{i}$ $\leftarrow$ GetParent{Population, $i$}
        $S_{i}$ $\leftarrow \emptyset$
        $S_{i_{problem}}$ $\leftarrow$ Mutate{$Parent_{i_{problem}}$, $Parent_{i_{strategy}}$}
        $S_{i_{strategy}}$ $\leftarrow$ Mutate{$Parent_{i_{strategy}}$}
        Children $\leftarrow$ Children $\cup$ $S_{i}$
    End
    EvaluatePopulation{Children}
    $S_{best}$ $\leftarrow$ GetBest{Children $+$ $S_{best}$, 1}
    Population $\leftarrow$ SelectBest{Population, Children, $\mu$}
End
Return ($S_{best}$)
Pseudocode for $(\mu,\lambda)$ Evolution Strategies.
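
The practical difference between the two schemes is confined to the survivor selection step: comma-selection ranks the children alone, while plus-selection ranks the union of parents and children. A minimal Ruby sketch, assuming each candidate solution is a Hash with a :fitness key as in the listing in the Code Listing section:

# Survivor selection for the two schemes; a sketch assuming candidate
# solutions are Hashes with a :fitness key, as in the full listing below.
def select_comma(children, mu)
  # (mu,lambda)-ES: parents are discarded, only children compete
  children.sort_by{|c| c[:fitness]}.first(mu)
end

def select_plus(population, children, mu)
  # (mu+lambda)-ES: parents and children compete together
  (population + children).sort_by{|c| c[:fitness]}.first(mu)
end

Note that although the pseudocode above is labelled $(\mu,\lambda)$, the Ruby listing in the Code Listing section ranks the union of parents and children, making it a plus-selection configuration.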

Heuristics

Code Listing

Listing (below) provides an example of the Evolution Strategies algorithm implemented in the Ruby Programming Language. The demonstration problem is an instance of a continuous function optimization that seeks $\min f(x)$ where $f=\sum_{i=1}^n x_{i}^2$, $-5.0\leq x_i \leq 5.0$ and $n=2$. The optimal solution for this basin function is $(v_0,\ldots,v_{n-1})=0.0$. The algorithm is an implementation of Evolution Strategies based on the simple version described by Bäck and Schwefel [Back1993b], which was also used as the basis of a detailed empirical study [Yao1997]. The algorithm is a $(30+20)$-ES that adapts both the problem and strategy (standard deviation) variables. More contemporary implementations may modify the strategy variables differently, and include an additional set of adapted strategy parameters to influence the direction of mutation (see [Rudolph2000] for a concise description).
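In the listing, the strategy parameters (standard deviations) are updated with a log-normal self-adaptation rule of the form $\sigma_i' = \sigma_i \cdot \exp(\tau' \cdot N(0,1) + \tau \cdot N_i(0,1))$, where $N(0,1)$ denotes a standard Gaussian sample and the learning rates are set proportional to $\frac{1}{\sqrt{2n}}$ and $\frac{1}{\sqrt{2\sqrt{n}}}$ for an $n$-dimensional problem; each problem variable is then perturbed by Gaussian noise scaled by its associated standard deviation, $x_i' = x_i + \sigma_i \cdot N_i(0,1)$, and truncated to the search bounds. (In the canonical formulation the global term $\tau' \cdot N(0,1)$ is drawn once per candidate solution; the mutate_strategy function below draws both Gaussian terms per coordinate, a minor simplification.)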

def objective_function(vector)
  return vector.inject(0.0) {|sum, x| sum +  (x ** 2.0)}
end

def random_vector(minmax)
  return Array.new(minmax.size) do |i|
    minmax[i][0] + ((minmax[i][1] - minmax[i][0]) * rand())
  end
end

def random_gaussian(mean=0.0, stdev=1.0)
  u1 = u2 = w = 0
  begin
    u1 = 2 * rand() - 1
    u2 = 2 * rand() - 1
    w = u1 * u1 + u2 * u2
  end while w >= 1
  w = Math.sqrt((-2.0 * Math.log(w)) / w)
  return mean + (u2 * w) * stdev
end

def mutate_problem(vector, stdevs, search_space)
  child = Array.new(vector.size)
  vector.each_with_index do |v, i|
    child[i] = v + stdevs[i] * random_gaussian()
    child[i] = search_space[i][0] if child[i] < search_space[i][0]
    child[i] = search_space[i][1] if child[i] > search_space[i][1]
  end
  return child
end

def mutate_strategy(stdevs)
  tau = Math.sqrt(2.0*stdevs.size.to_f)**-1.0
  tau_p = Math.sqrt(2.0*Math.sqrt(stdevs.size.to_f))**-1.0
  child = Array.new(stdevs.size) do |i|
    stdevs[i] * Math.exp(tau_p*random_gaussian() + tau*random_gaussian())
  end
  return child
end

def mutate(par, minmax)
  child = {}
  child[:vector] = mutate_problem(par[:vector], par[:strategy], minmax)
  child[:strategy] = mutate_strategy(par[:strategy])
  return child
end

def init_population(minmax, pop_size)
  strategy = Array.new(minmax.size) do |i|
    [0,  (minmax[i][1]-minmax[i][0]) * 0.05]
  end
  pop = Array.new(pop_size) { Hash.new }
  pop.each_index do |i|
    pop[i][:vector] = random_vector(minmax)
    pop[i][:strategy] = random_vector(strategy)
  end
  pop.each{|c| c[:fitness] = objective_function(c[:vector])}
  return pop
end

def search(max_gens, search_space, pop_size, num_children)
  population = init_population(search_space, pop_size)
  best = population.sort{|x,y| x[:fitness] <=> y[:fitness]}.first
  max_gens.times do |gen|
    children = Array.new(num_children) do |i|
      mutate(population[i], search_space)
    end
    children.each{|c| c[:fitness] = objective_function(c[:vector])}
    union = children+population
    union.sort!{|x,y| x[:fitness] <=> y[:fitness]}
    best = union.first if union.first[:fitness] < best[:fitness]
    population = union.first(pop_size)
    puts " > gen #{gen}, fitness=#{best[:fitness]}"
  end
  return best
end

if __FILE__ == $0
  # problem configuration
  problem_size = 2
  search_space = Array.new(problem_size) {|i| [-5, +5]}
  # algorithm configuration
  max_gens = 100
  pop_size = 30
  num_children = 20
  # execute the algorithm
  best = search(max_gens, search_space, pop_size, num_children)
  puts "done! Solution: f=#{best[:fitness]}, s=#{best[:vector].inspect}"
end
Evolution Strategies in Ruby
Download: evolution_strategies.rb.
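The listing can be run directly with the Ruby interpreter (for example, ruby evolution_strategies.rb); each generation prints the best cost found so far, and the final line reports the best solution vector and its cost. Because survivors are drawn from the union of parents and children, the configuration behaves as a $(30+20)$-ES; a pure comma-selection variant would additionally require $\lambda \geq \mu$, that is, at least as many children as parents per generation.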

References

Primary Sources

Evolution Strategies was developed by three students (Bienert, Rechenberg, Schwefel) at the Technical University of Berlin in 1964 in an effort to robotically optimize an aerodynamic design problem. The seminal work in Evolution Strategies was Rechenberg's PhD thesis [Rechenberg1971], which was later published as a book [Rechenberg1973], both in German. Many technical reports and papers were published by Schwefel and Rechenberg, although the seminal paper published in English was by Klockgether and Schwefel on the two-phase nozzle design problem [Klockgether1970].

Learn More

Schwefel published his PhD dissertation [Schwefel1975] not long after Rechenberg, which was also published as a book [Schwefel1977], both in German. Schwefel's book was later translated into English and represents a classical reference for the technique [Schwefel1981]. Bäck et al. provide a classical introduction to the technique, covering the history, development of the algorithm, and the steps that led it to where it was in 1991 [Back1991]. Beyer and Schwefel provide a contemporary introduction to the field that includes a detailed history of the approach, the developments and improvements since its inception, and an overview of the theoretical findings that have been made [Beyer2002].

Bibliography

[Back1991] T. Bäck and F. Hoffmeister and H.-P. Schwefel, "A survey of evolution strategies", in Proceedings of the Fourth International Conference on Genetic Algorithms, 1991.
[Back1993b] T. Bäck and H.-P. Schwefel, "An Overview of Evolutionary Algorithms for Parameter Optimization", Evolutionary Computation, 1993.
[Beyer2002] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies: A comprehensive introduction", Natural Computing: an international journal, 2002.
[Klockgether1970] J. Klockgether and H.-P. Schwefel, "Two-phase nozzle and hollow core jet experiments", in Proceedings of the Eleventh Symposium on Engineering Aspects of Magnetohydrodynamics, 1970.
[Rechenberg1971] I. Rechenberg, "Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution", [PhD Thesis] Technical University of Berlin, Department of Process Engineering, 1971.
[Rechenberg1973] I. Rechenberg, "Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution", Frommann-Holzboog Verlag, 1973.
[Rudolph2000] G. Rudolph, "9: Evolution Strategies", in Evolutionary Computation 1: Basic Algorithms and Operators, pages 81–88, IoP Press, 2000.
[Schwefel1975] H.-P. Schwefel, "Evolutionsstrategie und numerische Optimierung", [PhD Thesis] Technical University of Berlin, Department of Process Engineering, 1975.
[Schwefel1977] H.-P. Schwefel, "Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie", Birkhäuser, 1977.
[Schwefel1981] H.-P. Schwefel, "Numerical Optimization of Computer Models", John Wiley & Sons, 1981.
[Yao1997] X. Yao and Y. Liu, "Fast Evolution Strategies", in Proceedings of the 6th International Conference on Evolutionary Programming VI, 1997.


