Adaptive Random Search, ARS, Adaptive Step Size Random Search, ASSRS, Variable Step-Size Random Search.
The Adaptive Random Search algorithm belongs to the general set of approaches known as Stochastic Optimization and Global Optimization. It is a direct search method in that it does not require derivatives to navigate the search space. Adaptive Random Search is an extension of the Random Search and Localized Random Search algorithms.
The Adaptive Random Search algorithm was designed to address the limitations of the fixed step size in the Localized Random Search algorithm. The strategy for Adaptive Random Search is to continually approximate the optimal step size required to reach the global optimum in the search space. This is achieved by trialling and adopting smaller or larger step sizes only if they result in an improvement in the search performance.
The strategy of the Adaptive Step Size Random Search algorithm (the specific technique reviewed here) is to trial a larger step in each iteration and adopt it if it results in an improvement. Very large step sizes are trialled in the same manner, although with a much lower frequency. This preference for large moves is intended to allow the technique to escape local optima. Smaller step sizes are adopted if no improvement is made for an extended period.
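To make this control rule concrete, the following is a minimal sketch of the step-size update decision, assuming the two trial steps have already been evaluated. The helper next_step_size and its parameters are illustrative only and are not part of the original specification; the complete procedure appears in the pseudocode and Ruby listing below.

# Illustrative sketch of the step-size control rule (hypothetical helper, not the book's listing).
def next_step_size(step_size, improved, big_step_won, stalled_iters,
                   max_stall, small_factor, large_factor)
  if improved
    # Keep the enlarged step only when the larger trial step gave the better result.
    return big_step_won ? step_size * large_factor : step_size
  elsif stalled_iters >= max_stall
    # No improvement for an extended period: reduce the step size.
    return step_size / small_factor
  end
  step_size
end

puts next_step_size(0.5, true, true, 0, 30, 1.3, 3.0)    # grows to 1.5
puts next_step_size(0.5, false, false, 30, 30, 1.3, 3.0) # shrinks to roughly 0.38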
Algorithm (below) provides a pseudocode listing of the Adaptive Random Search algorithm for minimizing a cost function, based on the specification for 'Adaptive Step-Size Random Search' by Schumer and Steiglitz [Schumer1968].

Input: $Iter_{max}$, $Problem_{size}$, SearchSpace, $StepSize_{factor}^{init}$, $StepSize_{factor}^{small}$, $StepSize_{factor}^{large}$, $StepSize_{factor}^{iter}$, $NoChange_{max}$
Output: $S$

$NoChange_{count} \leftarrow 0$
$StepSize_{i} \leftarrow$ InitializeStepSize(SearchSpace, $StepSize_{factor}^{init}$)
$S \leftarrow$ RandomSolution($Problem_{size}$, SearchSpace)
For ($i=0$ To $Iter_{max}$)
    $S_1 \leftarrow$ TakeStep(SearchSpace, $S$, $StepSize_{i}$)
    If ($i \bmod StepSize_{factor}^{iter} = 0$)
        $StepSize_{i}^{large} \leftarrow StepSize_{i} \times StepSize_{factor}^{large}$
    Else
        $StepSize_{i}^{large} \leftarrow StepSize_{i} \times StepSize_{factor}^{small}$
    End
    $S_2 \leftarrow$ TakeStep(SearchSpace, $S$, $StepSize_{i}^{large}$)
    If (Cost($S_1$) $\leq$ Cost($S$) || Cost($S_2$) $\leq$ Cost($S$))
        If (Cost($S_2$) $<$ Cost($S_1$))
            $StepSize_{i} \leftarrow StepSize_{i}^{large}$
            $S \leftarrow S_2$
        Else
            $S \leftarrow S_1$
        End
        $NoChange_{count} \leftarrow 0$
    Else
        $NoChange_{count} \leftarrow NoChange_{count} + 1$
        If ($NoChange_{count} > NoChange_{max}$)
            $NoChange_{count} \leftarrow 0$
            $StepSize_{i} \leftarrow StepSize_{i} / StepSize_{factor}^{small}$
        End
    End
End
Return ($S$)

Listing (below) provides an example of the Adaptive Random Search algorithm implemented in the Ruby Programming Language, based on the specification for 'Adaptive Step-Size Random Search' by Schumer and Steiglitz [Schumer1968]. In the example, the algorithm runs for a fixed number of iterations and returns the best candidate solution discovered. The example problem is an instance of continuous function optimization that seeks $\min f(x)$ where $f=\sum_{i=1}^n x_{i}^2$, $-5.0 < x_i < 5.0$ and $n=2$. The optimal solution for this basin function is $(v_0,\ldots,v_{n-1})=0.0$.
def objective_function(vector)
  # Sum of squares (basin) cost function to be minimized.
  return vector.inject(0) {|sum, x| sum + (x ** 2.0)}
end

def rand_in_bounds(min, max)
  return min + ((max-min) * rand())
end

def random_vector(minmax)
  return Array.new(minmax.size) do |i|
    rand_in_bounds(minmax[i][0], minmax[i][1])
  end
end

def take_step(minmax, current, step_size)
  # Sample a new position within step_size of the current position, clamped to the bounds.
  position = Array.new(current.size)
  position.size.times do |i|
    min = [minmax[i][0], current[i]-step_size].max
    max = [minmax[i][1], current[i]+step_size].min
    position[i] = rand_in_bounds(min, max)
  end
  return position
end

def large_step_size(iter, step_size, s_factor, l_factor, iter_mult)
  # Use the very large factor periodically, otherwise the small factor.
  return step_size * l_factor if iter>0 and iter.modulo(iter_mult) == 0
  return step_size * s_factor
end

def take_steps(bounds, current, step_size, big_stepsize)
  step, big_step = {}, {}
  step[:vector] = take_step(bounds, current[:vector], step_size)
  step[:cost] = objective_function(step[:vector])
  big_step[:vector] = take_step(bounds, current[:vector], big_stepsize)
  big_step[:cost] = objective_function(big_step[:vector])
  return step, big_step
end

def search(max_iter, bounds, init_factor, s_factor, l_factor, iter_mult, max_no_impr)
  step_size = (bounds[0][1]-bounds[0][0]) * init_factor
  current, count = {}, 0
  current[:vector] = random_vector(bounds)
  current[:cost] = objective_function(current[:vector])
  max_iter.times do |iter|
    big_stepsize = large_step_size(iter, step_size, s_factor, l_factor, iter_mult)
    step, big_step = take_steps(bounds, current, step_size, big_stepsize)
    if step[:cost] <= current[:cost] or big_step[:cost] <= current[:cost]
      # Adopt the larger step size only if it produced the better of the two trial steps.
      if big_step[:cost] <= step[:cost]
        step_size, current = big_stepsize, big_step
      else
        current = step
      end
      count = 0
    else
      # Reduce the step size after an extended period without improvement.
      count += 1
      count, step_size = 0, (step_size/s_factor) if count >= max_no_impr
    end
    puts " > iteration #{(iter+1)}, best=#{current[:cost]}"
  end
  return current
end

if __FILE__ == $0
  # problem configuration
  problem_size = 2
  bounds = Array.new(problem_size) {|i| [-5, +5]}
  # algorithm configuration
  max_iter = 1000
  init_factor = 0.05
  s_factor = 1.3
  l_factor = 3.0
  iter_mult = 10
  max_no_impr = 30
  # execute the algorithm
  best = search(max_iter, bounds, init_factor, s_factor, l_factor, iter_mult, max_no_impr)
  puts "Done. Best Solution: c=#{best[:cost]}, v=#{best[:vector].inspect}"
end
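The objective function and the bounds are the only problem-specific parts of the listing above. As a hypothetical usage example (the shifted sphere function and its parameter values below are assumptions, not part of the original listing), the same search routine can be reused on a different continuous problem by redefining objective_function and the bounds:

# Hypothetical reuse of the search routine on a shifted sphere function
# f(x) = sum((x_i - 1)^2), with the minimum at x_i = 1. Assumes the
# definitions from the listing above (search, take_step, etc.) are loaded.
def objective_function(vector)
  return vector.inject(0) {|sum, x| sum + ((x - 1.0) ** 2.0)}
end

bounds = Array.new(3) {|i| [-5.0, +5.0]}
best = search(1000, bounds, 0.05, 1.3, 3.0, 10, 30)
puts "Best: c=#{best[:cost]}, v=#{best[:vector].inspect}"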
Many works in the 1960s and 1970s experimented with variable step sizes for Random Search methods. Schumer and Steiglitz are commonly credited with the adaptive step size procedure, which they called 'Adaptive Step-Size Random Search' [Schumer1968]. Their approach modifies only the step size, based on an approximation of the optimal step size required to reach the global optimum. Kregting and White review adaptive random search methods and propose an approach called 'Adaptive Directional Random Search' that modifies both the algorithm's step size and direction in response to the cost function [Kregting1971].
White reviews extensions to Rastrigin's 'Creeping Random Search' [Rastrigin1963] (fixed step size) that draw step sizes stochastically from uniform and other probability distributions [White1971]. White also reviews works that propose dynamic control strategies for the step size, such as Karnopp [Karnopp1963], who proposes increases and decreases to the step size based on performance over very small numbers of trials. Schrack and Choit review random search methods that modify their step size in order to approximate optimal moves while searching, including the property of reversal [Schrack1976]. Masri et al. describe an adaptive random search strategy that alternates between periods of fixed and variable step sizes [Masri1980].
[Karnopp1963] | D. C. Karnopp, "Random search techniques for optimization problems", Automatica, 1963. |
[Kregting1971] | J. Kregting and R. C. White, "Adaptive random search", Technical Report TH-Report 71-E-24, Eindhoven University of Technology, Eindhoven, Netherlands, 1971. |
[Masri1980] | S. F. Masri and G. A. Bekey and F. B. Safford, "Global Optimization Algorithm Using Adaptive Random Search", Applied Mathematics and Computation, 1980. |
[Rastrigin1963] | L. A. Rastrigin, "The Convergence of the Random Search Method in the Extremal Control of a Many Parameter System", Automation and Remote Control, 1963. |
[Schrack1976] | G. Schrack and M. Choit, "Optimized relative step size random searches", Mathematical Programming, 1976. |
[Schumer1968] | M. Schumer and K. Steiglitz, "Adaptive step size random search", IEEE Transactions on Automatic Control, 1968. |
[White1971] | R. C. White, "A survey of random methods for parameter optimization", Simulation, 1971. |