Ant System, AS, Ant Cycle.
The Ant System algorithm is an example of an Ant Colony Optimization method from the fields of Swarm Intelligence, Metaheuristics, and Computational Intelligence. Ant System was originally the term used to refer to a range of ant-based algorithms, where the specific algorithm implementation was referred to as Ant Cycle. The so-called Ant Cycle algorithm is now canonically referred to as Ant System. The Ant System algorithm is the baseline Ant Colony Optimization method for popular extensions such as Elite Ant System, Rank-based Ant System, Max-Min Ant System, and Ant Colony System.
The Ant System algorithm is inspired by the foraging behavior of ants, specifically the pheromone communication between ants regarding a good path between the colony and a food source in an environment. This mechanism is called stigmergy.
Ants initially wander randomly around their environment. Once food is located an ant will begin laying down pheromone in the environment. Numerous trips between the food and the colony are performed and if the same route is followed that leads to food then additional pheromone is laid down. Pheromone decays in the environment, so that older paths are less likely to be followed. Other ants may discover the same path to the food and in turn may follow it and also lay down pheromone. A positive feedback process routes more and more ants to productive paths that are in turn further refined through use.
The objective of the strategy is to exploit historic and heuristic information to construct candidate solutions and fold the information learned from constructing solutions back into the history. Solutions are constructed one discrete piece at a time in a probabilistic step-wise manner. The probability of selecting a component is determined by the heuristic contribution of the component to the overall cost of the solution and the quality of the solutions in which the component has historically been included. The history is updated in proportion to the quality of candidate solutions and is uniformly decreased, ensuring that the most recent and useful information is retained.
Algorithm (below) provides a pseudocode listing of the main Ant System algorithm for minimizing a cost function. The pheromone update process is described by a single equation that combines the contributions of all candidate solutions with a decay coefficient to determine the new pheromone value, as follows:
$\tau_{i,j} \leftarrow (1-\rho) \times \tau_{i,j} + \sum_{k=1}^m \Delta_{i,j}^k$ where $\tau_{i,j}$ represents the pheromone for the component (graph edge) ($i,j$), $\rho$ is the decay factor, $m$ is the number of ants, and $\sum_{k=1}^m \Delta_{i,j}^k$ is the sum of the $\frac{1}{S_{cost}}$ contributions (larger for lower-cost solutions) from those candidate solutions that include component $i,j$. The pseudocode listing shows this equation as an equivalent two-step process of decay followed by update, for simplicity.
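For illustration, the following Ruby fragment applies this update to a single edge: the existing pheromone value is decayed and a $\frac{1}{S_{cost}}$ deposit is added for each candidate solution whose tour uses that edge. It is a minimal sketch with illustrative names (updated_pheromone, rho) and is not part of the listing given later in this section.

# Sketch of the pheromone update for a single edge (i,j): decay the existing
# value, then add 1/S_cost for every candidate solution whose tour uses the edge.
def updated_pheromone(tau_ij, rho, costs_of_solutions_using_edge)
  deposit = costs_of_solutions_using_edge.inject(0.0) {|sum, cost| sum + (1.0 / cost)}
  (1.0 - rho) * tau_ij + deposit
end

# e.g. an edge with pheromone 0.02, decay 0.6, used by two tours of cost 8000 and 9000
puts updated_pheromone(0.02, 0.6, [8000.0, 9000.0]) # => approximately 0.00824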
The probabilistic step-wise construction of solutions makes use of both history (pheromone) and problem-specific heuristic information to incrementally construct a solution piece-by-piece. Each component can only be selected if it has not already been chosen (for most combinatorial problems), and for those components that can be selected from (given the current component $i$), their probability for selection is defined as:
$P_{i,j} \leftarrow \frac{\tau_{i,j}^{\alpha} \times \eta_{i,j}^{\beta}}{\sum_{k=1}^c \tau_{i,k}^{\alpha} \times \eta_{i,k}^{\beta}}$ where $\eta_{i,j}$ is the maximizing contribution to the overall score of selecting the component (such as $\frac{1.0}{distance_{i,j}}$ for the Traveling Salesman Problem), $\tau_{i,j}$ is the pheromone value for the component, $\alpha$ is the history coefficient, $\beta$ is the heuristic coefficient, and $c$ is the set of usable components.
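The following Ruby fragment sketches this selection rule as roulette-wheel sampling over the weights $\tau_{i,j}^{\alpha} \times \eta_{i,j}^{\beta}$ computed for the feasible next components. The names (select_component, tau, distances) are illustrative; the full listing later in this section implements the same idea with its own naming.

# Sketch of the selection rule: given the current city i, weight each feasible
# next city j by tau[i][j]**alpha * (1.0/distance(i,j))**beta and sample one
# in proportion to its weight (roulette-wheel selection).
def select_component(i, feasible, tau, distances, alpha, beta)
  weights = feasible.map {|j| (tau[i][j]**alpha) * ((1.0 / distances[i][j])**beta)}
  total = weights.inject(0.0) {|sum, w| sum + w}
  return feasible[rand(feasible.size)] if total == 0.0
  v = rand()
  feasible.each_with_index do |j, idx|
    v -= weights[idx] / total
    return j if v <= 0.0
  end
  return feasible.last
end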
Input: ProblemSize, $Population_{size}$, $m$, $\rho$, $\alpha$, $\beta$
Output: $P_{best}$
$S_{h}$ $\leftarrow$ CreateHeuristicSolution(ProblemSize)
$Pbest_{cost}$ $\leftarrow$ Cost($S_{h}$)
Pheromone $\leftarrow$ InitializePheromone($Pbest_{cost}$)
While ($\neg$StopCondition())
  Candidates $\leftarrow$ $\emptyset$
  For ($i=1$ To $m$)
    $S_{i}$ $\leftarrow$ ProbabilisticStepwiseConstruction(Pheromone, ProblemSize, $\alpha$, $\beta$)
    $Si_{cost}$ $\leftarrow$ Cost($S_{i}$)
    If ($Si_{cost}$ $\leq$ $Pbest_{cost}$)
      $Pbest_{cost}$ $\leftarrow$ $Si_{cost}$
      $P_{best}$ $\leftarrow$ $S_{i}$
    End
    Candidates $\leftarrow$ Candidates $\cup$ $S_{i}$
  End
  DecayPheromone(Pheromone, $\rho$)
  For ($S_{i}$ $\in$ Candidates)
    UpdatePheromone(Pheromone, $S_{i}$, $Si_{cost}$)
  End
End
Return ($P_{best}$)

Listing (below) provides an example of the Ant System algorithm implemented in the Ruby Programming Language. The algorithm is applied to the Berlin52 instance of the Traveling Salesman Problem (TSP), taken from the TSPLIB. The problem seeks a permutation of the order to visit cities (called a tour) that minimizes the total distance traveled. The optimal tour distance for the Berlin52 instance is 7542 units. Extensions to the implementation aimed at speed improvements may pre-calculate a distance matrix for all the cities in the problem, and pre-compute a probability matrix for choices during the probabilistic step-wise construction of tours; a sketch of the distance-matrix idea follows the listing.
def euc_2d(c1, c2)
  Math.sqrt((c1[0] - c2[0])**2.0 + (c1[1] - c2[1])**2.0).round
end

def cost(permutation, cities)
  distance = 0
  permutation.each_with_index do |c1, i|
    c2 = (i==permutation.size-1) ? permutation[0] : permutation[i+1]
    distance += euc_2d(cities[c1], cities[c2])
  end
  return distance
end

def random_permutation(cities)
  perm = Array.new(cities.size){|i| i}
  perm.each_index do |i|
    r = rand(perm.size-i) + i
    perm[r], perm[i] = perm[i], perm[r]
  end
  return perm
end

def initialise_pheromone_matrix(num_cities, naive_score)
  v = num_cities.to_f / naive_score
  return Array.new(num_cities){|i| Array.new(num_cities, v)}
end

def calculate_choices(cities, last_city, exclude, pheromone, c_heur, c_hist)
  choices = []
  cities.each_with_index do |coord, i|
    next if exclude.include?(i)
    prob = {:city=>i}
    prob[:history] = pheromone[last_city][i] ** c_hist
    prob[:distance] = euc_2d(cities[last_city], coord)
    prob[:heuristic] = (1.0/prob[:distance]) ** c_heur
    prob[:prob] = prob[:history] * prob[:heuristic]
    choices << prob
  end
  choices
end

def select_next_city(choices)
  sum = choices.inject(0.0){|sum,element| sum + element[:prob]}
  return choices[rand(choices.size)][:city] if sum == 0.0
  v = rand()
  choices.each_with_index do |choice, i|
    v -= (choice[:prob]/sum)
    return choice[:city] if v <= 0.0
  end
  return choices.last[:city]
end

def stepwise_const(cities, phero, c_heur, c_hist)
  perm = []
  perm << rand(cities.size)
  begin
    choices = calculate_choices(cities, perm.last, perm, phero, c_heur, c_hist)
    next_city = select_next_city(choices)
    perm << next_city
  end until perm.size == cities.size
  return perm
end

def decay_pheromone(pheromone, decay_factor)
  pheromone.each do |array|
    array.each_with_index do |p, i|
      array[i] = (1.0 - decay_factor) * p
    end
  end
end

def update_pheromone(pheromone, solutions)
  solutions.each do |other|
    other[:vector].each_with_index do |x, i|
      y = (i==other[:vector].size-1) ? other[:vector][0] : other[:vector][i+1]
      pheromone[x][y] += (1.0 / other[:cost])
      pheromone[y][x] += (1.0 / other[:cost])
    end
  end
end

def search(cities, max_it, num_ants, decay_factor, c_heur, c_hist)
  best = {:vector=>random_permutation(cities)}
  best[:cost] = cost(best[:vector], cities)
  pheromone = initialise_pheromone_matrix(cities.size, best[:cost])
  max_it.times do |iter|
    solutions = []
    num_ants.times do
      candidate = {}
      candidate[:vector] = stepwise_const(cities, pheromone, c_heur, c_hist)
      candidate[:cost] = cost(candidate[:vector], cities)
      best = candidate if candidate[:cost] < best[:cost]
      solutions << candidate
    end
    decay_pheromone(pheromone, decay_factor)
    update_pheromone(pheromone, solutions)
    puts " > iteration #{(iter+1)}, best=#{best[:cost]}"
  end
  return best
end

if __FILE__ == $0
  # problem configuration
  berlin52 = [[565,575],[25,185],[345,750],[945,685],[845,655],
   [880,660],[25,230],[525,1000],[580,1175],[650,1130],[1605,620],
   [1220,580],[1465,200],[1530,5],[845,680],[725,370],[145,665],
   [415,635],[510,875],[560,365],[300,465],[520,585],[480,415],
   [835,625],[975,580],[1215,245],[1320,315],[1250,400],[660,180],
   [410,250],[420,555],[575,665],[1150,1160],[700,580],[685,595],
   [685,610],[770,610],[795,645],[720,635],[760,650],[475,960],
   [95,260],[875,920],[700,500],[555,815],[830,485],[1170,65],
   [830,610],[605,625],[595,360],[1340,725],[1740,245]]
  # algorithm configuration
  max_it = 50
  num_ants = 30
  decay_factor = 0.6
  c_heur = 2.5
  c_hist = 1.0
  # execute the algorithm
  best = search(berlin52, max_it, num_ants, decay_factor, c_heur, c_hist)
  puts "Done. Best Solution: c=#{best[:cost]}, v=#{best[:vector].inspect}"
end
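One of the speed improvements mentioned above is pre-calculating the city-to-city distances. The fragment below is a minimal sketch of that extension (the helper name distance_matrix is illustrative and not part of the listing); it reuses the euc_2d function defined above and builds the matrix once so that tour construction and cost evaluation can index it instead of recomputing distances.

# Pre-compute all pairwise city distances once, so tour construction and
# cost evaluation can index a matrix instead of repeatedly calling euc_2d.
def distance_matrix(cities)
  Array.new(cities.size) do |i|
    Array.new(cities.size) {|j| euc_2d(cities[i], cities[j])}
  end
end

# Example usage: build the matrix once before search, then look up
# dists[x][y] wherever euc_2d(cities[x], cities[y]) would otherwise be called.
# dists = distance_matrix(berlin52)
# dists[0][1] # distance between the first and second city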
The Ant System was described by Dorigo, Maniezzo, and Colorni in an early technical report as a class of algorithms and was applied to a number of standard combinatorial optimization problems [Dorigo1991]. A series of technical reports at this time investigated the class of algorithms called Ant System and the specific implementation called Ant Cycle. This effort contributed to Dorigo's PhD thesis published in Italian [Dorigo1992]. The seminal publication investigating Ant System (with the implementation still referred to as Ant Cycle) was by Dorigo in 1996 [Dorigo1996].
The seminal book on Ant Colony Optimization in general with a detailed treatment of Ant System is "Ant Colony Optimization" by Dorigo and Stützle [Dorigo2004]. An earlier book, "Swarm Intelligence: From Natural to Artificial Systems" by Bonabeau, Dorigo, and Theraulaz, also provides an introduction to Swarm Intelligence with a detailed treatment of Ant System [Bonabeau1999].
[Bonabeau1999] | E. Bonabeau and M. Dorigo and G. Theraulaz, "Swarm Intelligence: From Natural to Artificial Systems", Oxford University Press US, 1999. |
[Dorigo1991] | M. Dorigo and V. Maniezzo and A. Colorni, "Positive Feedback as a Search Strategy", Dipartimento di Elettronica, Politecnico di Milano, 1991. |
[Dorigo1992] | M. Dorigo, "Optimization, Learning and Natural Algorithms (in Italian)", [PhD Thesis] Dipartimento di Elettronica, Politecnico di Milano, Milan, Italy, 1992. |
[Dorigo1996] | M. Dorigo, "The Ant System: Optimization by a colony of cooperating agents", IEEE Transactions on Systems, Man, and Cybernetics–Part B, 1996. |
[Dorigo2004] | M. Dorigo and T. Stützle, "Ant Colony Optimization", MIT Press, 2004. |