Ant Colony System, ACS, Ant-Q.
The Ant Colony System algorithm is an example of an Ant Colony Optimization method from the fields of Swarm Intelligence, Metaheuristics, and Computational Intelligence. Ant Colony System is an extension of the Ant System algorithm and is related to other Ant Colony Optimization methods such as Elite Ant System and Rank-based Ant System.
The Ant Colony System algorithm is inspired by the foraging behavior of ants, specifically the pheromone communication between ants regarding a good path between the colony and a food source in an environment. This mechanism is called stigmergy.
Ants initially wander randomly around their environment. Once food is located, an ant begins laying down pheromone in the environment. Ants make numerous trips between the food and the colony, and if they follow the same route to the food, additional pheromone is laid down. Pheromone decays in the environment, so that older paths are less likely to be followed. Other ants may discover the same path to the food and in turn may follow it and also lay down pheromone. A positive feedback process routes more and more ants to productive paths, which are in turn further refined through use.
The objective of the strategy is to exploit historic and heuristic information to construct candidate solutions and fold the information learned from constructing solutions back into the history. Solutions are constructed one discrete piece at a time in a probabilistic step-wise manner. The probability of selecting a component is determined by the heuristic contribution of the component to the overall cost of the solution and the quality of the solutions in which the component has historically been included. History is updated in proportion to the quality of the best known solution and is decreased in proportion to the usage of discrete solution components.
Algorithm (below) provides a pseudocode listing of the main Ant Colony System algorithm for minimizing a cost function. The probabilistic step-wise construction of solutions makes use of both history (pheromone) and problem-specific heuristic information to incrementally construct a solution piece-by-piece. Each component can only be selected if it has not already been chosen (for most combinatorial problems), and for those components that can be selected given the current component $i$, their probability of selection is defined as:
$P_{i,j} \leftarrow \frac{\tau_{i,j}^{\alpha} \times \eta_{i,j}^{\beta}}{\sum_{k=1}^c \tau_{i,k}^{\alpha} \times \eta_{i,k}^{\beta}}$

where $\eta_{i,j}$ is the maximizing contribution to the overall score of selecting the component (such as $\frac{1.0}{distance_{i,j}}$ for the Traveling Salesman Problem), $\beta$ is the heuristic coefficient, $\tau_{i,j}$ is the pheromone value for the component, $\alpha$ is the history coefficient (commonly fixed at 1.0), and $c$ is the set of usable components. A greediness factor ($q_0$) determines when to use the above probabilistic component selection and when to greedily select the best possible component.
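As a minimal sketch of this selection rule for the TSP (assuming a pheromone matrix tau, a pre-computed distance matrix dist, and the hypothetical helper name select_next_city; the complete implementation appears in the listing below):

def select_next_city(i, allowed, tau, dist, alpha, beta, q0)
  # Score each candidate city j by pheromone^alpha * (1/distance)^beta.
  scores = allowed.map do |j|
    [j, (tau[i][j]**alpha) * ((1.0 / dist[i][j])**beta)]
  end
  if rand() <= q0
    # Greedy branch: take the best-scoring component outright.
    scores.max_by { |_, s| s }.first
  else
    # Probabilistic branch: sample a component in proportion to its score.
    total = scores.inject(0.0) { |sum, (_, s)| sum + s }
    v = rand() * total
    scores.each do |j, s|
      v -= s
      return j if v <= 0.0
    end
    scores.last.first
  end
end

With $q_0 = 0.9$ (the value used as c_greed in the listing below), the greedy branch is taken roughly 90% of the time.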
A local pheromone update is performed for each solution that is constructed to dissuade subsequent solutions from using the same components in the same order, as follows:
$\tau_{i,j} \leftarrow (1-\sigma) \times \tau_{i,j} + \sigma \times \tau_{i,j}^{0}$

where $\tau_{i,j}$ represents the pheromone for the component (graph edge) ($i,j$), $\sigma$ is the local pheromone factor, and $\tau_{i,j}^{0}$ is the initial pheromone value.
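A direct transcription of this update for a single edge (a sketch assuming a pheromone matrix tau, local factor sigma, and initial value tau0; the listing below performs the same update in local_update_pheromone):

def local_update(tau, i, j, sigma, tau0)
  # Blend the edge's pheromone back toward its initial value.
  tau[i][j] = (1.0 - sigma) * tau[i][j] + sigma * tau0
  tau[j][i] = tau[i][j] # keep the matrix symmetric for an undirected TSP
end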
At the end of each iteration, the pheromone is updated and decayed using the best candidate solution found thus far (or the best candidate solution found for the iteration), as follows:
$\tau_{i,j} \leftarrow (1-\rho) \times \tau_{i,j} + \rho \times \Delta\tau_{i,j}$

where $\tau_{i,j}$ represents the pheromone for the component (graph edge) ($i,j$), $\rho$ is the decay factor, and $\Delta\tau_{i,j}$ is the inverse cost of the best solution found so far ($\frac{1.0}{Pbest_{cost}}$) if the component ($i,j$) is used in the globally best known solution, otherwise it is 0.
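A sketch of this global update (assuming a pheromone matrix tau, decay factor rho, and the hypothetical names best_tour and best_cost for the best-known tour and its cost; the listing below performs the same update in global_update_pheromone):

def global_update(tau, best_tour, best_cost, rho)
  best_tour.each_with_index do |i, idx|
    j = best_tour[(idx + 1) % best_tour.size] # wrap around to close the tour
    # Decay the edge and deposit pheromone proportional to 1/best_cost.
    tau[i][j] = (1.0 - rho) * tau[i][j] + rho * (1.0 / best_cost)
    tau[j][i] = tau[i][j]
  end
end

In this sketch (as in the listing), only edges on the best-known tour are updated; all other edges are modified only by the local update.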
Input: ProblemSize, $Population_{size}$, $m$, $\rho$, $\beta$, $\sigma$, $q_0$
Output: $P_{best}$
$P_{best}$ $\leftarrow$ CreateHeuristicSolution(ProblemSize)
$Pbest_{cost}$ $\leftarrow$ Cost($P_{best}$)
$Pheromone_{init}$ $\leftarrow$ $\frac{1.0}{ProblemSize \times Pbest_{cost}}$
Pheromone $\leftarrow$ InitializePheromone($Pheromone_{init}$)
While ($\neg$StopCondition())
    For ($i=1$ To $m$)
        $S_{i}$ $\leftarrow$ ConstructSolution(Pheromone, ProblemSize, $\beta$, $q_0$)
        $Si_{cost}$ $\leftarrow$ Cost($S_{i}$)
        If ($Si_{cost}$ $\leq$ $Pbest_{cost}$)
            $Pbest_{cost}$ $\leftarrow$ $Si_{cost}$
            $P_{best}$ $\leftarrow$ $S_{i}$
        End
        LocalUpdateAndDecayPheromone(Pheromone, $S_{i}$, $Si_{cost}$, $\sigma$)
    End
    GlobalUpdateAndDecayPheromone(Pheromone, $P_{best}$, $Pbest_{cost}$, $\rho$)
End
Return ($P_{best}$)

Listing (below) provides an example of the Ant Colony System algorithm implemented in the Ruby Programming Language. The algorithm is applied to the Berlin52 instance of the Traveling Salesman Problem (TSP), taken from the TSPLIB. The problem seeks a permutation of the order in which to visit cities (called a tour) that minimizes the total distance traveled. The optimal tour distance for the Berlin52 instance is 7542 units. Extensions to the implementation for speed improvements may pre-calculate a distance matrix for all the cities in the problem (a sketch of this appears after the listing) and pre-compute a probability matrix for choices during the probabilistic step-wise construction of tours.
def euc_2d(c1, c2)
  # Euclidean distance between two city coordinates, rounded to an integer.
  Math.sqrt((c1[0] - c2[0])**2.0 + (c1[1] - c2[1])**2.0).round
end

def cost(permutation, cities)
  # Total length of the closed tour described by the permutation.
  distance = 0
  permutation.each_with_index do |c1, i|
    c2 = (i==permutation.size-1) ? permutation[0] : permutation[i+1]
    distance += euc_2d(cities[c1], cities[c2])
  end
  return distance
end

def random_permutation(cities)
  perm = Array.new(cities.size){|i| i}
  perm.each_index do |i|
    r = rand(perm.size-i) + i
    perm[r], perm[i] = perm[i], perm[r]
  end
  return perm
end

def initialise_pheromone_matrix(num_cities, init_pher)
  return Array.new(num_cities){ Array.new(num_cities, init_pher) }
end

def calculate_choices(cities, last_city, exclude, pheromone, c_heur, c_hist)
  # Score each unvisited city by pheromone^c_hist * (1/distance)^c_heur.
  choices = []
  cities.each_with_index do |coord, i|
    next if exclude.include?(i)
    prob = {:city=>i}
    prob[:history] = pheromone[last_city][i] ** c_hist
    prob[:distance] = euc_2d(cities[last_city], coord)
    prob[:heuristic] = (1.0/prob[:distance]) ** c_heur
    prob[:prob] = prob[:history] * prob[:heuristic]
    choices << prob
  end
  return choices
end

def prob_select(choices)
  # Roulette-wheel selection in proportion to each choice's score.
  sum = choices.inject(0.0){|sum,element| sum + element[:prob]}
  return choices[rand(choices.size)][:city] if sum == 0.0
  v = rand()
  choices.each do |choice|
    v -= (choice[:prob]/sum)
    return choice[:city] if v <= 0.0
  end
  return choices.last[:city]
end

def greedy_select(choices)
  return choices.max{|a,b| a[:prob]<=>b[:prob]}[:city]
end

def stepwise_const(cities, phero, c_heur, c_greed)
  # Construct a tour city-by-city, choosing greedily with probability c_greed.
  perm = []
  perm << rand(cities.size)
  begin
    choices = calculate_choices(cities, perm.last, perm, phero, c_heur, 1.0)
    greedy = rand() <= c_greed
    next_city = (greedy) ? greedy_select(choices) : prob_select(choices)
    perm << next_city
  end until perm.size == cities.size
  return perm
end

def global_update_pheromone(phero, cand, decay)
  # Decay and reinforce the edges of the best tour using 1/cost.
  cand[:vector].each_with_index do |x, i|
    y = (i==cand[:vector].size-1) ? cand[:vector][0] : cand[:vector][i+1]
    value = ((1.0-decay)*phero[x][y]) + (decay*(1.0/cand[:cost]))
    phero[x][y] = value
    phero[y][x] = value
  end
end

def local_update_pheromone(pheromone, cand, c_local_phero, init_phero)
  # Blend each visited edge's pheromone back toward the initial value.
  cand[:vector].each_with_index do |x, i|
    y = (i==cand[:vector].size-1) ? cand[:vector][0] : cand[:vector][i+1]
    value = ((1.0-c_local_phero)*pheromone[x][y]) + (c_local_phero*init_phero)
    pheromone[x][y] = value
    pheromone[y][x] = value
  end
end

def search(cities, max_it, num_ants, decay, c_heur, c_local_phero, c_greed)
  best = {:vector=>random_permutation(cities)}
  best[:cost] = cost(best[:vector], cities)
  init_pheromone = 1.0 / (cities.size.to_f * best[:cost])
  pheromone = initialise_pheromone_matrix(cities.size, init_pheromone)
  max_it.times do |iter|
    num_ants.times do
      cand = {}
      cand[:vector] = stepwise_const(cities, pheromone, c_heur, c_greed)
      cand[:cost] = cost(cand[:vector], cities)
      best = cand if cand[:cost] < best[:cost]
      local_update_pheromone(pheromone, cand, c_local_phero, init_pheromone)
    end
    global_update_pheromone(pheromone, best, decay)
    puts " > iteration #{(iter+1)}, best=#{best[:cost]}"
  end
  return best
end

if __FILE__ == $0
  # problem configuration
  berlin52 = [[565,575],[25,185],[345,750],[945,685],[845,655],
   [880,660],[25,230],[525,1000],[580,1175],[650,1130],[1605,620],
   [1220,580],[1465,200],[1530,5],[845,680],[725,370],[145,665],
   [415,635],[510,875],[560,365],[300,465],[520,585],[480,415],
   [835,625],[975,580],[1215,245],[1320,315],[1250,400],[660,180],
   [410,250],[420,555],[575,665],[1150,1160],[700,580],[685,595],
   [685,610],[770,610],[795,645],[720,635],[760,650],[475,960],
   [95,260],[875,920],[700,500],[555,815],[830,485],[1170,65],
   [830,610],[605,625],[595,360],[1340,725],[1740,245]]
  # algorithm configuration
  max_it = 100
  num_ants = 10
  decay = 0.1
  c_heur = 2.5
  c_local_phero = 0.1
  c_greed = 0.9
  # execute the algorithm
  best = search(berlin52, max_it, num_ants, decay, c_heur, c_local_phero, c_greed)
  puts "Done. Best Solution: c=#{best[:cost]}, v=#{best[:vector].inspect}"
end
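As a sketch of the distance-matrix extension suggested above (a hypothetical helper reusing euc_2d from the listing; calculate_choices and cost would then replace their repeated euc_2d calls with constant-time dist[i][j] lookups):

def precompute_distances(cities)
  # Build a symmetric city-by-city distance matrix once, before search.
  Array.new(cities.size) do |i|
    Array.new(cities.size) { |j| euc_2d(cities[i], cities[j]) }
  end
end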
The algorithm was initially investigated by Dorigo and Gambardella under the name Ant-Q [Dorigo1996a] [Gambardella1995]. It was renamed Ant Colony System and further investigated first in a technical report by Dorigo and Gambardella [Dorigo1997a], and later published [Dorigo1997].
The seminal book on Ant Colony Optimization in general with a detailed treatment of Ant Colony System is "Ant colony optimization" by Dorigo and Stützle [Dorigo2004]. An earlier book "Swarm intelligence: from natural to artificial systems" by Bonabeau, Dorigo, and Theraulaz also provides an introduction to Swarm Intelligence with a detailed treatment of Ant Colony System [Bonabeau1999].
[Bonabeau1999] E. Bonabeau, M. Dorigo, and G. Theraulaz, "Swarm Intelligence: From Natural to Artificial Systems", Oxford University Press US, 1999.
[Dorigo1996a] M. Dorigo and L. M. Gambardella, "A study of some properties of Ant-Q", in Proceedings of PPSN IV, Fourth International Conference on Parallel Problem Solving From Nature, 1996.
[Dorigo1997] M. Dorigo and L. M. Gambardella, "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem", IEEE Transactions on Evolutionary Computation, 1997.
[Dorigo1997a] M. Dorigo and L. M. Gambardella, "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem", Technical Report TR/IRIDIA/1996-5, IRIDIA, Université Libre de Bruxelles, 1997.
[Dorigo2004] M. Dorigo and T. Stützle, "Ant Colony Optimization", MIT Press, 2004.
[Gambardella1995] L. Gambardella and M. Dorigo, "Ant-Q: A reinforcement learning approach to the traveling salesman problem", in Proceedings of ML-95, Twelfth International Conference on Machine Learning, 1995.