Clever Algorithms: Nature-Inspired Programming Recipes

A book by Jason Brownlee




Benchmarking Algorithms

When it comes to evaluating an optimization algorithm, every researcher has their own thoughts on the way it should be done. Unfortunately, many empirical evaluations of optimization algorithms are performed and reported without addressing basic experimental design considerations. This section provides a summary of the literature on experimental design and empirical algorithm comparison methodology. This summary contains rules of thumb and the seeds of best practice when attempting to configure and compare optimization algorithms, specifically in the face of the no-free-lunch theorem.

Issues of Benchmarking Methodology

Empirically comparing the performance of algorithms on optimization problem instances is a staple for the fields of Heuristics and Biologically Inspired Computation, and the problems of effective comparison methodology have been discussed since the inception of these fields. Johnson suggests that the coding of an algorithm is the easy part of the process; the difficult work is getting meaningful and publishable results [Johnson2002a]. He goes on to provide a very thorough list of questions to consider before racing algorithms, as well as what he describes as his "pet peeves" within the field of empirical algorithm research.

Hooker [Hooker1995] (among others) practically condemns what he refers to as competitive testing of heuristic algorithms, calling it "fundamentally anti-intellectual". He goes on to strongly encourage a rigorous methodology of what he refers to as scientific testing where the aim is to investigate algorithmic behaviors.

Barr, Golden et al. [Barr1995] list a number of properties that a heuristic method should exhibit in order to make a contribution, which can be paraphrased as: efficiency, efficacy, robustness, complexity, impact, generalizability, and innovation. This is interesting given that many (perhaps a majority of) conference papers focus on solution quality alone (one aspect of efficacy). In their classical work on reporting empirical results of heuristics, Barr, Golden et al. specify a loose experimental setup methodology with the following steps:

  1. Define the goals of the experiment.
  2. Select measure of performance and factors to explore.
  3. Design and execute the experiment.
  4. Analyze the data and draw conclusions.
  5. Report the experimental results.

They then suggest eight guidelines for reporting results; in summary: ensure reproducibility, specify all influential factors (code, computing environment, etc.), be precise regarding measures, specify parameters, use statistical experimental design, compare with other methods, reduce the variability of results, and ensure results are comprehensive. They clarify these points with examples.

Peer, Engelbrecht et al. [Peer2003] summarize the problems of algorithm benchmarking (with a bias toward particle swarm optimization) into the following points: duplication of effort, insufficient testing, failure to test against state-of-the-art, poor choice of parameters, conflicting results, and invalid statistical inference. Eiben and Jelasity [Eiben2002] cite four problems with the state of benchmarking evolutionary algorithms: 1) test instances are chosen ad hoc from the literature, 2) results are provided without regard to research objectives, 3) the scope of generalized performance is generally too broad, and 4) results are hard to reproduce. Gent and Walsh provide a summary of simple dos and don'ts for experimentally analyzing algorithms [Gent1994]. For an excellent introduction to empirical research and experimental design in artificial intelligence, see Cohen's book "Empirical Methods for Artificial Intelligence" [Cohen1995].

The theme of the classical works on algorithm testing methodology is that there is a lack of rigor in the field. The following sections discuss three main problem areas to consider before benchmarking: 1) treating algorithms as complex systems that need to be tuned before they are applied, 2) considerations when selecting problem instances for benchmarking, and 3) the selection of measures of performance and statistical procedures for testing experimental hypotheses. A final section covers additional best practices to consider.

Selecting Algorithm Parameters

Optimization algorithms are parameterized, although in the majority of cases the effect of adjusting algorithm parameters is not fully understood. This is because unknown non-linear dependencies commonly exist between the variables, resulting in the algorithm being treated as a complex system. Further, one must be careful when generalizing the performance of parameters across problem instances, problem classes, and domains. Finally, given that algorithm parameters are typically a mixture of real and integer values, exhaustively enumerating the parameter space of an algorithm is commonly intractable.

There are many solutions to this problem, such as self-adaptive parameters, meta-algorithms (for searching for good parameter values), and methods of performing sensitivity analysis over parameter ranges. A good introduction to the parameterization of genetic algorithms is Lobo, Lima et al. [Lobo2007]. The best and self-evident place to start (although often ignored [Eiben2002]) is to investigate the literature and see what parameter values have been used historically. Although not a robust solution, this may prove a useful starting point for further investigation. The traditional approach is to run an algorithm on a large number of test instances and generalize the results [Schaffer1989]. As a field, we have not really moved much beyond this historical methodology, other than perhaps applying more and differing statistical methods to decrease effort and better support findings.
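
As a rough sketch of this traditional methodology (in Python; the run_algorithm placeholder, its max_evaluations parameter, and the instance definitions are assumptions for illustration, not taken from any cited study), each candidate parameter setting is run repeatedly on a small collection of test instances and the results are summarized:

    import random
    import statistics

    def sphere(x):
        return sum(xi ** 2 for xi in x)

    def run_algorithm(params, instance, seed):
        # Placeholder for the optimizer under study: plain random search on a
        # continuous test function standing in for a benchmark problem instance.
        rng = random.Random(seed)
        best = float("inf")
        for _ in range(params["max_evaluations"]):
            x = [rng.uniform(*instance["bounds"]) for _ in range(instance["dims"])]
            best = min(best, instance["objective"](x))
        return best

    instances = [{"name": "sphere-%dD" % d, "objective": sphere,
                  "bounds": (-5.0, 5.0), "dims": d} for d in (2, 5, 10)]
    candidate_params = [{"max_evaluations": n} for n in (100, 500, 1000)]

    for params in candidate_params:
        for instance in instances:
            results = [run_algorithm(params, instance, seed) for seed in range(30)]
            print(params, instance["name"],
                  "mean=%.4f" % statistics.mean(results),
                  "stdev=%.4f" % statistics.stdev(results))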

A promising area of study involves treating the algorithm as a complex system, where problem instances may become yet another parameter of the model [Saltelli2002] [Campolongo2000]. From here, sensitivity analysis can be performed in conjunction with statistical methods to discover parameters that have the greatest effect [Chan1997] and perhaps generalize model behaviors.
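
From there, a one-at-a-time sensitivity sweep is perhaps the simplest starting point, far cruder than the variance-based methods of [Saltelli2002] and [Chan1997]. The sketch below, with a toy hill climber and all settings chosen purely for illustration, varies a single parameter while holding everything else fixed and reports how the outcome distribution responds:

    import random
    import statistics

    def onemax_hillclimber(n_bits, mutation_rate, max_steps, seed):
        # Toy stochastic hill climber on OneMax; a stand-in for the algorithm
        # whose parameter sensitivity is being examined.
        rng = random.Random(seed)
        bits = [rng.randint(0, 1) for _ in range(n_bits)]
        for _ in range(max_steps):
            candidate = [b ^ 1 if rng.random() < mutation_rate else b for b in bits]
            if sum(candidate) >= sum(bits):
                bits = candidate
        return sum(bits)  # higher is better (n_bits is the optimum)

    # Vary mutation_rate alone, hold all other factors fixed, and observe the
    # effect on the distribution of outcomes over repeated runs.
    for rate in (0.001, 0.01, 0.05, 0.1, 0.3):
        scores = [onemax_hillclimber(50, rate, 200, seed) for seed in range(30)]
        print("mutation_rate=%.3f" % rate,
              "mean=%.2f" % statistics.mean(scores),
              "stdev=%.2f" % statistics.stdev(scores))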

François and Lavergne [Francois2001] mention the deficiencies of the traditional trial-and-error and experienced-practitioner approaches to parameter tuning, further suggesting that seeking general rules for parameterization will lead to optimization algorithms that offer neither convergent nor efficient behaviors. They offer a statistical model for evolutionary algorithms that describes a functional relationship between algorithm parameters and performance. Nannen and Eiben [Nannen2007] [Nannen2006] propose a statistical approach called REVAC (Relevance Estimation and Value Calibration) for estimating the relevance of parameters in a genetic algorithm. Coy, Golden et al. [Coy2001] use a statistical steepest-descent procedure for locating good parameters for metaheuristics on many different combinatorial problem instances.

Bartz-Beielstein [Bartz-Beielstein2003] used a statistical experimental design methodology to investigate the parameterization of the Evolution Strategy (ES) algorithm. A sequential statistical methodology is proposed by Bartz-Beielstein, Parsopoulos et al. [Bartz-Beielstein2004] for investigating the parameterization of, and comparisons between, the Particle Swarm Optimization (PSO) algorithm, the Nelder-Mead Simplex Algorithm (direct search), and the Quasi-Newton algorithm (derivative-based). Finally, an approach that is popular within the metaheuristic and Ant Colony Optimization (ACO) community is to use automated Monte Carlo and statistical procedures for sampling the discretized parameter space of algorithms on benchmark problem instances [Birattari2002]. Similar racing procedures have also been applied to evolutionary algorithms [Yuan2004].
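
To make the idea of racing concrete, the following is a greatly simplified sketch, not the F-Race procedure of [Birattari2002] or the racing of [Yuan2004], assuming Python with SciPy available: candidate configurations are evaluated on a growing stream of instances, and any configuration that a paired Wilcoxon signed-rank test judges significantly worse than the current best is eliminated from the race. The run_config placeholder, the budget parameter, and all numeric settings are assumptions for illustration only.

    import random
    import statistics
    from scipy import stats

    def run_config(config, instance_seed, run_seed):
        # Placeholder optimizer: random search on a randomly shifted 5-D sphere
        # function; `config` controls only the evaluation budget in this sketch.
        inst = random.Random(instance_seed)
        shift = [inst.uniform(-2.0, 2.0) for _ in range(5)]
        rng = random.Random(run_seed)
        best = float("inf")
        for _ in range(config["budget"]):
            x = [rng.uniform(-5.0, 5.0) for _ in range(5)]
            best = min(best, sum((xi - si) ** 2 for xi, si in zip(x, shift)))
        return best

    configs = [{"name": "budget-%d" % b, "budget": b} for b in (50, 200, 800)]
    scores = {c["name"]: [] for c in configs}
    survivors = list(configs)

    for i in range(40):  # a stream of benchmark problem instances
        for c in survivors:
            scores[c["name"]].append(run_config(c, instance_seed=i, run_seed=1000 + i))
        if i >= 9 and len(survivors) > 1:
            # Keep the configuration with the best (lowest) median cost so far and
            # drop any survivor that is significantly worse (paired Wilcoxon test).
            best = min(survivors, key=lambda c: statistics.median(scores[c["name"]]))
            keep = [best]
            for c in survivors:
                if c is best:
                    continue
                diffs = [a - b for a, b in zip(scores[c["name"]], scores[best["name"]])]
                if all(d == 0 for d in diffs):
                    keep.append(c)
                    continue
                _, p_value = stats.wilcoxon(scores[c["name"]], scores[best["name"]])
                if p_value >= 0.05:
                    keep.append(c)
            survivors = keep

    print("surviving configurations:", [c["name"] for c in survivors])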

Problem Instances

This section focuses on issues related to the selection of function optimization test instances, but the theme of carefully selecting problem instances applies more generally.

Common lists of test instances include those of De Jong [Jong1975], Fogel [Fogel1995], and Schwefel [Schwefel1995]. Yao, Liu et al. [Yao1999] list many canonical test instances, as do Schaffer, Caruana et al. [Schaffer1989]. Gallagher and Yuan [Gallagher2006] review test function generators and propose a tunable mixture-of-Gaussians test problem generator. Finally, MacNish [MacNish2005] proposes using fractal-based test problem generators via a web interface.

The division of test problems into classes is another axiom of modern optimization algorithm research, although the open issues with this methodology are the taxonomic criteria used to define problem classes and the selection of problem instances for those classes.

Eiben and Jelasity [Eiben2002] strongly support the division of problem instances into categories and encourage the evaluation of optimization algorithms over a large number of test instances. They suggest classes could be natural (taken from the real world) or artificial (simplified or generated). In their paper on understanding the interactions of GA parameters, Deb and Agrawal [Deb1999] propose four structural properties of problems for testing genetic algorithms: multi-modality, deception, isolation, and collateral noise. Yao, Liu et al. [Yao1999] divide their large test dataset into the categories of unimodal, 'multimodal-many local optima', and 'multimodal-few local optima'. Whitley, Rana et al. [Whitley1996] provide a detailed study on the problems of selecting test instances for genetic algorithms. They suggest that difficult problem instances should be non-linear, non-separable, and non-symmetric.
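
For concreteness, two canonical continuous test functions illustrate the first two of these categories; the definitions below follow their commonly published forms, and the Python rendering is just one convenient sketch:

    import math

    def sphere(x):
        # Unimodal: a single (global) optimum at the origin.
        return sum(xi ** 2 for xi in x)

    def rastrigin(x):
        # Multimodal with many regularly spaced local optima; global optimum at
        # the origin. Typical bounds are [-5.12, 5.12] in each dimension.
        return 10.0 * len(x) + sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) for xi in x)

    print(sphere([0.0, 0.0]), rastrigin([0.0, 0.0]))  # both 0.0 at the global optimum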

English [English1996] suggests that many functions in the field of EC are selected based on structures in the response surface (as demonstrated in the above examples), and that they inherently contain a strong Euclidean bias. The implication is that the algorithms already have some a priori knowledge about the domain built into them and that results are always reported on a restricted problem set. This is a reminder that instances are selected to demonstrate algorithmic behavior, rather than performance.

Measures and Statistical Methods

There are many ways to measure the performance of an optimization algorithm for a problem instance, although the most common involves a quality (efficacy) measure of solution(s) found (see the following for lists and discussion of common performance measures [Bartz-Beielstein2004] [Birattari2005a] [Hughes2006] [Eiben2002] [Barr1995]). Most biologically inspired optimization algorithms have a stochastic element, typically in their starting position(s) and in the probabilistic decisions made during sampling of the domain. Thus, the performance measurements must be repeated a number of times to account for the stochastic variance, which could also be a measure of comparison between algorithms.
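
As a small sketch of this practice (the "optimizer" below is just seeded random search on a one-dimensional sphere function, so the numbers are illustrative only), the quality measure is collected over 30 independent repeats and summarized as a distribution rather than a single value:

    import random
    import statistics

    def one_run(seed, evaluations=200):
        # Stand-in for a single run of a stochastic optimizer: random search
        # on a one-dimensional sphere function, seeded for repeatability.
        rng = random.Random(seed)
        return min(rng.uniform(-5.0, 5.0) ** 2 for _ in range(evaluations))

    costs = [one_run(seed) for seed in range(30)]  # 30 independent repeats
    print("runs      :", len(costs))
    print("mean      : %.5f" % statistics.mean(costs))
    print("stdev     : %.5f" % statistics.stdev(costs))
    print("median    : %.5f" % statistics.median(costs))
    print("best/worst: %.5f / %.5f" % (min(costs), max(costs)))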

Irrespective of the measures used, sound statistical experimental design requires the specification of 1) a null hypothesis (no change), 2) alternative hypotheses (difference, directional difference), and 3) acceptance or rejection criteria for the hypothesis. In the typical case of comparing stochastic optimization algorithms on a problem instance, the null hypothesis is commonly stated as the equality between two or more central tendencies (means or medians) of a quality measure.
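
A minimal sketch of this structure in Python, assuming SciPy is available and using synthetic samples purely to make the example runnable: the null hypothesis of equal central tendency is tested with the Mann-Whitney U test (equivalent to the Wilcoxon rank-sum test) at a significance level of 0.05.

    import random
    from scipy import stats

    # Synthetic best-cost samples from repeated runs of two hypothetical
    # algorithms on the same instance (generated only to make this runnable).
    rng = random.Random(1)
    algorithm_a = [rng.gauss(0.50, 0.05) for _ in range(30)]
    algorithm_b = [rng.gauss(0.55, 0.05) for _ in range(30)]

    # H0: the samples are drawn from distributions with equal central tendency.
    # H1: they differ (two-sided). Reject H0 when the p-value falls below alpha.
    alpha = 0.05
    statistic, p_value = stats.mannwhitneyu(algorithm_a, algorithm_b,
                                            alternative="two-sided")
    if p_value < alpha:
        print("reject H0: significant difference (p=%.4f)" % p_value)
    else:
        print("fail to reject H0 (p=%.4f)" % p_value)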

Peer, Engelbrecht et al. [Peer2003] and Birattari and Dorigo [Birattari2005a] provide a basic introduction (suitable for an algorithm practitioner) to the appropriateness of various statistical tests for algorithm comparisons. For a good introduction to statistics and data analysis see Peck et al. [Peck2005], for an introduction to non-parametric methods see Hollander and Wolfe [Hollander1999], and for a detailed presentation of parametric and nonparametric methods and their suitability of application see Sheskin's "Handbook of Parametric and Nonparametric Statistical Procedures". For an excellent open source software package for performing statistical analysis on data, see the R Project. (The R Project is online at http://www.r-project.org)

To summarize, parametric statistical methods are used for interval and ratio data (such as a real-valued performance measure), and nonparametric methods are used for ordinal, categorical, and rank-based data. Interval data is typically converted to ordinal data when salient assumptions of the desired parametric tests (such as normality of the distribution) are violated, so that the less powerful nonparametric tests can be used. The use of nonparametric statistical tests may be preferred, as some authors [Peer2003] [Chiarandini2005] claim the distributions of cost values are very asymmetric and/or not Gaussian. It is important to remember, however, that most parametric tests degrade gracefully.

Chiarandini, Basso et al. [Chiarandini2005] provide an excellent case study for using the permutation test (a nonparametric statistical method) to compare stochastic optimizers by running each algorithm once per problem instance, and multiple times per problem instance. While rigorous, their method appears quite complex and their results are difficult to interpret.
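
The permutation test itself is simple to sketch. The version below (a two-sided test on the difference of sample means, with synthetic data, and far simpler than the single-run and multi-run designs studied by Chiarandini, Basso et al.) approximates the p-value by repeatedly shuffling the pooled observations:

    import random

    def permutation_test(sample_a, sample_b, trials=10000, seed=1):
        # Two-sided permutation test on the difference of means: repeatedly
        # shuffle the pooled observations between the two groups and count how
        # often the shuffled difference is at least as extreme as the observed one.
        rng = random.Random(seed)
        observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
        pooled = list(sample_a) + list(sample_b)
        count = 0
        for _ in range(trials):
            rng.shuffle(pooled)
            perm_a = pooled[:len(sample_a)]
            perm_b = pooled[len(sample_a):]
            diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
            if diff >= observed:
                count += 1
        return count / trials  # approximate p-value

    # Example with synthetic best-cost samples (illustrative only).
    rng = random.Random(2)
    a = [rng.gauss(0.50, 0.05) for _ in range(25)]
    b = [rng.gauss(0.56, 0.05) for _ in range(25)]
    print("approximate p-value: %.4f" % permutation_test(a, b))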

Barrett, Marathe et al. [Barrett2003] provide a rigorous example of applying the parametric Analysis of Variance (ANOVA) test to three different heuristic methods on a small sample of scenarios. Reeves and Wright [Reeves1995] [Reeves1995a] also provide an example of using ANOVA in their investigation into epistasis in genetic algorithms. In their tutorial on the experimental investigation of heuristic methods, Rardin and Uzsoy [Rardin2001] warn against the use of statistical methods, citing their rigidity as a problem and stressing the importance of practical significance over statistical significance. They go on, despite their own objections, to provide an example of using ANOVA to analyze the results of an illustrative case study.
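
For completeness, a one-way ANOVA of this kind is a one-liner with SciPy (the three samples below are synthetic and purely illustrative); its assumptions of roughly normal, equal-variance groups should be checked before the result is trusted:

    import random
    from scipy import stats

    # One-way ANOVA across three hypothetical heuristics, each "run" 20 times on
    # the same scenario; the samples are synthetic stand-ins, not real results.
    rng = random.Random(3)
    heuristic_1 = [rng.gauss(10.0, 1.0) for _ in range(20)]
    heuristic_2 = [rng.gauss(10.5, 1.0) for _ in range(20)]
    heuristic_3 = [rng.gauss(12.0, 1.0) for _ in range(20)]

    # H0: all group means are equal; a small p-value suggests at least one differs.
    f_statistic, p_value = stats.f_oneway(heuristic_1, heuristic_2, heuristic_3)
    print("F=%.3f, p=%.4f" % (f_statistic, p_value))
    # Check the ANOVA assumptions, e.g. with stats.shapiro (normality) and
    # stats.levene (equal variances), before relying on this result.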

Finally, Peer, Engelbrecht et al. [Peer2003] highlight a number of case study example papers that use statistical methods inappropriately. In their OptiBench system and method, algorithm results are standardized, ranked according to three criteria, and compared using the Wilcoxon Rank-Sum test, a nonparametric alternative to the commonly used Student's t-test.

Other

Another pervasive problem in the field of optimization is the reproducibility (implementation) of an algorithm. An excellent solution to this problem is making source code available by creating or collaborating with open-source software projects. This behavior may result in implementation standardization, a reduction in the duplication of effort for experimentation and repeatability, and perhaps more experimental accountability [Eiben2002] [Peer2003].

Peer, Engelbrecht et al. [Peer2003] stress the need to compare against state-of-the-art implementations rather than the historic canonical implementations to give a fair and meaningful evaluation of performance.

Another area that is often neglected is that of algorithm descriptions, particularly in regard to reproducibility. Pseudocode is often used, although (in most cases) in an inconsistent manner and almost always without reference to a recognized pseudocode standard or mathematical notation. Many examples are a mix of programming languages, English descriptions and mathematical notation, making them difficult to follow, and commonly impossible to implement in software due to incompleteness and ambiguity.

An excellent tool for comparing optimization algorithms in terms of their asymptotic behavior, from the field of computational complexity, is Big-O notation [Cormen2001]. In addition to clarifying aspects of an algorithm, it provides a problem-independent way of characterizing an algorithm's space and/or time complexity.
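
As a hedged, generic illustration (the algorithm and symbols here are not drawn from the text): a genetic algorithm that runs for g generations with a population of m candidate solutions, where each fitness evaluation on a problem of size n costs O(n), has a per-run time complexity of

    O(g * m * n)

and a space complexity of O(m * n) for storing the population; both characterizations hold regardless of which benchmark instances are chosen.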

Summary

It is clear that there is no silver bullet to experimental design for empirically evaluating and comparing optimization algorithms, although there are as many methods and options as there are publications on the topic. The field of stochastic optimization has not yet agreed upon general methods of application like the field of data mining (processes such as Knowledge Discovery in Databases (KDD) [Fayyad1996]). Although these processes are not experimental methods for comparing machine learning algorithms, they do provide a general model to encourage the practitioner to consider important issues before application of an approach.

Finally, it is worth pointing out a somewhat controversially titled paper by De Jong [Jong1992] that provides a reminder that although the genetic algorithm has been shown to solve function optimization, it is not innately a function optimizer, and function optimization is only a demonstration of this complex adaptive system's ability to learn. It is a reminder to be careful not to link an approach too tightly with a domain, particularly if the domain was chosen for demonstration purposes.

Bibliography

[Barr1995] R. Barr and B. Golden and J. Kelly and M. Resende and W. Stewart, "Designing and Reporting on Computational Experiments with Heuristic Methods", Journal of Heuristics, 1995.
[Barrett2003] C. L. Barrett and A. Marathe and M. V. Marathe and D. Cook and G. Hicks and V. Faber and A. Srinivasan and Y. J. Sussmann and H. Thornquist, "Statistical Analysis of Algorithms: A Case Study of Market-Clearing Mechanisms in the Power Industry", Journal of Graph Algorithms and Applications, 2003.
[Bartz-Beielstein2003] T. Bartz-Beielstein, "Experimental Analysis of Evolution Strategies – Overview and Comprehensive Introduction", Computational Intelligence, University of Dortmund, 2003.
[Bartz-Beielstein2004] T. Bartz-Beielstein and K. E. Parsopoulos and M. N. Vrahatis, "Design and Analysis of Optimization Algorithms Using Computational Statistics", Applied Numerical Analysis & Computational Mathematics, 2004.
[Birattari2002] M. Birattari and T. Stützle and L. Paquete and K. Varrentrapp, "A Racing Algorithm for Configuring Metaheuristics", in Proceedings of the Genetic and Evolutionary Computation Conference, 2002.
[Birattari2005a] M. Birattari and M. Dorigo, "How to assess and report the performance of a stochastic algorithm on a benchmark problem: Mean or best result on a number of runs?", IRIDIA, Universite Libre de Bruxelles, 2005.
[Campolongo2000] F. Campolongo and A. Saltelli and S. Tarantola, "Sensitivity Analysis as an Ingredient of Modeling", A Review Journal of The Institute of Mathematical Statistics, 2000.
[Chan1997] K. Chan and A. Saltelli and S. Tarantola, "Sensitivity analysis of model output: variance-based methods make the difference", in Proceedings of the 29th Conference on Winter Simulation (Winter Simulation Conference), 1997.
[Chiarandini2005] M. Chiarandini and D. Basso and T. Stützle, "Statistical methods for the comparison of stochastic optimizers", in MIC2005: Proceedings of the 6th Metaheuristics International Conference, 2005.
[Cohen1995] P. R. Cohen, "Empirical Methods for Artificial Intelligence", The MIT Press, 1995.
[Cormen2001] T. H. Cormen and C. E. Leiserson and R. L. Rivest and C. Stein, "Introduction to Algorithms", MIT Press and McGraw-Hill, 2001.
[Coy2001] S. P. Coy and B. L. Golden and G. C. Runger and E. A. Wasil, "Using Experimental Design to Find Effective Parameter Settings for Heuristics", Journal of Heuristics, 2001.
[Deb1999] K. Deb and S. Agrawal, "Understanding Interactions among Genetic Algorithm Parameters", in Proceedings of the Fifth Workshop on Foundations of Genetic Algorithms (FOGA), 1999.
[Eiben2002] A. E. Eiben and M. Jelasity, "A critical note on experimental research methodology in EC", in Proceedings of the 2002 Congress on Evolutionary Computation (CEC '02), 2002.
[English1996] T. M. English, "Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch", in Evolutionary Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming, 1996.
[Fayyad1996] U. Fayyad and G. Piatetsky-Shapiro and P. Smyth, "The KDD process for extracting useful knowledge from volumes of data", Communications of the ACM, 1996.
[Fogel1995] D. B. Fogel, "Evolutionary computation: Toward a new philosophy of machine intelligence", IEEE Press, 1995.
[Francois2001] O. François and C. Lavergne, "Design of evolutionary algorithms – A statistical perspective", IEEE Transactions on Evolutionary Computation, 2001.
[Gallagher2006] M. Gallagher and B. Yuan, "A general-purpose tunable landscape generator", IEEE Transactions on Evolutionary Computation, 2006.
[Gent1994] I. Gent and T. Walsh, "How not to do it", in Presented at the AAAI Workshop on Experimental Evaluation of Reasoning and Search Methods, 1994.
[Hollander1999] M. Hollander and D. A. Wolfe, "Nonparametric Statistical Methods", John Wiley & Sons, Inc., 1999.
[Hooker1995] J. N. Hooker, "Testing heuristics: We have it all wrong", Journal of Heuristics, 1995.
[Hughes2006] E. J. Hughes, "Assessing Robustness of Optimisation Performance for Problems With Expensive Evaluation Functions", in IEEE Congress on Evolutionary Computation (CEC 2006), 2006.
[Johnson2002a] D. S. Johnson, "A Theoretician's Guide to the Experimental Analysis of Algorithms", in Proceedings of the 5th and 6th DIMACS Implementation Challenges, 2002.
[Jong1975] K. A. De Jong, "An analysis of the behavior of a class of genetic adaptive systems", [PhD Thesis] University of Michigan Ann Arbor, MI, USA, 1975.
[Jong1992] K. A. De Jong, "Genetic Algorithms are NOT Function Optimizers", in Proceedings of the Second Workshop on Foundations of Genetic Algorithms, 1992.
[Lobo2007] F. G. Lobo and C. F. Lima and Z. Michalewicz, "Parameter Setting in Evolutionary Algorithms", Springer, 2007.
[MacNish2005] C. MacNish, "Benchmarking Evolutionary Algorithms: The Huygens Suite", in Late breaking paper at Genetic and Evolutionary Computation Conference, 2005.
[Nannen2006] V. Nannen and A. E. Eiben, "A method for parameter calibration and relevance estimation in evolutionary algorithms", in Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, 2006.
[Nannen2007] V. Nannen and A. E. Eiben, "Relevance Estimation and Value Calibration of Evolutionary Algorithm Parameters", in International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[Peck2005] R. Peck and C. Olsen and J. Devore, "Introduction to Statistics and Data Analysis", Duxbury Publishing, 2005.
[Peer2003] E. S. Peer and A. P. Engelbrecht and F. van den Bergh, "CIRG@UP OptiBench: A statistically sound framework for benchmarking optimisation algorithms", in The 2003 Congress on Evolutionary Computation, 2003.
[Rardin2001] R. L. Rardin and R. Uzsoy, "Experimental Evaluation of Heuristic Optimization Algorithms: A Tutorial", Journal of Heuristics, 2001.
[Reeves1995] C. R. Reeves and C. C. Wright, "Epistasis in Genetic Algorithms: An Experimental Design Perspective", in Proceedings of the 6th International Conference on Genetic Algorithms, 1995.
[Reeves1995a] C. Reeves and C. Wright, "An Experimental Design Perspective on Genetic Algorithms", in Foundations of Genetic Algorithms 3, 1995.
[Saltelli2002] A. Saltelli, "Making best use of model evaluations to compute sensitivity indices", Computer Physics Communications, 2002.
[Schaffer1989] J. D. Schaffer and R. A. Caruana and L. J. Eshelman and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization", in Proceedings of the Third International Conference on Genetic Algorithms, 1989.
[Schwefel1995] H.-P. Schwefel, "Evolution and optimum seeking", Wiley, 1995.
[Whitley1996] D. Whitley and S. Rana and J. Dzubera and K. E. Mathias, "Evaluating evolutionary algorithms", Artificial Intelligence - Special Volume on Empirical Methods, 1996.
[Yao1999] X. Yao and Y. Liu and G. Lin, "Evolutionary programming made faster", IEEE Transactions on Evolutionary Computation, 1999.
[Yuan2004] B. Yuan and M. Gallagher, "Statistical racing techniques for improved empirical evaluation of evolutionary algorithms", in Parallel Problem Solving from Nature (PPSN VIII), 2004.


