Clever Algorithms: Nature-Inspired Programming Recipes

A book by Jason Brownlee




Neural Algorithms

Overview

This chapter describes Neural Algorithms.

Biological Neural Networks

A Biological Neural Network refers to the information processing elements of the nervous system, organized as a collection of neural cells, called neurons, that are interconnected in networks and interact with each other using electrochemical signals. A biological neuron is generally composed of dendrites, which provide the input signals to the cell. The neuron reacts to input signals and may produce an output signal on its output connection, called the axon, which is connected to the dendrites of other neurons via synapses.

The study of biological neural networks falls within the domain of neuroscience, which is a branch of biology concerned with the nervous system. Neuroanatomy is a subject that is concerned with the structure and function of groups of neural networks, both with regard to parts of the brain and the structures that lead to and from the brain from the rest of the body. Neuropsychology is another discipline concerned with the structure and function of the brain as they relate to abstract psychological behaviors. For further information, refer to a good textbook on any of these general topics.

Artificial Neural Networks

The field of Artificial Neural Networks (ANN) is concerned with the investigation of computational models inspired by theories and observations of the structure and function of biological networks of neural cells in the brain. They are generally designed as models for addressing mathematical, computational, and engineering problems. As such, there is a lot of interdisciplinary research in mathematics, neurobiology, and computer science.

An Artificial Neural Network is generally composed of a collection of artificial neurons that are interconnected in order to perform some computation on input patterns and create output patterns. They are adaptive systems capable of modifying their internal structure, typically the weights between nodes in the network, allowing them to be used for a variety of function approximation problems such as classification, regression, feature extraction, and content-addressable memory.
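The computation performed by a single artificial neuron can be sketched as a weighted sum of inputs plus a bias, passed through an activation function. The weights, bias, and step activation below are illustrative choices, not values from the book:

```python
# A minimal sketch of one artificial neuron: a weighted sum of its
# inputs plus a bias, followed by a step (threshold) activation.
# All numeric values here are illustrative.

def activate(weights, inputs, bias):
    # The neuron's net input: weighted sum of inputs plus bias.
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def step(net):
    # Step activation: the neuron "fires" if the net input is non-negative.
    return 1 if net >= 0.0 else 0

weights = [0.5, -0.6]
bias = 0.1
print(step(activate(weights, [1, 1], bias)))  # net input 0.0, so the neuron fires: 1
print(step(activate(weights, [0, 1], bias)))  # net input -0.5, so no firing: 0
```

Adapting the weights and bias in such units, across a network of them, is what "training" refers to in the paragraphs below.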

Given that the focus of the field is on performing computation with networks of discrete computing units, the field is traditionally called a 'connectionist' paradigm of Artificial Intelligence and 'Neural Computation'.

There are many types of neural networks, many of which fall into one of two categories:

Artificial Neural Network structures are made up of nodes and weights which typically require training based on samples of patterns from a problem domain. Some examples of learning strategies include:
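One well-known supervised learning strategy is the perceptron update rule, which nudges weights toward correct outputs on labelled samples. The sketch below uses a toy AND-gate dataset, a learning rate of 0.1, and a fixed number of epochs, all of which are illustrative assumptions rather than settings from the book:

```python
# A hedged sketch of supervised training with the perceptron rule:
# weights are adjusted in proportion to the prediction error on each
# labelled sample. Dataset, learning rate, and epochs are illustrative.

def predict(weights, bias, inputs):
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if net >= 0.0 else 0

def train(samples, lr=0.1, epochs=20):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Move each weight toward reducing the error on this sample.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical AND as a toy, linearly separable problem domain.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_samples)
print([predict(w, b, x) for x, _ in and_samples])  # expect [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron rule converges here; non-separable problems require multi-layer networks trained with methods such as backpropagation.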

Artificial Neural Networks are typically difficult to configure and slow to train, but once prepared they are very fast in application. They are generally used for function approximation-based problem domains and are prized for their capacity for generalization and tolerance to noise. They are known to have the limitation of being opaque, meaning there is little explanation to the subject matter expert as to why decisions were made, only how.

There are many excellent reference texts for the field of Artificial Neural Networks, some selected texts include: "Neural Networks for Pattern Recognition" by Bishop [Bishop1995], "Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks" by Reed and Marks II [Reed1999] and "An Introduction to Neural Networks" by Gurney [Gurney1997].

Algorithms

Extensions

There are many other algorithms and classes of algorithm from the field of Artificial Neural Networks that were not described, including but not limited to:

Bibliography

[Bishop1995] C. M. Bishop, "Neural Networks for Pattern Recognition", Oxford University Press, 1995.
[Gurney1997] K. Gurney, "An Introduction to Neural Networks", CRC Press, 1997.
[Hawkins2005] J. Hawkins and S. Blakeslee, "On Intelligence", Henry Holt and Company, 2005.
[Howlett2001] R. J. Howlett and L. C. Jain, "Radial Basis Function Networks 1: Recent Developments in Theory and Applications", Springer, 2001.
[Martinetz1991] T. Martinetz and K. Schulten, "A 'neural gas' network learns topologies", in Artificial Neural Networks, 1991.
[Martinetz1993] T. M. Martinetz, S. G. Berkovich, and K. J. Schulten, "'Neural-gas' network for vector quantization and its application to time-series prediction", IEEE Transactions on Neural Networks, 1993.
[Martinetz1994] T. Martinetz and K. Schulten, "Topology representing networks", Neural Networks, 1994.
[Reed1999] R. D. Reed and R. J. Marks II, "Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks", MIT Press, 1999.


