Clever Algorithms: Nature-Inspired Programming Recipes

A book by Jason Brownlee




Hopfield Network

Hopfield Network, HN, Hopfield Model.

Taxonomy

The Hopfield Network is a Neural Network and belongs to the field of Artificial Neural Networks and Neural Computation. It is a Recurrent Neural Network and is related to other recurrent networks such as the Bidirectional Associative Memory (BAM). It is generally related to feedforward Artificial Neural Networks such as the Perceptron and the Back-propagation algorithm.

Inspiration

The Hopfield Network algorithm is inspired by the associative memory properties of the human brain.

Metaphor

Through the training process, the weights in the network may be thought of as minimizing an energy function, sliding down an energy surface. In a trained network, each stored pattern acts as an attractor: when a pattern is presented, progress is made towards the point of attraction by propagating information around the network.

Strategy

The information processing objective of the system is to associate the components of an input pattern with a holistic representation of the pattern called Content Addressable Memory (CAM). This means that once trained, the system will recall whole patterns, given a portion or a noisy version of the input pattern.

Procedure

The Hopfield Network consists of a graph data structure with weighted edges and separate procedures for training and applying the structure. The network structure is fully connected (each node connects to every other node except itself) and the edges (weights) between the nodes are bidirectional.

The weights of the network can be learned via a one-shot method (one iteration through the patterns) if all patterns to be memorized by the network are known in advance. Alternatively, the weights can be updated incrementally using the Hebb rule, where a weight is increased when the two connected neurons take the same value for a pattern and decreased when their values differ. The one-shot calculation of the network weights for a single node occurs as follows:

$w_{i,j} = \sum_{k=1}^{N} v_k^i\times v_k^j$

where $w_{i,j}$ is the weight between neurons $i$ and $j$, $N$ is the number of input patterns, and $v_k^i$ is the $i^{th}$ attribute of the $k^{th}$ input pattern.
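As a minimal illustrative sketch of this rule (not part of the original listing; the two bipolar patterns shown are invented for the example):

def one_shot_weight(patterns, i, j)
  # sum over all stored patterns of the product of attributes i and j
  patterns.inject(0.0) {|sum, v| sum + v[i]*v[j]}
end

# two illustrative bipolar patterns
patterns = [[1, -1, 1], [-1, -1, 1]]
puts one_shot_weight(patterns, 0, 2) # => 0.0 (1*1 + -1*1)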

The propagation of information through the network can be asynchronous, where a random node is selected each iteration, or synchronous, where the output is calculated for each node before being applied to the whole network. Propagation continues until no more changes are made or until a maximum number of iterations is reached, after which the output pattern can be read from the network. The activation for a single node is calculated as follows:

$n_i = \sum_{j=1}^n w_{i,j}\times n_j$

where $n_i$ is the activation of the $i^{th}$ neuron, $w_{i,j}$ is the weight between nodes $i$ and $j$, and $n_j$ is the output of the $j^{th}$ neuron. The activation is transferred into an output using a transfer function, typically a step function, as follows:

\[ transfer(n_i) = \left\{ \begin{array}{ll} 1 & \quad \text{if } n_i \geq \theta \\ -1 & \quad \text{if } n_i < \theta \end{array} \right. \]

where the threshold $\theta$ is typically fixed at 0.
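As a minimal sketch of a single asynchronous update at $\theta = 0$ (the weights and outputs below are illustrative only, not taken from the listing that follows):

# one asynchronous update for node i=0 at threshold theta=0
weights = [0.0, 2.0, -2.0] # w_{0,j}; the self-weight is unused
outputs = [1, 1, -1]       # current outputs of all nodes

activation = 0.0
outputs.each_with_index {|out, j| activation += weights[j]*out if j != 0}
puts((activation >= 0) ? 1 : -1) # => 1 (activation = 2.0 + 2.0 = 4.0)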

Heuristics

Code Listing

Listing (below) provides an example of the Hopfield Network algorithm implemented in the Ruby Programming Language. The problem is an instance of a recall problem, where patterns are described in terms of a $3 \times 3$ matrix of binary values ($\in \{-1,1\}$). Once the network has learned the patterns, the system is exposed to perturbed versions of the patterns (with errors introduced) and must respond with the correct pattern. Two patterns are used in this example, specifically 'T' and 'U'.

The algorithm is an implementation of the Hopfield Network with a one-shot training method for the network weights, given that all patterns are already known. The information is propagated through the network using an asynchronous method, which is repeated for a fixed number of iterations. The patterns are displayed to the console during the testing of the network, with the outputs converted from $\{-1,1\}$ to $\{0,1\}$ for readability.

def random_vector(minmax)
  return Array.new(minmax.size) do |i|
    minmax[i][0] + ((minmax[i][1] - minmax[i][0]) * rand())
  end
end

def initialize_weights(problem_size)
  minmax = Array.new(problem_size) {[-0.5,0.5]}
  return random_vector(minmax)
end

def create_neuron(num_inputs)
  neuron = {}
  neuron[:weights] = initialize_weights(num_inputs)
  return neuron
end

def transfer(activation)
  return (activation >= 0) ? 1 : -1
end

# Select a random node, recompute its output from the other nodes'
# outputs, and report whether the output changed.
def propagate_was_change?(neurons)
  i = rand(neurons.size)
  activation = 0
  neurons.each_with_index do |other, j|
    activation += other[:weights][i]*other[:output] if i!=j
  end
  output = transfer(activation)
  change = output != neurons[i][:output]
  neurons[i][:output] = output
  return change
end

# Clamp the pattern onto the nodes, then propagate asynchronously
# for a fixed number of node evaluations.
def get_output(neurons, pattern, evals=100)
  vector = pattern.flatten
  neurons.each_with_index {|neuron,i| neuron[:output] = vector[i]}
  evals.times { propagate_was_change?(neurons) }
  return Array.new(neurons.size){|i| neurons[i][:output]}
end

# One-shot training: each symmetric weight w_ij is the sum over all
# patterns of the product of attributes i and j (no self-connections).
def train_network(neurons, patterns)
  neurons.each_with_index do |neuron, i|
    for j in ((i+1)...neurons.size) do
      wij = 0.0
      patterns.each do |pattern|
        vector = pattern.flatten
        wij += vector[i]*vector[j]
      end
      neurons[i][:weights][j] = wij
      neurons[j][:weights][i] = wij
    end
  end
end

def to_binary(vector)
  return Array.new(vector.size){|i| ((vector[i]==-1) ? 0 : 1)}
end

def print_patterns(provided, expected, actual)
  p, e, a = to_binary(provided), to_binary(expected), to_binary(actual)
  p1, p2, p3 = p[0..2].join(', '), p[3..5].join(', '), p[6..8].join(', ')
  e1, e2, e3 = e[0..2].join(', '), e[3..5].join(', '), e[6..8].join(', ')
  a1, a2, a3 = a[0..2].join(', '), a[3..5].join(', '), a[6..8].join(', ')
  puts "Provided   Expected     Got"
  puts "#{p1}     #{e1}      #{a1}"
  puts "#{p2}     #{e2}      #{a2}"
  puts "#{p3}     #{e3}      #{a3}"
end

def calculate_error(expected, actual)
  sum = 0
  expected.each_with_index do |v, i|
    sum += 1 if expected[i]!=actual[i]
  end
  return sum
end

# Flip num_errors randomly-chosen bits of the pattern.
def perturb_pattern(vector, num_errors=1)
  perturbed = Array.new(vector)
  indices = [rand(perturbed.size)]
  while indices.size < num_errors do
    index = rand(perturbed.size)
    indices << index if !indices.include?(index)
  end
  indices.each {|i| perturbed[i] = ((perturbed[i]==1) ? -1 : 1)}
  return perturbed
end

def test_network(neurons, patterns)
  error = 0.0
  patterns.each do |pattern|
    vector = pattern.flatten
    perturbed = perturb_pattern(vector)
    output = get_output(neurons, perturbed)
    error += calculate_error(vector, output)
    print_patterns(perturbed, vector, output)
  end
  error = error / patterns.size.to_f
  puts "Final Result: avg pattern error=#{error}"
  return error
end

def execute(patterns, num_inputs)
  neurons = Array.new(num_inputs) { create_neuron(num_inputs) }
  train_network(neurons, patterns)
  test_network(neurons, patterns)
  return neurons
end

if __FILE__ == $0
  # problem configuration
  num_inputs = 9
  p1 = [[1,1,1],[-1,1,-1],[-1,1,-1]] # T
  p2 = [[1,-1,1],[1,-1,1],[1,1,1]] # U
  patterns = [p1, p2]
  # execute the algorithm
  execute(patterns, num_inputs)
end
Hopfield Network in Ruby
Download: hopfield.rb.

References

Primary Sources

The Hopfield Network was proposed by Hopfield in 1982, where the basic model was described and related to an abstraction of the inspiring biological system [Hopfield1982]. This early work was extended by Hopfield to 'graded' neurons capable of outputting a continuous value through use of a logistic (sigmoid) transfer function [Hopfield1984]. An innovative work by Hopfield and Tank considered the use of the Hopfield Network for solving combinatorial optimization problems, with a specific study of the system applied to instances of the Traveling Salesman Problem [Hopfield1985]. This was achieved with a large number of neurons and a representation that encoded the position of each city in the tour, over which a customized network energy function was minimized.

Learn More

Popovici and Boncut provide a summary of the Hopfield Network algorithm with worked examples [Popovici2005]. Overviews of the Hopfield Network are provided in most good books on Artificial Neural Networks, such as [Rojas1996]. Hertz, Krogh, and Palmer present an in-depth study of the field of Artificial Neural Networks, with a detailed treatment of the Hopfield Network from a statistical mechanics perspective [Hertz1991].

Bibliography

[Hertz1991] J. Hertz, A. Krogh, and R. G. Palmer, "Introduction to the Theory of Neural Computation", Westview Press, 1991.
[Hopfield1982] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", in Proceedings of the National Academy of Sciences of the USA, 1982.
[Hopfield1984] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons", in Proceedings of the National Academy of Sciences, 1984.
[Hopfield1985] J. J. Hopfield and D. W. Tank, ""Neural" computation of decisions in optimization problems", Biological Cybernetics, 1985.
[Popovici2005] N. Popovici and M. Boncut, "On the Hopfield algorithm. Foundations and examples", General Mathematics, 2005.
[Rojas1996] R. Rojas, "13. The Hopfield Model", in Neural Networks – A Systematic Introduction, Springer, 1996.


