Hopfield network example

As a toy problem for playing/testing/understanding attention allocation in OpenCog, I’ve been emulating the behaviour of a Hopfield network within the OpenCog AtomSpace.

For those not already aware, a Hopfield network is a kind of recurrent neural network that acts as an associative memory. It consists of a number of units linked together, and it is the pattern of activity across these units that the network stores and recalls. Patterns are stored in a Hopfield network by adjusting the weights of the links between units, based on which units are active together in a given pattern.
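
To make that storage rule concrete, here is a minimal sketch of the classic Hebbian rule for a standard Hopfield network. This is plain NumPy, not the OpenCog/AtomSpace emulation this post is about, and the function name is illustrative:

```python
import numpy as np

def store_patterns(patterns):
    """Store bipolar (+1/-1) patterns with the classic Hebbian rule.

    patterns: array of shape (P, N), one row per pattern.
    Returns the (N, N) symmetric weight matrix.
    """
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)   # strengthen links between co-active units
    np.fill_diagonal(w, 0)    # Hopfield networks have no self-connections
    return w / len(patterns)
```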

A result of this is that if you give the Hopfield network a partial or slightly corrupted pattern, it should still retrieve the complete pattern, provided that pattern is in memory. The memorised patterns are in fact minimal-energy states of the network (see the Wikipedia entry for details).
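
Retrieval can be sketched in a few lines as well. The loop below repeatedly aligns each unit with the weighted sum of its neighbours, which descends the network's energy function until it settles in a fixed point, ideally one of the stored patterns. Again, this is a generic Hopfield sketch under the same assumptions as the previous snippet, not the AtomSpace version:

```python
import numpy as np

def recall(w, probe, max_steps=100):
    """Recover a stored pattern from a noisy or partial probe."""
    s = probe.copy()
    for _ in range(max_steps):
        prev = s.copy()
        # Asynchronous updates: visit units in random order.
        for i in np.random.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
        if np.array_equal(s, prev):  # converged to an energy minimum
            break
    return s
```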

Using this traditional method, approximately 0.14N patterns can be stored (where N is the number of units/nodes) in a fully connected network. However, trying to teach a new pattern to a network already loaded to capacity results in aberrant patterns. There are ways of allowing continuous learning, probably the best known being the palimpsest scheme, in which the weights between units are capped; this reduces the number of storable patterns to around 0.05N, however. We theorise that, using the mechanism of attention allocation, we'll be able to achieve comparable if not improved results.
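
For illustration, a palimpsest-style update can be sketched as a clipped Hebbian increment: each new pattern is added and the weights are then clamped to a fixed range, so old memories fade gradually rather than corrupting the whole network. The cap value below is an arbitrary illustrative choice, not a figure from this post:

```python
import numpy as np

def store_palimpsest(w, pattern, cap=0.3):
    """Add one bipolar pattern to weight matrix w, with capped weights."""
    n = len(pattern)
    w = w + np.outer(pattern, pattern) / n
    np.fill_diagonal(w, 0)
    return np.clip(w, -cap, cap)  # bounded weights: older patterns decay
```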

The process of emulating a Hopfield network is described on the OpenCog wiki.

In my next post, I’ll explain an example of Hopfield network emulation using attention allocation.
