An article in Wired from a while back on Piotr Wozniak (no relation to Steve), a researcher of optimal memory and learning strategies, got me thinking about learning theory and memorization in the context of OpenCog. From the article (emphasis mine):
Long-term memory, the Bjorks said, can be characterized by two components, which they named retrieval strength and storage strength. Retrieval strength measures how likely you are to recall something right now, how close it is to the surface of your mind. Storage strength measures how deeply the memory is rooted. Some memories may have high storage strength but low retrieval strength. Take an old address or phone number. Try to think of it; you may feel that it’s gone. But a single reminder could be enough to restore it for months or years. Conversely, some memories have high retrieval strength but low storage strength. Perhaps you’ve recently been told the names of the children of a new acquaintance. At this moment they may be easily accessible, but they are likely to be utterly forgotten in a few days, and a single repetition a month from now won’t do much to strengthen them at all.
So, in memory studies, they talk of storage strength and retrieval strength. In my observation, retrieval strength is analogous to the distance between an atom’s short term importance and the attentional focus threshold. Atoms just below the attentional focus threshold will be much easier to retrieve, both because there is a greater chance they are stored locally in memory and because they are more likely to be used by mind agents than atoms with lower short term importance. Storage strength, on the other hand, is related to the long term importance of an atom. Atoms with very low long term importance are unlikely to persist in the atom space, or more accurately, they’ll be preferentially forgotten over atoms with higher long term importance.
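To make the analogy concrete, here is a minimal sketch of the mapping. The class and function names are my own illustration, not OpenCog’s actual API (a real AtomSpace manages STI/LTI through its attention allocation machinery); the threshold value is arbitrary.

```python
from dataclasses import dataclass

# Hypothetical names for illustration only; not the OpenCog API.
AF_THRESHOLD = 100.0  # attentional-focus boundary on short term importance

@dataclass
class Atom:
    name: str
    sti: float  # short term importance ~ retrieval strength
    lti: float  # long term importance  ~ storage strength

def retrieval_difficulty(atom: Atom) -> float:
    """Distance below the attentional focus threshold: 0 means in focus."""
    return max(0.0, AF_THRESHOLD - atom.sti)

def forget(atoms: list[Atom], capacity: int) -> list[Atom]:
    """Preferentially forget atoms with the lowest long term importance."""
    return sorted(atoms, key=lambda a: a.lti, reverse=True)[:capacity]

atoms = [Atom("old_address", sti=5.0, lti=90.0),   # high storage, low retrieval
         Atom("new_names",   sti=95.0, lti=10.0)]  # high retrieval, low storage

# The old address is harder to retrieve right now...
assert retrieval_difficulty(atoms[0]) > retrieval_difficulty(atoms[1])
# ...but it survives forgetting, while the new acquaintance's names do not.
assert forget(atoms, capacity=1)[0].name == "old_address"
```

The two example atoms mirror the Bjorks’ cases above: an old address (deeply stored, hard to surface) and freshly learned names (easy to surface, shallowly stored).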
One of the problems [with learning] is that the amount of storage strength you gain from practice is inversely correlated with the current retrieval strength. In other words, the harder you have to work to get the right answer, the more the answer is sealed in memory.
Perhaps there will be a need to incorporate this: something that is persistently of use, but only becomes useful at significantly spaced periods of time. Requiring an OpenCog instance to reason about or research the fact again each time would be inefficient if there is some way of recognizing this long term trend. In a way, this is where something like a System Activity Mining agent might come into play, to data mine such trends. However, I’m personally unsure whether such an agent will scale to working on an entire atom space, particularly if it’s trying to detect these long term, infrequent trends.
One way to implement the storage of these infrequently but consistently used atoms is to assess the velocity at which short term importance changes when bringing an atom into the attentional focus. This velocity would be greater for atoms which are harder to recall, since they have further to travel to reach the attentional focus. A higher velocity could then confer more long term importance when an atom enters the attentional focus, rather than long term importance coming purely from the stimulus reward system.
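The proposed rule can be sketched in a few lines. This is speculative pseudocode of my own, not an existing OpenCog mechanism: the function name, the threshold, and the gain factor are all assumed for illustration. When an atom crosses into the attentional focus, the distance it had to climb stands in for retrieval effort and is converted into a long term importance boost.

```python
AF_THRESHOLD = 100.0  # attentional-focus boundary (assumed value)
LTI_GAIN = 0.5        # assumed scaling factor for the velocity-to-LTI boost

def stimulate(sti: float, lti: float, stimulus: float) -> tuple[float, float]:
    """Raise an atom's STI; if the atom enters the attentional focus,
    convert its climb distance (a proxy for recall effort) into LTI."""
    new_sti = sti + stimulus
    if sti < AF_THRESHOLD <= new_sti:
        velocity = AF_THRESHOLD - sti  # further to travel => harder recall
        lti += LTI_GAIN * velocity
    return new_sti, lti

# A hard-to-recall atom (low STI) gains more LTI from the same recall
# event than one already hovering just below the focus threshold --
# echoing the inverse correlation the Bjorks describe.
_, lti_hard = stimulate(sti=10.0, lti=0.0, stimulus=95.0)
_, lti_easy = stimulate(sti=95.0, lti=0.0, stimulus=95.0)
assert lti_hard > lti_easy
```

Note that no boost is given to atoms already inside the focus: repeated stimulation of an easily retrieved atom strengthens storage very little, which is exactly the behaviour the memory research predicts.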