Well, I’d noticed it’s been a while since we’ve had a post here, so I thought I’d rectify that with a brief note on what I’ve been up to. Posts of more substance are on their way, I promise!
Recently we’ve been experimenting with some new mechanisms for the spread of Short Term Importance (STI) in OpenCog. The initial implementation worked all right, but not well enough to make the Hopfield network emulation perform admirably. Ideally we’d like an emulator comparable to traditional Hopfield networks, with the added benefit of effective continuous learning. Now, perhaps this focus on a toy problem is unnecessary, but if we can come up with a continuous-learning Hopfield network then there’ll be a scientific paper in there somewhere, which will add to OpenCog’s credibility in the future.
Initially I implemented an importance-spreading mechanism that ensured importance couldn’t spread uphill to atoms with higher importance. This performed somewhat better than the existing system (based on very cursory evaluation), but was still rather hacky.
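To make the "no uphill spreading" idea concrete, here's a minimal sketch in Python. This is not the actual OpenCog implementation – the function name, data layout, and the `spread_fraction` parameter are all illustrative assumptions – but it captures the rule: an atom passes a slice of its STI along Hebbian links, and only to neighbours whose STI is currently lower.

```python
# Hypothetical sketch of downhill-only STI spreading.
# sti:   dict mapping atom -> STI value
# links: dict mapping atom -> list of (neighbour, hebbian_weight) pairs
def spread_importance(sti, links, spread_fraction=0.1):
    new_sti = dict(sti)
    for atom, value in sti.items():
        # Only neighbours with strictly lower STI may receive importance.
        downhill = [(n, w) for n, w in links.get(atom, []) if sti[n] < value]
        total_weight = sum(w for _, w in downhill)
        if total_weight == 0:
            continue  # nowhere downhill to spread
        outflow = value * spread_fraction
        new_sti[atom] -= outflow
        # Share the outflow in proportion to Hebbian link weight.
        for n, w in downhill:
            new_sti[n] += outflow * w / total_weight
    return new_sti
```

Because every unit of STI removed from one atom is handed to its downhill neighbours, total STI is conserved across a spreading step.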
Ben suggested an approach analogous to diffusion, which uses a Markov matrix to represent the transition probabilities of STI moving along a Hebbian link from one atom to another. So I went and implemented that, and it works extremely well when imprinting a single pattern – but not so well with more than one. I’m now going to modify how the transition matrix is constructed based on which atoms are in the attentional focus, which should stop importance diffusion from being the free-for-all it is at the moment.
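The diffusion idea can be sketched as follows – again, this is a toy illustration under my own assumptions (the `retain` parameter and matrix construction are hypothetical, not the real code): Hebbian link weights are normalised into a column-stochastic transition matrix, and one diffusion step is just a matrix–vector multiply against the STI vector.

```python
import numpy as np

# Hypothetical sketch of STI diffusion as a Markov process.
# sti:     length-n vector of STI values
# hebbian: n x n non-negative matrix; hebbian[i, j] is the weight of the
#          Hebbian link carrying STI from atom j to atom i
def diffusion_step(sti, hebbian, retain=0.5):
    n = len(sti)
    # Each atom keeps `retain` of its STI...
    M = np.eye(n) * retain
    for j in range(n):
        out = hebbian[:, j].copy()
        out[j] = 0.0               # ignore self-links
        total = out.sum()
        if total > 0:
            # ...and spreads the rest in proportion to outgoing link weight.
            M[:, j] += (1.0 - retain) * out / total
        else:
            M[j, j] = 1.0          # an isolated atom keeps everything
    # Every column of M sums to 1, so total STI is conserved.
    return M @ sti
```

Restricting the construction to the attentional focus would then amount to zeroing out (or down-weighting) columns for atoms outside the focus before normalising, so only focused atoms act as diffusion sources.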
And in between doing this Attention Allocation stuff, I’m trying to port an implementation of Probabilistic Logic Networks from Novamente (OpenCog’s benevolent ancestor) to OpenCog. Theoretically that shouldn’t be too hard… but the code is quite complex in places. Smart pointers to vectors of trees of predicates, oh my!