March 15, 2010

Evaluating HTMs, Part 7: Belief Propagation

See also Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 8

"Hierarchical Temporary Memory, Concepts, Theory, and Terminology " by Hawkins and George, Section 3.3, Belief Propagation... clarifies some aspects of how HTMs do inference. That is, how the nodes in the hierarchy work together, given an input, to get an output. It assumes that the HTM has already gone through its learning curve with prior data.

Belief propagation "can be used to force the entire network to quickly settle on a set of beliefs that are mutually consistent." It assumes the network is Bayesian, which means that it works with probabilities rather than On/Off values or even shades-of-gray logic [see also Bayesian Network].
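As a minimal sketch of what "probabilities rather than On/Off" means (the cause names here are hypothetical, not from the paper), a Bayesian node's state can be held as a distribution over candidate causes instead of a single boolean:

    # Rather than a boolean such as is_eye = True, a Bayesian node keeps
    # a probability for each candidate cause; the values sum to 1.
    belief = {"leaf": 0.25, "eye": 0.60, "pebble": 0.15}
    assert abs(sum(belief.values()) - 1.0) < 1e-9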

The nodes of an HTM are connected via conditional probability tables (CPTs); each pair of connected nodes has a CPT. Because the math involves vectors multiplied by matrices, with vectors as outputs, these probably should be called conditional probability matrices. Nodes at different levels of the hierarchy do different work: the quantization points represent increasingly complex causes as data moves up the hierarchy, and in effect the CPTs translate between levels. The belief that a cause is a leaf, not an eye, might lead a higher-level node to believe that the overall cause is a tree, not a cat. But suppose another look says the cause (for a particular node) is more likely to be an eye. Then belief propagation, through the CPTs, may make a higher node increase the probability that a cat is the cause of the data, and decrease the probability that a tree is.
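As a rough illustration of that leaf/eye example (a Python sketch with made-up numbers, not Numenta's actual implementation), a child node's belief vector can be multiplied through a CPT matrix to shift a parent node's belief:

    import numpy as np

    # Hypothetical child-node belief over two low-level causes: [leaf, eye].
    # The "second look" has shifted weight toward "eye".
    child_belief = np.array([0.3, 0.7])

    # Hypothetical CPT linking child causes (rows) to parent causes
    # (columns [tree, cat]); each entry weights how well a parent cause
    # explains a child cause.
    cpt = np.array([[0.9, 0.1],   # "leaf" mostly supports "tree"
                    [0.2, 0.8]])  # "eye" mostly supports "cat"

    # Message to the parent: vector times matrix, then renormalize.
    message = child_belief @ cpt
    parent_belief = message / message.sum()

    print(parent_belief)  # [0.41, 0.59] -- "cat" now edges out "tree"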

You can read about belief propagation at Wikipedia or in more depth in books such as Probabilistic Reasoning in Intelligent Systems by Judea Pearl (Chapter 4, Belief Updating by Network Propagation).

HTMs are more sophisticated than ordinary Bayesian networks with belief propagation because they can also process time sequences, not just static inputs.

"You can think of HTMs as similar to Bayesian networks but with some significant additions to handle time, self-training, and the discovery of causes." HTMs also differ in that they allow loops in the network, just as the human nervous system does.