Machine Understanding

March 3, 2010

Evaluating HTMs, Part 2: What HTMs Do

See also Evaluating HTMs, Part 1

"Hierarchical Temporary Memory, Concepts, Theory, and Terminology " by Hawkins and George, Section 1, What Do HTMs Do?, asserts that HTMs can discover causal relationships in data presented to them. Once the causality is established, they can infer the cause of a new input. They can predict (with some accuracy) future data sets, and they can use the above three abilities to direct (choose) behavior.

It should be pointed out immediately that each of these four abilities has been demonstrated by other computational systems. Neural network software that could discover certain types of relationships in raw data was available at least as early as the 1980s. Many mathematical forms of analysis, regression analysis for instance, can find causal relationships within data. In any system, once a relationship is established, recognizing it again should present no great difficulty; nor should making predictions, or using the system to control behavior.

What is interesting about this set of claims is that in an HTM the abilities work together holistically (as in the human brain) and should be stackable. That is, HTM subsystems should be able to be stacked into a larger system that can deal with external relationships of increasing complexity. In addition, HTMs can find relationships in time (sequential or temporal relationships).
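
To make the stacking idea concrete, here is a minimal Python sketch of the structure. The class names and the patch scheme are my own inventions for illustration, not Numenta's node interface: each leaf watches a small patch of the raw input, and each parent sees only what its children report, so higher levels summarize wider regions.

```python
# Minimal sketch of hierarchical stacking, assuming each node sees only
# the outputs of the nodes below it. Names are hypothetical.

class LeafNode:
    """Watches one small patch of the raw input."""
    def __init__(self, patch):
        self.patch = patch  # the slice of the input this node sees

    def output(self, data):
        return tuple(data[self.patch])

class ParentNode:
    """Sees only its children's outputs, never the raw data."""
    def __init__(self, children):
        self.children = children

    def output(self, data):
        # Pool the children's reports; each level up summarizes a
        # wider region of the input than any single child does.
        return tuple(child.output(data) for child in self.children)

# Four input values, two leaves watching two values each, one parent:
root = ParentNode([LeafNode(slice(0, 2)), LeafNode(slice(2, 4))])
print(root.output([0, 1, 1, 0]))   # -> ((0, 1), (1, 0))
```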

In fact, if we call some of the data "objects," then for an HTM the objects "have a persistent structure; they exist over time." The authors call these objects "causes." In theory an HTM system could deal with multiple forms of data coming in directly from the world, but for now an HTM usually deals with a specific subset of the data (much as a human might concentrate on music, or on reading). The data could be a computer file, or a stream of data from input devices.

For the HTM to work, the causes should be relatively stable but should generate data that changes over time, such as a horse moving across a visual field, or a conversation between two people. Typically multiple causes are active at once.

The discovery of causal relationships is a learning process. During learning the HTM builds representations of causes in the form of vectors. A relationship is expressed as a set of probabilities assigned to causes; this set is called a "belief." The causes, relationships, and beliefs can be quite complex if the HTM itself is complex enough. In particular, hierarchies of causes and beliefs can be learned.
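
As I read it, a "belief" at its simplest is a normalized set of probabilities, one per learned cause. A toy sketch, with the cause names and evidence scores invented for illustration:

```python
# Sketch of a "belief": a probability distribution over learned causes.
# The causes and evidence scores below are hypothetical.

def belief_from_evidence(scores):
    """Normalize raw evidence scores into a probability distribution."""
    total = sum(scores.values())
    return {cause: score / total for cause, score in scores.items()}

evidence = {"horse": 8.0, "dog": 3.0, "shadow": 1.0}
print(belief_from_evidence(evidence))
# -> {'horse': 0.666..., 'dog': 0.25, 'shadow': 0.083...}
```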

The authors say an HTM, once it has gone through learning, can "infer causes of a novel input." This means that if it is presented with new data, it will try to match the data to one of the causes it already knows about. This is basically pattern recognition, and other systems, including certain neural networks, do this well in certain situations. A good point made by the authors is that if a million-pixel visual field (of a scene with motion) is used as the input, it would be rare for exactly the same pattern to be input twice. So inference, matching a set of data to the closest cause, is a necessity. In the older neural networks causes were typically static; adding a time dimension to the data usually makes it easier for an HTM to learn and infer. I should point out, however, that "infer causes of novel input" can, to me, mean something more than what is claimed for an HTM. For humans, it can mean a deduction, or even a deep chain of deductions, rather than just recognizing a pattern despite some degree of ambiguity. Then again, perhaps a sufficiently complete HTM system could do even that.
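
To see why matching to the closest cause is unavoidable, consider a toy version of inference. Real HTMs infer with belief propagation over a hierarchy; this nearest-neighbor sketch, with invented patterns, only illustrates why exact matching fails on novel input:

```python
# Illustration of inference as "match the novel input to the closest
# known cause." The prototype patterns are hypothetical.

def hamming(a, b):
    """Count the positions where two patterns disagree."""
    return sum(x != y for x, y in zip(a, b))

# Learned prototype patterns for two hypothetical causes:
causes = {
    "horse": (1, 1, 0, 1, 0, 0),
    "fence": (0, 0, 1, 0, 1, 1),
}

def infer(novel):
    """Return the cause whose prototype is nearest the novel input."""
    return min(causes, key=lambda c: hamming(causes[c], novel))

# A noisy pattern that matches no prototype exactly:
print(infer((1, 0, 0, 1, 0, 0)))   # -> 'horse'
```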

The ability to predict is the third leg of what HTMs do. In other words, given a sequence it has already encountered, an HTM will predict that the sequence is happening again. This may not sound like much, but it is an ability that is crucial to machine understanding. In particular the authors point to priming: given the latest data, the HTM makes a prediction and notes differences between what is predicted and what actually happens. If the data is ambiguous or noisy, the HTM may fill in with the predicted data. If a prediction is fed back into the HTM as data, this is akin to thinking or imagining; thus the machine could plan for the future. The authors claim "HTMs can do this well." Imagine a sheep-herding dog application: the better it can predict the behavior of the sheep, the less energy it should need to herd them.
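
Here is a toy illustration of priming, using a first-order transition table rather than the HTM algorithm itself: learn which element tends to follow which, predict the successor, and fall back on the prediction when the input is missing or noisy.

```python
# Toy Markov-table sketch of sequence prediction and priming.
# The sequence and fallback logic are invented for illustration.

from collections import defaultdict, Counter

def learn_transitions(sequence):
    """Count how often each element follows each other element."""
    table = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        table[current][nxt] += 1
    return table

def predict(table, current):
    """Most frequent successor seen after `current` during learning."""
    followers = table[current]
    return followers.most_common(1)[0][0] if followers else None

song = list("abcabcabd")
table = learn_transitions(song)
print(predict(table, "b"))   # -> 'c' (seen twice, versus 'd' once)

# Priming: if the next input is missing or noisy, fill in the prediction.
noisy_input = None
filled = noisy_input if noisy_input is not None else predict(table, "b")
print(filled)                # -> 'c'
```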

Finally, HTMs can direct behavior. Of course almost any device, even a simple mechanical one, can direct behavior: a mouse trap, given a certain type of input, will engage in a known, fixed behavior. Still, mentioning this for HTMs is important, because behavior is exactly what we would expect artificial intelligence or machine understanding to be used for. An important point is that "From the HTM's perspective, the [output] system it is connected to is just another object in the world." In other words, an HTM can learn how its own outputs act as causes in the world.
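
A minimal sketch of that feedback loop, with an invented one-dimensional "world": the system's own action becomes part of the next input, so the same machinery that models external causes can model the effects of its own actions.

```python
# Sketch of "the output system is just another object in the world."
# The world model and action names below are hypothetical.

def world(position, action):
    """Toy environment: actions move a point left or right."""
    return position + (1 if action == "right" else -1)

def choose_action(position, goal):
    """Pick the action expected to move us toward the goal."""
    return "right" if goal > position else "left"

position, goal = 0, 3
while position != goal:
    action = choose_action(position, goal)
    position = world(position, action)   # the action's effect becomes
    print(action, "->", position)        # part of the next input
```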

If you want to get an idea of the potential power of HTMs before wading through a lot of other material, Section 1.4 of the paper, "Direct Behavior," is a great starting point.

I'm excited after reading this part.

Possible Acronym: LIPD (learn, infer, predict, direct) model

Next: How do HTMs discover and infer causes?