From "Reinforcement Learning: An Introduction", Richard S. Sutton and Andrew G. Barto:

Thus, adding dimensions, such as new sensors or new features, to a task should be almost without consequence if the complexity of the needed approximations remains the same. The new dimensions may even make things easier if the target function can be simply expressed in terms of them. Methods like tile and RBF coding, however, do not work this way. Their complexity increases exponentially with dimensionality even if the complexity of the target function does not. For these methods, dimensionality itself is still a problem. We need methods whose complexity is unaffected by dimensionality per se, methods that are limited only by, and scale well with, the complexity of what they approximate. One simple approach that meets these criteria, which we call Kanerva coding, is to choose binary features that correspond to particular prototype states.
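To make that last sentence concrete, here is a minimal Python sketch of the idea (not taken from the book): a fixed set of random binary prototype states is chosen, a feature is active whenever the current state lies within a Hamming-distance radius of its prototype, and the value estimate is linear in those features. The state dimension D, prototype count k, radius r, and step size are illustrative assumptions, not values from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the book): binary
# states of dimension D, k random prototypes, Hamming radius r.
D, k, r = 32, 256, 12

# Fixed random binary prototype states. Note that k is chosen
# independently of D: adding state dimensions does not force the
# number of features to grow.
prototypes = rng.integers(0, 2, size=(k, D))

def kanerva_features(state: np.ndarray) -> np.ndarray:
    """Binary feature vector: feature i is 1 iff `state` lies
    within Hamming distance r of prototype i."""
    hamming = (prototypes != state).sum(axis=1)
    return (hamming <= r).astype(float)

# Linear value estimate over the binary features.
weights = np.zeros(k)

def value(state: np.ndarray) -> float:
    return float(weights @ kanerva_features(state))

def update(state: np.ndarray, target: float, alpha: float = 0.01) -> None:
    """One semi-gradient step of the weights toward `target`."""
    global weights
    phi = kanerva_features(state)
    weights += alpha * (target - weights @ phi) * phi

# Usage: learn a value for one state and read it back.
s = rng.integers(0, 2, size=D)
update(s, target=1.0)
print(value(s))
```

Because only the prototype count k controls the number of features, growing the state dimension D leaves the approximator's complexity unchanged, which is exactly the property the quoted passage asks for.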

See also: https://en.wikipedia.org/wiki/Sparse_distributed_memory
