
4.1.1 Why abstract recurrent neural networks?

Recurrent neural networks have two main advantages over feed-forward networks (Steinkühler and Cruse, 1998; Movellan and McClelland, 1993). First, they do not fail on tasks that allow many possible solutions for a given input, since an RNN relaxes to one of these solutions. Second, the roles of input and output neurons can be chosen after training; thus, the same sensorimotor network can be used, for example, both as a forward model and as an inverse model.

Evidently, RNNs exist in the brain (see, for example, Nakazawa et al. (2002)). So far, however, computational RNNs that can learn sensorimotor relationships are missing. Existing models either cannot be trained (Steinkühler and Cruse, 1998; Cruse, 2001) or cannot represent arbitrary functional relationships (Hopfield, 1982, 1984). Although recurrent connections are widespread in biological nervous systems, their specific functions and the corresponding learning mechanisms are still largely unknown, and thus do not offer a direct approach to this problem. Moreover, since we did not see how to construct a computational recurrent network that can also learn to approximate functions, we developed an abstract network with the desired characteristics. This step was further motivated by potential field models (Bachmann et al., 1987; Dembo and Zeitouni, 1988). These models can store arbitrarily many patterns, which form the minima of a potential field; in recall, the network state descends into one of these minima. However, they do not generalize; the data are merely stored.
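As a concrete illustration of this recall mechanism, the following minimal Python sketch places stored patterns at the minima of a potential and lets a noisy cue descend the gradient into the nearest minimum. The Gaussian-well potential and all parameters are assumptions chosen for a stable toy demonstration, in the spirit of, but not identical to, the published forms in the cited models.

    import numpy as np

    # Toy potential-field memory: stored patterns sit at the minima of a
    # potential E(x); recall is gradient descent on E. The Gaussian wells
    # and all parameters below are illustrative assumptions.
    patterns = np.array([[0.0, 0.0],
                         [1.0, 1.0],
                         [1.0, -1.0]])   # stored patterns = minima of E
    sigma = 0.3                          # well width (assumed)

    def grad_E(x):
        """Gradient of E(x) = -sum_i exp(-||x - p_i||^2 / (2 sigma^2))."""
        diff = x - patterns                               # (n_patterns, dim)
        w = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))
        return (w[:, None] * diff).sum(axis=0) / sigma**2

    x = np.array([0.8, 0.7])             # noisy cue near the pattern (1, 1)
    for _ in range(200):                 # recall: descend into nearest well
        x -= 0.05 * grad_E(x)
    print(x)                             # ends up near [1. 1.]

The descent retrieves the stored pattern closest to the cue, but the dynamics can only ever reach the stored points themselves; this is exactly the lack of generalization noted above.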

