

7.3.3 Performance outside the training domain

For both the abstract RNN and the MLP, the error increase per interval was smaller than the error of a single prediction step, in contrast to the prediction in appendix C.3. To investigate the cause, we first study how the forward models perform on points outside the training domain.

Figure 7.11 shows the change in the forward-model output, $\mathbf{f}(\mathbf{s} + \mathbf{e}) - \mathbf{f}(\mathbf{s})$, that results from a small deviation $\mathbf{e}$ from a given sensory input $\mathbf{s}$ taken from the test set. In the MLP case, the major part of the change in the output is concentrated around a line with slope 0.5. Thus, sensory states outside the training domain were mapped back, closer to the domain. In the abstract RNN case, this also holds for most of the test points; some points, however, were mapped further away from the domain (top part in figure 7.11, right).

Figure 7.11: Response of the forward mapping $\mathbf{f}(\mathbf{s})$ to small deviations $\mathbf{e}$ from a test pattern input $\mathbf{s}$, for the MLP (left) and for the abstract RNN (right). All values are in pixels. The abstract RNN was trained on the standard set with MPPCA-ext and $q = 5$. The right diagram is typical for all tested mixture models.
\includegraphics[width=7.7cm]{pioneer/deviation.eps} \includegraphics[width=7.7cm]{pioneer/deviationRNN.eps}
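
The perturbation test itself is independent of the particular network. The following minimal sketch (Python with NumPy) shows how such a response diagram can be obtained for any trained forward model; the names deviation_response and response_slope and their parameters are illustrative, not taken from the thesis, and f stands for the trained forward model as a callable.

import numpy as np

def deviation_response(f, test_inputs, scale=1.0, n_samples=10, seed=0):
    # Collect pairs (e, f(s + e) - f(s)) for small random deviations e
    # around each test pattern s, component-wise, as in figure 7.11.
    rng = np.random.default_rng(seed)
    deviations, responses = [], []
    for s in test_inputs:
        base = f(s)
        for _ in range(n_samples):
            e = rng.normal(0.0, scale, size=s.shape)  # small deviation (pixels)
            deviations.append(e)
            responses.append(f(s + e) - base)
    return np.concatenate(deviations), np.concatenate(responses)

def response_slope(e, d):
    # Least-squares slope of the response against the deviation; a value
    # around 0.5 means deviations are damped, i.e. states outside the
    # training domain are mapped back toward it.
    return float(e @ d) / float(e @ e)

Plotting the response components against the deviation components reproduces the kind of diagram shown in figure 7.11.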

The sensory input from the training and test data is restricted to a two-dimensional manifold. Thus, a single prediction step starts on this manifold, and its error can extend in all ten directions of the sensory space. In a sequence, however, the sensory input has left the manifold after some steps. In these cases, as shown in the experiment, the sensory state is mapped back toward the manifold. Therefore, the directions in which the error can spread are restricted to those within the manifold. Thus, different from the assumption in appendix C.3, the sequence of errors did a random walk in two dimensions instead of ten. This led to more error compensations and thus to a slower error increase.

To test this argument further, the squared errors of successive prediction steps were compared; a step counts as an error compensation if the squared error decreased. In accordance with the argument, the observed percentage of error compensations was 45% for the MLP and around 40% for the abstract RNN; both values are higher than expected for a random walk in ten dimensions.
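
That the compensation rate indeed depends on the dimension of the walk can be checked with a short simulation. This is a sketch under the assumption that the errors accumulate as independent Gaussian steps; the name compensation_rate and the parameters n_steps and n_runs are illustrative, not taken from the thesis.

import numpy as np

def compensation_rate(dim, n_steps=50, n_runs=10000, seed=0):
    # Fraction of steps in which a Gaussian random walk in `dim` dimensions
    # moves back toward the origin, i.e. in which the squared error
    # decreases (an error compensation).
    rng = np.random.default_rng(seed)
    steps = rng.normal(size=(n_runs, n_steps, dim))
    walk = np.cumsum(steps, axis=1)  # accumulated error after each step
    sq = np.sum(walk**2, axis=2)     # squared error at each step
    return float(np.mean(np.diff(sq, axis=1) < 0))

for dim in (2, 10):
    print(dim, compensation_rate(dim))

In this sketch, the compensation rate comes out noticeably higher in two dimensions than in ten, consistent with the observed percentages.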

