

7.3.2 Anticipation with the abstract recurrent neural network

For the abstract recurrent neural network, the error increase over time was worse than for the MLP (tables 7.1 and 7.2). On the standard training set, MPPCA-ext was better than NGPCA and NGPCA-constV for q = 5 principal components (table 7.1). With a larger q, however, the performance of NGPCA and NGPCA-constV improved, while that of MPPCA-ext decreased.

On the change set, NGPCA-constV did better than the other two methods (table 7.2). Here, the distributions of the number of assigned patterns per unit (or, equivalently, of the prior probabilities) differed clearly between methods (figure 7.10). With NGPCA-constV, the distribution was confined to a narrower range than in the other two cases. Moreover, NGPCA produced 13 units with only few assigned patterns (fewer than 30). The test with the change set further showed that more principal components were needed than in the standard case to achieve a comparable performance (table 7.2).

Figure 7.10: Histogram of the number of assigned patterns per unit (or, equivalently, of the prior probabilities). n is the number of units in each interval. Here, the change set and 14 principal components were used.
[Figure: histogram/piohist.eps]

The abstract RNN can also learn the inverse direction, from two successive sensory states to the two wheel velocities (tables 7.1 and 7.2). However, the error is too high for robot control: the square root of the average square error is around 20% of the total velocity range. An error this large means that the predicted motor commands can only be used to tell whether the robot drives forward or backward, or turns left or right, given the two successive camera images.
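To make the 20% figure concrete, the following C++ fragment sketches how a root-mean-square velocity error could be computed and related to the velocity range. The helper name, the data layout, and the range value of 125 mm/sec are illustrative assumptions, not taken from the original evaluation code.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Root of the mean square error between predicted and actual wheel
    // velocities (in mm/sec). Hypothetical helper for illustration only.
    double rmsVelocityError(const std::vector<double>& predicted,
                            const std::vector<double>& actual)
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < predicted.size(); ++i) {
            const double d = predicted[i] - actual[i];
            sum += d * d;
        }
        return std::sqrt(sum / predicted.size());
    }

    int main()
    {
        // An RMS error around 25 mm/sec (cf. tables 7.1 and 7.2) relative to
        // an assumed total velocity range of 125 mm/sec gives roughly 20%.
        const double rmsError      =  25.0;  // mm/sec
        const double velocityRange = 125.0;  // mm/sec, assumed for illustration
        std::printf("relative error: %.0f %%\n",
                    100.0 * rmsError / velocityRange);
        return 0;
    }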


Table 7.1: Anticipation performance (on the 138 test series) of the abstract RNN trained on the standard set. The results for the multi-layer perceptron are shown for comparison. q is the number of principal components. The 1-step error is the average square error per sector for the first predicted interval. The error increase is the average square error increase per sector and interval between the second and the sixth interval (obtained by a linear fit as in figure 7.9; a sketch of this fit is given after the table). For the inverse direction, the last column shows the square root of the average square error of the predicted velocity.
method        q   1-step error   error increase   inv. dir. error
                  (pixel²)       (pixel²)         (mm/sec)
MPPCA-ext     5   1.8            0.30             27
NGPCA         5   1.7            0.46             25
NGPCA-constV  5   1.8            0.52             27
MPPCA-ext     7   1.8            0.39             25
NGPCA         7   1.7            0.30             24
NGPCA-constV  7   1.7            0.30             24
MLP           -   1.5            0.13             -
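The error-increase column is, as described in the caption of table 7.1, the slope of a straight-line fit to the average square error over the second to the sixth prediction interval. The following C++ fragment sketches how such a slope can be obtained by an ordinary least-squares fit; it is an illustration of that computation under this reading, not the original evaluation code.

    #include <vector>

    // Least-squares slope of the average square error (per sector) as a
    // function of the prediction interval, fitted over intervals 2 to 6.
    // The vector is indexed by interval number; illustrative sketch only.
    double errorIncreaseSlope(const std::vector<double>& meanSquareError)
    {
        const int first = 2, last = 6;  // intervals used for the fit
        const int n = last - first + 1;
        double sumX = 0.0, sumY = 0.0, sumXX = 0.0, sumXY = 0.0;
        for (int k = first; k <= last; ++k) {
            const double y = meanSquareError[k];
            sumX  += k;
            sumY  += y;
            sumXX += static_cast<double>(k) * k;
            sumXY += k * y;
        }
        // Closed-form slope of the straight-line fit y = a + b*k.
        return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    }

The resulting slope, in pixel² per interval, corresponds to the error-increase values listed in the tables.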



Table 7.2: Anticipation performance (on the 138 test series) of the abstract RNN trained on the change set. The results for the multi-layer perceptron are shown for comparison. See table 7.1 for further explanation.
method        q    1-step error   error increase   inv. dir. error
                   (pixel²)       (pixel²)         (mm/sec)
MPPCA-ext     14   1.8            0.46             23
NGPCA         14   2.0            0.81             22
NGPCA-constV  14   1.6            0.28             23
MPPCA-ext     7    1.9            0.51             24
NGPCA         7    1.9            0.77             24
NGPCA-constV  7    1.8            0.45             24
MLP           -    1.4            0.20             -


The main difference in performance between the abstract RNN and the MLP lies in the linear increase of the square error. The best value obtained with the abstract RNN, 0.28 pixels squared per interval (table 7.2), is more than double the best MLP value of 0.13 pixels squared per interval (table 7.1). Furthermore, compared to the MLP, the abstract RNN was not only less accurate in the forward prediction, but also slower. On an Athlon 2200+ with 1 GB RAM, a single mapping took 1.3 ms with the abstract RNN and 0.016 ms with the MLP (both algorithms were implemented in C++). Thus, for the applications of the chain, only the MLP was used.
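As an aside, per-mapping times like the 1.3 ms and 0.016 ms quoted above could be obtained by timing repeated forward mappings and averaging, roughly as sketched below. The function name and repetition count are placeholders, since the original benchmarking code is not given.

    #include <chrono>
    #include <cstdio>

    // Stand-in for one forward mapping of the network (abstract RNN or MLP);
    // the real prediction routine is not shown here.
    void predictOneStep() { /* network forward pass would go here */ }

    // Average wall-clock time per mapping in milliseconds.
    double averageMappingTimeMs(int repetitions)
    {
        const auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < repetitions; ++i)
            predictOneStep();
        const auto end = std::chrono::steady_clock::now();
        const std::chrono::duration<double, std::milli> elapsed = end - start;
        return elapsed.count() / repetitions;
    }

    int main()
    {
        std::printf("%.4f ms per mapping\n", averageMappingTimeMs(10000));
        return 0;
    }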

