The abstract RNN was compared to a multi-layer perceptron (MLP) and to a look-up table. This section further points out differences from pattern association based on self-organizing maps (SOM) (Ritter et al., 1990) and from pattern association based on parametrized self-organizing maps (PSOM) (Ritter, 1993).
Section 8.3 mentioned some tasks to which the abstract RNN can be applied but the MLP cannot. In tasks that only require learning a function from input to output (many-to-one or one-to-one), the comparison is less clear-cut. In the forward direction of the kinematic arm model, the abstract RNN did better than the MLP (section 4.5); in the mobile-robot experiment, however, the MLP did better at predicting the sensory input (section 7.3.2).
We observed that the performance of the abstract RNN deteriorated as the ratio of input to output dimensions increased (sections 4.5.2 and 4.6). This suggests that for tasks in which the input dimensions greatly outnumber the output dimensions (for example, by 20 to 2), the MLP should be preferred.
This dependence on the number of input dimensions also holds for a look-up table that picks the best fit among all training patterns (section 4.6). In contrast to a look-up table, however, the abstract RNN could interpolate between patterns and thus produced more accurate associations (sections 4.4.2 and 6.3). Moreover, when noise was added to the training patterns, the performance of the abstract RNN deteriorated less than that of the look-up table (section 6.3).
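The difference between the two retrieval schemes can be sketched in a toy setting. The one-dimensional patterns, the quadratic target function, and the query point below are illustrative assumptions, not the thesis data; the sketch only contrasts snapping to the single best-fitting stored pattern with blending neighbouring patterns.

```python
import numpy as np

# Hypothetical 1-D training patterns (input x, target y = x^2);
# the actual experiments use higher-dimensional kinematic/robot data.
train_x = np.array([0.0, 1.0, 2.0, 3.0])
train_y = train_x ** 2

def lookup_table(x):
    """Return the output of the single best-fitting training pattern."""
    i = np.argmin(np.abs(train_x - x))
    return train_y[i]

def interpolate(x):
    """Blend the two nearest training patterns (piecewise-linear)."""
    return np.interp(x, train_x, train_y)

# For a query that falls between stored patterns, the look-up table
# snaps to one pattern, whereas interpolation blends neighbours and
# lands closer to the true value (here 1.4**2 = 1.96).
query = 1.4
table_out = lookup_table(query)   # 1.0
interp_out = interpolate(query)   # 2.2
```

In this sketch the interpolated estimate is closer to the true value than the look-up answer, mirroring the more accurate associations reported in sections 4.4.2 and 6.3; the piecewise-linear blend stands in for the abstract RNN's interpolation only schematically.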
Compared to the SOM and PSOM algorithms (sections 1.5.5 and 1.5.6), the abstract RNN has the following advantages: