The mixture models showed roughly the same performance (table 4.2). They outperformed the single unit on large concatenated output regions, whereas the single unit was better at interpolating between thin stripes. All variants of the abstract RNN outperformed a table look-up on the training set.
The completions obtained by the abstract RNN resembled human faces (figure 4.8; here, NGPCA is used as the example). Some of the recalled images that do not match their test image (like those in the bottom row) nevertheless seem to fit the boundary conditions. These cases suggest that the approximation of the distribution of faces intersects the constraint space more than once. To exploit this one-to-many mapping, the unit (ellipsoid) yielding the second-smallest potential was determined (see section 4.2), and the square error of the corresponding completion was computed. Using NGPCA with the first mask (top half occluded), the second ellipsoid provided the completion that better matched the test image (smaller square error) in 18 cases. Replacing the originally recalled images with these completions reduced the mean square error from 0.0133 to 0.0122.
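The selection step described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the function name and the per-unit `potentials` and `completions` arrays are assumptions, standing in for whatever the mixture model actually produces for a given masked input.

```python
import numpy as np

def two_best_completions(potentials, completions, test_image):
    """Pick the units with the smallest and second-smallest potential
    and compare their completions against the ground-truth test image.

    potentials  -- one potential value per unit (ellipsoid), shape (n_units,)
    completions -- one candidate completion per unit, shape (n_units, n_pixels)
    test_image  -- the unoccluded ground-truth image, shape (n_pixels,)
    """
    order = np.argsort(potentials)          # units sorted by potential
    first, second = order[0], order[1]      # best and second-best unit
    # mean square error of each candidate completion
    err_first = np.mean((completions[first] - test_image) ** 2)
    err_second = np.mean((completions[second] - test_image) ** 2)
    return first, second, err_first, err_second
```

When `err_second < err_first`, the second ellipsoid captures the alternative branch of the one-to-many mapping better, which corresponds to the 18 cases reported above.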