For the abstract recurrent neural network, the error increase over time was worse than for the MLP (tables 7.1 and 7.2). On the standard training set, MPPCAext performed better than NGPCA and NGPCAconstV for the anticipated q value of 5 (table 7.1). With a larger q value, however, the performance of NGPCA and NGPCAconstV improved, while that of MPPCAext declined.
On the change set, NGPCAconstV did better than the other two methods (table 7.2). Here, the distributions of assigned patterns (and thus of the prior probabilities) differed clearly (figure 7.10). In the NGPCAconstV case, the distribution was confined to a smaller range compared to the other two cases. Moreover, NGPCA resulted in 13 units with only a few assigned patterns (fewer than 30). The test with the change set further showed that more principal components were needed than in the standard case to achieve almost equal performance (table 7.2).
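The unit-occupancy statistic mentioned above can be sketched as follows; the assignment counts and the threshold of 30 patterns are illustrative, not taken from the actual experiment:

```python
from collections import Counter

def low_occupancy_units(assignments, threshold=30):
    """Count units to which fewer than `threshold` patterns are assigned."""
    counts = Counter(assignments)  # patterns assigned per unit
    return sum(1 for c in counts.values() if c < threshold)

# Hypothetical unit indices assigned to a set of training patterns
assignments = [0] * 100 + [1] * 25 + [2] * 5 + [3] * 60
print(low_occupancy_units(assignments))  # units 1 and 2 fall below 30
```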

The abstract RNN can also learn the inverse direction, from two successive sensory states to the two wheel velocities (tables 7.1 and 7.2). However, the error is too high for robot control: the root mean square error is around 20% of the total velocity range. This error is so large that the predicted motor commands can only indicate whether the robot moves forward or backward, or turns left or right, given the change in the camera image.
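A relative error figure of this kind can be computed as the root mean square error divided by the total range of the quantity. The velocities and the range below are hypothetical, chosen only to illustrate the computation:

```python
import math

def relative_rmse(predicted, actual, value_range):
    """Root mean square error as a fraction of the total value range."""
    se = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    rmse = math.sqrt(sum(se) / len(se))
    return rmse / value_range

# Hypothetical wheel velocities; assumed range of -10 to 10 (total range 20)
actual = [2.0, -5.0, 7.5, 0.0]
predicted = [4.0, -1.0, 9.5, -3.0]
print(relative_rmse(predicted, actual, value_range=20.0))  # about 0.14
```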


The main performance difference between the abstract RNN and the MLP lies in the linear increase of the square error over time. The best obtained value for the abstract RNN, 0.28 pixels squared per interval (table 7.2), is more than double the best MLP value of 0.13 pixels squared per interval (table 7.1). Furthermore, compared to the MLP, the abstract RNN was not only less accurate in the forward prediction but also slower. On an Athlon 2200+ with 1 GB RAM, a single mapping with the abstract RNN took 1.3 ms and with the MLP 0.016 ms (both algorithms were implemented in C++). Thus, for the applications of the chain, only the MLP was used.
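Per-mapping times like those above are typically obtained by averaging the wall-clock time over many repeated calls. A minimal sketch of such a measurement, with a trivial stand-in function in place of the actual network mappings:

```python
import time

def time_per_call(fn, n_calls=10000):
    """Average wall-clock time of a single call to fn, in milliseconds."""
    start = time.perf_counter()
    for _ in range(n_calls):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / n_calls

# Stand-in for a forward mapping; a real model would be benchmarked the same way
def toy_mapping():
    return sum(i * 0.5 for i in range(10))

print(f"{time_per_call(toy_mapping):.4f} ms per mapping")
```

Averaging over many calls amortizes timer resolution and loop overhead, which matters when a single call takes only microseconds, as for the MLP here.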