In recall, an image from the test set was presented and processed as in section 6.2.3. The resulting 20-dimensional vector defined the offset of a constrained space. Its intersection with the mixture of local PCA provided the output (chapter 4).
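The recall step can be illustrated with a simplified sketch. Chapter 4 defines the exact intersection of the constrained space with the mixture of local PCA; the version below merely projects the input vector onto each local PCA subspace and returns the closest projection. The names (`LocalPCA`, `recall`) and the nearest-subspace simplification are illustrative, not the thesis implementation.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One local PCA unit: a center and an orthonormal set of principal directions.
struct LocalPCA {
    std::vector<double> center;              // mean of the unit
    std::vector<std::vector<double>> basis;  // orthonormal principal directions
};

static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Project x onto the affine subspace of one unit:
// xhat = c + sum_k (w_k . (x - c)) w_k
std::vector<double> project(const LocalPCA& u, const std::vector<double>& x) {
    std::vector<double> d(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) d[i] = x[i] - u.center[i];
    std::vector<double> xhat = u.center;
    for (const auto& w : u.basis) {
        double a = dot(w, d);
        for (std::size_t i = 0; i < x.size(); ++i) xhat[i] += a * w[i];
    }
    return xhat;
}

// Simplified recall: pick the unit whose subspace lies closest to x
// and return the projection onto that subspace.
std::vector<double> recall(const std::vector<LocalPCA>& units,
                           const std::vector<double>& x) {
    std::vector<double> best;
    double bestErr = 1e300;
    for (const auto& u : units) {
        std::vector<double> xhat = project(u, x);
        double err = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i)
            err += (x[i] - xhat[i]) * (x[i] - xhat[i]);
        if (err < bestErr) { bestErr = err; best = xhat; }
    }
    return best;
}
```

In this sketch the output lies on the nearest local hyper-plane; the actual method additionally constrains the solution by the offset defined through the input image.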
Four different methods were compared: the abstract RNN based on a mixture of local PCA, the pattern association based on kernel PCA, the look-up table, and the multi-layer perceptron. For each method, the average recall time was measured. All of them were implemented in C++ and ran on an Athlon XP 2200+ with 1 GB RAM.
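The timing measurement itself can be sketched as follows; `averageRecallSeconds` and the use of `std::chrono` are assumptions for illustration, not the code that produced the reported numbers.

```cpp
#include <chrono>
#include <cstddef>

// Measure the average wall-clock time of one recall over n repetitions.
// 'recallOnce' stands in for any of the four compared methods (placeholder).
template <typename F>
double averageRecallSeconds(F recallOnce, std::size_t n) {
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i) recallOnce();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count()
           / static_cast<double>(n);
}
```

Averaging over many repetitions smooths out scheduler noise, which matters when a single recall takes only milliseconds.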
After recall, each joint angle was given as a population code. Each of the four code values had an activation ai (section 6.2.4). To obtain the joint angle, a Gaussian function was fitted to the points (θi, ai), where θi denotes the preferred angle of code unit i. The center of the fitted Gaussian equals the desired angle.
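A common way to fit a Gaussian to such a handful of (θi, ai) points is to fit a parabola to the log-activations by least squares; the peak of the parabola is the Gaussian center. The sketch below uses this log-parabola approach with a 3x3 solve via Cramer's rule. The function name and the clamping constant are illustrative; the thesis may use a different fitting procedure.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

static double det3(double m[3][3]) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Least-squares fit of log a_i = c0 + c1*theta_i + c2*theta_i^2.
// For a Gaussian, log a is exactly quadratic, and its peak -c1/(2*c2)
// is the Gaussian center, i.e. the decoded joint angle.
double gaussianCenter(const std::vector<double>& theta,
                      const std::vector<double>& act) {
    double S[5] = {0, 0, 0, 0, 0};  // sums of theta^0 .. theta^4
    double T[3] = {0, 0, 0};        // sums of theta^k * log(a)
    for (std::size_t i = 0; i < theta.size(); ++i) {
        double x = theta[i];
        double y = std::log(std::max(act[i], 1e-12));  // keep log finite
        double p = 1.0;
        for (int k = 0; k < 5; ++k) {
            S[k] += p;
            if (k < 3) T[k] += p * y;
            p *= x;
        }
    }
    double A[3][3]  = {{S[0], S[1], S[2]}, {S[1], S[2], S[3]}, {S[2], S[3], S[4]}};
    double A1[3][3] = {{S[0], T[0], S[2]}, {S[1], T[1], S[3]}, {S[2], T[2], S[4]}};
    double A2[3][3] = {{S[0], S[1], T[0]}, {S[1], S[2], T[1]}, {S[2], S[3], T[2]}};
    double d  = det3(A);
    double c1 = det3(A1) / d;  // Cramer's rule, column 1 replaced
    double c2 = det3(A2) / d;  // Cramer's rule, column 2 replaced
    return -c1 / (2.0 * c2);   // peak of the parabola = Gaussian center
}
```

With four points and three unknowns the system is over-determined, so the normal equations give the least-squares parabola; for noise-free Gaussian activations the center is recovered exactly.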
With the resulting set of joint angles, the robot arm is able to grasp the brick by moving from the resting position via the pre-grasping position to the grasping position. To evaluate the grasping performance off-line on the test set, however, further processing is required. The joint angles were transformed into a gripper position and orientation using a geometric model of the arm (see also section 4.5). The resulting values were compared with the coordinates and orientation of the brick (given in the test set). This comparison yields quantitative position and orientation errors. Further, a geometric model of the gripper tips and the brick determined whether a grasp was actually successful. The calculated rate of successful grasps agreed (within 1%) with a test on the operating robot, in which the brick was placed by hand 100 times at arbitrary table positions.
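The off-line evaluation can be sketched for a planar arm: forward kinematics turn the joint angles into a gripper pose, which is then compared with the brick pose under position and orientation tolerances. The 2D simplification, the link lengths, and the tolerance values are placeholders; the real arm and its geometric model (section 4.5) are three-dimensional, and the actual success test uses the gripper-tip geometry rather than a simple threshold.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

const double kPi = 3.14159265358979323846;

struct Pose2D { double x, y, phi; };  // gripper position and in-plane orientation

// Forward kinematics of a planar arm: each joint adds its angle to the chain
// orientation, each link extends the chain (link lengths are illustrative).
Pose2D forward(const std::vector<double>& jointAngles,
               const std::vector<double>& linkLengths) {
    Pose2D p{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < jointAngles.size(); ++i) {
        p.phi += jointAngles[i];
        p.x += linkLengths[i] * std::cos(p.phi);
        p.y += linkLengths[i] * std::sin(p.phi);
    }
    return p;
}

// A grasp counts as successful if both the position and the orientation error
// stay below their tolerances (tolerances are placeholders, not thesis values).
bool graspSuccessful(const Pose2D& gripper, const Pose2D& brick,
                     double posTol, double angTol) {
    double dx = gripper.x - brick.x;
    double dy = gripper.y - brick.y;
    double dphi = std::fabs(std::remainder(gripper.phi - brick.phi, 2.0 * kPi));
    return std::sqrt(dx * dx + dy * dy) <= posTol && dphi <= angTol;
}
```

Running such a check over every test-set sample yields the success rate that was compared against the 100 hand-placed trials on the operating robot.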