The faces were taken from the face database of the Max Planck Institute for Biological Cybernetics, Tübingen (Blanz and Vetter, 1999). The database contains 100 male and 100 female faces, each in seven different perspectives (thus, 1400 images in total). The images are in color and their size is 256×256 pixels. The faces are centered in each image, and the background is black. 90% of the male and female faces were used for training and the remainder for testing (1260 training patterns and 140 test patterns).
To preprocess each image, first, a gray-scale image was obtained by averaging over all color channels. Second, a 45-pixel-wide margin (mostly black) was removed. Then, the image was subsampled to 26×26 pixels. Each pixel in the subsampled image corresponds to a 6×6-pixel window in the image of the previous processing stage; the pixel's gray value was set to the average gray value in that window.
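The preprocessing pipeline can be sketched as below. This is an illustrative reconstruction, not the original code: the function name and the parameterization of margin and window size are assumptions. Note that a 45-pixel margin on each side leaves 166 pixels, which tiles into 27 full 6×6 windows per side, whereas the text reports a 26×26 result (which a 50-pixel margin would produce exactly); the exact crop in the original may therefore differ slightly, so the sketch trims any leftover pixels to a multiple of the window size.

```python
import numpy as np

def subsample_face(image, margin=45, window=6):
    """Gray-scale, crop, and block-average one face image.

    `image` is assumed to be a (256, 256, 3) color array.
    Names and parameter defaults are illustrative assumptions.
    """
    # 1. Gray-scale by averaging over all color channels.
    gray = image.mean(axis=2)
    # 2. Remove the (mostly black) margin on every side.
    cropped = gray[margin:-margin, margin:-margin]
    # 3. Trim to a multiple of the window size, then average each
    #    non-overlapping window x window block into one pixel.
    h, w = cropped.shape
    h, w = (h // window) * window, (w // window) * window
    blocks = cropped[:h, :w].reshape(h // window, window,
                                     w // window, window)
    return blocks.mean(axis=(1, 3))
```

With the stated 45-pixel margin this yields a 27×27 image; a 50-pixel margin yields exactly the 26×26 size reported in the text.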
In training, the mixture models NGPCA and NGPCA-constV contained ten units with ten principal components each. The models had the same training parameters as in section 4.4.1. MPPCA-ext could not be used because the dimensionality of the patterns was too high (see section 3.5). The results were again compared to a model using a single unit, from which 119 principal components were extracted, giving the same number of free parameters as the mixture models. Moreover, the results were compared to a table look-up, in which the image from the training set with the smallest Euclidean distance to the input pattern was chosen.
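The table look-up baseline is a plain nearest-neighbor search over the training set. A minimal sketch, assuming the images are stored as numpy arrays (the function and variable names are illustrative, not from the original implementation):

```python
import numpy as np

def table_lookup(train_images, query):
    """Return the index of the training image with the smallest
    Euclidean distance to the query pattern.

    `train_images`: array of shape (n, H, W); `query`: array (H, W).
    Illustrative sketch of the nearest-neighbor baseline.
    """
    # Flatten each image into a vector so the Euclidean distance
    # is computed over all pixels at once.
    X = train_images.reshape(len(train_images), -1).astype(float)
    q = query.reshape(-1).astype(float)
    dists = np.linalg.norm(X - q, axis=1)
    return int(np.argmin(dists))
```

The chosen training image then serves as the model's output for that query, against which the mixture models' reconstructions are compared.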