

2. Modeling of data distributions

The data we try to model are patterns whose coordinates comprise all sensory and all motor variables. The set of these patterns forms a distribution in a sensorimotor space. The focus of this thesis is on finding a simplified representation of such a distribution; that is, learning amounts to finding a statistical description of the data rather than constructing a neural network. Like a network, however, the proposed algorithms have a training phase, in which the distribution is approximated, and a recall phase, in which a partially given input pattern is completed, as in a recurrent neural network (section 1.5.4). Training and recall phases can also be found in the self-organizing map (section 1.5.5) and the parameterized self-organizing map (section 1.5.6). Unlike these algorithms, however, the present approach is based on principal component analysis (PCA).

Since PCA gives only a linear approximation of a data distribution, two extensions are used. The first extends the single PCA to a mixture of many analyzers; each analyzer linearly approximates a locally confined region of the pattern space. This variant of PCA is called `local PCA'. PCA itself cannot separate the space into such regions; the separation is accomplished by linking local PCA to vector quantization. The second extension, kernel PCA, uses only one analyzer, but linearly approximates the data in a higher-dimensional space into which they were mapped. This chapter describes the background of these two extensions, which the sketches below illustrate.
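As a rough illustration of the first extension, the following sketch combines a vector quantizer with one PCA per region. It is a minimal sketch only, not the algorithm developed in this thesis: it assumes plain k-means as the quantizer, and the names local_pca, n_units, and n_components are illustrative.

    import numpy as np

    def local_pca(X, n_units=5, n_components=2, n_iter=50, seed=0):
        X = np.asarray(X, dtype=float)
        # Vector quantization (here: plain k-means) partitions the
        # pattern space into locally confined regions.
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=n_units, replace=False)]
        for _ in range(n_iter):
            dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dist.argmin(axis=1)
            for j in range(n_units):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(axis=0)
        # One PCA per region: the leading right-singular vectors of
        # the centered local data span the local linear approximation.
        analyzers = []
        for j in range(n_units):
            Xj = X[labels == j] - centers[j]
            _, _, Vt = np.linalg.svd(Xj, full_matrices=False)
            analyzers.append((centers[j], Vt[:n_components]))
        return analyzers

The second extension can be sketched just as briefly: kernel PCA performs linear PCA on data mapped into a higher-dimensional feature space, which is accessed only through a kernel function. The Gaussian kernel and its width parameter gamma below are illustrative choices, not those prescribed by the thesis.

    def kernel_pca(X, n_components=2, gamma=1.0):
        X = np.asarray(X, dtype=float)
        # Kernel matrix K_ij = k(x_i, x_j) stands in for inner
        # products of the mapped data in the feature space.
        sq = (X ** 2).sum(axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
        # Center the mapped data in feature space.
        n = len(X)
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one
        # Eigenvectors of the centered kernel matrix give the principal
        # components; training-point projections follow directly.
        w, v = np.linalg.eigh(Kc)
        idx = np.argsort(w)[::-1][:n_components]
        return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

Neither sketch shows the recall phase, in which a partially given pattern is completed; that step is the subject of the algorithms developed later in this thesis.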


