3. Mixture of local PCA

This chapter presents two new extensions of existing models that yield a mixture of local PCA for modeling data distributions. The first section motivates the use of local PCA instead of code-book vectors or univariate densities. The following two sections present the two learning variants: the first extends the vector quantizer Neural Gas (section 2.2.4) to local PCA; the second modifies the mixture of probabilistic PCA (section 2.3.2). Both variants were tested on synthetic data distributions and on a digit classification task. Finally, their advantages and disadvantages relative to each other are discussed. Throughout this chapter, a training set consists of points $\mathbf{x}_i \in \mathbb{R}^d$ with $i = 1, \ldots, n$, and the mixture contains $m$ units.
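To fix intuition for the notation, the general two-stage idea behind a mixture of local PCA (partition the n points in R^d among m units, then fit a PCA within each partition) can be sketched as follows. This is only a generic illustration using plain k-means and a per-cluster eigendecomposition; it is not the chapter's actual learning rules, and all data and parameter choices here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: n points x_i in R^d (d = 2 here), m units.
n, d, m = 300, 2, 3
X = np.concatenate([
    rng.normal(loc=c, scale=(1.0, 0.1), size=(n // m, d))
    for c in [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
])

# Stage 1: hard partition of the data with Lloyd's k-means iterations.
centers = X[rng.choice(len(X), size=m, replace=False)].copy()
for _ in range(50):
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for k in range(m):
        pts = X[labels == k]
        if len(pts):                      # keep the old center if empty
            centers[k] = pts.mean(axis=0)

# Stage 2: one PCA per unit, i.e. the eigenvectors of the local covariance.
local_pcs = []
for k in range(m):
    Xk = X[labels == k] - centers[k]
    cov = Xk.T @ Xk / len(Xk)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    local_pcs.append(eigvecs[:, ::-1])       # columns by decreasing variance
```

Each unit thus carries a center and a set of local principal directions; the variants presented below differ in how partition and PCA are learned jointly rather than in two separate stages.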


Heiko Hoffmann