The goal of the recall is to complete a pattern whose components are only partially given (Hoffmann and Möller, 2003). The resulting pattern shares the components of $x$ that are given as input.
After learning, the data distribution is represented by a collection of hyper-ellipsoids; each has a center $c_j$, direction vectors $w_{jl}$ (principal axes), semi-axis lengths $\lambda_{jl}$, and a residual variance $\sigma_j^2$ (in any direction orthogonal to the span of the principal axes). The $w_{jl}$ are the eigenvectors of a local principal component analysis, and the $\lambda_{jl}$ are the corresponding eigenvalues. The hyper-ellipsoids are iso-potential surfaces of the normalized Mahalanobis distance plus reconstruction error (see section 3.2.1),

$$E_j(x) = \xi^T \Lambda_j^{-1} \xi + \frac{1}{\sigma_j^2}\left(\|x - c_j\|^2 - \|\xi\|^2\right) + \ln\det\Lambda_j + (d - q)\ln\sigma_j^2\,, \qquad \xi = W_j^T (x - c_j)\,. \qquad (4.1)$$

Here, $W_j$ is the $d \times q$ matrix whose columns are the $w_{jl}$, and $\Lambda_j = \mathrm{diag}(\lambda_{j1}, \ldots, \lambda_{jq})$.
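For concreteness, a minimal NumPy sketch of this potential follows; the function name `potential` and its argument names are illustrative choices, not from the original.

```python
import numpy as np

def potential(x, c, W, lam, sigma2):
    """Normalized Mahalanobis distance plus reconstruction error, eq. (4.1).

    x      : pattern, shape (d,)
    c      : center of the unit's hyper-ellipsoid, shape (d,)
    W      : eigenvectors of the local PCA as columns, shape (d, q)
    lam    : eigenvalues lambda_1..lambda_q, shape (q,)
    sigma2 : residual variance orthogonal to the principal axes
    """
    d, q = W.shape
    dev = x - c
    xi = W.T @ dev                 # coordinates along the principal axes
    rec = dev @ dev - xi @ xi      # squared reconstruction error
    return (xi**2 / lam).sum() + rec / sigma2 \
        + np.log(lam).sum() + (d - q) * np.log(sigma2)
```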
An input to the network (a value for one part of the components of $x$) defines the offset $p$ of a constrained space $x(\zeta)$ spanning the space of all possible output values:

$$x(\zeta) = p + M\zeta\,, \qquad (4.2)$$

where the columns of $M$ are the unit vectors along the free (output) dimensions and $\zeta$ collects the free parameters.
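If, for illustration, the input supplies the components with indices `given_idx` of a $d$-dimensional pattern, $p$ and $M$ can be built as in the following sketch (the helper name `constraint` is hypothetical; it reuses the `numpy` import above):

```python
def constraint(x_in, given_idx, d):
    """Offset p and basis M of the constrained space x(zeta) = p + M @ zeta.

    x_in      : values of the known components, shape (len(given_idx),)
    given_idx : indices of the components supplied as input
    d         : dimension of the complete pattern
    """
    p = np.zeros(d)
    p[given_idx] = x_in
    free_idx = [i for i in range(d) if i not in given_idx]
    M = np.eye(d)[:, free_idx]     # unit vectors of the free (output) axes
    return p, M
```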
The recall of the complete pattern happens in two steps. First, for each unit $j$, determine the point $x_j^*$ that yields the smallest potential value (4.1) on the constrained subspace. Second, choose the unit $j^*$ that gives the smallest of these minimal potential values $\{E_j(x_j^*)\}$. The corresponding $x_{j^*}^*$ yields the desired output values (figure 4.2).
The error $E_j$ as a function of the free parameters $\zeta$ can be written as:
$$E_j(x(\zeta)) = (p + M\zeta - c_j)^T \left( W_j \Lambda_j^{-1} W_j^T + \sigma_j^{-2}\left\{ I - W_j W_j^T \right\} \right) (p + M\zeta - c_j) + \ln\det\Lambda_j + (d - q)\ln\sigma_j^2\,. \qquad (4.3)$$

Its derivative with respect to the free parameters is

$$\frac{\partial E_j}{\partial \zeta} = 2\, M^T A_j M \zeta + 2\, M^T A_j (p - c_j)\,, \qquad (4.4)$$

with the abbreviation

$$A_j = W_j \Lambda_j^{-1} W_j^T + \sigma_j^{-2} \left( I - W_j W_j^T \right)\,. \qquad (4.5)$$

Setting the derivative (4.4) to zero and solving for $\zeta$ yields the minimizer

$$\zeta_j^* = -\left( M^T A_j M \right)^{-1} M^T A_j (p - c_j)\,. \qquad (4.7)$$
The function $E_j(\zeta)$ is convex. Therefore, $\zeta_j^*$ is the only minimum. Thus, $x_j^* = p + M\zeta_j^*$ is the point with the smallest potential on the constrained space. Next, $j^*$ can be chosen, and the resulting $x_{j^*}^*$ concludes the algorithm. For each input, a unique output is given, and local minima as described in section 4.1.2 are avoided.
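Combining the closed-form minimizer (4.7) with the selection of $j^*$, a hedged sketch of the complete recall is given below; the container `units`, holding one `(c, W, lam, sigma2)` tuple per unit, is an assumption of this sketch, and `potential` and `constraint` are the helpers sketched above.

```python
def recall(p, M, units):
    """Two-step recall: minimize E_j on the constrained space for each
    unit via eq. (4.7), then return the minimizer of the best unit j*."""
    best_E, best_x = np.inf, None
    for c, W, lam, sigma2 in units:
        d = W.shape[0]
        # A_j from eq. (4.5)
        A = W @ np.diag(1.0 / lam) @ W.T + (np.eye(d) - W @ W.T) / sigma2
        # closed-form minimizer on the constraint, eq. (4.7)
        zeta = -np.linalg.solve(M.T @ A @ M, M.T @ A @ (p - c))
        x = p + M @ zeta
        E = potential(x, c, W, lam, sigma2)
        if E < best_E:
            best_E, best_x = E, x
    return best_x
```

For example, completing the last two components of a three-dimensional pattern from a given first component would read `p, M = constraint(np.array([0.5]), [0], 3)` followed by `x = recall(p, M, units)`.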