This chapter investigates whether both the planning of goal-directed movements and the judgment of the geometry of the surroundings can be based on a forward model (Hoffmann and Möller, 2004). Tests were carried out with a mobile robot equipped with a camera. The forward model was implemented either as a multi-layer perceptron (MLP) or as an abstract recurrent neural network (RNN). The robot collected training data by random exploration; each training pattern for the forward model was obtained from two successive images and the corresponding motor command. To predict the sensory consequence of a sequence of motor commands, multiple copies of the trained forward model can be linked into a chain. In this prediction task, a chain of MLPs was more accurate than a chain of abstract RNNs. For planning goal-directed movements, an optimization method yielded the required motor commands. For judging the geometry of the surroundings, the simulation of covert motor commands revealed the relation between self-motion and sensory input.
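The chaining of forward-model copies described above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the linear map standing in for the trained network, the state dimension, and all names (`forward_model`, `predict_sequence`, `A`, `B`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)          # stand-in for the learned sensory dynamics
B = rng.normal(size=(4, 2))  # stand-in for the learned effect of a motor command

def forward_model(s, u):
    """One-step prediction: current sensory state and motor command -> next state."""
    return A @ s + B @ u

def predict_sequence(s0, motor_commands):
    """Chain copies of the forward model: each prediction becomes the next input."""
    s = s0
    for u in motor_commands:
        s = forward_model(s, u)
    return s

s0 = np.zeros(4)
commands = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
s_final = predict_sequence(s0, commands)
```

Because each predicted state is fed back as the next input, one-step prediction errors accumulate along the chain, which is why the comparison between MLP and abstract RNN chains matters for multi-step prediction accuracy.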