Understanding the brain requires more than knowing which brain part processes which function; it must also include how specific tasks are solved. Without a hypothesis about the function of a neural structure, anatomical and electrophysiological data are often difficult to interpret. Thus, models of brain function are developed. Although certain details of a model can be tested by experiment (for example, whether, in a certain cortical region, an object's position is coded in eye-centered coordinates (Batista et al., 1999)), judging the overall working of the model usually relies on human intuition, which can be wrong. Therefore, a good test is to construct a system that can actually perform the task under study. This construction is `synthetic modeling', which may be understood as an extension of its analytical counterpart.
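As an illustrative sketch of what such a coding hypothesis asserts (the linearized form below is our simplification, not the analysis of Batista et al., 1999): let $\mathbf{x}_{\text{head}}$ be a target's position in head-centered coordinates and $\mathbf{g}$ the current gaze direction; an eye-centered code then represents approximately
\[
\mathbf{x}_{\text{eye}} \approx \mathbf{x}_{\text{head}} - \mathbf{g},
\]
so the represented position shifts whenever the eyes move, a prediction that can be checked against recorded neural responses.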
In synthetic modeling, robots have an advantage over simulations because the latter are more likely to oversimplify a problem (Brooks, 1986a). To be feasible, a simulation usually includes only those parts of a problem that seem essential from the scientist's perspective. Thus, in a robot setup, problems may emerge that were not foreseen in a simulation. For example, unlike in most simulations, sensory input in the real world is noisy, and this noise may make a model break down.
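To make the noise argument concrete, consider the following minimal sketch (our illustration; the plant, controller gains, and noise level are assumptions, not taken from any cited work). A one-dimensional `robot' holds a distance of 1.0 from a wall using a proportional-derivative rule; differencing consecutive sensor readings amplifies noise, so a controller that converges cleanly on ideal input fails to settle once realistic noise is added.

    import random

    def track(noise_sd, steps=300, kp=0.4, kd=2.0, dt=0.1):
        # Hold the distance to a wall at 1.0 using a PD rule
        # computed from a (possibly noisy) range sensor.
        d, v = 2.0, 0.0              # true distance and velocity
        prev = d                     # previous sensor reading
        for _ in range(steps):
            reading = d + random.gauss(0.0, noise_sd)
            error = reading - 1.0
            deriv = (reading - prev) / dt   # differencing amplifies noise
            prev = reading
            a = -kp * error - kd * deriv    # commanded acceleration
            v += a * dt
            d += v * dt
        return d

    random.seed(1)
    print("noise-free final distance:", track(0.0))    # converges to ~1.0
    print("noisy final distance:     ", track(0.05))   # jitters, does not settle

In a pure simulation with ideal sensors, the derivative term looks harmless; only the noisy setting exposes that the design must filter its input, which is exactly the kind of difficulty a robot setup forces one to confront.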
It remains arguable how well a robot can model biology (see Webb (2001) and the open peer commentaries following that article). The solution found by a robotics engineer might differ from the one realized in the brain (engineered circuits are not neural circuits). Nevertheless, being forced to solve a problem in the real world at least yields an understanding of the difficulties that must be overcome. Therefore, testing models on robots helps to develop an intuitive understanding of brain function.