6.1.1 Related work

The presented robot model differs fundamentally from previous reaching-and-grasping studies. These alternative approaches differ from the presented model in at least one of the following three points.

First, some studies address either reaching or grasping, but not both. For example, Ritter et al. (1989) and Walter et al. (2000) made a robot reach toward a point light, which was given in image coordinates. Molina-Vilaplana et al. (2004) also used a point as the target. On the other hand, Uno et al. (1995) trained only a robot hand, using samples provided by a human wearing a data glove, and Fuentes and Nelson (1998) manipulated objects that were already placed between the fingers of the gripper.

Second, some studies rely on a representation of the target object in three-dimensional Cartesian space. For example, Cipolla and Hollinghurst (1997) and Molina-Vilaplana et al. (2004) used stereo vision to compute the target's coordinates directly. To obtain the target's orientation, Salganicoff et al. (1996) fitted an ellipsoid to three-dimensional laser-scanner data of the target. Fuentes and Nelson (1998) moved an object with a robot hand by manually providing goal coordinates. Furthermore, like the kinematic arm model in section 4.5, most simulations take Cartesian goal coordinates as input. For example, Oztop et al. (2004), who simulated a human-like arm and hand, used the target's coordinates to compute the arm-joint angles.

Third, some studies operate in closed loop: the inverse model produces incremental joint-angle changes (Distante et al., 2000; Molina-Vilaplana et al., 2004). Using reinforcement learning, for example, a robot arm can learn to choose the correct reaching direction (Distante et al., 2000).
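
The following minimal sketch makes the closed-loop idea concrete; it is not the cited authors' implementation. A toy two-link planar arm, a Jacobian-transpose step standing in for a learned inverse model, and all gains and step counts are illustrative assumptions.

import numpy as np

def forward_model(q, l1=1.0, l2=1.0):
    # Toy two-link planar arm: joint angles -> hand position (illustrative).
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def inverse_model(error, q, eps=1e-5, gain=0.2):
    # Stand-in for a learned inverse model: a Jacobian-transpose step that
    # maps the current positional error to a small joint-angle increment.
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (forward_model(q + dq) - forward_model(q)) / eps
    return gain * (J.T @ error)

def closed_loop_reach(target, q, steps=200, tol=1e-3):
    # Closed loop: observe the error, query the inverse model for an
    # increment, apply it, and repeat until the hand is near the target.
    for _ in range(steps):
        error = target - forward_model(q)
        if np.linalg.norm(error) < tol:
            break
        q = q + inverse_model(error, q)
    return q

# Example: reach toward a point inside the arm's workspace.
q_final = closed_loop_reach(np.array([1.2, 0.8]), np.array([0.3, 0.3]))

In such a loop, the inverse model only needs to output a locally correct correction; the repeated feedback compensates for model errors, which is precisely what distinguishes these closed-loop approaches from a one-shot mapping to a final posture.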

Closest to our approach is the work by Kuperstein (1988, 1990). His robot can grasp an elongated object in different orientations by mapping visual data onto motor activations that lead to an arm posture. This mapping is carried out by a neural controller, whose training data were obtained by randomly sampling the motor space. Unlike the abstract RNN, Kuperstein's neural controller is a function from sensory input to motor output and therefore cannot cope with one-to-many mappings.
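
The one-to-many problem can be illustrated with the same toy two-link arm used above; the arm geometry and the postures are illustrative assumptions, not Kuperstein's setup. The elbow-up and elbow-down postures place the hand on the same target, so a single-valued controller trained on both examples with a squared-error criterion is driven toward their average, and the average posture misses the target.

import numpy as np

def forward_model(q, l1=1.0, l2=1.0):
    # Same toy two-link arm: joint angles -> hand position (illustrative).
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

# Two distinct, equally valid postures (elbow up and elbow down) that put the
# hand on the same target: a one-to-many relation from target to posture.
q_up = np.array([0.2, 1.0])
q_down = np.array([q_up[0] + q_up[1], -q_up[1]])  # mirrored elbow solution
target = forward_model(q_up)
assert np.allclose(forward_model(q_down), target)

# A single-valued controller trained on both examples for the same input can
# output only one posture; minimizing squared error pulls it toward the mean
# of the two valid postures, and the mean posture misses the target.
q_mean = 0.5 * (q_up + q_down)
print(forward_model(q_mean), "vs. target", target)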

