A neural network model for learning to represent 3D objects via tactile exploration
- Xiaogang Yan, Department of Computer Science, University of Otago, Dunedin, Otago, New Zealand
- Alistair Knott, Department of Computer Science, University of Otago, Dunedin, Otago, New Zealand
- Steven Mills, Department of Computer Science, University of Otago, Dunedin, Otago, New Zealand
Abstract: This paper addresses a fundamental but still open question: how can brains represent 3D objects? Rather than building a model of visual processing, we focus on modelling the haptic sensorimotor processes through which objects are explored by touch. This approach is motivated by two facts: 1) in developmental terms, tactile exploration is the primary means by which infants learn to represent object shapes; 2) blind people can represent and distinguish objects through haptic exploration alone. We first establish the relationship between the geometric properties of an object and the constrained sequences of navigation actions produced during tactile exploration. We then propose a neural network model that learns to represent 3D objects from these experiences, using a mechanism computationally similar to that of hippocampal place cells. Simulation results on a 2×2×2 cube and a 3×2×1 cuboid show that the proposed model is effective for representing 3D objects via tactile exploration, and comparative results suggest that the model learns a representation more efficiently and accurately for the 3×2×1 cuboid, with its asymmetrical geometrical structure, than for the 2×2×2 cube, with its symmetrical geometrical structure.
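The abstract's core idea, encoding touch positions on an object's surface through place-cell-like units, can be illustrated with a minimal sketch. The following is not the authors' model; it is a hypothetical toy in which each unit fires as a Gaussian function of the distance between a simulated contact point and the unit's preferred location, and an object representation is summarized as the mean population activity over an exploration sequence. All names (`PlaceCellEncoder`, `cube_surface_points`) and parameters (number of cells, Gaussian width) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class PlaceCellEncoder:
    """Hypothetical place-cell-like encoder: each unit responds with a
    Gaussian tuning curve centered on a preferred 3D location."""

    def __init__(self, n_cells=64, extent=2.0, sigma=0.5):
        # Preferred locations scattered uniformly over the workspace.
        self.centers = rng.uniform(0.0, extent, size=(n_cells, 3))
        self.sigma = sigma

    def encode(self, pos):
        # Squared distance from the touch position to each unit's center.
        d2 = np.sum((self.centers - pos) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

def cube_surface_points(n, size=2.0):
    """Simulated tactile exploration of a 2x2x2 cube: random contact
    points on its surface (one coordinate pinned to a face)."""
    pts = rng.uniform(0.0, size, size=(n, 3))
    axes = rng.integers(0, 3, size=n)   # which axis is pinned
    faces = rng.integers(0, 2, size=n)  # which of the two faces
    pts[np.arange(n), axes] = faces * size
    return pts

enc = PlaceCellEncoder()
codes = np.array([enc.encode(p) for p in cube_surface_points(200)])
# Crude object representation: mean population activity over the sequence.
obj_repr = codes.mean(axis=0)
```

A learned model would replace the final averaging step with trainable weights; the sketch only shows how a sequence of contact points can be turned into a fixed-size population code.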