Based on indications from neuroscience and psychology, both perception and action can be internally simulated in organisms by activating sensory and/or motor areas in the brain without actual external sensory input and/or without any resulting behavior (a phenomenon called thinking). Organisms typically use this phenomenon to cope with missing external inputs. Applying this phenomenon to a real robot has recently attracted the attention of many researchers. Although some work has been reported on this issue, none of it has so far considered the potential of the robot's vision at the sensorimotor abstraction level, where data is extracted from the environment. In this study, a novel visuomotor abstraction is introduced into a physical robot through a memory-based learning algorithm. Experimental results indicate that our robot, using its vision, could develop a simple anticipation mechanism in its tree-type memory structure through interaction with the environment, which guides its behavior in the absence of external inputs.
Real-world applications are usually subject to change and very difficult to predict. Any sudden change in the environment can cause a temporary loss of communication with the external world. Organisms that have the capacity for cognition, or thinking, can cope with such situations by replacing the missing or corrupted external sensory data with their own internal representations (or experience).
In recent decades, a branch of science called cognitive neuroscience, an interdisciplinary link between cognitive psychology and neuroscience, has been established, and researchers have begun to introduce such phenomena to mobile robots [
Cognitive robotics is concerned with endowing robots with mammalian and human-like cognitive capabilities to enable them to accomplish complex tasks in complex environments. Cognitive ability is the capacity to understand and make sense of the world. In [
In recent years, building a complete blindfolded navigation system in a mobile robot has been a challenging task for many robotic researchers [6-9]. For instance, some initial experiments were presented in [
To support our argument, we conducted a psychological experiment similar to the one introduced by Lee and Thompson [
From the above experiment we can conclude that subject X collected a sufficient amount of data from the environment during his first "eyes open" navigation. This data could include the various dimensions of the room, which the
subject related to times and distances, helping him to build internally (in his inner world, where sensory experiences and the consequences of different behaviors may be anticipated) his own internal image. In contrast, the data that subject Y collected was limited to the objects his hand touched during his first blindfolded navigation and their relation to his moving steps. This data, however, was not sufficient to perform the task accurately.
In the above experiment, subject Y's performance can be seen as representative of the results of the most recently reported works (e.g., [
We also tried to probe the inner world that had been built automatically in each subject's memory by giving each of them a sheet of white paper and asking them to draw the outline of the room in which they had trained (note that subject Y had never seen the room). It was not surprising to find that subject X could draw almost all the details of the room (
The work presented in this paper was motivated by the problems described above. Here we explore the inner world of a real mobile robot that is given the chance to explore its surrounding environment with its camera before being told to navigate blindfolded within it. In this study, the robot used two network architectures: the first to control its navigation, and the second to build its internal representation.
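To make the idea of a tree-type anticipation memory concrete, the following is a minimal sketch, not the paper's actual algorithm: a tree that stores short perception/action sequences observed while the camera is available, and later, when external input is missing, predicts the most frequently observed next perception for the current sensorimotor context. All names (`MemoryTree`, `record`, `anticipate`) and the fixed context depth are illustrative assumptions.

```python
# Hedged sketch of a tree-type memory for sensorimotor anticipation.
# NOT the authors' implementation; names and structure are assumptions.

class Node:
    def __init__(self):
        self.children = {}          # (perception, action) -> Node
        self.next_perceptions = {}  # perception -> observation count

class MemoryTree:
    def __init__(self, depth=3):
        self.root = Node()
        self.depth = depth  # length of sensorimotor context retained

    def record(self, episode):
        """Store an episode, a list of (perception, action) pairs
        observed during 'eyes open' exploration."""
        for i in range(len(episode) - 1):
            context = episode[max(0, i - self.depth + 1): i + 1]
            node = self.root
            for step in context:
                node = node.children.setdefault(step, Node())
            nxt = episode[i + 1][0]
            node.next_perceptions[nxt] = node.next_perceptions.get(nxt, 0) + 1

    def anticipate(self, context):
        """Return the most frequently observed next perception for the
        given context, or None if the context was never experienced."""
        node = self.root
        for step in context[-self.depth:]:
            node = node.children.get(step)
            if node is None:
                return None
        if not node.next_perceptions:
            return None
        return max(node.next_perceptions, key=node.next_perceptions.get)

tree = MemoryTree(depth=2)
tree.record([("wall_left", "forward"),
             ("corner", "turn_right"),
             ("wall_right", "forward")])
print(tree.anticipate([("wall_left", "forward"), ("corner", "turn_right")]))
```

During blindfolded navigation, the robot would replace the missing camera reading with the anticipated perception; an unfamiliar context (where `anticipate` returns `None`) would correspond to the hesitation subject Y showed in the psychological experiment.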