Video for Attention System

This demo shows how a robot with an attentional model based on an egosphere can detect events in its environment and interact intuitively with people. The humanoid robot Nao can react to faces, movements, and sounds.
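The idea behind an egosphere-based attention model can be sketched as follows: sensory events (faces, movements, sounds) are projected onto a sphere centered on the robot's head, each with a saliency value that decays over time, and the robot orients toward the currently most salient direction. This is a minimal illustrative sketch, not the project's actual implementation; the class, decay constant, and event format are assumptions.

```python
class Egosphere:
    """Hypothetical egosphere: events on a head-centered sphere with decaying saliency."""

    def __init__(self, decay=0.9, threshold=0.05):
        # Each event: [azimuth_deg, elevation_deg, saliency, modality]
        self.events = []
        self.decay = decay          # assumed per-step saliency decay factor
        self.threshold = threshold  # events below this saliency are forgotten

    def add_event(self, azimuth, elevation, saliency, modality):
        self.events.append([azimuth, elevation, saliency, modality])

    def step(self):
        # Decay all saliencies each time step and drop faded events.
        for e in self.events:
            e[2] *= self.decay
        self.events = [e for e in self.events if e[2] > self.threshold]

    def focus(self):
        # Direction of the most salient event, or None if the sphere is empty.
        if not self.events:
            return None
        return max(self.events, key=lambda e: e[2])


sphere = Egosphere()
sphere.add_event(azimuth=30.0, elevation=0.0, saliency=0.8, modality="face")
sphere.add_event(azimuth=-90.0, elevation=10.0, saliency=0.5, modality="sound")
sphere.step()
target = sphere.focus()
print(target[3])  # the face is currently the most salient event
```

In a real system the saliency of each modality would come from dedicated detectors (face detection, motion detection, sound localization), with the egosphere fusing them into a single head-centered map.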

Video for Internal Models and Reaching

This demo shows the functioning of internal models and simulations on the humanoid robot Nao for visuo-motor coordination. In the experiment, the robot learns to predict the outcomes of its actions in an initial learning and exploration phase. It can later actively choose one of several possible actions (using the left or right arm to reach the target) based on the prediction errors calculated after simulating the actions.
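The selection step described above can be sketched in a few lines: a forward model per arm predicts the reach outcome, and the robot picks the arm whose simulated outcome has the smaller error with respect to the target. This is a hedged illustration only; the geometric forward model, shoulder positions, and arm length are invented stand-ins for the models the robot actually learns.

```python
import math

ARM_LENGTH = 0.25  # assumed maximum reach in meters (illustrative, not Nao's)
SHOULDERS = {"left": (-0.1, 0.0), "right": (0.1, 0.0)}  # assumed 2-D shoulder positions


def simulate_reach(arm, target):
    """Forward-model stand-in: predicted hand position when reaching for target."""
    sx, sy = SHOULDERS[arm]
    dx, dy = target[0] - sx, target[1] - sy
    dist = math.hypot(dx, dy)
    if dist <= ARM_LENGTH:
        return target  # target within reach: zero predicted error
    scale = ARM_LENGTH / dist
    return (sx + dx * scale, sy + dy * scale)  # closest reachable point


def prediction_error(arm, target):
    # Error between the simulated outcome and the desired target.
    px, py = simulate_reach(arm, target)
    return math.hypot(px - target[0], py - target[1])


def choose_arm(target):
    # Mentally simulate both candidate actions, select the lower-error one.
    return min(SHOULDERS, key=lambda arm: prediction_error(arm, target))


print(choose_arm((-0.3, 0.1)))  # a target on the robot's left side
```

The key point mirrored from the demo is that the comparison happens on simulated outcomes before any movement is executed, so the robot commits only to the action it expects to succeed.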

Video for Speech Dialogue

This video, staged by Aldebaran Robotics, demonstrates speech dialogues for human-robot interaction that were developed within the EARS project as part of Deliverable D4.1 (voice dialogue system). It shows the Nao robot receiving guests at a hotel reception and handling typical inquiries.