This demo video (produced by Imperial College London) shows a newly developed algorithm for sound source localization and tracking on the NAO robot.
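The video does not go into the algorithmic details. As a rough point of reference only, the sketch below shows GCC-PHAT, a standard building block for microphone-pair sound source localization that estimates the time difference of arrival between two channels. It is a generic illustration, not the newly developed algorithm from the video, and all parameters are made up.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay between two microphone signals via GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12          # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs               # delay in seconds

# Toy usage: a source delayed by 5 samples between two "microphones"
fs = 16000
x = np.random.randn(fs)
delayed = np.roll(x, 5)
tau = gcc_phat(delayed, x, fs)
print(f"estimated delay: {tau * fs:.1f} samples")
```

From the estimated delay, the direction of arrival follows from the known microphone geometry; tracking then smooths these estimates over time.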
This video shows how the motor ego-noise of the NAO robot can be suppressed by a dictionary learning-based multichannel approach.
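For readers unfamiliar with the technique, here is a minimal single-channel sketch of dictionary-based ego-noise suppression using semi-supervised NMF: a noise dictionary is learned offline from ego-noise-only recordings and then kept frozen while the mixture is factorized. This illustrates the general idea only; the approach in the video is multichannel, and all data, dimensions, and parameters below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, W, n_iter=100, n_fixed=0):
    """Multiplicative-update NMF, V ~ W @ H. The first n_fixed columns of W
    (a pre-learned dictionary) stay frozen while the rest are adapted."""
    H = rng.random((W.shape[1], V.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W_new = W * (V @ H.T) / (W @ (H @ H.T) + 1e-9)
        W_new[:, :n_fixed] = W[:, :n_fixed]   # keep the noise dictionary fixed
        W = W_new
    return W, H

# 1) Offline: learn an ego-noise dictionary from noise-only magnitude spectra
V_noise = np.abs(rng.standard_normal((257, 200)))   # placeholder spectrogram
W_noise, _ = nmf(V_noise, rng.random((257, 8)) + 1e-3)

# 2) Online: explain the mixture with frozen noise atoms plus free speech atoms
V_mix = np.abs(rng.standard_normal((257, 100)))     # placeholder spectrogram
W0 = np.hstack([W_noise, rng.random((257, 8)) + 1e-3])
W, H = nmf(V_mix, W0, n_fixed=8)

# 3) Wiener-style mask built from the speech part of the factorization
V_speech = W[:, 8:] @ H[8:, :]
mask = V_speech / (W @ H + 1e-9)
enhanced = mask * V_mix
```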
This demo video, which is part of Deliverable D4.3, demonstrates the newly developed HRI capabilities of the NAO robot.
This video was recorded at the EuroPython 2015 conference and demonstrates the sound event recognition system for the NAO robot, implemented by the EARS partner Aldebaran (in Python, of course).
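Aldebaran's actual implementation is not reproduced here. As an illustration of what a minimal sound event recognizer can look like in Python, the sketch below pairs a crude spectral feature with a nearest-centroid classifier; every function, class, and parameter is invented for this example.

```python
import numpy as np

def features(signal, fs=16000, n_fft=512):
    """Average log-magnitude spectrum as a crude fixed-length feature vector."""
    frames = signal[:len(signal) // n_fft * n_fft].reshape(-1, n_fft)
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    return np.log(spec + 1e-9).mean(axis=0)

class NearestCentroid:
    """Toy classifier: one mean feature vector per sound event class."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                           for c in self.classes_}
        return self
    def predict(self, x):
        return min(self.classes_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))

# Toy usage with synthetic "events" (a pure tone vs. white noise)
rng = np.random.default_rng(1)
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
clf = NearestCentroid().fit([features(tone), features(noise)], ["tone", "noise"])
print(clf.predict(features(0.5 * rng.standard_normal(16000))))  # -> "noise"
```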
This demo shows how a robot with an attentional model based on an egosphere can detect events in its environment and intuitively interact with people. The humanoid robot NAO can react to faces, movements, and sounds.
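A minimal sketch of the egosphere idea, with all numbers and class names assumed for illustration: stimuli are stored as directions on a head-centered sphere with modality-dependent salience that decays over time, and the robot orients toward the current maximum.

```python
class Egosphere:
    """Head-centered attention map: stimuli are stored as (azimuth, elevation,
    salience) entries that decay over time; the robot attends to the maximum."""
    DECAY = 0.9            # salience retained per time step (assumed value)
    GAINS = {"face": 1.0, "motion": 0.6, "sound": 0.8}   # modality weights (assumed)

    def __init__(self):
        self.events = []   # list of [azimuth_deg, elevation_deg, salience]

    def add_stimulus(self, kind, azimuth, elevation):
        self.events.append([azimuth, elevation, self.GAINS[kind]])

    def step(self):
        """Decay all saliences and drop stimuli that have faded out."""
        for e in self.events:
            e[2] *= self.DECAY
        self.events = [e for e in self.events if e[2] > 0.05]

    def focus(self):
        """Return the direction the robot should attend to, if any."""
        if not self.events:
            return None
        az, el, _ = max(self.events, key=lambda e: e[2])
        return az, el

sphere = Egosphere()
sphere.add_stimulus("sound", azimuth=60, elevation=0)
sphere.add_stimulus("face", azimuth=-20, elevation=10)
sphere.step()
print(sphere.focus())   # -> (-20, 10): the face outcompetes the earlier sound
```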
This demo shows the functioning of internal models and simulations on the humanoid robot NAO for visuo-motor coordination. In the experiment, the robot learns to predict the outcomes of its actions in an initial learning and exploration phase. It can later actively choose one of several possible actions (reaching for the target with the left or the right arm) based on the prediction errors computed after internally simulating the actions.
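The sketch below illustrates this mechanism with deliberately simple linear forward models, one per arm: after a least-squares learning phase, the robot simulates a reach with each arm and selects the one whose predicted outcome lands closest to the target. The kinematics, joint limits, and dimensions are invented for the example and are not the robot's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Learning phase: fit one linear forward model per arm ----------------
# Each model maps a motor command to a predicted hand position. The "true"
# kinematics below are arbitrary stand-ins for the real robot.
TRUE = {"left":  np.array([[1.0, 0.2], [0.1, 0.9]]),
        "right": np.array([[0.8, -0.1], [0.3, 1.1]])}
LIMITS = {"left": (-1.0, 1.0), "right": (0.0, 1.0)}   # assumed command ranges

models = {}
for arm, A in TRUE.items():
    commands = rng.standard_normal((200, 2))           # explored motor commands
    outcomes = commands @ A.T + 0.01 * rng.standard_normal((200, 2))
    # Least-squares fit of the forward model: outcome ~ command @ W
    W, *_ = np.linalg.lstsq(commands, outcomes, rcond=None)
    models[arm] = W

# --- Action selection: simulate both arms, pick the smaller error --------
def choose_arm(target):
    """Internally simulate reaching with each arm; return the arm whose
    predicted outcome lands closest to the target."""
    errors = {}
    for arm, W in models.items():
        # Invert the learned model to get a candidate command...
        command = target @ np.linalg.pinv(W)
        # ...clip it to the assumed joint limits, so targets out of reach
        # for one arm yield a larger predicted error for that arm...
        command = np.clip(command, *LIMITS[arm])
        # ...and simulate the clipped command with the forward model.
        predicted = command @ W
        errors[arm] = np.linalg.norm(predicted - target)
    return min(errors, key=errors.get)

print(choose_arm(np.array([-0.5, 0.2])))   # -> "left" (right arm cannot reach)
```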
This video, staged by Aldebaran Robotics, demonstrates speech dialogues for human-robot interaction that were developed within the EARS project as part of Deliverable D4.1 (voice dialogue system). It shows the NAO robot receiving guests at a hotel reception and handling typical inquiries.
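The D4.1 voice dialogue system itself is not shown here. As a toy illustration of how such reception dialogues can be scripted, the sketch below matches guest utterances against keyword patterns; all intents and replies are invented for this example and are not taken from the EARS system.

```python
import re

# Toy intent table for a hotel-reception scenario (invented for illustration).
INTENTS = [
    (r"\b(check[- ]?in|reservation|booked)\b",
     "Welcome! May I have the name on the reservation?"),
    (r"\b(wifi|wi-fi|internet)\b",
     "The WiFi network is 'Hotel-Guest'; the password is at the front desk."),
    (r"\b(breakfast|restaurant)\b",
     "Breakfast is served from 7 to 10 in the restaurant on the ground floor."),
    (r"\b(taxi|airport)\b",
     "I can call a taxi for you. When would you like to leave?"),
]

def reply(utterance):
    """Return the response of the first intent whose pattern matches."""
    for pattern, response in INTENTS:
        if re.search(pattern, utterance.lower()):
            return response
    return "I'm sorry, could you repeat that?"

print(reply("Hi, we booked a room for tonight."))
print(reply("What time is breakfast?"))
```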