Microphone Array Signal Processing for Robot Audition

H. W. Loellmann (FAU Erlangen-Nuremberg), Alastair H. Moore, Patrick A. Naylor (Imperial College London), Boaz Rafaely (Ben-Gurion University of the Negev), Radu Horaud (INRIA Grenoble), Alexandre Mazel (Softbank Robotics), W. Kellermann (FAU Erlangen-Nuremberg)

Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA), San Francisco, USA, March 1-3, 2017

Abstract: Robot audition for humanoid robots interacting naturally with humans in an unconstrained real-world environment is a hitherto unsolved challenge. The recorded microphone signals are usually distorted by background noise and interfering sound sources (e.g., competing speakers) as well as room reverberation. In addition, the movements of a robot and its actuators cause ego-noise which degrades the recorded signals significantly. The movement of the robot body and its head also complicates the detection and tracking of the possibly moving sound sources of interest. This paper presents an overview of the concepts in microphone array processing for robot audition and some recent achievements.
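As a minimal illustration of the kind of microphone array processing the paper surveys, the sketch below implements a basic frequency-domain delay-and-sum beamformer. It is not the method proposed in the paper; the array geometry, sampling rate, and look direction are illustrative assumptions only.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_direction, fs, c=343.0):
    """Frequency-domain delay-and-sum beamformer (far-field, plane-wave model).

    mic_signals    : (M, N) array, one row per microphone channel.
    mic_positions  : (M, 3) array of microphone coordinates in metres.
    look_direction : unit vector (3,) pointing from the array towards the source.
    fs             : sampling rate in Hz.
    c              : speed of sound in m/s.
    """
    M, N = mic_signals.shape
    # Relative time advance of each microphone for a plane wave from look_direction.
    delays = mic_positions @ look_direction / c            # (M,) seconds

    freqs = np.fft.rfftfreq(N, d=1.0 / fs)                 # (N//2 + 1,)
    spectra = np.fft.rfft(mic_signals, axis=1)             # (M, N//2 + 1)

    # Phase-shift each channel so contributions from the look direction add coherently,
    # then average over microphones and transform back to the time domain.
    steering = np.exp(-2j * np.pi * np.outer(delays, freqs))
    aligned = spectra * steering
    return np.fft.irfft(aligned.mean(axis=0), n=N)

# Illustrative use: 4-microphone linear array, source assumed at broadside.
fs = 16000
mics = np.array([[0.00, 0, 0], [0.05, 0, 0], [0.10, 0, 0], [0.15, 0, 0]])
look = np.array([0.0, 1.0, 0.0])           # broadside look direction
x = np.random.randn(4, fs)                 # placeholder microphone signals
y = delay_and_sum(x, mics, look, fs)
```

In a robot audition setting, the look direction would come from a source localization and tracking stage, and the fixed beamformer would typically be combined with ego-noise suppression and dereverberation, as discussed in the paper.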

Copyright Notice ©2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Paper: EARS_HSCMA_2017_FAU_HL