Audio-Visual Speaker Localization via Weighted Clustering

I.-D. Gebru, X. Alameda-Pineda, R. Horaud and F. Forbes (INRIA)
Paper presented at IEEE Workshop on Machine Learning for Signal Processing, September 21-24, 2014, Reims, France.
Abstract: In this paper we address the problem of detecting and localizing speakers using audiovisual data. We cast this problem in the framework of clustering, and propose a novel weighted clustering method based on a finite mixture model that exploits non-uniform weighting of observations. Weighted-data clustering techniques have been proposed before, but not in a generative setting as presented here. We introduce a weighted-data mixture model and formally derive the associated EM procedure. The clustering algorithm is applied to the problem of detecting and localizing a speaker over time using both visual and auditory observations gathered with a single camera and two microphones. Audiovisual fusion is enforced by introducing a cross-modal weighting scheme. We test the robustness of the method in two challenging scenarios: disambiguating between an active and a non-active speaker, and associating a speech signal with a person.
Copyright Notice: ©2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Paper: EARS_Paper_MLSP_2014_INRIA
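To illustrate the idea of weighted-data clustering described in the abstract, the sketch below implements a generic Gaussian mixture EM in which every observation carries a weight that scales its contribution to the M-step sufficient statistics. This is only a minimal illustration of observation weighting, not the paper's actual model (there, the weights enter the generative formulation itself, and the cross-modal weighting scheme is derived from the audio and visual streams); the function name and initialization strategy are assumptions made for this example.

```python
import numpy as np

def weighted_gmm_em(X, w, K, n_iter=100, seed=0):
    """EM for a Gaussian mixture where each observation has a weight w[i].

    Illustrative sketch only: weights simply scale each point's
    contribution to the M-step statistics, which is a simpler scheme
    than the paper's weighted-data mixture model.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Farthest-point initialization of the means (a common heuristic).
    mu = [X[rng.integers(n)]]
    for _ in range(1, K):
        d2 = np.min([((X - m) ** 2).sum(axis=1) for m in mu], axis=0)
        mu.append(X[np.argmax(d2)])
    mu = np.array(mu)
    cov = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] ∝ pi_k * N(x_i | mu_k, cov_k),
        # computed in the log domain for numerical stability.
        log_r = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(cov[k])
            _, logdet = np.linalg.slogdet(cov[k])
            log_r[:, k] = (np.log(pi[k])
                           - 0.5 * (d * np.log(2 * np.pi) + logdet)
                           - 0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: observation weights multiply the responsibilities.
        wr = w[:, None] * r
        Nk = wr.sum(axis=0)
        pi = Nk / Nk.sum()
        mu = (wr.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            cov[k] = (wr[:, k, None] * diff).T @ diff / Nk[k] \
                     + 1e-6 * np.eye(d)
    return pi, mu, cov, r
```

Setting all weights to 1 recovers standard EM for a Gaussian mixture; down-weighting an observation (e.g. an unreliable auditory localization cue) reduces its influence on the estimated cluster parameters, which is the intuition behind the cross-modal weighting used in the paper.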