Audio-visual Tracking by Density Approximation in a Sequential Bayesian Filtering Framework

I. D. Gebru (INRIA Grenoble), C. Evers, P. A. Naylor (Imperial College London), R. Horaud (INRIA Grenoble)
Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA), San Francisco, USA, March 1-3, 2017
Abstract: This paper proposes a novel audio-visual tracking approach that constructively exploits the audio and visual modalities in order to estimate trajectories of multiple people in a joint state space. The tracking problem is modeled using a sequential Bayesian filtering framework. Within this framework, we propose to represent the posterior density with a Gaussian Mixture Model (GMM). To ensure that a GMM representation can be retained sequentially over time, the predictive density is approximated by a GMM using the Unscented Transform, and a density interpolation technique is introduced to obtain a continuous GMM representation of the observation likelihood. Furthermore, to prevent the number of mixture components from growing exponentially over time, a density approximation based on the Expectation Maximization (EM) algorithm is applied, resulting in a compact GMM representation of the posterior density. Recordings made with a camcorder and a microphone array are used to evaluate the proposed approach, demonstrating significant improvements in tracking performance of the proposed audio-visual approach compared to two benchmark visual trackers.
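The step that keeps the predictive density a GMM rests on the Unscented Transform: each Gaussian component of the posterior is pushed through the (possibly nonlinear) motion model and re-approximated as a Gaussian. The sketch below illustrates that per-component propagation; the function name, scaling parameters, and the toy motion model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=2.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinear function f
    via the Unscented Transform; returns a Gaussian approximation
    (mean, covariance) of the transformed density.
    Note: parameter defaults are illustrative, not from the paper."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus columns of a matrix square root of (n+lam)*cov
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # shape (2n+1, n)
    # Standard UT weights for the mean and covariance estimates
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # Push the sigma points through the nonlinearity and re-estimate moments
    y = np.array([f(s) for s in sigma])
    mean_y = wm @ y
    d = y - mean_y
    cov_y = (wc[:, None] * d).T @ d
    return mean_y, cov_y

# Example: a simple constant-velocity motion model on state [x, y, vx, vy]
f = lambda s: np.array([s[0] + s[2], s[1] + s[3], 0.99 * s[2], 0.99 * s[3]])
m_pred, P_pred = unscented_transform(np.zeros(4), np.eye(4), f)
```

Applying this transform to every component of the posterior GMM, while retaining the component weights, yields a GMM approximation of the predictive density.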
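The EM-based density approximation that keeps the number of components bounded can likewise be sketched as a refit: draw samples from the large mixture and fit a compact GMM with EM. The paper's exact reduction scheme may differ; this sample-then-refit variant, using scikit-learn's GaussianMixture as a stand-in, only illustrates the idea.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def reduce_gmm(weights, means, covs, n_components, n_samples=5000, seed=0):
    """Approximate a GMM with many components by a smaller n_components-term
    GMM: sample the original mixture, then refit with EM.
    This is a generic stand-in, not the paper's reduction algorithm."""
    rng = np.random.default_rng(seed)
    # Draw component indices according to the mixture weights, then sample
    idx = rng.choice(len(weights), size=n_samples, p=weights)
    samples = np.array([rng.multivariate_normal(means[i], covs[i]) for i in idx])
    # Refit a compact GMM to the samples using EM
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(samples)
    return gmm.weights_, gmm.means_, gmm.covariances_
```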

This contribution received a Best Paper Award.