Multimodal information fusion for video concept detection
Abstract
Video media carries multimodal information, including visual, audio, and textual data. Considerable research has focused on utilizing multimodal features for a better understanding of video content. However, open problems remain, such as how best to combine multimodal features and what the effects of different combinations are. In this paper, we propose two methods for finding an optimal combination of multimodal information to improve the performance of video concept detection: gradient-descent-optimization linear fusion and super-kernel nonlinear fusion. Gradient-descent-optimization linear fusion learns an optimal weighted linear combination of single modalities by fusing the individual kernel matrices using gradient descent techniques. Super-kernel nonlinear fusion first trains a separate classifier for each modality; once the individual models have been trained, it learns an optimal nonlinear combination of these single-modality classifiers. Our experiments show that both methods improve performance significantly on the TREC-Video 2003 benchmark. ©2004 IEEE.
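The abstract does not give implementation details, but the two fusion schemes can be illustrated with a minimal sketch. The optimization criterion for the linear fusion (a kernel-target-alignment-style surrogate), the randomly generated stand-in kernels, and the reading of the super-kernel step as stacking are all assumptions made for illustration, not the paper's actual formulation.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in data: one Gram matrix per modality (e.g., visual, audio, text).
rng = np.random.default_rng(0)
n = 100
y = rng.choice([-1, 1], size=n)

def random_psd_kernel(n, rng):
    """Random positive semi-definite Gram matrix used as placeholder data."""
    X = rng.normal(size=(n, 10))
    return X @ X.T

kernels = [random_psd_kernel(n, rng) for _ in range(3)]

# --- Linear fusion: gradient ascent on a kernel-target-alignment surrogate ---
# (assumed objective; the paper's actual criterion may differ)
yy = np.outer(y, y)
w = np.ones(len(kernels)) / len(kernels)
lr = 1e-4
for _ in range(200):
    # Surrogate gradient: how well each modality's kernel matches the labels.
    grad = np.array([np.sum(Ki * yy) for Ki in kernels])
    w = np.clip(w + lr * grad, 0, None)
    w /= w.sum()  # keep the weights a convex combination

K_fused = sum(wi * Ki for wi, Ki in zip(w, kernels))
linear_fusion_clf = SVC(kernel="precomputed").fit(K_fused, y)

# --- Super-kernel nonlinear fusion, read here as stacking: train per-modality
# SVMs first, then a second-level SVM over their decision scores.
base_models = [SVC(kernel="precomputed").fit(Ki, y) for Ki in kernels]
scores = np.column_stack([m.decision_function(Ki)
                          for m, Ki in zip(base_models, kernels)])
meta_clf = SVC(kernel="rbf").fit(scores, y)  # nonlinear combiner over modalities
```

The stacking view matches the order of steps described in the abstract: the individual single-modality models are fixed before the combiner is learned, so the second-level classifier only has to model the nonlinear interactions among a handful of modality scores rather than the raw features.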