Emotions have recently been considered as a means of improving the indexing of video content, and two distinct approaches are usually followed: the computation of objective emotions through the analysis of low-level video features, and the computation of subjective emotions through the analysis of viewers' physiological signals. In this paper, we propose a different approach and present ViMood, a novel mechanism designed to improve the indexing of video material by integrating objective and subjective emotions.
ViMood indexes every video scene with the emotion(s) obtained through a combination of low-level feature analysis and on-the-fly emotion annotations provided by viewers. The goal is to allow viewers to browse video material using either general information (e.g., title, director) or specific emotions (e.g., "joy", "sadness", "surprise"). The results of the evaluation showed that participants found the hybrid approach very appealing, as it mitigates some of the shortcomings of the purely objective and purely subjective approaches.
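As a minimal sketch of how such a hybrid index might be organized, consider the following Python example. All names here (Scene, HybridEmotionIndex, the fusion weight) are illustrative assumptions for exposition, not the actual ViMood implementation: each scene carries objective emotion scores derived from feature analysis and a list of subjective viewer annotations, and a query by emotion fuses the two signals with a simple weighted average.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a hybrid emotion index in the spirit of ViMood.
# The paper does not specify this implementation; names and the fusion
# scheme are assumptions made for illustration only.

@dataclass
class Scene:
    video_id: str
    start: float                                     # scene start time (seconds)
    end: float                                       # scene end time (seconds)
    objective: dict = field(default_factory=dict)    # emotion -> score from low-level features
    subjective: list = field(default_factory=list)   # viewer-annotated emotion labels

class HybridEmotionIndex:
    """Fuses objective feature-based scores with subjective viewer annotations."""

    def __init__(self, objective_weight: float = 0.5):
        self.scenes: list[Scene] = []
        self.w = objective_weight                    # relative weight of the objective signal

    def add_scene(self, scene: Scene) -> None:
        self.scenes.append(scene)

    def score(self, scene: Scene, emotion: str) -> float:
        obj = scene.objective.get(emotion, 0.0)
        # Subjective score: fraction of viewer annotations naming this emotion.
        subj = (scene.subjective.count(emotion) / len(scene.subjective)
                if scene.subjective else 0.0)
        return self.w * obj + (1 - self.w) * subj

    def query(self, emotion: str, threshold: float = 0.5) -> list[Scene]:
        """Return scenes whose fused score for `emotion` exceeds the threshold."""
        return [s for s in self.scenes if self.score(s, emotion) >= threshold]

# Usage: index a scene, then browse by a specific emotion.
index = HybridEmotionIndex(objective_weight=0.6)
index.add_scene(Scene("film-42", 0.0, 12.5,
                      objective={"joy": 0.8},
                      subjective=["joy", "joy", "surprise"]))
for scene in index.query("joy", threshold=0.5):
    print(scene.video_id, scene.start, scene.end)
```

A weighted average is just one plausible fusion choice; it lets the index fall back on the objective signal when a scene has few or no viewer annotations, which is one of the gaps a hybrid scheme is meant to cover.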