Video-based Question Answering (Video QA) is a challenging task, and it becomes even more intricate when addressing Socially Intelligent Question Answering (SIQA). SIQA requires context understanding, temporal reasoning, and the integration of multimodal information, but in addition it requires processing nuanced human behavior. The complexities involved are further exacerbated by the dominance of the primary modality (text) over the others, so the task's secondary modalities need to be helped to work in tandem with the primary modality. In this work, we introduce a cross-modal alignment and subsequent representation fusion approach that achieves state-of-the-art results (82.06% accuracy) on the Social IQ 2.0 dataset for SIQA. Our approach exhibits an improved ability to leverage the video modality by using the audio modality as a bridge to the language modality. This reduces the prevalent issues of language overfitting and the resulting video modality bypassing encountered by existing techniques, leading to enhanced performance. Our code and models are publicly available at: https://github.com/sts-vlcc/sts-vlcc
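The sketch below illustrates the general idea of audio-as-a-bridge fusion described above; the module names, dimensions, and attention layout are assumptions for illustration, not the exact architecture from the paper.

```python
# Illustrative sketch only (assumed architecture): audio features are first
# aligned to the language tokens via cross-attention, and the audio-aligned
# representation then attends over video features, so video enters the model
# through the audio bridge rather than being bypassed by the text modality.
import torch
import torch.nn as nn


class AudioBridgedFusion(nn.Module):
    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        # Cross-attention aligning audio features to the language tokens.
        self.audio_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Cross-attention injecting video features into the bridged representation.
        self.video_fusion = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, text_feats, audio_feats, video_feats):
        # text_feats:  (B, T_text, dim)   language-token embeddings
        # audio_feats: (B, T_audio, dim)  audio embeddings
        # video_feats: (B, T_video, dim)  video-frame embeddings
        # Step 1: align audio to the language modality (audio acts as the bridge).
        bridged, _ = self.audio_to_text(query=text_feats, key=audio_feats, value=audio_feats)
        bridged = self.norm1(text_feats + bridged)
        # Step 2: fuse video features through the audio-aligned representation.
        fused, _ = self.video_fusion(query=bridged, key=video_feats, value=video_feats)
        return self.norm2(bridged + fused)


if __name__ == "__main__":
    model = AudioBridgedFusion()
    text = torch.randn(2, 32, 768)
    audio = torch.randn(2, 50, 768)
    video = torch.randn(2, 16, 768)
    print(model(text, audio, video).shape)  # torch.Size([2, 32, 768])
```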
We use the Social IQ 2.0 dataset, which follows established guidelines for measuring social intelligence and is, to our knowledge, the only dataset that captures social intelligence in the VQA setup. It consists of 1,400 in-the-wild social videos annotated with 8,076 questions and 32,304 answers (4 answers per question: 1 correct, 3 incorrect).
The dataset includes videos (mp4), audio (mp3, wav), and transcripts (vtt).
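For reference, a single QA example might be organized as in the sketch below; the directory layout and field names are assumptions for illustration, not the dataset's official schema.

```python
# Hypothetical layout of one Social IQ 2.0 QA sample (field names and paths
# are assumed, not taken from the official release).
from dataclasses import dataclass
from pathlib import Path
from typing import List


@dataclass
class SocialIQSample:
    video_path: Path       # .mp4 clip
    audio_path: Path       # .wav (or .mp3) track extracted from the clip
    transcript_path: Path  # .vtt subtitle file
    question: str
    answers: List[str]     # 4 candidate answers
    label: int             # index of the correct answer (0-3)


def build_sample(root: Path, vid: str, question: str,
                 answers: List[str], label: int) -> SocialIQSample:
    """Assemble one QA sample from the (assumed) on-disk layout."""
    return SocialIQSample(
        video_path=root / "video" / f"{vid}.mp4",
        audio_path=root / "audio" / f"{vid}.wav",
        transcript_path=root / "transcript" / f"{vid}.vtt",
        question=question,
        answers=answers,
        label=label,
    )
```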
@inproceedings{agrawal2024listen,
title={Listen Then See: Video Alignment with Speaker Attention},
author={Agrawal, Aviral and Lezcano, Carlos Mateo Samudio and Heredia-Marin, Iqui Balam and Sethi, Prabhdeep Singh},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2018--2027},
year={2024}
}