What Makes the Sound?: A Dual-Modality Interacting Network for Audio-Visual Event Localization
Date Issued
01-05-2020
Author(s)
Ramaswamy, Janani
Abstract
The presence of auditory and visual senses enables humans to obtain a profound understanding of real-world scenes. While audio and visual signals can each provide scene knowledge individually, combining the two offers better insight into the underlying event. In this paper, we address the problem of audio-visual event localization, where the goal is to identify the presence of an event that is both audible and visible in a video, using fully or weakly supervised learning. To this end, we propose a novel Audio-Visual Interacting Network (AVIN) that enables inter- as well as intra-modality interactions by exploiting the local and global information of the two modalities. Our empirical evaluations confirm the superiority of the proposed model over existing state-of-the-art methods in both the fully and the weakly supervised settings, thus confirming the efficacy of our joint modeling.
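To make the abstract's notion of inter- and intra-modality interaction concrete, the sketch below is a minimal PyTorch illustration, not the authors' AVIN architecture: the module names, feature dimensions, and the use of standard self-attention (intra-modality) and cross-attention (inter-modality) over temporal segments are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Inter-modality interaction: one modality's features attend to the other's."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # x, context: (batch, segments, dim) features of the two modalities
        q = self.query(x)
        k = self.key(context)
        v = self.value(context)
        # Scaled dot-product attention across temporal segments
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        # Residual connection fuses cross-modal evidence into the original stream
        return x + attn @ v


class DualModalityInteraction(nn.Module):
    """Illustrative dual-modality block: intra-modality self-attention per stream,
    then inter-modality cross-attention, then per-segment event classification."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.audio_self = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.visual_self = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.a2v = CrossModalAttention(dim)  # visual queries audio
        self.v2a = CrossModalAttention(dim)  # audio queries visual
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # Intra-modality: each stream attends over its own segments (global context)
        a, _ = self.audio_self(audio, audio, audio)
        v, _ = self.visual_self(visual, visual, visual)
        # Inter-modality: each stream attends to the other
        a = self.v2a(a, v)
        v = self.a2v(v, a)
        # Per-segment event scores from the fused representation
        return self.classifier(torch.cat([a, v], dim=-1))


# Usage: 10 one-second segments per video, 128-d features per modality.
# 29 classes assumes a benchmark like AVE (28 event categories plus background).
audio = torch.randn(2, 10, 128)
visual = torch.randn(2, 10, 128)
model = DualModalityInteraction(dim=128, num_classes=29)
print(model(audio, visual).shape)  # torch.Size([2, 10, 29])
```

In a fully supervised setting the per-segment scores would be trained against segment-level event labels; under weak supervision, only a video-level label is available, so the segment scores would typically be pooled before computing the loss.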
Volume
2020-May