WhyAct: Identifying Action Reasons in Lifestyle Vlogs

Abstract

We aim to automatically identify human action reasons in online videos. We focus on the widespread genre of lifestyle vlogs, in which people perform actions while verbally describing them. We introduce and make publicly available the WHYACT dataset, consisting of 1,077 visual actions manually annotated with their reasons. We describe a multimodal model that leverages visual and textual information to automatically infer the reasons behind an action presented in a video.

Publication
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)