Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis

Abstract

Representation learning is a significant and challenging task in multimodal learning. Effective modality representations should contain two aspects: consistency and difference. Due to the unified multimodal annotation, existing methods are restricted in capturing differentiated information. However, additional unimodal annotations are costly in both time and labor. In this paper, we design a label generation module based on a self-supervised learning strategy to acquire independent unimodal supervisions. We then jointly train the multimodal and unimodal tasks to learn consistency and difference, respectively. Moreover, during the training stage, we design a weight-adjustment strategy to balance the learning progress among the different subtasks, guiding each subtask to focus on samples with a larger difference between modality supervisions. Finally, we conduct extensive experiments on three public multimodal baseline datasets. The experimental results validate the reliability and stability of the auto-generated unimodal supervisions. On the MOSI and MOSEI datasets, our method surpasses the current state-of-the-art methods. On the SIMS dataset, our method achieves performance comparable to that obtained with human-annotated unimodal labels.
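The abstract describes a joint objective that combines one multimodal task with several unimodal subtasks, where the subtask losses are re-weighted per sample. Below is a minimal PyTorch sketch of such a joint multi-task objective; the module names and the specific weighting function (scaling each unimodal loss by the gap between its auto-generated unimodal supervision and the multimodal label) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of joint multimodal/unimodal multi-task training with
# per-sample weights. Names and the tanh-based weight are assumptions.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """One regression head per task: multimodal (m) plus text/audio/vision."""
    def __init__(self, dims):
        super().__init__()
        self.heads = nn.ModuleDict({k: nn.Linear(d, 1) for k, d in dims.items()})

    def forward(self, feats):
        return {k: self.heads[k](v).squeeze(-1) for k, v in feats.items()}

def joint_loss(preds, m_label, uni_labels):
    """L = L_m + sum_s w_s * L_s, with w_s growing with |y_s - y_m|."""
    mse = nn.MSELoss(reduction="none")
    loss = mse(preds["m"], m_label).mean()
    for s, y_s in uni_labels.items():
        # Hypothetical weight: samples whose unimodal supervision diverges
        # more from the multimodal label contribute more to that subtask.
        w = torch.tanh(torch.abs(y_s - m_label))
        loss = loss + (w * mse(preds[s], y_s)).mean()
    return loss

if __name__ == "__main__":
    dims = {"m": 128, "t": 64, "a": 32, "v": 32}
    feats = {k: torch.randn(8, d) for k, d in dims.items()}
    preds = MultiTaskHead(dims)(feats)
    m_label = torch.randn(8)
    uni_labels = {k: torch.randn(8) for k in ("t", "a", "v")}
    print(joint_loss(preds, m_label, uni_labels).item())
```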

Publication
Proceedings of the AAAI Conference on Artificial Intelligence
Wenmeng Yu
Master’s Degree

My research directions are multimodal learning, facial expression recognition, and multi-task learning.

Hua Xu
Tenured Associate Professor, Associate Editor of Expert Systems with Applications, Ph.D. Supervisor
Ziqi Yuan
Ph.D. Student

My research direction is multimodal machine learning.
