Natural human-machine interaction mainly involves human-machine dialogue, multi-modal sentiment analysis, human-machine cooperation, and related capabilities. To interact naturally, an intelligent computer must possess strong multi-modal sentiment analysis capability during human-computer interaction; this is one of the key technologies for realizing efficient and intelligent human-computer interaction. Focusing on the research and practical application of multi-modal sentiment analysis for natural human-computer interaction, this book mainly discusses three central research topics: multi-modal information feature representation, feature fusion, and sentiment classification. Multi-modal sentiment analysis for natural interaction is a comprehensive research field that integrates natural language processing, computer vision, machine learning, pattern recognition, algorithms, intelligent robotic systems, human-computer interaction, and related areas.

In recent years, our research team at the State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science, Tsinghua University, has carried out extensive pioneering research and applied work on multi-modal sentiment analysis for natural interaction, especially on sentiment feature representation, feature fusion, and robust sentiment analysis based on deep learning models. The related achievements have been published at top international conferences in artificial intelligence, such as ACL, AAAI, ACM MM, and COLING, and in well-known international journals, such as Pattern Recognition, Knowledge-Based Systems, IEEE Intelligent Systems, and Expert Systems with Applications. To systematically present the latest academic achievements in multi-modal sentiment analysis in recent years, this book sorts out the relevant work and presents it to readers as a complete, systematic discussion. Research on multi-modal sentiment analysis in natural interaction is developing rapidly, and the authors' research team will continue to sort out and summarize the latest achievements in a timely manner and share them with readers as a series of books.

This book can be used not only as a professional textbook in fields such as natural interaction, intelligent question answering (customer service), natural language processing, and human-computer interaction, but also as an important reference for the research and development of systems and products in intelligent robotics, natural language processing, and human-computer interaction.

As natural interaction is a new and rapidly developing research field, and given the limits of the authors' knowledge, mistakes and shortcomings in this book are inevitable. We sincerely hope that readers will give us valuable comments and suggestions. Please contact us by email (xuhua@tsinghua.edu.cn) or leave us a message through the open-source platform https://thuiar.github.io/. All of the related source code and datasets for this book have also been shared at
https://github.com/thuiar/Books. The research work and writing of this book were supported by the National Natural Science Foundation of China (Project No. 62173195).

We deeply appreciate the following students from the State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science, Tsinghua University, for their hard work in preparing this book: Xiaofei Chen, Yuanzhe Qiu, and Jiayu Huang. We also deeply appreciate the following students for their cooperative and innovative work in the related research directions: Zhongwu Zhai, Wenmeng Yu, Kaicheng Yang, Jiyun Zou, Ziqi Yuan, Huisheng Mao, Wei Li, Baozheng Zhang, and Yihe Liu. Without the efforts of the members of our team, this book could not be presented to readers in its current structured form. The full code is available at https://github.com/thuiar/Books.