Joint Visual-Textual Sentiment Analysis Based on Cross-modality Attention Mechanism

Published on Jan 29, 2019 · 353 views

Chapter list

00:00  Joint Visual-Textual Sentiment Analysis Based on Cross-modality Attention Mechanism
00:16  Outline
00:19  Introduction
00:21  Introduction - 2
00:46  Introduction - 3
02:00  Related Work
02:02  Related Work - 2
02:16  Early Fusion and Late Fusion
03:07  Attention For Multimodal Tasks
03:26  Summary on Related Work
03:44  Model Description
03:48  Intuition
04:38  Bidirectional RNN For Semantic Embedding
05:40  Bidirectional RNN For Semantic Embedding - 2
06:37  Cross-modality Attention Mechanism
08:05  Experiments
08:09  Experiments - 2
08:11  Table I. Statistics of two datasets
08:32  Comparison Methods
08:47  RNN Embedding
09:04  Results & Analysis
10:04  Results on the VSO testing dataset
10:33  Results on the image-text pairs with opposite sentiments
11:25  Qualitative attention analysis
11:56  Qualitative attention analysis - 2
12:46  Conclusion
12:47  Conclusion - 2
13:14  Thank you