Sort by: 3 results found (search time: 0 ms)
1.
Huddar, Mahesh G.; Sannakki, Sanjeev S.; Rajpurohit, Vijay S. 《Multimedia Tools and Applications》2021,80(9):13077-13077
Multimedia Tools and Applications - A Correction to this paper has been published: https://doi.org/10.1007/s11042-021-10591-y
2.
Huddar, Mahesh G.; Sannakki, Sanjeev S.; Rajpurohit, Vijay S. 《Multimedia Tools and Applications》2021,80(9):13059-13076
Multimedia Tools and Applications - Due to the availability of an enormous amount of multimodal content on the social web and its applications, automatic sentiment analysis, and emotion detection...
3.
Mahesh G. Huddar; Sanjeev S. Sannakki; Vijay S. Rajpurohit 《Computational Intelligence》2020,36(2):861-881
With the availability of a huge amount of multimodal content on the internet, multimodal sentiment classification and emotion detection have become among the most researched topics. Feature selection, context extraction, and multimodal fusion are the most important challenges in multimodal sentiment classification and affective computing. To address these challenges, this paper presents a multilevel feature optimization and multimodal contextual fusion technique. Evolutionary-computing-based feature selection models extract a subset of features from multiple modalities. The contextual information between neighboring utterances is extracted using bidirectional long short-term memory (BiLSTM) networks at multiple levels. Initially, bimodal fusion is performed by fusing a combination of two unimodal modalities at a time; finally, trimodal fusion is performed by fusing all three modalities. The result of the proposed method is demonstrated using two publicly available datasets: CMU-MOSI for sentiment classification and IEMOCAP for affective computing. By incorporating a subset of features and contextual information, the proposed model obtains better classification accuracy than the two standard baselines by over 3% and 6% in sentiment and emotion classification, respectively.
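The hierarchical fusion order described in the abstract (pairwise bimodal fusion first, then trimodal fusion of all three modalities) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions are invented, and plain concatenation stands in for the paper's learned, BiLSTM-based contextual fusion.

```python
import numpy as np

# Hypothetical unimodal feature vectors for a single utterance.
# Dimensions are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
T = rng.standard_normal(10)  # text features
A = rng.standard_normal(8)   # audio features
V = rng.standard_normal(6)   # video features

def fuse(*modalities):
    # Placeholder fusion: concatenation. The paper uses a learned,
    # contextual fusion; this only shows the fusion *order*.
    return np.concatenate(modalities)

# Bimodal stage: fuse each pair of unimodal modalities.
TA = fuse(T, A)
TV = fuse(T, V)
AV = fuse(A, V)

# Trimodal stage: fuse the three bimodal representations.
TAV = fuse(TA, TV, AV)

print(TAV.shape)  # each unimodal vector appears in two pairs
```

Because each modality enters exactly two of the bimodal pairs, the trimodal vector here has dimension 2 × (10 + 8 + 6) = 48; in the actual model the per-stage classifiers and BiLSTM layers would determine these sizes.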