Visual sentiment analysis based on spatial attention and convolutional neural network
Cite this article: CAI Guoyong, HE Xinhao, CHU Yangyang. Visual sentiment analysis based on spatial attention and convolutional neural network[J]. Journal of Shandong University (Engineering Science), 2020, 50(4): 8-13.
Authors: CAI Guoyong  HE Xinhao  CHU Yangyang
Affiliation: Guangxi Key Lab of Trusted Software, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China (all three authors)
Funding: National Natural Science Foundation of China (61763007); Key Project of the Natural Science Foundation of Guangxi (2017JJD160017); Guangxi Science and Technology Major Project (AA19046004)
Abstract: Existing deep-learning-based visual sentiment analysis ignores the differences in the intensity of emotion expressed by different local regions of an image. To address this problem, a convolutional neural network with spatial attention (spatial attention with CNN, SA-CNN) was proposed to improve visual sentiment analysis. A sentiment region detection network was designed to discover the local regions of an image that evoke emotion; a spatial attention mechanism assigned attention weights to each position of the sentiment map so that the sentiment features of each region were extracted appropriately, which helped the classifier exploit local-region sentiment information; local-region features and whole-image features were then integrated into discriminative sentiment features and used to train a neural network classifier for visual sentiment. The method achieved sentiment classification accuracies of 82.56%, 80.23%, and 79.17% on three real-world datasets, Twitter I, Twitter II, and Flickr, showing that exploiting the differences in sentiment expression across local image regions improves visual sentiment classification.

Keywords: image processing  sentiment analysis  deep learning  attention mechanism  neural network
Received: 2019-07-23
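The abstract above describes the spatial attention step only at a high level, so the following is a minimal sketch of that idea: scoring every position of a convolutional feature map, normalizing the scores into attention weights, and pooling the map into a local sentiment feature. The PyTorch framework, the 1x1-convolution scoring layer, and all tensor sizes are assumptions made for illustration; the paper's record here does not publish these details.

# Minimal sketch of a spatial attention module, assuming PyTorch.
# The 1x1-conv scoring layer and feature sizes are illustrative assumptions,
# not the architecture published with SA-CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Weights every spatial position of a feature map and returns the
    attention-pooled local feature vector."""

    def __init__(self, in_channels: int):
        super().__init__()
        # Scores each of the H x W locations with a single 1x1 convolution.
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape                               # feat: (B, C, H, W)
        # Softmax over all spatial positions yields the attention weights.
        weights = F.softmax(self.score(feat).view(b, 1, h * w), dim=-1)
        flat = feat.view(b, c, h * w)
        pooled = torch.bmm(flat, weights.transpose(1, 2))     # (B, C, 1)
        return pooled.squeeze(-1)                             # (B, C)


# Shape check: a 2048-channel, 7x7 feature map pools to a 2048-d local feature.
local_feat = SpatialAttention(2048)(torch.randn(2, 2048, 7, 7))
print(local_feat.shape)  # torch.Size([2, 2048])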

Visual sentiment analysis based on spatial attention mechanism and convolutional neural network
Guoyong CAI, Xinhao HE, Yangyang CHU. Visual sentiment analysis based on spatial attention mechanism and convolutional neural network[J]. Journal of Shandong University (Engineering Science), 2020, 50(4): 8-13.
Authors: Guoyong CAI  Xinhao HE  Yangyang CHU
Affiliation: Guangxi Key Lab of Trusted Software, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China
Abstract: Existing visual sentiment analysis based on deep learning largely ignores the differences in the intensity of emotion expressed by different parts of an image. To address this problem, a convolutional neural network with spatial attention (SA-CNN) was proposed to improve visual sentiment analysis. An affective region detection network was designed to discover the local regions of an image that evoke sentiment. A spatial attention mechanism assigned attention weights to each location of the sentiment map, and the sentiment features of each region were extracted accordingly, which helped the classifier exploit local sentiment information. Discriminative visual features were formed by integrating local region features with global image features and were used to train a neural network classifier for visual sentiment. The method achieved classification accuracies of 82.56%, 80.23% and 79.17% on the three real datasets Twitter I, Twitter II and Flickr, which showed that making good use of the differences in sentiment expression across local image regions improves visual sentiment classification.
Keywords: image processing  sentiment analysis  deep learning  attention mechanism  neural network
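As a companion to the attention sketch above, the fragment below illustrates one plausible way to combine the attention-pooled local feature with a globally pooled image feature and classify sentiment on the fused vector, as the abstract outlines. The ResNet-50 backbone, concatenation-based fusion, and two-class output are assumptions for the sketch only; the paper's actual backbone and fusion strategy are not given in this record.

# Hypothetical SA-CNN-style model: the ResNet-50 backbone, concatenation
# fusion, and layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class SentimentNetSketch(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep the convolutional trunk; it yields a (B, 2048, H, W) feature map.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.score = nn.Conv2d(2048, 1, kernel_size=1)   # spatial attention scores
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048 * 2, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feat = self.features(images)
        b, c, h, w = feat.shape
        # Local branch: attention-weighted pooling over spatial positions.
        weights = torch.softmax(self.score(feat).view(b, 1, h * w), dim=-1)
        local_feat = torch.bmm(feat.view(b, c, h * w), weights.transpose(1, 2)).squeeze(-1)
        # Global branch: plain average pooling over the whole feature map.
        global_feat = self.global_pool(feat).flatten(1)
        # Fuse local and global features, then classify sentiment polarity.
        return self.classifier(torch.cat([local_feat, global_feat], dim=1))


# Usage: two 224x224 RGB images produce two rows of sentiment logits.
logits = SentimentNetSketch()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])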