
Facial Expression Recognition in Wild Based on Attention and Vision Transformer
Cite this article: LUO Yan, FENG Tianbo, SHAO Jie. Facial Expression Recognition in Wild Based on Attention and Vision Transformer[J]. Computer Engineering and Applications, 2022, 58(10): 200-207.
Authors: LUO Yan  FENG Tianbo  SHAO Jie
Affiliation: 1. School of Electronic and Information Engineering, Shanghai University of Electric Power, Shanghai 201306, China  2. Information and Telecommunication Branch, State Grid Shanghai Municipal Electric Power Company, Shanghai 200000, China
Funding: Capacity Building Project for Local Universities of the Science and Technology Commission of Shanghai Municipality; National Natural Science Foundation of China
Abstract: Facial expression recognition (FER) now focuses more on in-the-wild images, which involve factors such as facial occlusion and image blurring, than on laboratory images, and the COVID-19 pandemic forces people to wear masks in public places, which poses new challenges for FER. Inspired by the recent success of the Transformer on numerous computer vision tasks, a facial expression recognition model in the wild based on attention and Vision Transformer is proposed, and it is the first to use CSWin Transformer as the backbone. A channel-spatial attention module is added to strengthen the model's attention to global features, and the Sub-center ArcFace loss function is used to further optimize the model's classification ability. The proposed method is evaluated on the two public in-the-wild expression datasets RAF-DB and FERPlus as well as on their corresponding mask-occluded versions, reaching recognition accuracies of 88.80% and 89.31%, and 76.12% and 72.28%, respectively, improving expression recognition accuracy.
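The channel-spatial attention module is only named, not specified, in this record, so the sketch below is purely illustrative: a minimal CBAM-style channel-then-spatial attention block in PyTorch. The class name ChannelSpatialAttention, the reduction ratio, and the 7×7 spatial convolution are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Illustrative CBAM-style channel-spatial attention block.

    Assumes an input feature map of shape (B, C, H, W); the paper's exact
    module design may differ.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: pool over space, re-weight channels via a small MLP.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 convolution over pooled channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from per-pixel average and max over channels.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        sa = torch.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * sa
```

Applying channel attention before spatial attention follows the usual CBAM ordering; the re-weighted feature map can then be passed on to the classification head.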

Keywords: facial expression recognition  Transformer  attention mechanism  Sub-center ArcFace

Facial Expression Recognition in Wild Based on Attention and Vision Transformer
LUO Yan,FENG Tianbo,SHAO Jie.Facial Expression Recognition in Wild Based on Attention and Vision Transformer[J].Computer Engineering and Applications,2022,58(10):200-207.
Authors:LUO Yan  FENG Tianbo  SHAO Jie
Affiliation:1.School of Electronic and Information Engineering, Shanghai University of Electric Power, Shanghai 201306, China  2.Information and Telecommunication Branch, State Grid Shanghai Municipal Electric Power Company, Shanghai 200000, China
Abstract: Facial expression recognition (FER) nowadays pays more attention to images in the wild, which contain factors such as facial occlusion and image blurring, than to laboratory images, and the COVID-19 epidemic forces people to wear masks in public places, which brings new challenges to the FER task. Motivated by the recent success of the Transformer on numerous computer vision tasks, an Attention-Transformer network is proposed, which is the first to use CSWin Transformer as the backbone. Meanwhile, a channel-spatial attention module is designed to increase the attention of the network to global features. Moreover, the Sub-center ArcFace loss function is used to further optimize the classification ability of the model. The proposed method is evaluated on two public in-the-wild facial expression datasets, RAF-DB and FERPlus, and on their corresponding masked versions. The accuracy rates are 88.80% and 89.31% on the RAF-DB and FERPlus datasets, and 76.12% and 72.28% on their masked counterparts. The results demonstrate that the model outperforms state-of-the-art methods.
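Sub-center ArcFace, referenced above, extends the additive angular margin (ArcFace) loss by giving each class several sub-center weight vectors and keeping, for every sample, the closest sub-center per class before the margin is applied; this lets noisy or occluded faces attach to a secondary center instead of distorting the dominant one. The PyTorch sketch below is a minimal illustration under assumed hyper-parameters (k sub-centers, scale s, margin m), not the paper's reported configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubCenterArcFace(nn.Module):
    """Minimal sketch of a Sub-center ArcFace classification head.

    Each of the num_classes expression classes owns k sub-center weight
    vectors; hyper-parameters s, m and k are illustrative defaults.
    """
    def __init__(self, embed_dim: int, num_classes: int, k: int = 3,
                 s: float = 30.0, m: float = 0.5):
        super().__init__()
        self.s, self.m, self.k = s, m, k
        self.weight = nn.Parameter(torch.empty(num_classes * k, embed_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized embeddings and all sub-centers.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Keep the best-matching sub-center for each class.
        cos = cos.view(-1, self.weight.shape[0] // self.k, self.k).amax(dim=2)
        # Add the angular margin m to the target class only, then scale by s.
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, num_classes=theta.shape[1]).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos) * self.s
        return F.cross_entropy(logits, labels)
```

As a usage example, with 7 expression classes and 512-dimensional embeddings from the backbone, `SubCenterArcFace(512, 7)(features, labels)` returns the training loss directly.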
Keywords:facial expression recognition  Transformer  attention mechanism  Sub-center ArcFace  