Occluded facial expression recognition based on the fusion of local features
Citation: Wang Xiaohua, Li Ruijing, Hu Min, Ren Fuji. Occluded facial expression recognition based on the fusion of local features[J]. Journal of Image and Graphics, 2016, 21(11): 1473-1482.
Authors: Wang Xiaohua  Li Ruijing  Hu Min  Ren Fuji
Affiliation: School of Computer and Information, Hefei University of Technology, Anhui Province Key Laboratory of Affective Computing and Advanced Intelligent Machine, Hefei 230009, China (Ren Fuji is also with the Graduate School of Advanced Technology & Science, University of Tokushima, Tokushima 7708502, Japan)
Foundation items: National Natural Science Foundation of China (61300119, 61432004); Natural Science Foundation of Anhui Province (1408085MKL16)
Abstract: Objective To address partial occlusion in facial expression recognition, this paper proposes an occluded facial expression recognition method that fuses local features. Method First, the normalized images are denoised with a Gaussian filter to reduce the influence of noise. According to the different contributions of facial regions to expression recognition, each image is then divided into two important sub-regions (around the eyes and around the mouth), and each sub-region is further partitioned into non-overlapping blocks. Two descriptors are applied to every block: an improved center-symmetric local binary pattern, the difference center-symmetric local binary pattern (DCS-LBP), and an improved difference local directional pattern, the gradient center-symmetric local directional pattern (GCS-LDP). The block features are cascaded to form the feature histogram of the image. Finally, a nearest-neighbor classifier is used for recognition: the chi-square distance between the feature histograms of each test image and each training image is computed, and, considering both the interference of occlusion and the different amount of information contained in each block, the per-block chi-square distances are adaptively weighted by information entropy. Result Three cross-validation experiments were conducted on the Japanese Female Facial Expression (JAFFE) database and the Cohn-Kanade (CK) database. On JAFFE, the average recognition rates under random occlusion, mouth occlusion, and eye occlusion reach at least 92.86%, 94.76%, and 86.19%, respectively; on CK, they reach at least 99%, 98.67%, and 99%. Conclusion The proposed feature extraction describes an image by fusing the gray-level differences along gradient directions with the differences of edge responses between gradient directions, so the detail information of the image is extracted more completely. For occluded faces, the image partitioning and entropy-based adaptive weighting effectively reduce the interference of occlusion with expression recognition. Under the same experimental conditions, comparisons with classical local feature extraction methods and with existing occlusion-handling methods demonstrate the effectiveness and superiority of the proposed method.
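The two descriptors named above, DCS-LBP and GCS-LDP, are the paper's improvements of the center-symmetric local binary pattern and the difference local directional pattern; their exact definitions are given in the paper itself. As a point of reference only, the sketch below implements the standard center-symmetric LBP baseline together with the Gaussian smoothing, non-overlapping block partition, and cascaded block histograms described in the abstract. The function names, the 4×4 grid, and the comparison threshold are illustrative assumptions rather than the paper's settings; GCS-LDP would be built analogously by comparing Kirsch edge responses across gradient directions instead of gray values.

```python
# Illustrative sketch only: standard CS-LBP plus block histograms, NOT the
# paper's DCS-LBP/GCS-LDP. The grid size and threshold are assumed values.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(gray, sigma=1.0):
    """Gaussian smoothing of the normalized face image to suppress noise."""
    return gaussian_filter(gray.astype(np.float64), sigma=sigma)

def cs_lbp(gray, threshold=2.0):
    """Standard center-symmetric LBP: compare the 4 center-symmetric pixel
    pairs of the 8-neighborhood and pack the sign bits into a 4-bit code."""
    g = gray.astype(np.float64)
    h, w = g.shape
    # the 4 center-symmetric neighbor pairs (row/column offsets)
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = g[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = g[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        # small threshold for robustness in flat regions (assumed, not the paper's value)
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code  # code values in 0..15

def block_histograms(code, grid=(4, 4), bins=16):
    """Split the coded sub-region into non-overlapping blocks and return the
    per-block histograms; concatenating them gives the cascaded descriptor."""
    h, w = code.shape
    bh, bw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = code[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            hists.append(hist.astype(np.float64))
    return hists
```

For a 64×64 eye or mouth crop, cs_lbp yields a 62×62 code map and block_histograms returns 16 histograms of 16 bins each, i.e., a 256-dimensional cascaded descriptor for that sub-region under these assumed settings.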

Keywords: facial expression recognition  partial occlusion  difference center-symmetric local binary pattern (DCS-LBP)  gradient center-symmetric local directional pattern (GCS-LDP)  adaptive weighting
Received: 2016-04-26
Revised: 2016-07-14
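The classification stage described in the abstract combines per-block chi-square distances, entropy-based adaptive weights, and a nearest-neighbor decision. The sketch below is one plausible realization under the assumption that a block's weight is proportional to the normalized Shannon entropy of its test histogram, so that low-texture (and typically occluded) blocks contribute less to the total distance; the paper's exact weighting rule may differ, and all function names are illustrative.

```python
# Hedged sketch of chi-square matching with entropy-weighted blocks and a
# 1-nearest-neighbor decision; the weighting rule is an assumption.
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def block_entropy(hist, eps=1e-10):
    """Shannon entropy of a (possibly unnormalized) block histogram."""
    p = hist / (hist.sum() + eps)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def weighted_distance(test_blocks, train_blocks):
    """Sum of per-block chi-square distances, weighted by the entropy of the
    test block: near-uniform (likely occluded) blocks get small weights."""
    w = np.array([block_entropy(b) for b in test_blocks])
    w = w / (w.sum() + 1e-10)  # normalize the weights to sum to 1
    d = np.array([chi_square(t, r) for t, r in zip(test_blocks, train_blocks)])
    return float(np.sum(w * d))

def nearest_neighbor(test_blocks, train_set):
    """train_set: list of (per-block histograms, expression label) pairs.
    Returns the label of the closest training sample."""
    best = min(train_set, key=lambda item: weighted_distance(test_blocks, item[0]))
    return best[1]
```

Because an occluded patch produces a histogram concentrated in a few codes, its entropy, and hence its weight, stays small, which is consistent with the abstract's goal of suppressing the interference of occluded blocks.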
