3D facial expression recognition based on weighted local curl patterns
Cite this article: Yu Jing, Da Feipeng. 3D facial expression recognition based on weighted local curl patterns[J]. Journal of Image and Graphics, 2019, 24(7): 1076-1085
Authors: Yu Jing, Da Feipeng
Affiliation: 1. College of Automation, Southeast University, Nanjing 210096, China; 2. Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Southeast University, Nanjing 210096, China
Funding: National Natural Science Foundation of China (61405034, 51475092, 51175081)
Abstract: Objective The expression information of a 3D face is distributed unevenly around the facial features and cheeks, so describing expressions fully and assigning weights reasonably are important ways to improve recognition performance. To raise the accuracy of 3D facial expression recognition, this paper proposes a 3D facial expression recognition algorithm based on weighted local curl patterns. Method First, to extract features with strong expression-discriminating power, the curl vectors of the 3D face are encoded to obtain local curl patterns as expression features. Then, the ICNP (iterative closest normal points) algorithm is combined with the minimum projection error algorithm: the former performs an irregular partition of the 3D face into sub-regions, and the 11 resulting sub-regions preserve the integrity of the facial features and muscles under expression changes; the latter assigns a weight to each region's local curl pattern features according to the region's contribution to expression recognition. Finally, the weighted local curl pattern features are fed into a classifier for expression recognition. Result The proposed local curl pattern features are evaluated on the BU-3DFE 3D facial expression database, and the results show that their discriminating power is stronger than that of other expression features. In expression recognition experiments on BU-3DFE, the proposed algorithm achieves the highest mean recognition rate among the compared 3D facial expression recognition algorithms, 89.67%, and also yields low confusion rates for easily confused expressions such as sadness, anger, and disgust. Conclusion Local curl pattern features have strong representational power for 3D facial expressions. Combining the ICNP algorithm with the minimum projection error algorithm achieves effective region division and accurate weight computation, effectively improving the features' discriminating ability. Experimental results show that the proposed algorithm achieves a high recognition rate for 3D facial expressions and still recognizes easily confused, similar expressions well.

Keywords: 3D facial expression recognition  local features  curl features  3D face division  feature weights
Received: 2018-08-01

3D facial expression recognition based on weighted local curl patterns
Yu Jing and Da Feipeng. 3D facial expression recognition based on weighted local curl patterns[J]. Journal of Image and Graphics, 2019, 24(7): 1076-1085
Authors:Yu Jing and Da Feipeng
Affiliation: 1. College of Automation, Southeast University, Nanjing 210096, China; 2. Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Southeast University, Nanjing 210096, China
Abstract: Objective Facial expression recognition is an interesting and challenging problem with important applications in many areas, such as human-computer interaction and data-driven animation. Research on 3D facial expression recognition has become popular in recent years with the development of 3D capture techniques and face recognition. 3D face data are robust to pose, illumination, and gestures, and contain more topological and geometric information than traditional 2D face data. Thus, 3D face data can describe the facial muscle movements caused by expressions. Feature-based and model-based methods are the two main types of approaches to 3D facial expression recognition; the former generally outperforms the latter in computational efficiency. Although feature-based methods have been successfully applied to facial expression recognition, existing ones cannot yet achieve the desired recognition performance. First, feature-based methods are limited by feature extraction. Some features are extracted using the accurate positions of labeled landmarks; despite their high recognition rates, these features are difficult to use in practice because the landmarks must be located precisely. Other features, extracted without labeled landmarks, make automatic expression recognition possible; however, they may carry limited information and tend to fail in recognizing similar expressions. Thus, extracting effective features without labeled landmarks, features that describe the deformation of facial muscles during expressions, is key to improving 3D facial expression recognition rates. Furthermore, proper feature weights can strengthen the identification capability of features for expression recognition, considering that features of different facial parts have varying degrees of importance.
Therefore, we propose an algorithm for 3D facial expression recognition using weighted local curl patterns. This study aims to extract local curl pattern features and then achieve accurate recognition by computing their weights. Method The proposed algorithm consists of two parts. First, local curl patterns are extracted as highly discriminative features on the basis of the curl vectors of the 3D face to represent the facial surface changes caused by expressions. Curl vectors have been shown to outperform other features in face representation: the vector direction represents the spatial location of the 3D surface, and the vector length describes the shape characteristics. We therefore construct local curl patterns on the basis of curl vectors; the coding principle is the same as that of local binary patterns in 2D images. Second, local curl patterns are assigned weights, considering that different parts of the face have varying sensitivity to expressions. We propose to combine the ICNP (iterative closest normal points) algorithm with the minimum projection error algorithm to calculate feature weights. Finally, the weighted local curl pattern features are fed into the classifier, which predicts the expression class of the 3D face. Result The algorithm is verified by recognizing nine different expressions, including calmness, smile, laugh, surprise, fear, anger, sadness, and disgust, on the BU-3DFE (Binghamton University 3D facial expression) database, which was developed for 3D facial expression studies. The database contains 100 subjects, 56 female and 44 male, of various ethnicities and ages. Then, 20 subjects are selected randomly for training the classifier, and the remaining 80 are used for recognition. First, the discrimination power of local curl patterns is evaluated via PCA (principal component analysis)-relative entropy.
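The coding principle borrowed from local binary patterns can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes the curl vectors have already been reduced to a per-point magnitude map sampled on a regular grid, and applies the standard LBP thresholding rule to those magnitudes.

```python
import numpy as np

def local_curl_pattern(curl_mag, i, j):
    """LBP-style 8-neighbour code at point (i, j) of a curl-magnitude map.

    Each neighbour contributes one bit: 1 if its curl magnitude is at
    least that of the centre point, 0 otherwise (the usual LBP rule,
    applied to curl magnitudes instead of grey values)."""
    center = curl_mag[i, j]
    # Clockwise neighbour offsets, starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (di, dj) in enumerate(offsets):
        if curl_mag[i + di, j + dj] >= center:
            code |= 1 << bit
    return code  # integer in [0, 255]

def lcp_histogram(curl_mag):
    """256-bin histogram of codes over the interior of the map; one such
    histogram per face region would form that region's feature vector."""
    h, w = curl_mag.shape
    hist = np.zeros(256, dtype=np.int64)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            hist[local_curl_pattern(curl_mag, i, j)] += 1
    return hist
```

In the full pipeline, one histogram per sub-region would be scaled by that region's weight before concatenation and classification.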
Discrimination power refers to the ratio of inter- to intra-class similarity: the higher the discrimination power, the better the recognition results. In our experiments, local curl patterns show the highest discrimination power among common expression features, including normal vectors and the shape index. Second, on the BU-3DFE database, the mean recognition rate of our approach is 89.67%, the highest among the compared methods. The proposed method also achieves low error rates among angry, sad, and disgusted faces, which are often confused in expression recognition: the error rate between anger and sadness is 6.26%, and that between anger and disgust is 5.38%. Conclusion Local curl patterns are shown to extract features of 3D faces effectively and to perform well in expression recognition. The combination of the ICNP and minimum projection error algorithms, which is more effective than traditional approaches, enhances the discrimination power of local curl patterns. Experimental results show that the proposed approach matches or exceeds state-of-the-art methods in accuracy, especially in recognizing easily confused expressions.
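One plausible reading of the similarity ratio described above can be sketched as follows; this is an assumption-laden illustration, not the paper's PCA-relative-entropy measure. It uses cosine similarity (our choice, not stated in the abstract) and takes the ratio of mean intra-class to mean inter-class similarity, so larger values mean feature vectors of the same expression are more alike than vectors of different expressions.

```python
import numpy as np

def discrimination_power(features, labels):
    """Ratio of mean intra-class to mean inter-class cosine similarity.

    features: (n_samples, n_dims) feature vectors (e.g. region histograms)
    labels:   (n_samples,) expression class per sample
    Returns a scalar > 1 when same-class samples are more similar
    to each other than to samples of other classes."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    # Normalise rows so that dot products are cosine similarities.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    same = y[:, None] == y[None, :]
    off_diag = ~np.eye(len(y), dtype=bool)
    intra = sims[same & off_diag].mean()  # same class, excluding self-pairs
    inter = sims[~same].mean()            # different classes
    return intra / inter
```

A measure of this kind can rank candidate features (curl patterns, normal vectors, shape index) before running the full recognition experiment.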
Keywords: 3D facial expression recognition  local features  curl features  division of 3D face  feature weights