Texture and shape information fusion for facial expression and facial action unit recognition
Authors:Irene Kotsia  Stefanos Zafeiriou  Ioannis Pitas
Abstract:A novel method based on the fusion of texture and shape information is proposed for facial expression and Facial Action Unit (FAU) recognition from video sequences. For facial expression recognition, a subspace method based on Discriminant Non-negative Matrix Factorization (DNMF) is applied to the images to extract the texture information. To extract the shape information, the system first fits the deformed Candide facial grid that corresponds to the facial expression depicted in the video sequence. A Support Vector Machine (SVM) classifier, defined on a Euclidean space induced by a novel metric between grids, is then used to classify the shape information. For FAU recognition, the texture extraction method (DNMF) is applied to the difference images of the video sequence, computed from the neutral and the expressive frames. An SVM classifier performs FAU classification from the shape information, which in this case consists of the grid node coordinate displacements between the neutral and the expressive frame. The texture and shape information is fused using various approaches, among them SVMs and Median Radial Basis Functions (MRBFs), in order to detect the facial expression and the set of FAUs present. The accuracy achieved on the Cohn–Kanade database is 92.3% when recognizing the seven basic facial expressions (anger, disgust, fear, happiness, sadness, surprise and neutral), and 92.1% when recognizing the 17 FAUs that are responsible for facial expression formation.
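The two-channel pipeline described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: plain NMF from scikit-learn stands in for DNMF (which has no off-the-shelf implementation), random arrays stand in for Cohn–Kanade difference images and Candide grid node displacements, and all dimensions (image size, grid node count, component count) are assumed values chosen for the sketch. Fusion is done here by simple feature concatenation, one of several possible fusion strategies.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: 60 flattened, non-negative "difference images"
# (neutral minus expressive frame) and 7 expression labels; real data
# would come from Cohn-Kanade video sequences.
X = rng.random((60, 256))          # 16x16 difference images, flattened
y = rng.integers(0, 7, size=60)    # 7 classes: 6 basic expressions + neutral

# Texture channel: plain NMF as a stand-in for the paper's Discriminant NMF;
# the coefficients of each image in the learned subspace are the features.
nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
H_texture = nmf.fit_transform(X)

# Shape channel: per-node (dx, dy) displacements of a tracked facial grid
# between the neutral and the expressive frame (node count assumed here).
D_shape = rng.standard_normal((60, 2 * 104))

# Feature-level fusion: concatenate both channels, classify with an SVM.
fused = np.hstack([H_texture, D_shape])
clf = SVC(kernel="rbf").fit(fused, y)
print(clf.score(fused, y))         # training accuracy on the toy data
```

In the paper the shape channel is instead classified by an SVM built on a grid metric, and the channel outputs are combined by a dedicated fusion stage (SVMs or MRBFs); the concatenation above only illustrates the overall texture-plus-shape structure.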
This article is indexed in ScienceDirect and other databases.