Similar documents
10 similar documents found (search time: 140 ms)
1.
2.
Applications related to game technology, law enforcement, security, medicine and biometrics are becoming increasingly important, which, combined with the proliferation of three-dimensional (3D) scanning hardware, has made 3D face recognition a promising and feasible alternative to two-dimensional (2D) face methods. The main advantage of 3D data over traditional 2D approaches is that it provides information that is invariant to rigid geometric transformations and to pose and illumination conditions. A key element of any 3D face recognition system is the modeling of the available scanned data. This paper presents new 3D models for facial surface representation and evaluates them using two matching approaches: one based on support vector machines and the other on principal component analysis (with a Euclidean classifier). Two types of environments were also tested to check the robustness of the proposed models: an environment that is controlled with respect to facial conditions (i.e. expressions, face rotations, etc.) and a non-controlled one (presenting face rotations and pronounced facial expressions). The recognition rates obtained using reduced spatial resolution representations (77.86% for the non-controlled environment and 90.16% for the controlled one) show that the proposed models can be effectively used in practical face recognition applications.
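As a rough illustration of the PCA-plus-Euclidean matching mentioned in this abstract (a minimal sketch, not the authors' implementation; the range-image size, number of components and random training data below are placeholder assumptions):

```python
import numpy as np

def fit_pca(train_depthmaps, n_components=50):
    """Fit PCA on flattened range images (one row per scan)."""
    X = train_depthmaps.reshape(len(train_depthmaps), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal axes directly.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(depthmap, mean, components):
    return components @ (depthmap.ravel().astype(np.float64) - mean)

def classify_euclidean(probe, gallery_feats, gallery_labels, mean, components):
    """Assign the label of the gallery scan closest in PCA space."""
    p = project(probe, mean, components)
    dists = np.linalg.norm(gallery_feats - p, axis=1)
    return gallery_labels[int(np.argmin(dists))]

# Toy usage with random "range images" (64x64), 10 subjects x 3 scans each.
rng = np.random.default_rng(0)
train = rng.random((30, 64, 64))
labels = np.repeat(np.arange(10), 3)
mean, comps = fit_pca(train, n_components=20)
gallery = np.stack([project(d, mean, comps) for d in train])
print(classify_euclidean(train[4], gallery, labels, mean, comps))
```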

3.
Coping with non-frontal head poses is a major challenge in facial expression recognition, causing a considerable reduction in accuracy and robustness when capturing expressions that occur during natural communication. In this paper, we attempt to recognize facial expressions under poses with large rotation angles from 2D videos. A depth-patch based 4D expression representation model is proposed; it is reconstructed from dynamic 2D images to delineate continuous spatial changes and temporal context in non-frontal cases. Furthermore, we present an effective deep neural network classifier, which can accurately capture pose-variant expression features from the depth patches and recognize non-frontal expressions. Experimental results on the BU-4DFE database show that the proposed method achieves a high recognition accuracy of 86.87% for non-frontal facial expressions with head rotation angles of up to 52°, outperforming existing methods. We also present a quantitative analysis of the components contributing to the performance gain through tests on the BU-4DFE and Multi-PIE datasets.
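A minimal sketch, under stated assumptions, of a deep classifier over stacked depth patches: the layer sizes, patch resolution and frame count are invented for illustration and are not the network described in the paper.

```python
import torch
import torch.nn as nn

class DepthPatchExpressionNet(nn.Module):
    """Toy CNN over a (T, H, W) stack of per-frame depth patches treated as channels."""
    def __init__(self, n_frames=8, n_expressions=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_expressions)

    def forward(self, x):          # x: (batch, n_frames, H, W)
        h = self.features(x).flatten(1)
        return self.classifier(h)  # expression logits

# Toy forward pass on random depth patches (batch of 4, 8 frames, 32x32).
model = DepthPatchExpressionNet()
logits = model(torch.randn(4, 8, 32, 32))
print(logits.shape)  # torch.Size([4, 6])
```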

4.
5.
Automatic facial expression recognition is an active research field owing to the latest advances in computing technology, which make the user's experience a clear priority. The majority of work in this area involves 2D imagery, despite the problems this presents due to inherent pose and illumination variations. To deal with these problems, 3D and 4D (dynamic 3D) recordings are increasingly used in expression analysis research. In this paper we survey recent advances in 3D and 4D facial expression recognition. We discuss developments in 3D facial data acquisition and tracking, present the currently available 3D/4D face databases suitable for 3D/4D facial expression analysis, and review in detail the existing facial expression recognition systems that exploit either 3D or 4D data. Finally, we extensively discuss the challenges that must be addressed if 3D facial expression recognition systems are to become part of future applications.

6.
Over the last decade 3D face models have been used extensively in many applications, such as face recognition, facial animation and facial expression analysis. 3D Morphable Models (MMs) have become a popular tool to build and fit 3D face models to images. Critical to the success of MMs is the ability to build a generic 3D face model. The major limitations in the MM building process are: (1) collecting 3D data usually involves expensive laser scans and complex capture setups; (2) the number of available 3D databases is limited, and they typically lack expression variability; and (3) finding correspondences and registering the 3D model is a labor-intensive and error-prone process.
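The core idea of a 3D Morphable Model — a face as the mean shape plus a weighted sum of PCA shape components — can be sketched as below. The vertex count, component count and regularization value are placeholder assumptions, and real MM fitting to images also involves camera and illumination models not shown here.

```python
import numpy as np

class LinearMorphableModel:
    """Minimal linear 3D morphable model over 3N stacked vertex coordinates."""
    def __init__(self, mean_shape, shape_basis):
        self.mean = mean_shape          # (3N,)
        self.basis = shape_basis        # (3N, K)

    def synthesize(self, coeffs):
        """Return a face as an (N, 3) vertex array for the given coefficients."""
        flat = self.mean + self.basis @ coeffs
        return flat.reshape(-1, 3)

    def fit_to_shape(self, target_vertices, reg=1e-3):
        """Regularized least-squares fit of the coefficients to a registered
        target shape (assumes dense vertex correspondence, which the abstract
        notes is hard to obtain)."""
        d = target_vertices.reshape(-1) - self.mean
        A = self.basis.T @ self.basis + reg * np.eye(self.basis.shape[1])
        return np.linalg.solve(A, self.basis.T @ d)

# Toy usage: 500 vertices, 10 shape components, random model.
rng = np.random.default_rng(1)
mm = LinearMorphableModel(rng.random(1500), rng.random((1500, 10)))
face = mm.synthesize(rng.standard_normal(10))
print(mm.fit_to_shape(face).round(2))  # recovers the coefficients (approximately)
```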

7.
Objective: The expression information of a 3D face is distributed unevenly around the facial features and cheeks, so describing expressions adequately and assigning reasonable weights are important ways to improve recognition performance. To improve the accuracy of 3D facial expression recognition, a 3D facial expression recognition algorithm based on weighted local curl patterns is proposed. Method: First, to extract features with strong expression-discriminating power, the curl vectors of the 3D face are encoded to obtain local curl patterns as expression features. Then, the ICNP (interactive closest normal points) algorithm is combined with a minimum projection deviation algorithm: the former performs an irregular partition of the 3D face into sub-regions, yielding 11 sub-regions that preserve the integrity of the facial features and muscles under expression changes, while the latter assigns weights to the local curl pattern features of each region according to that region's contribution to expression recognition. Finally, the weighted local curl pattern features are fed into a classifier for expression recognition. Results: The proposed local curl pattern features were evaluated on the BU-3DFE 3D facial expression database and shown to be more discriminative than other expression features. In expression recognition experiments on BU-3DFE, the proposed algorithm achieved the highest average recognition rate among the compared 3D facial expression recognition algorithms, reaching 89.67%, with low misclassification rates for easily confused expressions such as "sadness", "anger" and "disgust". Conclusion: Local curl pattern features have strong representation power for 3D facial expressions, and the combination of the ICNP algorithm and the minimum projection deviation algorithm achieves an effective region partition and accurate weight computation, effectively improving the features' discriminative ability. Experimental results show that the proposed algorithm achieves a high recognition rate for 3D facial expressions and still performs well on easily confused similar expressions.
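A hedged sketch of the weighted regional feature fusion described above: per-region local-pattern histograms are scaled by region weights and concatenated before classification. The histogram size and the accuracy-based weighting helper are illustrative assumptions; the paper derives its weights from the minimum projection deviation criterion and its features from local curl patterns.

```python
import numpy as np

def weighted_regional_descriptor(region_histograms, region_weights):
    """Scale each sub-region's local-pattern histogram by its weight and
    concatenate into one expression descriptor (11 regions in the paper;
    the histograms here are just placeholders)."""
    assert len(region_histograms) == len(region_weights)
    parts = [w * h / (np.linalg.norm(h) + 1e-12)   # per-region L2 normalisation
             for h, w in zip(region_histograms, region_weights)]
    return np.concatenate(parts)

def weights_from_accuracy(per_region_accuracy):
    """One simple way to turn per-region recognition contribution into weights
    (an assumption; not the paper's minimum-projection-deviation scheme)."""
    a = np.asarray(per_region_accuracy, dtype=float)
    return a / a.sum()

# Toy usage: 11 regions, 59-bin local-pattern histograms.
rng = np.random.default_rng(2)
hists = [rng.random(59) for _ in range(11)]
w = weights_from_accuracy(rng.uniform(0.4, 0.9, size=11))
print(weighted_regional_descriptor(hists, w).shape)  # (649,)
```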

8.
3D face scans have been widely used for face modeling and analysis. Because face scans provide variable point clouds across frames, they may not capture complete facial data or may lack point-to-point correspondences across scans, which makes such data difficult to use for analysis. This paper presents an efficient approach to representing facial shapes from face scans through the reconstruction of face models based on regional information and a generic model. A new approach for 3D feature detection and a hybrid approach, which combines two vertex mapping algorithms (displacement mapping and point-to-surface mapping) with a regional blending algorithm, are proposed to reconstruct facial surface detail. The resulting models represent individual facial shapes consistently and adaptively, establishing facial point correspondences across individual models. The accuracy of the generated models is evaluated quantitatively, and their applicability is validated through 3D facial expression recognition on the static 3DFE and dynamic 4DFE databases. A comparison with the state of the art is also reported.
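A rough sketch of the vertex-mapping idea, assuming a nearest-neighbour proxy for the paper's displacement and point-to-surface mappings; the actual mapping algorithms and the regional blending scheme are more elaborate than this.

```python
import numpy as np
from scipy.spatial import cKDTree

def map_generic_vertices_to_scan(generic_vertices, scan_points):
    """Crude stand-in for the vertex-mapping step: snap each generic-model
    vertex to its nearest raw scan point."""
    tree = cKDTree(scan_points)
    _, idx = tree.query(generic_vertices)
    return scan_points[idx], idx

def blend_regions(model_a, model_b, blend_weights):
    """Per-vertex linear blend of two reconstructed regions (weights in [0, 1])."""
    w = blend_weights[:, None]
    return w * model_a + (1.0 - w) * model_b

# Toy usage: 1,000 generic vertices snapped onto a 5,000-point scan.
rng = np.random.default_rng(3)
generic = rng.random((1000, 3))
scan = rng.random((5000, 3))
fitted, _ = map_generic_vertices_to_scan(generic, scan)
print(fitted.shape)  # (1000, 3)
```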

9.

This paper first presents a novel approach for modelling facial features, Local Directional Texture (LDT), which exploits the unique directional information in image textures for the problem of face recognition. A variant of LDT with privacy-preserving temporal strips (TS) is then considered to achieve faceless recognition with a higher degree of privacy while maintaining high accuracy. The TS uses two strips of pixel blocks from the temporal planes, XT and YT, for face recognition. By removing the reliance on spatial context (i.e., the XY plane) for this task, the proposed method withholds facial appearance information from public view: only one-dimensional temporal information that varies across time is extracted for recognition. Thus, privacy is assured without impeding the face recognition task, which is vital for many security applications such as street surveillance and perimeter access control. To validate the reliability of the proposed method, experiments were carried out on the Honda/UCSD, CK+, CAS(ME)2 and CASME II databases. The proposed method achieved a recognition rate of 98.26% on the standard video-based face recognition database, Honda/UCSD. It also offers an 81.92% reduction in the feature dimensionality required for storing the extracted features, in contrast to the conventional LBP-TOP.

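A minimal sketch of extracting XT and YT temporal slices from a video volume. Using a single centre row and column is an assumption; the strip positions, block layout and the subsequent LDT encoding are not shown and are not the paper's exact procedure.

```python
import numpy as np

def temporal_strips(video, row=None, col=None):
    """Extract two temporal slices from a grey-level video volume of shape
    (T, H, W): an XT slice at one row and a YT slice at one column, so no full
    XY face image is ever used."""
    T, H, W = video.shape
    row = H // 2 if row is None else row
    col = W // 2 if col is None else col
    xt_strip = video[:, row, :]   # (T, W): one pixel row evolving over time
    yt_strip = video[:, :, col]   # (T, H): one pixel column evolving over time
    return xt_strip, yt_strip

# Toy usage: a 60-frame, 128x128 clip.
clip = np.random.default_rng(4).random((60, 128, 128))
xt, yt = temporal_strips(clip)
print(xt.shape, yt.shape)  # (60, 128) (60, 128)
```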

10.
In 3D face recognition, extracting too few parameters and failing to form a complete point cloud structure lead to low recognition accuracy in later stages. To solve this problem, an automatic iterative interpolation algorithm is proposed. New and more accurate 3D face data points are obtained through automatic iteration. The algorithm restores the point cloud information of 3D facial features from 2D images by means of the triangular facial structure formed by the 3D face together with automatic interpolation, thereby producing a recognizable, highly saturated dynamic 3D face model. Experimental results show that the interpolation algorithm can complete the construction of facial features after 3D dynamic reconstruction, with high validity.
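A naive sketch of iterative interpolation over a triangulated face, assuming simple 1-to-4 midpoint subdivision; this is only a stand-in for the abstract's automatic iterative interpolation, which also restores points from 2D images and is not specified in detail here.

```python
import numpy as np

def subdivide(points, triangles):
    """One 1-to-4 subdivision step: insert the midpoint of every triangle edge
    and split each triangle into four smaller ones."""
    pts = list(map(np.asarray, points))
    midpoint_cache, new_tris = {}, []

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            pts.append((pts[i] + pts[j]) / 2.0)
            midpoint_cache[key] = len(pts) - 1
        return midpoint_cache[key]

    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(pts), new_tris

def densify(points, triangles, n_iterations=2):
    """Repeatedly subdivide the triangulated face to generate denser,
    consistently ordered 3D points."""
    for _ in range(n_iterations):
        points, triangles = subdivide(points, triangles)
    return points, triangles

# Toy usage: one coarse facial triangle densified twice (3 -> 6 -> 15 points).
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dense_pts, dense_tris = densify(tri, [(0, 1, 2)])
print(dense_pts.shape, len(dense_tris))  # (15, 3) 16
```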
