Similar Documents
20 similar documents found (search time: 15 ms)
1.
龚锐  丁胜  章超华  苏浩 《计算机应用》2020,40(3):704-709
Current deep-learning face recognition methods suffer from large model sizes and slow feature extraction, and existing face datasets cover only a single pose, so they perform poorly in practical recognition tasks. To address this, a multi-pose face dataset was built and a lightweight multi-pose face recognition method was proposed. First, the multi-task cascaded convolutional neural network (MTCNN) algorithm is used for face detection, and the high-level features of MTCNN's final stage are reused for face tracking. Then, the face pose is estimated from the detected facial landmarks, the features of the current face are extracted by a neural network trained with the ArcFace loss, and these features are compared against the gallery features of the corresponding pose to obtain the recognition result. Experimental results show that the proposed method reaches 96.25% accuracy on the multi-pose dataset, an improvement of 2.67 percentage points over a single-pose dataset, demonstrating that it effectively improves recognition accuracy.
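The final matching step described above can be sketched in plain Python: a query embedding (e.g. produced by an ArcFace-trained network) is compared by cosine similarity only against the gallery of its estimated pose bin. The function names, pose labels, and threshold are illustrative assumptions, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identify(query, pose, galleries, threshold=0.5):
    """Match a query embedding only against the gallery of its pose bin;
    return None when no gallery face is similar enough."""
    best_id, best_sim = None, -1.0
    for person_id, feat in galleries[pose].items():
        sim = cosine(query, feat)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None

# Hypothetical per-pose galleries of enrolled embeddings.
galleries = {
    "frontal": {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 1.0, 0.0]},
    "profile": {"alice": [0.5, 0.5, 0.0]},
}
print(identify([0.88, 0.12, 0.0], "frontal", galleries))  # prints "alice"
```

Restricting the search to one pose bin keeps the comparison set small, which is consistent with the paper's emphasis on lightweight, fast feature matching.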

2.
The increasing availability of 3D facial data offers the potential to overcome the intrinsic difficulties faced by conventional face recognition using 2D images. Instead of extending 2D recognition algorithms to 3D, this letter proposes a novel strategy for 3D face recognition: representing each 3D facial surface as a 2D attribute image and taking advantage of the advances in 2D face recognition. In our approach, each 3D facial surface is mapped homeomorphically onto a 2D lattice, where the value at each site is an attribute representing the local 3D geometric or textural properties of the surface, and is therefore invariant to pose changes. This lattice is then interpolated to generate a 2D attribute image, so 3D face recognition can be achieved by applying traditional 2D face recognition techniques to the obtained attribute images. In this study, we chose the pose-invariant local mean curvature calculated at each vertex of the 3D facial surface to construct the 2D attribute image, and adopted the eigenface algorithm for attribute-image recognition. We compared our approach to state-of-the-art 3D face recognition algorithms on the FRGC (version 2.0), GavabDB and NPU3D databases. The results show that the proposed approach improves robustness to head-pose variation and produces more accurate 3D multi-pose face recognition.
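The lattice construction can be illustrated with a toy sketch, assuming the homeomorphic map of the surface onto the unit square is already given as (u, v, attribute) samples; the simple neighbour-average hole filling below stands in for the paper's interpolation step.

```python
def attribute_image(samples, n=4):
    """Rasterize (u, v, attribute) samples, assumed to come from a
    homeomorphic map of the facial surface onto the unit square,
    into an n x n attribute image. Cells hit by several vertices are
    averaged; empty cells are filled from their non-empty 4-neighbours."""
    grid = [[None] * n for _ in range(n)]
    counts = [[0] * n for _ in range(n)]
    for u, v, a in samples:
        i = min(int(v * n), n - 1)   # row from the v coordinate
        j = min(int(u * n), n - 1)   # column from the u coordinate
        grid[i][j] = (grid[i][j] or 0.0) + a
        counts[i][j] += 1
    for i in range(n):
        for j in range(n):
            if counts[i][j]:
                grid[i][j] /= counts[i][j]
    # crude hole filling: average the defined 4-neighbours in scan order
    for i in range(n):
        for j in range(n):
            if grid[i][j] is None:
                nb = [grid[x][y]
                      for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= x < n and 0 <= y < n and grid[x][y] is not None]
                grid[i][j] = sum(nb) / len(nb) if nb else 0.0
    return grid
```

In the paper the attribute would be the local mean curvature at each vertex, and the resulting images feed a standard 2D recognizer such as eigenfaces.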

3.
Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in machine vision research. In this paper, we present an automated system that recognizes facial gestures in static, frontal- and/or profile-view color face images. A multi-detector approach to facial feature localization is used to spatially sample the profile contour and the contours of facial components such as the eyes and the mouth. From the extracted contours we obtain ten profile-contour fiducial points and 19 fiducial points on the contours of the facial components. Based on these, 32 individual facial muscle action units (AUs), occurring alone or in combination, are recognized using rule-based reasoning. With each scored AU, the algorithm associates a certainty factor denoting how confidently the pertinent AU has been scored. A recognition rate of 86% is achieved.
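The rule-based scoring with certainty factors can be sketched as a table of predicates over fiducial-point measurements. The rule and its thresholds below are hypothetical illustrations, not the paper's actual rule set.

```python
def score_aus(points, rules):
    """Rule-based reasoning: each rule maps fiducial-point measurements
    to an AU plus a certainty factor in [0, 1]; only firing rules score."""
    scored = {}
    for au, (predicate, certainty) in rules.items():
        if predicate(points):
            scored[au] = certainty(points) if callable(certainty) else certainty
    return scored

# Illustrative rule (hypothetical thresholds): AU12 (lip-corner puller)
# fires when the mouth corners sit clearly above the mouth centre
# (image y grows downward, so a smaller y means higher in the face).
rules = {
    "AU12": (lambda p: p["mouth_corner_y"] < p["mouth_center_y"] - 2.0, 0.8),
}
print(score_aus({"mouth_corner_y": 40.0, "mouth_center_y": 45.0}, rules))
# prints {'AU12': 0.8}
```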

4.
This paper proposes a novel natural facial expression recognition method that recognizes a sequence of dynamic facial expression images using the differential active appearance model (AAM) and manifold learning, as follows. First, differential-AAM features (DAFs) are computed as the difference between the AAM parameters of an input face image and those of a reference (neutral-expression) face image. Second, manifold learning embeds the DAFs in a smooth, continuous feature space. Third, the input facial expression is recognized in two steps: (1) computing the distances between the input image sequence and the gallery image sequences using the directed Hausdorff distance (DHD), and (2) selecting the expression by majority voting over the k-nearest-neighbor (k-NN) sequences in the gallery. DAFs are robust and efficient for facial expression analysis because they eliminate inter-person, camera, and illumination variations. Since DAFs treat the neutral-expression image as the reference, that image must be located reliably. This is done via the differential facial expression probability density model (DFEPDM), built by kernel density approximation of positively directional DAFs (changing from neutral to angry, happy, or surprised) and negatively directional DAFs (changing from angry, happy, or surprised back to neutral); the face image with the maximum DFEPDM in the input sequence is taken as the neutral expression. Experimental results show that (1) DAFs improve facial expression recognition performance over conventional AAM features by 20%, and (2) the sequence-based k-NN classifier achieves 95% facial expression recognition performance on the facial expression database (FED06).
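The sequence-matching step (DHD distance plus k-NN majority vote) can be sketched in a few lines. For brevity the sketch uses scalar sequences instead of DAF vectors; the gallery contents and k value are illustrative assumptions.

```python
def dhd(seq_a, seq_b):
    """Directed Hausdorff distance from sequence A to sequence B:
    the largest distance from any point of A to its nearest point of B."""
    return max(min(abs(a - b) for b in seq_b) for a in seq_a)

def knn_vote(query_seq, gallery, k=3):
    """Rank gallery sequences by DHD to the query, then take a
    majority vote over the k nearest sequences' expression labels."""
    ranked = sorted(gallery, key=lambda item: dhd(query_seq, item[0]))
    votes = {}
    for _, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

gallery = [([1.0, 2.0, 3.0], "happy"),
           ([1.1, 2.1, 3.1], "happy"),
           ([9.0, 9.0, 9.0], "angry"),
           ([8.0, 9.0, 10.0], "angry")]
print(knn_vote([1.0, 2.0, 2.9], gallery, k=3))  # prints "happy"
```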

5.
6.
This paper presents an efficient 3D face recognition method that handles facial expression and hair occlusion. The proposed method uses facial curves to form a rejection classifier and to produce a facial deformation mapping, and then adaptively selects regions for matching. When a new 3D face with arbitrary pose and expression is queried, the pose is normalized based on the automatically detected nose tip, followed by principal component analysis (PCA). The facial curve in the nose region is then extracted and used to form the rejection classifier, which quickly eliminates dissimilar faces in the gallery for efficient recognition. Next, six facial regions covering the face are segmented, and curves in these regions are used to map facial deformation; the regions used for matching are selected automatically based on this deformation mapping. Finally, the results of all matching engines are fused by a weighted-sum rule. On the FRGC v2.0 dataset, the approach achieves a verification rate of 96.0% for ROC III at a false acceptance rate (FAR) of 0.1%, and a rank-one identification accuracy of 97.8%.
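The reject-then-fuse control flow can be sketched as below, with per-region curve distances reduced to scalars for illustration; the region names, weights, and threshold are assumptions, not values from the paper.

```python
def recognize(probe, gallery, reject_thresh, weights):
    """Two-stage matching: a cheap nose-curve distance first rejects
    unlikely gallery faces, then the surviving candidates are scored
    by a weighted sum of per-region distances (lower is better)."""
    best_id, best_score = None, float("inf")
    for face_id, regions in gallery.items():
        # stage 1: rejection classifier on the nose-region curve distance
        if abs(probe["nose"] - regions["nose"]) > reject_thresh:
            continue
        # stage 2: weighted-sum fusion over the selected regions
        score = sum(w * abs(probe[r] - regions[r]) for r, w in weights.items())
        if score < best_score:
            best_id, best_score = face_id, score
    return best_id

probe = {"nose": 1.0, "cheek": 2.0, "forehead": 3.0}
gallery = {"a": {"nose": 1.1, "cheek": 2.2, "forehead": 3.1},
           "b": {"nose": 5.0, "cheek": 2.0, "forehead": 3.0}}
weights = {"nose": 0.5, "cheek": 0.3, "forehead": 0.2}
print(recognize(probe, gallery, 1.0, weights))  # prints "a" ("b" is rejected)
```

The rejection stage is what makes the full pipeline efficient: most gallery faces never reach the expensive multi-region matching.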

7.
Yao  Li  Wan  Yan  Ni  Hongjie  Xu  Bugao 《Multimedia Tools and Applications》2021,80(16):24287-24301
Multimedia Tools and Applications - Automatic facial expression analysis remains challenging due to its low recognition accuracy and poor robustness. In this study, we utilized active learning and...

8.
To address the problems that the convolutional neural network structures currently applied to facial expression recognition are insufficiently lightweight, struggle to extract expression features precisely, and require large amounts of labeled expression data, a transfer-learning method for facial expression recognition based on an attention mechanism is proposed. A lightweight network structure is designed; on top of it, features are grouped and a spatially enhanced attention mechanism is built to highlight the key regions of expression features, and transfer learning is used to construct, in the objective function, a log-Euclidean...

9.
Human facial feature extraction for face interpretation and recognition   Total citations: 16 (self-citations: 0, citations by others: 16)
Facial feature extraction algorithms that can be used for automated visual interpretation and recognition of human faces are presented. The contours of the eye and mouth are captured with a deformable template model because their shapes can be described analytically. The shapes of the eyebrow, nostril and face, however, are difficult to model with a deformable template, so they are extracted using an active contour model (snake). In the experiments, 12 models are photographed, and the feature contours are extracted for each portrait.

10.
An algorithm is proposed for 3D face recognition in the presence of varied facial expressions. It combines the match scores from multiple overlapping regions around the nose. Experimental results are presented using the largest database employed to date in 3D face recognition studies: over 4,000 scans of 449 subjects. Results show substantial improvement over matching the shape of a single larger frontal face region. This is the first approach to use multiple overlapping regions around the nose to handle the problem of expression variation.

11.
12.
Liao  Haibin  Wang  Dianhua  Fan  Ping  Ding  Ling 《Multimedia Tools and Applications》2021,80(19):28627-28645

Automated Facial Expression Recognition (FER) remains challenging because of high inter-subject variation (e.g. age, gender and ethnic background) and intra-subject variation (e.g. low image resolution, occlusion and illumination). To reduce the variation due to age, gender and ethnic background, we introduce a conditional random forest architecture. Moreover, a deep multi-instance learning model is proposed to cope with low image resolution, occlusion and illumination. Unlike most existing models, which are trained with facial expression labels only, our model also considers attributes related to facial expressions, such as age and gender. A large number of experiments were conducted on the public CK+, ExpW, RAF-DB and AffectNet datasets, with recognition rates reaching 99% on the normalized CK+ face database and 69.72% on the challenging natural-scene database. The experimental results show that the proposed method outperforms state-of-the-art methods and is robust to occlusion, noise and resolution variation in the wild.
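The conditional idea (condition the expression classifier on a predicted attribute such as an age band) can be sketched as a routing step. The expert classifiers and attribute groups below are toy stand-ins for the paper's attribute-conditioned random forests.

```python
def conditional_predict(face, attribute_of, experts, fallback):
    """Route a face to the expression classifier trained for its
    predicted attribute group (e.g. an age band or a gender),
    falling back to a generic classifier for unseen groups."""
    group = attribute_of(face)                 # attribute prediction step
    classifier = experts.get(group, fallback)  # pick the matching expert
    return classifier(face)

# Toy experts: in the paper these would be random forests trained
# separately per attribute condition.
experts = {"young": lambda face: "happy", "senior": lambda face: "neutral"}
attribute_of = lambda face: "young" if face["age"] < 30 else "senior"
print(conditional_predict({"age": 25}, attribute_of, experts,
                          lambda face: "unknown"))  # prints "happy"
```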

13.
14.
Multimedia Tools and Applications - Nowadays, digital protection has gained greater prominence in daily digital activities. It is vital for people to keep new passwords in their minds and...

15.
Meng  Hao  Yuan  Fei  Tian  Yang  Yan  Tianhao 《Multimedia Tools and Applications》2022,81(4):5621-5643
Multimedia Tools and Applications - Large-scale high-quality datasets are a particularly important condition for facial expression recognition (FER) in the era of deep learning, but most of the...

16.
17.
To make full use of the latent information in face images, a method is proposed that obtains multi-scale image features by setting convolution kernels of different sizes: the Multi-Scale Convolutional Auto-Encoder (MSCAE). The features extracted at different scales reflect the essential information of the face and allow the face image to be reconstructed more faithfully. The feature-extraction framework is a hierarchical structure of alternating convolution and sampling layers, which makes the features highly invariant to rotation, translation and scaling. MSCAE is trained in encoder-decoder mode to obtain a feature extractor; the features it extracts are fused into a feature vector for classification. Classification results of a BP neural network on the ORL and Yale face databases show that multi-scale features outperform single-scale features in both recognition rate and performance. Moreover, fusing MSCAE features with HOG (Histograms of Oriented Gradients) features achieves a higher recognition rate than either feature alone.
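The core multi-scale idea (run the same input through kernels of different sizes and concatenate the pooled responses) can be sketched in one dimension. The 1-D convolution and global max pooling are simplifying assumptions standing in for the auto-encoder's 2-D layers.

```python
def conv1d_valid(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation), standing in
    for one convolutional layer of the auto-encoder."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_scale_features(signal, kernels):
    """Apply kernels of different sizes to the same input and
    concatenate the max-pooled response of each scale into one
    feature vector, mimicking MSCAE's multi-scale fusion."""
    feats = []
    for kernel in kernels:
        response = conv1d_valid(signal, kernel)
        feats.append(max(response))  # global max pooling per scale
    return feats

# Three kernel sizes produce a three-component multi-scale feature.
print(multi_scale_features([1, 2, 3, 4], [[1], [1, 1], [1, 1, 1]]))
# prints [4, 7, 9]
```

In the paper, such per-scale features would be learned by training the encoder-decoder to reconstruct the face image, then concatenated and fed to a BP-network classifier.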

18.
Spontaneous facial expression recognition is significantly more challenging than recognizing posed expressions. We focus on two issues that remain under-addressed in this area. First, because of their inherent subtlety, the geometric and appearance features of spontaneous expressions tend to overlap, making it hard for classifiers to find effective separation boundaries. Second, the training set usually contains dubious class labels, which can hurt recognition performance if no countermeasure is taken. In this paper, we propose a spontaneous expression recognition method based on robust metric learning that aims to alleviate both problems. In particular, to increase the discrimination between facial expressions, we learn a new metric space in which spatially close data points have a higher probability of belonging to the same class. In addition, instead of using the noisy labels directly for metric learning, we define sensitivity and specificity to characterize the annotation reliability of each annotator. The distance metric and the annotators' reliability are then jointly estimated by maximizing the likelihood of the observed class labels. With the introduction of latent variables representing the true class labels, the distance metric and the annotators' reliability can be solved iteratively under the Expectation-Maximization framework. Comparative experiments show that our method achieves better accuracy on spontaneous expression recognition, and that the learned metric can be reliably transferred to recognizing posed expressions.
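The reliability part of the joint estimation can be illustrated with a binary-label sketch of one M-step: given current posteriors over the latent true labels, each annotator's sensitivity and specificity are re-estimated. This omits the metric update and the E-step, and is a simplified two-class stand-in for the paper's model.

```python
def update_reliability(annotations, posteriors):
    """One M-step: re-estimate each annotator's sensitivity
    P(label=1 | true=1) and specificity P(label=0 | true=0) from the
    current posteriors over the latent true labels.

    annotations[j] is annotator j's 0/1 label for each sample;
    posteriors[i] is the current P(true label of sample i = 1)."""
    pos = sum(posteriors)                   # expected count of true positives
    neg = sum(1.0 - p for p in posteriors)  # expected count of true negatives
    reliability = {}
    for j, labels in annotations.items():
        sens = sum(p for l, p in zip(labels, posteriors) if l == 1) / pos
        spec = sum(1.0 - p for l, p in zip(labels, posteriors) if l == 0) / neg
        reliability[j] = (sens, spec)
    return reliability
```

Alternating this update with a re-estimation of the posteriors (the E-step) and of the metric yields the iterative scheme the abstract describes.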

19.
This work presents a novel dictionary learning method based on l2-norm regularization to learn a dictionary better suited to face recognition. By optimizing the reconstruction error of each class using the dictionary atoms associated with that class, we learn a structured dictionary that makes the per-class reconstruction error more discriminative for classification. Moreover, to make the coding coefficients of samples coded over the learned dictionary discriminative, a discriminative term bilinear in the training samples and the coding coefficients is incorporated into our dictionary learning model. The bilinear discriminative term essentially solves a linear regression problem for patterns formed by concatenating the training samples and the coding coefficients in a Reproducing Kernel Hilbert Space (RKHS). Consequently, a novel classifier based on the bilinear discriminative model is also proposed. Experimental results on the AR, CMU PIE, CAS-PEAL-R1, and Sheffield (previously UMIST) face databases show that the proposed method is robust to expression, lighting, and pose variations in face recognition as well as gender classification, compared with recently proposed face recognition and dictionary learning methods.

20.
In order to serve people and support them in daily life, a domestic or service robot needs to accommodate itself to various individuals. Emotional and intelligent human-robot interaction plays an important role in helping a robot gain its users' attention, and facial expression recognition is a key factor in interactive robotic applications. In this paper, an image-based facial expression recognition system that adapts online to a new face is proposed. The main idea of the learning algorithm is to adjust the parameters of the support vector machine (SVM) hyperplane when learning the facial expressions of a new face. After mapping the input space to a Gaussian-kernel space, support vector pursuit learning (SVPL) is employed to retrain the hyperplane in the new feature space. To expedite the retraining step, we retrain the new SVM classifier using only the samples classified incorrectly in the previous iteration, combined with critical historical sets. After the hyperplane parameters are adjusted, the new classifier recognizes previously unrecognizable facial data more effectively. Experiments on an embedded imaging system show that the proposed system recognizes new facial datasets with a recognition rate of 92.7%, while maintaining a satisfactory recognition rate of 82.6% on old facial samples.
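The retraining-set selection described above (misclassified samples plus critical historical sets) can be sketched independently of the SVM machinery; the sample format and classifier are placeholders, not the paper's SVPL implementation.

```python
def retraining_set(samples, classifier, critical_history):
    """Build the reduced set used to retrain the classifier: samples
    misclassified in the previous iteration, merged with the critical
    historical set that preserves performance on old faces."""
    misclassified = [(x, y) for x, y in samples if classifier(x) != y]
    return misclassified + critical_history

samples = [((1,), "a"), ((2,), "b")]
predict = lambda x: "a"            # toy stand-in for the previous classifier
critical = [((9,), "b")]           # hypothetical critical historical set
print(retraining_set(samples, predict, critical))
# prints [((2,), 'b'), ((9,), 'b')]
```

Retraining on this reduced set rather than the full history is what makes the online adaptation step fast enough for an embedded system.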


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号