Similar Documents
20 similar documents found (search time: 951 ms)
1.
There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be "difficult," and six algorithms surpassed humans on "easy" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control: humans.

2.
The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations
In this paper, we describe the acquisition and contents of a large-scale Chinese face database: the CAS-PEAL face database. The goals of creating the CAS-PEAL face database include the following: 1) providing worldwide face recognition researchers with different sources of variation, particularly pose, expression, accessories, and lighting (PEAL), and exhaustive ground-truth information in one uniform database; 2) advancing state-of-the-art face recognition technologies aimed at practical applications by using off-the-shelf imaging equipment and by designing normal face variations into the database; and 3) providing a large-scale face database of Mongolian subjects. Currently, the CAS-PEAL face database contains 99,594 images of 1040 individuals (595 males and 445 females). A total of nine cameras are mounted horizontally on an arc arm to simultaneously capture images across different poses. Each subject is asked to look straight ahead, up, and down to obtain 27 images in three shots. Five facial expressions, six accessories, and 15 lighting changes are also included in the database. A selected subset of the database (CAS-PEAL-R1, containing 30,863 images of the 1040 subjects) is now available to other researchers. We discuss the evaluation protocol based on the CAS-PEAL-R1 database and present the performance of four algorithms as a baseline in order to: 1) provide a preliminary assessment of the difficulty of the database for face recognition algorithms; 2) provide reference evaluation results for researchers using the database; and 3) identify the strengths and weaknesses of the commonly used algorithms.

3.
Since 2005, human and computer performance has been systematically compared as part of face recognition competitions, with results being reported for both still and video imagery. The key results from these competitions are reviewed. To analyze performance across studies, the cross-modal performance analysis (CMPA) framework is introduced. The CMPA framework is applied to experiments that were part of a face recognition competition. The analysis shows that for matching frontal faces in still images, algorithms are consistently superior to humans. For video and difficult still face pairs, humans are superior. Finally, based on the CMPA framework and a face performance index, we outline a challenge problem for developing algorithms that are superior to humans for the general face recognition problem.

4.
The intended applications of automatic face recognition systems include venues that vary widely in demographic diversity. Formal evaluations of algorithms do not commonly consider the effects of population diversity on performance. We document the effects of racial and gender demographics on estimates of the accuracy of algorithms that match identity in pairs of face images. In particular, we focus on the effects of the “background” population distribution of non-matched identities against which identity matches are compared. The algorithm we tested was created by fusing three of the top performers from a recent US Government competition. First, we demonstrate the variability of algorithm performance estimates when the population of non-matched identities was demographically “yoked” by race and/or gender (i.e., “yoking” constrains non-matched pairs to be of the same race or gender). We also report differences in the match threshold required to obtain a false alarm rate of .001 when demographic controls on the non-matched identity pairs varied. In a second experiment, we explored the effect on algorithm performance of progressively increasing population diversity. We found systematic, but non-general, effects when the balance between majority and minority populations of non-matched identities shifted. Third, we show that identity match accuracy differs substantially when the non-match identity population varied by race. Finally, we demonstrate the impact on performance when the non-match distribution consists of faces chosen to resemble a target face. The results from all experiments indicate the importance of the demographic composition and modeling of the background population in predicting the accuracy of face recognition algorithms.

5.
To address the high complexity and limited accuracy of current gaze estimation networks, and to allow the network to be deployed on mobile devices in the future, a gaze estimation network based on the ShuffleNet V2 algorithm is proposed, composed of two sub-networks for the face and the eyes. The face sub-network extracts features from face images with a ResNetV2 network and incorporates a face alignment algorithm to reduce the impact of head-pose error. The eye sub-network uses ShuffleNet V2...

6.
7.
In this paper, two methods are presented that manipulate images to hinder automatic face identification. They partially degrade image quality, so that humans can identify the persons in a scene while face identification algorithms fail to do so. The approaches used involve: a) singular value decomposition (SVD) and b) image projections on hyperspheres. Simulation experiments verify that these methods reduce the correct face identification rate by over 90%. Additionally, the final image is not degraded beyond recognition by humans, in contrast with the majority of other de-identification methods.
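The SVD-based degradation in item 7 can be sketched in a few lines: reconstruct the image from only its largest singular components, discarding the fine detail that identification algorithms rely on. This is an illustrative reconstruction of the general idea only; the rank cutoff, the random stand-in image, and the function name are assumptions, not the paper's actual parameters.

```python
import numpy as np

def svd_degrade(img, k):
    """Keep only the top-k singular components of a 2-D grayscale image."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s[k:] = 0.0  # drop the small singular values that carry fine facial detail
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
face = rng.random((32, 32))           # stand-in for a grayscale face crop
degraded = svd_degrade(face, k=5)     # rank-5 approximation of the image
rel_err = np.linalg.norm(face - degraded) / np.linalg.norm(face)
```

A small k removes high-frequency structure while keeping the coarse appearance that a human viewer can still recognize.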

8.
Face recognition technologies have seen dramatic improvements in performance over the past decade, and such systems are now widely used for security and commercial applications. Since recognizing faces is a task that humans are understood to be very good at, it is common to want to compare automatic face recognition (AFR) and human face recognition (HFR) in terms of biometric performance. This paper addresses this question by: 1) conducting verification tests on volunteers (HFR) and commercial AFR systems and 2) developing statistical methods to support comparison of the performance of different biometric systems. HFR was tested by presenting face-image pairs and asking subjects to classify them on a scale of "Same," "Probably Same," "Not sure," "Probably Different," and "Different"; the same image pairs were presented to AFR systems, and the biometric match score was measured. To evaluate these results, two new statistical evaluation techniques are developed. The first is a new way to normalize match-score distributions, in which a normalized match score t is calculated as a function of the angle of the [false match rate, false non-match rate] point, expressed in polar coordinates about some center. Using this normalization, we develop a second methodology to calculate an average detection error tradeoff (DET) curve and show that this method is equivalent to direct averaging of DET data along each angle from the center. This procedure is then applied to compare the performance of the best AFR algorithms available to us in the years 1999, 2001, 2003, 2005, and 2006 against human scores. Results show that algorithms have dramatically improved in performance over that time. In comparison to the performance of the best AFR system of 2006, 29.2% of human subjects performed better, while 37.5% performed worse.
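The DET data that feed the polar-coordinate normalization in item 8 are simply (false match rate, false non-match rate) pairs obtained by sweeping a decision threshold over the match scores. A minimal sketch, with synthetic Gaussian score distributions standing in for real genuine/impostor scores:

```python
import numpy as np

def det_points(genuine, impostor, thresholds):
    """Trace a DET curve as (FMR, FNMR) pairs over a list of thresholds."""
    pts = []
    for t in thresholds:
        fmr = np.mean(impostor >= t)   # impostor pairs accepted at threshold t
        fnmr = np.mean(genuine < t)    # genuine pairs rejected at threshold t
        pts.append((fmr, fnmr))
    return pts

rng = np.random.default_rng(1)
genuine = rng.normal(2.0, 1.0, 1000)   # higher score = more similar
impostor = rng.normal(0.0, 1.0, 1000)
pts = det_points(genuine, impostor, thresholds=np.linspace(-2, 4, 13))
```

As the threshold rises, FMR can only fall and FNMR can only rise, which is the tradeoff a DET curve plots.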

9.

With the onset of the COVID-19 pandemic, wearing a face mask became essential, and the facial occlusion created by masks deteriorated the performance of face biometric systems. In this situation, the use of the periocular region (the region around the eye) as a biometric trait for authentication is gaining attention, since it is the most visible region when masks are worn. One important issue in periocular biometrics is the identification of an optimally sized periocular ROI that contains enough features for authentication. State-of-the-art ROI extraction algorithms use a fixed-size rectangular ROI calculated from reference points such as the center of the iris or the center of the eye, without considering the shape of an individual's periocular region. This paper proposes a novel approach to extract optimally sized periocular ROIs of two different shapes (polygonal and rectangular) using five reference points (the inner and outer canthus points, the two end points of the eyebrow, and its midpoint) in order to accommodate the complete shape of an individual's periocular region. Performance analysis on the UBIPr database using CNN models validated that both proposed ROIs contain enough information to identify a person wearing a face mask.

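A toy sketch of the polygonal-ROI idea from item 9: rasterize a mask from five reference points. The coordinates, image size, and ray-casting rasterizer are illustrative assumptions; a real system would use detected eye-corner and eyebrow landmarks.

```python
import numpy as np

def point_in_poly(x, y, poly):
    """Ray-casting test: is point (x, y) inside the simple polygon `poly`?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses this scanline
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_mask(shape, poly):
    """Boolean ROI mask of the given shape for the polygon."""
    h, w = shape
    return np.array([[point_in_poly(x, y, poly) for x in range(w)]
                     for y in range(h)])

# five assumed reference points (x, y): outer canthus, inner canthus,
# eyebrow inner end, eyebrow midpoint, eyebrow outer end
pts = [(2, 8), (14, 8), (13, 2), (8, 1), (3, 2)]
mask = polygon_mask((10, 16), pts)
```

The resulting mask follows the pentagon spanned by the eye corners and the eyebrow, rather than a fixed rectangle around the eye center.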

10.
One of the challenges of face recognition in surveillance is the low resolution of the face region. Many super-resolution (SR) face reconstruction methods have therefore been proposed to produce a high-resolution face image from one or a set of low-resolution face images. However, existing dictionary learning based algorithms are sensitive to noise and very time-consuming. In this paper, we define and prove multi-scale linear combination consistency. To improve SR performance, we propose a novel SR face reconstruction method based on nonlocal similarity and multi-scale linear combination consistency (NLS-MLC). We further propose a new recognition approach for very low resolution face images based on a resolution scale invariant feature (RSIF). A series of experiments are conducted on two public face image databases to test the feasibility of our proposed methods. Experimental results show that the proposed SR method is more robust and computationally efficient in face hallucination, and that the recognition accuracy of RSIF is higher than that of some state-of-the-art algorithms.

11.
Illumination invariant face recognition using near-infrared images
Most current face recognition systems are designed for indoor, cooperative-user applications. However, even in such constrained applications, most existing systems, academic and commercial, are compromised in accuracy by changes in environmental illumination. In this paper, we present a novel solution for illumination invariant face recognition for indoor, cooperative-user applications. First, we present an active near-infrared (NIR) imaging system that is able to produce face images of good quality regardless of the visible lighting in the environment. Second, we show that the resulting face images encode intrinsic information of the face, subject only to a monotonic transform in gray tone; based on this, we use local binary pattern (LBP) features to compensate for the monotonic transform, thus deriving an illumination invariant face representation. Then, we present methods for face recognition using NIR images; statistical learning algorithms are used to extract the most discriminative features from a large pool of invariant LBP features and to construct a highly accurate face matching engine. Finally, we present a system that achieves accurate and fast face recognition in practice, including a method to deal with specular reflections of the active NIR lights on eyeglasses, a critical issue in active NIR image-based face recognition. Extensive comparative results are provided to evaluate the imaging hardware, the face and eye detection algorithms, and the face recognition algorithms and systems with respect to various factors, including illumination, eyeglasses, time lapse, and ethnic group.
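The gray-tone invariance item 11 relies on is the defining property of LBP: each pixel is compared only to its neighbors, so any monotonically increasing intensity transform leaves the codes unchanged. A minimal 8-neighbor sketch (the 3 × 3 neighborhood and bit ordering are simplifying assumptions; the paper draws on a larger pool of LBP variants):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor local binary pattern over the interior pixels."""
    # fixed neighbor order, one bit per neighbor
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy, x + dx] >= c:   # only the ordering of intensities matters
                    code |= 1 << bit
            out[y - 1, x - 1] = code
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (8, 8))
codes = lbp_image(img)
codes_mono = lbp_image(2 * img + 3)   # a monotonic gray-tone transform
```

Because the transform preserves every pairwise intensity comparison, `codes` and `codes_mono` are identical, which is why LBP cancels the monotonic gray-tone variation of NIR imaging.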

12.
In this paper, we present algorithms to assess the quality of facial images affected by factors such as blurriness, lighting conditions, head pose variations, and facial expressions. We developed face recognition prediction functions for images affected by blurriness, lighting conditions, and head pose variations based upon the eigenface technique. We also developed a classifier for images affected by facial expressions to assess their quality for recognition by the eigenface technique. Our experiments using different facial image databases show that our algorithms are capable of assessing the quality of facial images. These algorithms could be used as a facial image quality assessment module in a face recognition system. In the future, we will integrate the different measures of image quality into a single measure that indicates the overall quality of a face image.
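As a simplified stand-in for the blurriness assessment in item 12 (the paper's actual predictors are eigenface-based, not reproduced here), a common proxy is the variance of a Laplacian response, which drops as an image is blurred:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbor Laplacian response; higher suggests a sharper image."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

def box_blur(img):
    """3x3 mean filter with edge padding, used here to simulate blurring."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

rng = np.random.default_rng(2)
sharp = rng.random((32, 32))          # noisy stand-in for a sharp face image
score_sharp = laplacian_variance(sharp)
score_blur = laplacian_variance(box_blur(sharp))
```

A quality-assessment module can threshold such a score to reject images too blurred for reliable recognition.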

13.
Recently, the importance of face recognition has been increasingly emphasized as inexpensive CCD cameras have spread to various applications. However, facial images change dramatically under lighting variations, and these appearance changes cause serious performance degradation in face recognition. Many researchers have tried to overcome these illumination problems using diverse approaches, which have required multiple registered images per person or prior knowledge of the lighting conditions. In this paper, we propose a new method for face recognition under arbitrary lighting conditions, given only a single registered image and training data under unknown illuminations. Our proposed method is based on illuminated exemplars synthesized from photometric stereo images of the training data. A linear combination of illuminated exemplars can represent a new face, and the weighting coefficients of those illuminated exemplars are used as an identity signature. We conduct experiments to verify our approach and compare it with two traditional approaches. As a result, higher recognition rates are reported in these experiments using the illumination subset of the Max Planck Institute face database and the Korean face database.
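The identity-signature step in item 13 reduces to solving a least-squares problem for the combination weights. A sketch under the assumption that the probe lies in the span of the exemplars; real exemplars would be synthesized from photometric stereo images, not random data:

```python
import numpy as np

rng = np.random.default_rng(3)
exemplars = rng.random((400, 9))     # columns: 9 illuminated exemplar images (400 px each)
w_true = rng.random(9)
probe = exemplars @ w_true           # a new face expressed in the exemplar basis

# the weighting coefficients serve as the identity signature
coeffs, *_ = np.linalg.lstsq(exemplars, probe, rcond=None)
```

Two probes of the same person under different lighting would map to similar coefficient vectors, which is what makes the coefficients usable as a signature.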

14.
Although unconstrained face recognition has been widely studied in recent years, state-of-the-art algorithms still perform unsatisfactorily on low-quality images. In this paper, we make two contributions to this field. The first is the release of a new dataset called ‘AR-LQ’ that can be used in conjunction with the well-known ‘AR’ dataset to evaluate face recognition algorithms on blurred and low-resolution face images. The proposed dataset contains five new blurred faces (at five different levels, from mild to severe blurriness) and five new low-resolution images (at five different levels, from 66 × 48 to 7 × 5 pixels) for each of the one hundred subjects of the ‘AR’ dataset. The new blurred images were acquired by using a DSLR camera with manual focus to take an out-of-focus photograph of a monitor displaying a sharp face image. In the same way, the low-resolution images were acquired from the monitor by the DSLR at different distances. Thus, an attempt is made to acquire low-quality images that have been degraded by a real degradation process. Our second contribution is an extension of a known face recognition technique based on sparse representations (ASR) that takes low-resolution face images into account. The proposed method, called blur-ASR or bASR, was designed to recognize faces using dictionaries with different levels of blurriness. These were obtained by digitally blurring the training images, together with a sharpness metric for matching blurriness between the query image and the dictionaries. These two main adjustments made the algorithm more robust with respect to low-quality images. In our experiments, bASR consistently outperforms other state-of-the-art methods, including hand-crafted features, sparse representations, and seven well-known deep learning face recognition techniques with and without super-resolution. On average, bASR obtained 88.8% accuracy, whereas the rest obtained less than 78.4%.

15.

The use of the iris and periocular region as biometric traits has been extensively investigated, mainly due to the singularity of iris features and the usefulness of the periocular region when image resolution is not sufficient to extract iris information. In addition to providing information about an individual's identity, features extracted from these traits can also be explored to obtain other information, such as the individual's gender, the influence of drug use, the use of contact lenses, spoofing, among others. This work presents a survey of the databases created for ocular recognition, detailing their protocols and how their images were acquired. We also describe and discuss the most popular ocular recognition competitions, highlighting the submitted algorithms that achieved the best results using only the iris trait and also fusing iris and periocular region information. Finally, we describe some relevant works applying deep learning techniques to ocular recognition and point out new challenges and future directions. Considering that there are a large number of ocular databases, each usually designed for a specific problem, we believe this survey can provide a broad overview of the challenges in ocular biometrics.


16.
This paper describes a new software-based registration and fusion of visible and thermal infrared (IR) image data for face recognition in challenging operating environments that involve illumination variation. The combined use of visible and thermal IR imaging sensors offers a viable means of improving the performance of face recognition techniques based on a single imaging modality. Despite successes in indoor access control applications, imaging in the visible spectrum has difficulty recognizing faces under varying illumination conditions. Thermal IR sensors measure the energy radiated from the object, which is less sensitive to illumination changes, and are operable even in darkness. However, thermal images do not provide high-resolution data. Data fusion of visible and thermal images can produce face images robust to illumination variations. However, thermal face images with eyeglasses may fail to provide useful information around the eyes, since glass blocks a large portion of thermal energy. In this paper, eyeglass regions are detected using an ellipse fitting method and replaced with eye template patterns to preserve the details useful for face recognition in the fused image. Software registration of images replaces a special-purpose imaging sensor assembly and produces co-registered image pairs at a reasonable cost for large-scale deployment. Face recognition techniques using visible, thermal IR, and fused visible-thermal images are compared using commercial face recognition software (FaceIt®) and two visible-thermal face image databases (the NIST/Equinox and the UTK-IRIS databases). The proposed multiscale data-fusion technique improved recognition accuracy under a wide range of illumination changes. Experimental results showed that the eyeglass replacement increased the number of correct first-match subjects by 85% (NIST/Equinox) and 67% (UTK-IRIS).
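At its simplest, fusing co-registered visible and thermal images is a pixel-wise weighted combination. The fixed weight below is an illustrative assumption, not the paper's multiscale fusion rule or its eyeglass-replacement step:

```python
import numpy as np

def fuse(visible, thermal, w=0.6):
    """Pixel-wise weighted fusion of two co-registered, same-size images."""
    assert visible.shape == thermal.shape, "images must be co-registered"
    return w * visible + (1.0 - w) * thermal

rng = np.random.default_rng(4)
vis = rng.random((16, 16))   # stand-in for a normalized visible face image
ir = rng.random((16, 16))    # stand-in for the co-registered thermal image
fused = fuse(vis, ir)
```

The fused pixel always lies between the two source pixels, so illumination-driven extremes in the visible channel are damped by the thermal channel.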

17.
Many face recognition algorithms and face databases now exist, so choosing the classification algorithm and the training set has become a key problem in face recognition. For a face recognition algorithm based on wavelet transform and principal component analysis, extensive simulation experiments on the Matlab 6.5 platform show, in terms of computation speed and recognition rate, that third-order nearest-neighbor classification outperforms Euclidean-distance classification, and that using the first eight images per subject of the ORL face database as the training set outperforms the other configurations. The experiments yield the eigenface library for the optimal configuration, as well as the representation of test images as a linear combination of the mean face image and the eigenface images. The results have strong practical value in the field of face recognition.
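A minimal eigenface sketch of the PCA pipeline referenced in item 17: project faces onto the leading principal components and match by nearest neighbor in the subspace. The random stand-in faces, the number of components, and the plain nearest-neighbor rule (rather than the third-order variant mentioned) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
train = rng.random((20, 64))             # 20 flattened "face" vectors
mean_face = train.mean(axis=0)
centered = train - mean_face

# eigenfaces = top principal components of the centered training set
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:8]                      # keep the 8 leading components

def project(face):
    """Coordinates of a face in the eigenface subspace."""
    return eigenfaces @ (face - mean_face)

codes = np.array([project(f) for f in train])
probe = train[3] + 0.01 * rng.random(64)         # noisy copy of training face 3
d = np.linalg.norm(codes - project(probe), axis=1)
best = int(np.argmin(d))                          # nearest neighbor in subspace
```

The probe's projection stays closest to the code of the face it was derived from, which is the matching rule the recognition-rate comparisons above are built on.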

18.
In everyday life, face similarity is an important kinship cue. Computer algorithms able to infer kinship from pairs of face images could be applied in forensics, image retrieval and annotation, and historical studies. So far, little work in this area has been presented, and only one study, using a small set of low-quality images, tackles the problem of identifying sibling pairs. The purpose of our paper is to present a comprehensive investigation of this subject, aimed at understanding which facial features are, on average, the most relevant, how effective computer algorithms for detecting sibling pairs can be, and whether they can outperform human evaluation. To avoid problems due to low-quality pictures and uncontrolled imaging conditions, as with the heterogeneous datasets collected for previous research, we prepared a database of high-quality pictures of sibling pairs, shot under controlled conditions and including frontal, profile, expressionless, and smiling faces. We then constructed various classifiers of image pairs using different types of facial data, based on various geometric, textural, and holistic features. The classifiers were first tested separately, and then the most significant facial data, selected with a two-stage feature selection algorithm, were combined into a single classifier. The discriminating ability of the automatic classifier combining features of different natures has been found to outperform that of a panel of human raters. We also show the good generalization capability of the algorithm by applying the classifier, in a cross-database experiment, to a low-quality database of images collected from the Internet.

19.
Objective: Face images captured in real-world conditions are affected by illumination, occlusion, and other environmental factors, so that images of the same class differ to varying degrees while images of different classes may be similar to varying degrees, which greatly harms face recognition accuracy. To address this, a face recognition algorithm based on discriminative structured low-rank dictionary learning is proposed on the foundation of low-rank matrix recovery theory. Method: Guided by the label information of the training samples, the algorithm introduces both low-rank regularization and structured sparsity into the learned discriminative dictionary. During dictionary learning, the reconstruction error of the samples first constrains the relationship between the samples and the dictionary; the Fisher criterion is then applied to the sparse coding stage so that the coding coefficients become discriminative; because noise in the training samples weakens the discriminative power of the dictionary, low-rank regularization is applied to the dictionary learning process on the basis of low-rank matrix recovery theory; structured sparsity is further added so that structural information is preserved and samples can be optimally classified; finally, test samples are classified by reconstruction error. Results: The algorithm was evaluated on the AR and ORL face databases. On AR, to analyze the effect of sample dimensionality, six images per person from the first session were selected as training samples (one with a scarf, two with sunglasses, and three unoccluded images with expression and illumination changes), with the same combination used as test samples; for every method, higher image dimensionality yields a higher recognition rate. Comparing the recognition rates of SRC (sparse representation based classification) and DKSVD (discriminative K-SVD) shows that dictionary learning mitigates the influence of uncertainty in the training samples; comparing DLRD_SR (discriminative low-rank dictionary learning for sparse representation) with FDDL (Fisher discriminative dictionary learning) shows that when occlusion or other noise is present, a low-rank dictionary improves the recognition rate by at least 5.8%; comparing the proposed algorithm with DLRD_SR shows that adding the Fisher criterion markedly improves the recognition rate, while the ideal sparsity values ensure optimal classification. When the sample dimensionality reaches 500, the recognition rate reaches 85.2% under scarf and sunglasses occlusion, where the sunglasses and scarf occlusions cover roughly 20% and 40% of the face, respectively. To verify effectiveness under different expressions, illumination changes, and occlusions, experiments were run with different training-sample combinations; whatever the combination, the proposed algorithm has a clear advantage when occlusion is present. When the training samples contain only expression changes, illumination changes, and sunglasses occlusion, its recognition rate exceeds the other algorithms by at least 2.7%; with expression changes, illumination changes, and scarf occlusion, by at least 3.6%; and with expression changes, illumination changes, scarf occlusion, and sunglasses occlusion, by at least 1.9%. On ORL, the recognition rate reaches 95.2% without occlusion, slightly below that of FDDL; at 20% random block occlusion, the proposed algorithm achieves the highest recognition rate among SRC, DKSVD, FDDL, and DLRD_SR; at 50% occlusion, none of the algorithms performs well, but the proposed algorithm still achieves the highest rate. Conclusion: The proposed algorithm is robust when face images are affected by occlusion and similar factors, and the experimental results demonstrate its feasibility for face recognition.
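The final classification step of item 19 (assign a test sample to the class whose dictionary atoms reconstruct it with the smallest error) can be sketched as follows. Plain least squares stands in for the paper's sparse coding with Fisher and low-rank terms, and the random dictionary is a toy assumption:

```python
import numpy as np

rng = np.random.default_rng(6)
dim, atoms_per_class, classes = 30, 5, 3
D = rng.random((dim, atoms_per_class * classes))          # column-stacked dictionary
labels = np.repeat(np.arange(classes), atoms_per_class)   # class label of each atom

def classify(y):
    """Assign y to the class whose atoms reconstruct it with least error."""
    x, *_ = np.linalg.lstsq(D, y, rcond=None)             # coding (sparsity omitted)
    errs = []
    for c in range(classes):
        xc = np.where(labels == c, x, 0.0)                # keep only class-c coefficients
        errs.append(np.linalg.norm(y - D @ xc))
    return int(np.argmin(errs))

# a probe built purely from class-1 atoms should be assigned to class 1
probe = D[:, labels == 1] @ rng.random(atoms_per_class)
pred = classify(probe)
```

The structured-sparsity and low-rank terms in the actual method serve to make this reconstruction-error gap large even when the probe is occluded or noisy.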

20.
Toward automatic simulation of aging effects on face images
The process of aging causes significant alterations in the facial appearance of individuals. Compared with other sources of variation in face images, appearance variation due to aging displays some unique characteristics. Changes in facial appearance due to aging can even affect discriminatory facial features, degrading the ability of humans and machines to identify aged individuals. We describe how the effects of aging on facial appearance can be explained using learned age transformations, and we present experimental results showing that reasonably accurate estimates of age can be made for unseen images. We also show that we can improve our results by taking into account the fact that different individuals age in different ways and by considering the effect of lifestyle. Our proposed framework can be used to simulate aging effects on new face images in order to predict how an individual might look in the future or how he or she looked in the past. The methodology presented has also been used to design a face recognition system robust to aging variation. In this context, the perceived age of the subjects in the training and test images is normalized before the training and classification procedure so that aging variation is eliminated. Experimental results demonstrate that, when age normalization is used, the performance of our face recognition system improves.
