Similar Documents
20 similar documents found.
1.
Race classification is a long-standing challenge in face image analysis. Identifying salient facial features is important for avoiding processing of all face parts, and face segmentation strongly benefits several face-analysis tasks, including ethnicity and race classification. We propose a race-classification algorithm built on a prior face-segmentation framework. A deep convolutional neural network (DCNN) was used to construct a face-segmentation model; to train it, we labeled face images according to seven classes: nose, skin, hair, eyes, brows, background, and mouth. The DCNN model developed in the first phase was used to produce segmentation results, and a probabilistic classification method creates a probability map (PM) for each semantic class. We identified five of the seven facial features as salient for race classification. Features are extracted from the PMs of these five classes, and a new DCNN-based model is trained on them. We assessed the proposed race-classification method on four standard face datasets, reporting superior results compared with previous studies.
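The step from segmentation output to race-classification input might be sketched as follows; the seven class labels come from the abstract, but the softmax normalization and the use of the mean probability per kept class as the feature are illustrative assumptions, not the paper's exact pipeline.

```python
import math

CLASSES = ["nose", "skin", "hair", "eyes", "brows", "background", "mouth"]

def softmax(scores):
    """Convert one pixel's raw class scores into probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def probability_maps(logit_image):
    """logit_image: H x W grid of per-class score lists -> one map per class."""
    h, w = len(logit_image), len(logit_image[0])
    maps = {c: [[0.0] * w for _ in range(h)] for c in CLASSES}
    for i in range(h):
        for j in range(w):
            probs = softmax(logit_image[i][j])
            for c, p in zip(CLASSES, probs):
                maps[c][i][j] = p
    return maps

def pm_features(maps, keep=("nose", "skin", "hair", "eyes", "mouth")):
    """Reduce the kept probability maps to a fixed-length feature vector
    (here simply the mean probability of each kept class)."""
    feats = []
    for c in keep:
        pixels = [p for row in maps[c] for p in row]
        feats.append(sum(pixels) / len(pixels))
    return feats
```

The five kept classes in `pm_features` are a placeholder; the paper determines the salient subset empirically.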

2.
田卓  佘青山  甘海涛  孟明 《计量学报》2019,40(4):576-582
To improve the recognition of facial information against complex backgrounds, a deep convolutional neural network (DCNN) method is proposed that jointly optimizes the tasks of facial landmark localization and head-pose estimation. First, face regions are detected in video frames. A deep convolutional network is then designed to optimize the two tasks jointly, simultaneously regressing facial landmark coordinates and pose-angle values, which are fused to generate the corresponding human-computer interaction information. Finally, the method is tested on public datasets and real-world data and compared with existing approaches. Experimental results show good performance on both landmark localization and pose estimation, as well as good accuracy and robustness in human-computer interaction under complex conditions such as illumination changes, expression variation, and partial occlusion, with an average processing speed of about 16 frames/s, demonstrating practical applicability.
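Joint optimization of the two regression heads can be expressed as a single weighted objective; the abstract does not give the loss, so the plain squared-error terms and the weighting factor `lam` below are assumptions for illustration.

```python
def mse(pred, target):
    """Mean squared error between two equal-length vectors."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def joint_loss(pred_landmarks, true_landmarks, pred_angles, true_angles, lam=1.0):
    """Multi-task objective: landmark regression plus lam * pose-angle regression.
    Minimizing this single scalar trains both heads of the shared network."""
    return mse(pred_landmarks, true_landmarks) + lam * mse(pred_angles, true_angles)
```

Sharing a backbone while summing the two task losses is the standard way such task cooperation is implemented; the tasks regularize each other through the shared features.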

3.
With the development of Deep Convolutional Neural Networks (DCNNs), the features extracted for image-recognition tasks have shifted from low-level features to the high-level semantic features of DCNNs. Previous studies have shown that the deeper the network, the more abstract the features. However, the recognition ability of deep features is limited when training samples are insufficient. To address this problem, this paper derives an improved Deep Fusion Convolutional Neural Network (DF-Net) that makes full use of the differences and complementarities that arise during network learning and enhances feature expression under limited datasets. Specifically, DF-Net organizes two identical subnets to extract features from the input image in parallel; a well-designed fusion module is then introduced into the deep layers of DF-Net to fuse the subnets' features at multiple scales. More complex mappings are thus created, and more abundant and accurate fusion features can be extracted to improve recognition accuracy. A corresponding training strategy is also proposed to speed up convergence and reduce the computational overhead of training. Finally, DF-Nets based on the well-known ResNet, DenseNet, and MobileNetV2 are evaluated on CIFAR100, Stanford Dogs, and UECFOOD-100. Theoretical analysis and experimental results demonstrate that DF-Net enhances the performance of DCNNs and increases image-recognition accuracy.
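The multi-scale fusion of the two subnets' features might be sketched, in much-simplified form, as follows; the choice of average pooling to form scales and elementwise averaging as the fusion operation are assumptions standing in for the paper's learned fusion module.

```python
def avg_pool(vec, size):
    """Average-pool a 1-D feature vector into `size` equal chunks (a coarser scale)."""
    step = len(vec) // size
    return [sum(vec[i * step:(i + 1) * step]) / step for i in range(size)]

def fuse(feat_a, feat_b, scales=(4, 2)):
    """Fuse two subnets' feature vectors at multiple scales:
    elementwise average at each scale, then concatenate all scales."""
    fused = []
    for s in scales:
        a, b = avg_pool(feat_a, s), avg_pool(feat_b, s)
        fused.extend((x + y) / 2 for x, y in zip(a, b))
    return fused
```

The concatenated multi-scale vector is what a downstream classifier would consume; in the actual DF-Net the fusion is a trainable layer inside the network rather than a fixed average.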

4.
Song H  Lee S  Kim J  Sohn K 《Applied optics》2005,44(5):677-687
We describe a face recognition system based on two different three-dimensional (3D) sensors. We use 3D sensors to overcome the pose-variation problems that cannot be effectively solved in two-dimensional images. We acquire input data with a structured-light system and compare it with 3D faces obtained from a 3D laser scanner. Because the input data and the 3D faces differ in structure, we generate range images for both the probe and the stored faces. To estimate the head pose of the input data, we propose a novel error-compensated singular-value decomposition that geometrically estimates the rotation angle. Face recognition rates obtained with principal component analysis on range images of 35 people in various poses show promising results.
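The plain SVD-based rigid alignment (the Kabsch solution) that the paper's error-compensated variant builds on can be sketched as follows; this is the textbook baseline, not the paper's compensated method.

```python
import numpy as np

def estimate_rotation(src, dst):
    """Estimate the rotation R and translation t best aligning point set `src`
    to `dst` (dst ~ src @ R.T + t) via the standard SVD-based Kabsch solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Given matched 3D facial landmarks from the probe and gallery range data, the recovered `R` yields the head-pose angles directly.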

5.
Estimating Spatial Face Rotation Angles from Depth Data
A method for computing head pose from 3D facial depth data is proposed. Using the depth data and its one-to-one corresponding gray-level image, key facial feature points are located according to differential-geometry principles, a corresponding curvature algorithm, and the gray-level features of the face data, from which the three pose angles of the face in 3D space are computed. Experiments show that the method accurately estimates face rotation angles under pose variation, providing a basis for further face recognition and expression analysis.
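As a toy illustration of deriving pose angles from located 3D feature points, two of the three angles might be computed from the eye corners and nose tip as below; this specific geometry (roll from the eye line, yaw from the nose offset) is an assumption for illustration, not the paper's algorithm.

```python
import math

def head_pose_from_points(left_eye, right_eye, nose_tip):
    """Rough roll and yaw estimates (degrees) from three 3-D feature points
    (x right, y down, z away from the camera).  Roll: in-plane tilt of the
    eye line.  Yaw: horizontal offset of the nose tip from the eye midpoint
    against its depth offset.  Pitch would need a chin or forehead point
    and is omitted in this sketch."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.degrees(math.atan2(dy, dx))
    mid = [(l + r) / 2 for l, r in zip(left_eye, right_eye)]
    yaw = math.degrees(math.atan2(nose_tip[0] - mid[0], mid[2] - nose_tip[2]))
    return roll, yaw
```

With depth data, the z coordinates are measured rather than inferred, which is exactly what makes such angle computations feasible compared with a single 2D image.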

6.
Improvements in medical imaging technology have greatly contributed to early disease detection and diagnosis. However, the accuracy of an examination depends both on the quality of the images and on the physician's ability to interpret them. Output from computerized image analysis may facilitate diagnostic tasks and potentially improve the overall interpretation of images and the subsequent patient care. In this paper, Analysis, a modular software system designed to assist interpretation of medical images, is described in detail. Analysis allows texture and motion estimation of selected regions of interest (ROIs). Texture features can be estimated using first-order statistics, second-order statistics, Laws' texture energy, the neighborhood gray-tone difference matrix, gray-level difference statistics, and the fractal dimension. Motion can be estimated from temporal image sequences using block matching or optical flow. Image preprocessing, manual and automatic definition of ROIs, and dimensionality reduction and clustering using fuzzy c-means are also possible within Analysis. An important feature of Analysis is the possibility of online telecollaboration between health-care professionals under a secure framework. To demonstrate the applicability and usefulness of the system in clinical practice, Analysis was applied to B-mode ultrasound images of the carotid artery. Diagnostic tasks included automatic segmentation of the arterial wall in transverse sections, selection of wall and plaque ROIs in longitudinal sections, estimation of texture features in different image areas, motion analysis of tissue ROIs, and clustering of the extracted features. It is concluded that Analysis provides a useful platform for computerized analysis of medical images and support of diagnosis.
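The first-order statistics mentioned above are the simplest of the listed texture descriptors; a minimal sketch of computing them for an ROI (the particular set of four statistics is illustrative, not the system's exact feature list):

```python
import math

def first_order_stats(roi):
    """First-order texture statistics of an ROI given as a flat list of gray
    levels: mean, variance, skewness, and gray-level entropy in bits."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    std = math.sqrt(var)
    skew = 0.0 if std == 0 else sum((v - mean) ** 3 for v in roi) / (n * std ** 3)
    hist = {}
    for v in roi:                      # empirical gray-level histogram
        hist[v] = hist.get(v, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}
```

Second-order descriptors such as the co-occurrence-based ones additionally capture spatial relationships between gray levels, which first-order statistics ignore.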

7.
To meet the demand in the micro/nano and biomedical fields for high-precision, real-time extraction of target information from micro-vision images, a real-time segmentation technique for non-uniformly illuminated micro-vision images based on multiresolution thresholding is proposed. To address the difficulty of accurate threshold estimation and the poor real-time performance of traditional methods when segmenting non-uniformly illuminated micro-vision images, an objective function for estimating the gray-level intensity distribution is first established according to the sparse distribution of the gradient probability density of micro-vision images, and iteratively reweighted least squares is used to optimize the distribution estimate, thereby compensating for the non-uniform illumination. Then, building on the traditional two-dimensional Otsu method, the multiresolution analysis capability of the wavelet transform is used to reduce the computation required for threshold estimation and improve the real-time performance of segmentation. Experiments show that the method effectively balances segmentation accuracy against speed, is insensitive to illumination, and quickly yields accurate segmentation results.
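As a reference point for the thresholding step, the classic one-dimensional Otsu criterion (maximize between-class variance over the gray-level histogram) can be sketched as below; the paper's method extends this to the two-dimensional Otsu formulation and accelerates it with wavelets, which this sketch does not attempt.

```python
def otsu_threshold(pixels, levels=256):
    """Classic 1-D Otsu: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    weighted_total = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0       # pixel count of class 0 (levels <= t)
    sum0 = 0     # gray-level sum of class 0
    for t in range(levels - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = n - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (weighted_total - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t   # pixels <= best_t fall in class 0
```

The exhaustive scan over thresholds is what makes the 2-D variant expensive and motivates the multiresolution speedup described in the abstract.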

8.
We present a method for the segmentation of human leg bones and the extraction of functional parameters of the femur from MRI images. The novelty lies in the use of dynamic models that are adapted to the images of individual patients, both globally to a whole leg bone and locally to individual parts of a bone. Thresholding and region-growing procedures are applied to pre-process the images. For some parts of bones, for example the femoral head, we use a three-dimensional VRML-based (Virtual Reality Modelling Language) femur model as a reference in order to make the segmentation method more robust. Based on the segmentation and the 3D VRML model, we can extract the functional (biomechanical) femur parameters needed for gait analysis.

9.
This study presents a geometric model and a computational algorithm for the segmentation of ultrasound images. A partial differential equation (PDE)-based flow is designed to achieve a maximum-likelihood segmentation of the target in the scene. The flow is derived as the steepest descent of an energy functional that takes into account the probability density of the gray levels of the image as well as smoothness constraints. To model the gray-level behavior of ultrasound images, the classic Rayleigh probability distribution is used. The steady state of the flow yields a maximum-likelihood segmentation of the target. A finite-difference approximation of the flow is derived, and numerical experiments are provided. Results are presented on medical ultrasound images such as fetal echography and echocardiography.
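The Rayleigh modeling underlying the energy functional reduces, pointwise, to comparing log-likelihoods under two Rayleigh fits; a minimal sketch, leaving out the PDE flow and smoothness terms entirely:

```python
import math

def rayleigh_sigma2(samples):
    """Maximum-likelihood estimate of the Rayleigh scale: sigma^2 = E[x^2] / 2."""
    return sum(x * x for x in samples) / (2 * len(samples))

def rayleigh_loglik(x, sigma2):
    """Log-density of a Rayleigh(sigma) variable at gray level x > 0."""
    return math.log(x / sigma2) - x * x / (2 * sigma2)

def classify_pixel(x, sigma2_in, sigma2_out):
    """Assign a gray level to whichever region's Rayleigh model explains it better."""
    if rayleigh_loglik(x, sigma2_in) >= rayleigh_loglik(x, sigma2_out):
        return "inside"
    return "outside"
```

In the paper the same likelihood comparison enters the curve-evolution speed, so that the contour's steady state separates the two Rayleigh populations while remaining smooth.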

10.
To improve face-recognition accuracy, we present a simple near-infrared (NIR) and visible-light (VL) image fusion algorithm based on two-dimensional linear discriminant analysis (2DLDA). We first use two 2DLDA schemes to extract two classes of face discriminant features from each of the NIR and VL images separately. The two classes of features of each kind of image are then fused using the matching-score fusion method. Finally, a simple NIR and VL image fusion approach combines the scores of the NIR and VL images to obtain the classification result. Experimental results show that the proposed NIR and VL image fusion approach effectively improves the accuracy of face recognition.
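The final score-level fusion step might be sketched as follows; min-max normalization and a fixed weighted sum are common choices for matching-score fusion, but the abstract does not specify the exact rule, so treat both as assumptions.

```python
def minmax_normalize(scores):
    """Map raw matching scores to [0, 1] so the two modalities are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(nir_scores, vl_scores, w_nir=0.5):
    """Weighted-sum fusion of per-identity matching scores from the NIR and
    VL channels; the identity with the highest fused score is the decision."""
    nir = minmax_normalize(nir_scores)
    vl = minmax_normalize(vl_scores)
    fused = [w_nir * a + (1 - w_nir) * b for a, b in zip(nir, vl)]
    return fused.index(max(fused)), fused
```

Because NIR imaging is largely insensitive to ambient lighting while VL carries complementary texture, a disagreement between the channels is exactly where the weighted sum helps.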

11.
Currently, some photorealistic computer graphics are very similar to photographic images, and photorealistic computer-generated graphics can be passed off as photographs, causing serious security problems. The aim of this work is to use a deep neural network to distinguish photographic images (PI) from computer-generated graphics (CG). In existing approaches, image-feature classification is computationally intensive and fails to achieve real-time analysis. This paper presents an effective approach to automatically identifying PI and CG based on deep convolutional neural networks (DCNNs). Compared with some existing methods, the proposed method achieves real-time forensic analysis by deepening the network structure. Experimental results show that the approach effectively identifies PI and CG with an average detection accuracy of 98%.

12.
徐珩  刘学平 《包装工程》2019,40(11):188-193
Objective: To strengthen the robustness of character registration against variations in character pose, and the adaptivity of print-quality inspection accuracy and defect-type analysis to different character products, a print-quality inspection method based on multi-object matching and fused character features is proposed. Methods: A template is constructed from multiple images of qualified character samples. Multi-object matching is used to register the multiple characters under inspection, eliminating the influence of character-pose variation on registration. Pixel-wise comparison then inspects the quality of the character regions. Using gray-level threshold segmentation and Sobel edge detection, the character region is divided into three local feature regions to be inspected: edges, foreground, and background. Salient character features (edge completeness, foreground area and gray level, background area and gray level) are then extracted, and an adaptive acceptance range for each feature is trained from multiple character samples. These features are combined into a fused character feature used to analyze defect types. Results: Test data show that, for character products of different kinds and different precision requirements, the method judges character quality with 100% accuracy and classifies defect types with accuracy above 84.2%. Conclusion: The proposed character-quality inspection method is robust and adaptive, and has high application value in industries such as packaging and printing.
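The Sobel edge detection used to isolate the edge region can be sketched as a direct 3x3 convolution; this is the standard operator, with the common |Gx| + |Gy| magnitude approximation and borders left at zero for simplicity.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude |Gx| + |Gy| on the interior pixels of a 2-D
    gray-level image (list of rows); border pixels are left as 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(SOBEL_Y[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = abs(gx) + abs(gy)
    return out
```

Thresholding this magnitude map separates the edge region from the foreground and background regions that the inspection method treats individually.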

13.
A hierarchical face recognition method that matches two-dimensional face images against three-dimensional face models is proposed, together with a fuzzy-mathematics-based algorithm for estimating face pose angles. Multi-pose 2D images are partitioned into pose subspaces, and principal component analysis (PCA) is used to form multi-pose eigenfaces. During recognition, the pose and fuzzy pose angle of the test image are first estimated; a first-layer PCA-based recognition within the estimated pose subspace then yields candidate individuals. Virtual images are generated from the candidates' 3D models combined with the fuzzy pose angles, and correlation is used for second-layer recognition. Experimental results show that the method is robust to pose variation.

14.
Face recognition is the process of identifying and verifying faces. It has broad importance in security, healthcare, banking, criminal identification, payment, and advertising. In this paper, we review various techniques and challenges for face recognition. Illumination, pose variation, facial expressions, occlusion, and aging are among the key challenges to its success. Pre-processing, face detection, feature extraction, optimal feature selection, and classification are the primary steps in any face-recognition system, and this paper provides a detailed review of each. Feature-extraction techniques can be classified as appearance-based or geometry-based methods, and each may be local or global. Feature extraction is the most crucial stage for the success of a face-recognition system; deep learning methods, however, have freed the user from handcrafting features. In this article, we survey state-of-the-art methods of the last few decades, provide a comparative study of various feature-extraction methods, and describe the current challenges in the area.

15.
Face detection plays an essential role in many applications. In this paper, we propose an efficient and robust method for face detection on a 3D point cloud represented by a weighted graph. The method classifies graph vertices into skin and non-skin regions based on a data-mining predictive model. The saliency degree of each vertex is then computed to identify candidate face features. Finally, the non-skin regions representing the eyes, mouth, and eyebrows are matched against the salient regions by detecting collisions between the polytopes representing these two kinds of region. The method extracts faces in situations involving pose variation and changes of expression. Its robustness is shown through various experimental results. Moreover, we study the stability of the method under noise and show that it handles 2D images as well.

16.
The COVID-19 pandemic poses a serious additional public-health threat because of little or no pre-existing human immunity, and a system that identifies COVID-19 in its early stages could save millions of lives. This study applied support vector machine (SVM), k-nearest neighbor (K-NN) and deep convolutional neural network (CNN) algorithms to classify and detect COVID-19 from chest X-ray radiographs. To test the proposed system, chest X-ray radiographs and CT images were collected from standard databases, comprising 95 normal images, 140 COVID-19 images and 10 SARS images. Two scenarios were considered for predicting COVID-19. In the first, a Gaussian filter was applied to remove noise from the chest X-ray images, and the adaptive region-growing technique was used to segment the region of interest. After segmentation, a hybrid feature extraction composed of the 2D discrete wavelet transform (2D-DWT) and the gray-level co-occurrence matrix was used to extract the features significant for detecting COVID-19; these features were then classified with SVM and K-NN. In the second scenario, a CNN transfer model (ResNet 50) was used to detect COVID-19. The system was examined through multiclass statistical analysis, whose empirical results reached 97.14%, 99.34%, 99.26%, 99.26% and 99.40% for accuracy, specificity, sensitivity, recall and AUC, respectively. The CNN model thus showed significant success, achieving optimal accuracy, effectiveness and robustness for detecting COVID-19.
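The gray-level co-occurrence matrix (GLCM) in the first scenario can be computed directly from pixel pairs at a fixed offset; a minimal sketch with one derived Haralick feature (the offset, quantization, and choice of contrast as the example feature are illustrative):

```python
def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix of a quantized 2-D image
    for the pixel-pair offset (dx, dy)."""
    h, w = len(img), len(img[0])
    mat = [[0.0] * levels for _ in range(levels)]
    count = 0
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                mat[img[i][j]][img[i2][j2]] += 1
                count += 1
    for r in range(levels):
        for c in range(levels):
            mat[r][c] /= count
    return mat

def glcm_contrast(mat):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(mat) for j, p in enumerate(row))
```

Features like contrast, energy, and homogeneity derived from this matrix, concatenated with 2D-DWT sub-band statistics, would form the hybrid feature vector fed to the SVM and K-NN classifiers.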

17.
A 3D model-based, pose-invariant face recognition method that can recognise a human face from multiple views is proposed. First, pose estimation and 3D face-model adaptation are achieved by means of a three-layer linear iterative process. Frontal-view face images are synthesised using the estimated 3D models and poses. The discriminant 'waveletfaces' are then extracted from these synthesised frontal-view images. Finally, a corresponding nearest-feature-space classifier is implemented. Experimental results show that the proposed method can recognise faces under variable poses with good accuracy.

18.
The problem of face recognition under different poses using a 3D face model is studied, and a "3D modeling, 2D recognition" algorithm is proposed: the 3D model is first projected in different directions, and 2D images of different poses are then matched against the projections in the corresponding directions to perform recognition. Data acquisition with a Minolta Vivid 910 and the method and process of creating the 3D model are described. Experimental results show that, for face recognition under different poses, the method is faster than 3D morphable-model approaches, and its recognition rate is far better than face-recognition methods that use frontal 2D images as templates.

19.
20.
Face recognition remains a significant research challenge, with problems such as misalignment, illumination changes, pose variation, occlusion, and expressions; providing a single solution to all of these problems at once is difficult. We address these issues by introducing a face-recognition model based on local tetra patterns and spatial pyramid matching. The input image is passed through an algorithm that extracts local features using spatial pyramid matching and max-pooling; the image is then recognized with a robust kernel-representation method applied to the extracted features. Qualitative and quantitative analyses of the proposed method are carried out on benchmark image datasets. Experimental results show that the proposed method outperforms state-of-the-art methods, in terms of standard performance-evaluation parameters, on the AR, ORL, LFW, and FERET face-recognition datasets.
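The spatial-pyramid-with-max-pooling step can be sketched as pooling a single 2-D response map over successively finer grids; the pyramid levels (1x1 and 2x2) are illustrative, and a real pipeline would pool one map per local-tetra-pattern code and concatenate them all.

```python
def spatial_pyramid_max(feat_map, levels=(1, 2)):
    """Max-pool a 2-D response map over a pyramid of grids (1x1, 2x2, ...)
    and concatenate the per-cell maxima into one fixed-length descriptor."""
    h, w = len(feat_map), len(feat_map[0])
    desc = []
    for g in levels:
        for gi in range(g):
            for gj in range(g):
                r0, r1 = gi * h // g, (gi + 1) * h // g
                c0, c1 = gj * w // g, (gj + 1) * w // g
                desc.append(max(feat_map[r][c]
                                for r in range(r0, r1) for c in range(c0, c1)))
    return desc
```

Pooling over the grid cells preserves coarse spatial layout (where on the face a pattern fires) while the max operation gives tolerance to small misalignments, which is exactly the robustness the abstract targets.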
