Similar Documents
20 similar documents found.
1.
Automated human facial image de-identification is a much-needed technology for privacy-preserving social media and intelligent surveillance applications. We propose a novel utility-preserving facial image de-identification method that subtly alters the appearance of facial images to achieve facial anonymity by creating "averaged identity faces". This approach preserves the utility of the facial images while achieving the goal of privacy protection. We explore a decomposition of an Active Appearance Model (AAM) face space using subspace learning, where the loss is modeled as the difference between two trace-ratio terms that respectively measure the level of discriminativeness on identity and on utility. The face space is thus decomposed into subspaces that are respectively sensitive to face identity and face utility. A k-anonymity de-identification procedure is then applied to the subspace most relevant to face identity. To verify the performance of the proposed facial image de-identification approach, we evaluate the created "averaged faces" on the extended Cohn-Kanade dataset (CK+). The experimental results show that the proposed approach preserves the utility of the original images while defeating face identity recognition.
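A minimal sketch of the k-anonymity averaging idea (not the authors' implementation): in the identity-sensitive subspace, each coordinate vector is replaced by the mean of its k nearest neighbours, so every de-identified face corresponds to at least k originals. The `identity_coords` array, its dimensionality, and the k value are illustrative assumptions.

```python
import numpy as np

def k_same_average(identity_coords: np.ndarray, k: int = 5) -> np.ndarray:
    """Replace each identity-subspace vector by the mean of its k nearest
    neighbours (itself included), a minimal k-anonymity-style averaging."""
    n = identity_coords.shape[0]
    # Pairwise squared Euclidean distances between identity vectors.
    d2 = ((identity_coords[:, None, :] - identity_coords[None, :, :]) ** 2).sum(-1)
    anonymized = np.empty_like(identity_coords)
    for i in range(n):
        nearest = np.argsort(d2[i])[:k]          # indices of the k closest faces
        anonymized[i] = identity_coords[nearest].mean(axis=0)
    return anonymized

# Toy example: 20 faces with 8-dimensional identity coordinates.
rng = np.random.default_rng(0)
coords = rng.normal(size=(20, 8))
print(k_same_average(coords, k=5).shape)         # (20, 8)
```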

2.
The mass availability of mobile devices equipped with cameras has led to increased public privacy concerns in recent years. Face de-identification is a necessary first step towards anonymity preservation, and can be trivially solved by blurring or concealing detected faces. However, such naive privacy protection methods are both ineffective and unsatisfying, producing visually unpleasant results. In this paper, we tackle face de-identification using deep autoencoders, fine-tuning the encoder to perform the de-identification. We present various methods to fine-tune the encoder in both a supervised and an unsupervised fashion so as to preserve facial attributes while generating new faces that are both visually and quantitatively different from the original ones. Furthermore, we quantify the realism and naturalness of the resulting faces by introducing a diversity metric that measures the distinctiveness of the new faces. Experimental results show that the proposed methods can generate new faces with different person identity labels while maintaining the face-like nature and diversity of the input face images.
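The fine-tuning objective can be illustrated with a hedged PyTorch sketch, assuming a toy autoencoder and an identity-embedding head of our own invention (`TinyAutoencoder`, `id_embed`, and the loss weights are not from the paper): the loss keeps the output close to the input in pixel space while pushing its identity embedding away from the original, past a margin.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Toy convolutional autoencoder standing in for the paper's deep autoencoder."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def deid_loss(output, target, id_embed, margin=1.0, alpha=0.5):
    """Attribute/utility term: stay close to the input in pixel space.
    De-identification term: push the identity embedding of the output
    away from that of the input, up to a margin."""
    recon = nn.functional.mse_loss(output, target)
    id_dist = nn.functional.mse_loss(id_embed(output), id_embed(target))
    return recon + alpha * torch.clamp(margin - id_dist, min=0.0)

model = TinyAutoencoder()
# Placeholder identity embedder; a pretrained face-recognition net would be used in practice.
id_embed = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(3 * 64, 32))
x = torch.rand(4, 3, 64, 64)                      # a batch of face crops
loss = deid_loss(model(x), x, id_embed)
loss.backward()                                   # fine-tune only the encoder in practice
```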

3.
Hallucinating a photo-realistic frontal face image from a low-resolution (LR) non-frontal face image is beneficial for a series of face-related applications. However, previous efforts either focus on super-resolving high-resolution (HR) face images from nearly frontal LR counterparts or on frontalizing non-frontal HR faces. It is necessary to address these challenges jointly for real-world face images in unconstrained environments. In this paper, we develop a novel Cross-view Information Interaction and Feedback Network (CVIFNet), which handles non-frontal LR face image super-resolution (SR) and frontalization simultaneously in a unified framework and lets the two tasks interact with each other to further improve their performance. Specifically, the CVIFNet is composed of two feedback sub-networks for frontal and profile face images. Since reliable correspondence between frontal and non-frontal face images can be crucial and contributes to face hallucination in a different manner, we design a cross-view information interaction module (CVIM) that aggregates the HR representations of different views produced by the SR and frontalization processes to generate finer face hallucination results. Besides, since 3D-rendered facial priors contain rich hierarchical features, such as low-level (e.g., sharp edges and illumination) and perception-level (e.g., identity) information, we design an identity-preserving consistency loss based on 3D-rendered facial priors, which ensures that the high-frequency details of the frontal face hallucination result are consistent with the profile. Extensive experiments demonstrate the effectiveness and superiority of CVIFNet.

4.
Conventional face image generation using generative adversarial networks (GANs) is limited by the quality of the generated images, since the generator and discriminator use the same backpropagation network. In this paper, we discuss algorithms that can improve the quality of the generated images, that is, high-quality face image generation. To stabilize the network, we replace the MLP with a convolutional neural network (CNN) and remove the pooling layers. We conduct comprehensive experiments on the LFW and CelebA datasets, and the experimental results show the effectiveness of the proposed method.
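The two architectural changes the abstract names (a CNN backbone and no pooling layers) are the core of a DCGAN-style generator. A minimal sketch under those assumptions is shown below; the layer widths and 64x64 output size are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: strided transposed convolutions replace the MLP
    and pooling layers, mapping a noise vector to a 64x64 face image."""
    def __init__(self, z_dim=100, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0), nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1), nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, 3, 4, 2, 1), nn.Tanh())

    def forward(self, z):
        # Reshape the noise vector into a 1x1 spatial map and upsample to 64x64.
        return self.net(z.view(z.size(0), -1, 1, 1))

z = torch.randn(8, 100)
print(Generator()(z).shape)   # torch.Size([8, 3, 64, 64])
```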

5.
With the development of generative adversarial network (GAN) technology, the generation of images with GANs has evolved dramatically, and distinguishing these GAN-generated images is challenging for the human eye. Moreover, GAN-generated fake images may enable behavior that endangers society and raises serious security concerns. Research on detecting GAN-generated images is still at an exploratory stage and many challenges remain. Motivated by this problem, we propose a novel GAN-image detection method based on color gradient analysis. We consider the differences in color information between real images and GAN-generated images in multiple color spaces, and combine the gradient information and the directional texture information of the generated images to extract gradient-texture features for GAN-generated image detection. Experimental results on the PGGAN and StyleGAN2 datasets demonstrate that the proposed method achieves good performance and is robust to various perturbation attacks.
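A hedged sketch of the kind of hand-crafted feature the abstract describes: per-channel Sobel gradient-magnitude statistics and a coarse orientation histogram, computed in several color spaces and concatenated into one vector for a downstream classifier. The choice of color spaces, statistics, and bin count is an illustrative assumption, not the paper's exact feature.

```python
import cv2
import numpy as np

def color_gradient_features(bgr: np.ndarray) -> np.ndarray:
    """Concatenate per-channel gradient statistics computed in several
    color spaces, a simple stand-in for gradient-texture features."""
    spaces = {
        "BGR": bgr,
        "HSV": cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV),
        "YCrCb": cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb),
    }
    feats = []
    for img in spaces.values():
        for channel in cv2.split(img):
            gx = cv2.Sobel(channel, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(channel, cv2.CV_32F, 0, 1, ksize=3)
            mag = cv2.magnitude(gx, gy)
            ang = cv2.phase(gx, gy, angleInDegrees=True)
            # Magnitude summary plus a magnitude-weighted 8-bin orientation
            # histogram capture the directional texture of the channel.
            hist, _ = np.histogram(ang, bins=8, range=(0, 360), weights=mag)
            feats.extend([mag.mean(), mag.std(), *(hist / (hist.sum() + 1e-8))])
    return np.asarray(feats, dtype=np.float32)

img = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)   # stand-in image
print(color_gradient_features(img).shape)                    # (90,) = 3 spaces x 3 channels x 10
```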

6.
陈娜 《激光与红外》2022,52(6):923-930
Reconstructing a 3D face model from a single face image is a highly challenging research direction in both computer graphics and visible-light imaging, and is of great importance for practical applications such as face recognition, face imaging, and face animation. To address the high complexity, heavy computation, local optima, and poor initialization of existing algorithms, this paper proposes an automatic single-image-to-3D-face reconstruction algorithm based on a deep convolutional neural network. The algorithm first extracts dense information from the 2D face image based on a 3D transformation model, then builds a deep convolutional network architecture and designs an overall loss function to directly learn the mapping from 2D face image pixels to 3D coordinates, thereby automatically constructing the 3D face model. Comparative and simulation experiments show that the algorithm achieves a lower normalized mean error in 3D face reconstruction and requires only a single 2D face image to automatically reconstruct a 3D face model. The generated 3D face models are robust and accurate, fully preserve expression details, handle faces in different poses well, and can be freely viewed from any angle in 3D space, meeting the needs of more practical applications.
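The pixel-to-3D-coordinate mapping can be illustrated with a toy encoder-decoder that regresses a dense 3-channel (x, y, z) map from a face image; the architecture, input size, and L1 loss below are illustrative assumptions rather than the paper's network.

```python
import torch
import torch.nn as nn

class Pixel2Coord(nn.Module):
    """Toy encoder-decoder that regresses a 3-channel map of (x, y, z)
    coordinates for every pixel of the input face image."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))   # (x, y, z) per pixel

    def forward(self, img):
        return self.decode(self.encode(img))

model = Pixel2Coord()
img = torch.rand(2, 3, 128, 128)                       # 2D face images
gt_coords = torch.rand(2, 3, 128, 128)                 # dense 3D ground-truth coordinates
loss = nn.functional.l1_loss(model(img), gt_coords)    # overall regression loss (illustrative)
loss.backward()
```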

7.
With the prevalence of face authentication applications, preventing malicious attacks that use fake faces such as photos or videos, i.e., face anti-spoofing, has attracted much attention recently. However, while an increasing number of face anti-spoofing methods based on 2D RGB cameras have been reported, most of them cannot handle the variety of attack methods. In this paper we propose a robust representation that jointly models 2D texture information and depth information for face anti-spoofing. The texture feature is learned from 2D facial image regions using a convolutional neural network (CNN), and the depth representation is extracted from images captured by a Kinect. A face in front of the camera is classified as live only if it is categorized as live by both cues. We collected a face anti-spoofing dataset with depth information and report extensive experimental results that validate the robustness of the proposed method.
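The fusion rule itself is simple and can be sketched directly (the thresholds and score sources below are placeholders, not the paper's classifiers): a face is accepted as live only when both the texture cue and the depth cue agree.

```python
def is_live(texture_score: float, depth_score: float,
            t_thresh: float = 0.5, d_thresh: float = 0.5) -> bool:
    """A face is accepted as live only when both the 2D texture classifier
    and the depth classifier vote 'live'; a single failing cue rejects it."""
    return texture_score >= t_thresh and depth_score >= d_thresh

# Example: a printed-photo attack may look plausible in texture but flat in depth.
print(is_live(texture_score=0.8, depth_score=0.1))   # False -> rejected as spoof
print(is_live(texture_score=0.8, depth_score=0.9))   # True  -> accepted as live
```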

8.
This paper presents a method that utilizes the color, local symmetry, and geometry information of the human face through several models. The algorithm first detects the most likely face regions, or ROIs (regions of interest), in the image using a face color model and a face outline model, producing a face color similarity map. It then performs local symmetry detection within these ROIs to obtain a local symmetry similarity map. The two similarity maps are fused to obtain potential facial feature points. Finally, similarity matching between the fusion map and a face geometry model under an affine transformation is performed to identify faces. The output is the set of detected faces with confidence values. The experimental results demonstrate the method's validity and its robustness in identifying faces under certain variations.
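One ingredient of this pipeline, the face color similarity map, can be sketched with a simple Gaussian skin-color model in the Cr/Cb chrominance plane; the model parameters and threshold below are rough illustrative values, not the paper's fitted face color model.

```python
import cv2
import numpy as np

def skin_color_similarity(bgr: np.ndarray,
                          mean=(150.0, 110.0), std=(12.0, 10.0)) -> np.ndarray:
    """Gaussian skin-color similarity in the Cr/Cb chrominance plane: each pixel
    gets a value in [0, 1] describing how well it matches a generic skin model.
    The mean/std values are rough illustrative numbers, not fitted ones."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    sim = np.exp(-0.5 * (((cr - mean[0]) / std[0]) ** 2 + ((cb - mean[1]) / std[1]) ** 2))
    return sim                                   # candidate face ROIs = high-similarity blobs

img = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
roi_mask = skin_color_similarity(img) > 0.5      # threshold to obtain ROI candidates
print(roi_mask.shape, roi_mask.dtype)            # (120, 160) bool
```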

9.
Detection and segmentation of face locations in complex backgrounds
田原  梁德群  吴更石 《电子学报》1998,26(10):91-95
The human face is a complex pattern, and automatically locating and segmenting it in an image is a prerequisite for recognition. This paper proposes a new algorithm for locating and segmenting faces of unknown size, position, and number against complex backgrounds. It uses a knowledge-based pyramid approach consisting of three main stages: in the two higher levels, based on multi-scale ridge edge detection, a B-spline dyadic wavelet transform is used to detect minima (corresponding to eyes) and maxima (corresponding to faces) at different resolutions, and candidates are preliminarily screened using scale, containment, and symmetry relations; finally, the characteristics of the human eyes are used to determine the face positions.

10.
高涛  薛国伟  倪策  冯兴乐 《电视技术》2016,40(4):115-120
To effectively extract features from a single training face sample, a new local face feature descriptor is proposed. It improves on the limited directional description of local binary patterns and adds a description of the magnitude of intensity variation between pixels; it is called Local Comprehensive Patterns (LCP). The face image is first divided into blocks and the improved LCP operator is applied to each sub-block. Then, considering that the features of each sub-block contribute unequally to the whole face image, a Contribution Map (CM) is proposed. Finally, the improved LCP descriptions of the sub-blocks are adaptively weighted and fused according to the contribution map to form the final face descriptor. Effectiveness tests on the ORL and Yale B databases and comparisons with several existing algorithms show that the proposed algorithm performs well for face recognition in unconstrained environments.
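The blockwise weighted-fusion scheme can be sketched with a plain 8-neighbour LBP standing in for the improved LCP operator; the grid size and the uniform placeholder weights (which the contribution map would replace) are illustrative assumptions.

```python
import numpy as np

def lbp_histogram(block: np.ndarray) -> np.ndarray:
    """Plain 8-neighbour LBP histogram of one image block (a simplified
    stand-in for the paper's improved LCP operator)."""
    c = block[1:-1, 1:-1]
    neighbours = [block[0:-2, 0:-2], block[0:-2, 1:-1], block[0:-2, 2:],
                  block[1:-1, 2:],   block[2:, 2:],     block[2:, 1:-1],
                  block[2:, 0:-2],   block[1:-1, 0:-2]]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(np.int32) << bit   # one bit per neighbour comparison
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def weighted_block_descriptor(face: np.ndarray, grid=(4, 4), weights=None) -> np.ndarray:
    """Split the face into blocks, describe each with an LBP histogram, and fuse
    them with per-block weights (the role played by the contribution map)."""
    h, w = face.shape
    bh, bw = h // grid[0], w // grid[1]
    blocks = [face[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
              for i in range(grid[0]) for j in range(grid[1])]
    if weights is None:
        weights = np.ones(len(blocks))              # uniform weights as a placeholder
    return np.concatenate([w_ * lbp_histogram(b) for w_, b in zip(weights, blocks)])

face = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(weighted_block_descriptor(face).shape)        # (4096,) = 16 blocks x 256 bins
```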

11.
黄梦涛  高娜  刘宝 《红外技术》2022,44(1):41-46
The original generative adversarial network (GAN) is prone to vanishing gradients and mode collapse during training, and its deblurring performance is poor. This paper therefore proposes a dual discriminator weighted generative adversarial network (D2WGAN)...

12.
In the security field, faces in images captured by outdoor surveillance cameras are usually blurry, occluded, small, and in diverse poses, being affected by external factors such as the camera pose and range, weather conditions, etc. This can be described as a hard face detection problem in natural images. To solve this problem, we propose a deep convolutional neural network named the Feature Hierarchy Encoder-Decoder Network (FHEDN). It is motivated by two observations concerning contextual semantic information and the mechanism of multi-scale face detection. The proposed network is a single-stage, scale-variant architecture composed of encoder and decoder subnetworks. Based on the assumption that the contextual semantic information around a face helps detect it, we introduce a residual mechanism to fuse context-prior information into the face features, and formulate a learning chain to train each encoder-decoder pair. In addition, we discuss some important implementation details, such as the distribution of the training dataset, the scale of the feature hierarchy, and the anchor box sizes, which have some impact on the detection performance of the final network. Compared with some state-of-the-art algorithms, our method achieves promising performance on the popular benchmarks AFW, PASCAL FACE, FDDB, and WIDER FACE. Consequently, the proposed approach can be efficiently implemented and routinely applied to detect faces with severe occlusion and arbitrary pose variations in unconstrained scenes. Our code and results are available at https://github.com/zzxcoder/EvaluationFHEDN.

13.
Improving elastic-matching face recognition using component information
A face recognition algorithm that combines component information with holistic information is proposed. Component localization is used to find facial key points (such as the pupils and the nose tip), and these key points are then used to constrain the search range of elastic matching. Experimental results show that this method greatly improves recognition speed while keeping the recognition rate unchanged or even slightly improving it.

14.
Face completion is a domain-specific image inpainting problem. Most existing face completion methods fail to synthesize fine-grained facial structures because they treat face images no differently from other scene images. To handle this problem, we propose an end-to-end deep generative model that makes full use of facial prior knowledge, including 2D facial geometry priors from facial parsing maps and landmarks as well as a 3D depth prior. We adopt a coarse-to-fine inpainting framework in which the 2D facial geometry priors extracted from the coarse faces guide the refinement network towards better planar facial textures and structures. Moreover, a novel 3D-regularized reconstruction loss is proposed to enhance the stereo perception of the generated faces. Experimental results on two large-scale benchmarks, CelebA and CelebA-HQ, show that our method significantly outperforms state-of-the-art methods in generating more visually realistic and pleasing faces. Code is available at .

15.
Using wavelets as a tool, this paper proposes a method for detecting roof edges in images at multiple scales and derives formulas for computing the maximum curvature and its direction at such edges. Combined with the characteristics of the human face, the method is used to locate faces and their components (such as eyes) in images with relatively complex backgrounds and to perform preliminary segmentation. The method achieved good results in experiments.

16.
Human-computer interaction is the way in which humans and machines exchange information. With the rapid development of deep learning technology, human-computer interaction technology has also made corresponding breakthroughs. In the past, human-computer interaction mostly relied on hardware devices: through the coordinated work of multiple sensors, people and machines could exchange information. However, as the theory and technology mature, the algorithms available for human-computer interaction are also becoming richer. The popularity of convolutional neural networks has made image processing problems easier to solve, so real-time, intelligent human-computer interaction can be realized through image processing. The main idea of this paper is to capture face images and video information in real time and process the facial image information. We locate feature points on the face image, perform expression recognition based on the located feature points, and at the same time track the identified eye region. The facial feature points, together with the corresponding expressions and movements, represent the user's intent, so we can analyze that intent by locating the facial feature regions. We define corresponding action information for specific facial features, extract the user's information according to these features, and perform human-computer interaction accordingly.

17.
颜贝  张建林 《半导体光电》2019,40(6):896-901
Data scarcity is a major challenge for deep learning. Exploiting the ability of generative adversarial networks (GANs) to generate new image data from semantics, a spectrally constrained GAN method for image data generation is proposed. To address the tendency of convolutional GAN models to collapse and fail to converge, the method starts from the spectral norm of the parameter matrix W of each network layer and introduces spectral-norm normalization of the network parameter matrices, constraining the network gradients to a fixed range and slowing the convergence of the discriminator, thereby improving GAN training stability. Experiments show that, compared with image samples generated by the original GAN, DCGAN, and WGAN, the data generated by this method yields higher accuracy in an image recognition network and can effectively augment small sample sets.
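The stabilization mechanism described here, bounding the spectral norm of each layer's weight matrix W, is available off the shelf in PyTorch as `torch.nn.utils.spectral_norm`; the sketch below applies it to a toy discriminator whose architecture is an illustrative assumption, not the paper's network.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNDiscriminator(nn.Module):
    """Discriminator whose convolutional weight matrices are spectrally
    normalized, bounding each layer's Lipschitz constant and hence the
    gradients fed back to the generator."""
    def __init__(self, base=64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(3, base, 4, 2, 1)), nn.LeakyReLU(0.2, True),
            spectral_norm(nn.Conv2d(base, base * 2, 4, 2, 1)), nn.LeakyReLU(0.2, True),
            spectral_norm(nn.Conv2d(base * 2, base * 4, 4, 2, 1)), nn.LeakyReLU(0.2, True),
            spectral_norm(nn.Conv2d(base * 4, 1, 8)))    # 64x64 input -> single logit

    def forward(self, x):
        return self.net(x).view(x.size(0))

d = SNDiscriminator()
print(d(torch.rand(4, 3, 64, 64)).shape)   # torch.Size([4])
```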

18.
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
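The quantity driving the split-and-merge decisions, the mutual information of the region/intensity-bin channel, can be computed directly from a joint count table; the toy counts below are illustrative.

```python
import numpy as np

def mutual_information(joint_counts: np.ndarray) -> float:
    """Mutual information I(R; B) of the channel defined by a joint count table
    whose rows are regions and whose columns are intensity-histogram bins."""
    p = joint_counts / joint_counts.sum()              # joint distribution p(r, b)
    pr = p.sum(axis=1, keepdims=True)                  # marginal over regions
    pb = p.sum(axis=0, keepdims=True)                  # marginal over bins
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pr @ pb)[nz])).sum())

# Toy channel: 3 regions x 4 intensity bins (pixel counts).
counts = np.array([[40,  5,  0,  0],
                   [ 5, 30, 10,  0],
                   [ 0,  0, 15, 45]], dtype=float)
print(round(mutual_information(counts), 3))
# A split is kept when it increases I(R; B); a merge is chosen to minimize the loss.
```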

19.
We evaluated semiautomatic, voxel-based registration methods for a new application: the assessment and optimization of interventional magnetic resonance imaging (I-MRI) guided thermal ablation of liver cancer. The abdominal images acquired on a low-field-strength, open I-MRI system contain noise, motion artifacts, and tissue deformation. Dissimilar images can be obtained as a result of different MRI acquisition techniques and/or changes induced by treatments. These features challenge a registration algorithm. We evaluated one manual and four automated methods on clinical images acquired before treatment, immediately following treatment, and during several follow-up studies. Images were T2-weighted, T1-weighted Gd-DTPA enhanced, T1-weighted, and short-inversion-time inversion recovery (STIR). Registration accuracy was estimated from distances between anatomical landmarks. Mutual information gave better results than entropy, correlation, and variance of the gray-scale ratio. Preprocessing steps such as masking, and an initialization method that used two-dimensional (2-D) registration to obtain initial transformation estimates, were crucial. With proper preprocessing, automatic registration succeeded on all image pairs of reasonable image quality. A registration accuracy of approximately 3 mm was achieved with both the manual and the mutual information methods. Despite motion and deformation in the liver, mutual information registration is sufficiently accurate and robust for useful application in I-MRI thermal ablation therapy.

20.
A face illumination compensation algorithm based on 2D image information is proposed. Under the assumption that the face shape is approximately spherical, gray-level statistics of face images under uniform illumination are first estimated along the direction of facial symmetry, and a standard illumination model is built by fitting these statistics. Then the approximate illumination direction of a face image under non-uniform illumination is estimated, and gray-level statistics are analyzed along the direction perpendicular to the illumination. Finally, using the standard illumination model together with linear and nonlinear transformations, the face image under non-uniform illumination is adjusted to the standard state. Results on the Yale B face database show that the algorithm can handle illumination compensation under large-angle oblique lighting and extremely dark lighting, and that it is simple and computationally inexpensive.
