Similar Literature
20 similar documents found.
1.
Facial expressions carry most of the information on the human face and are essential for human–computer interaction. Developing robust algorithms that automatically recognize facial expressions with high recognition rates has been a challenge for the last 10 years. In this paper, we propose a novel feature selection procedure that recognizes basic facial expressions with high recognition rates by utilizing three-dimensional (3D) geometrical facial feature positions. The paper presents a system that classifies expressions into one of the six basic emotional categories: anger, disgust, fear, happiness, sadness, and surprise. Its contribution lies in selecting features for each expression independently, and the geometric facial features selected per expression yield high recognition rates. The novel feature selection procedure is entropy based and is applied independently to each of the six basic expressions. The system's performance is evaluated on the 3D facial expression database BU-3DFE. Experimental results show that the proposed method outperforms the latest methods reported in the literature.
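The selection criterion itself is not spelled out in the abstract, so the sketch below is only an illustrative, assumed reading of "entropy-based" feature selection: each 3D geometric feature is scored by the information gain it provides about the expression label, and the top-scoring features are kept for one expression. All names and the bin count are assumptions.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, bins=10):
    """H(labels) - H(labels | discretized feature)."""
    classes, y = np.unique(labels, return_inverse=True)
    h_y = entropy(np.bincount(y) / len(y))
    edges = np.histogram_bin_edges(feature, bins=bins)
    idx = np.digitize(feature, edges[1:-1])
    h_cond = 0.0
    for b in np.unique(idx):
        mask = idx == b
        h_cond += mask.mean() * entropy(
            np.bincount(y[mask], minlength=len(classes)) / mask.sum())
    return h_y - h_cond

def select_features(X, labels, k=20):
    """X: (n_samples, n_features) geometric features; returns top-k feature indices."""
    gains = np.array([information_gain(X[:, j], labels) for j in range(X.shape[1])])
    return np.argsort(gains)[::-1][:k]
```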

2.
3.
We present a novel method for caricature synthesis based on mean value coordinates (MVC). Our method can be applied to any single frontal face image to learn a specified caricature face pair for frontal and 3D caricature synthesis. The technique requires only one or a small number of exemplar pairs and a natural frontal face image training set, while the system can transfer the style of the exemplar pair across individuals. Further exaggeration can be performed in a controllable way. Our method is further applied to facial expression transfer, interpolation, and exaggeration, which are applications of expression editing. Additionally, we have extended our approach to 3D caricature synthesis based on the 3D version of MVC. Experiments demonstrate that the transferred expressions are credible and the resulting caricatures can be characterized and recognized.
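For readers unfamiliar with MVC, the following minimal sketch (with assumed array shapes) computes 2D mean value coordinates of a point with respect to a closed polygon; warping a face region then amounts to re-evaluating each interior point against the deformed boundary.

```python
import numpy as np

def mean_value_coords(p, poly):
    """p: (2,) interior point; poly: (n, 2) polygon vertices in order. Returns weights (n,)."""
    d = poly - p                          # vectors from p to each vertex
    r = np.linalg.norm(d, axis=1)
    n = len(poly)
    ang = np.zeros(n)
    for i in range(n):
        j = (i + 1) % n
        cos_a = np.dot(d[i], d[j]) / (r[i] * r[j])
        ang[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))   # angle of edge (i, i+1) seen from p
    w = np.zeros(n)
    for i in range(n):
        w[i] = (np.tan(ang[i - 1] / 2.0) + np.tan(ang[i] / 2.0)) / r[i]
    return w / w.sum()
```

Given deformed boundary vertices v'_i (for example an exaggerated caricature outline), the warped position of p is simply the weighted sum of the v'_i with these coordinates.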

4.
A Simultaneous Face Motion Tracking and Facial Expression Recognition Algorithm   Cited by: 1 (self-citations: 0, citations by others: 1)
於俊, 汪增福, 李睿. 《电子学报》(Acta Electronica Sinica), 2015, 43(2): 371-376
To address facial expression recognition in single-camera video with dynamically changing backgrounds, a simultaneous face motion tracking and expression recognition algorithm is proposed, and a real-time system is built on top of it. The system works as follows: first, 3D face motion is tracked within a particle filter framework that combines an online appearance model with a cylindrical geometric model; next, static expression information is extracted based on physiological knowledge; then, dynamic expression information is extracted using manifold learning; finally, during face motion tracking, the static and dynamic expression information are combined to recognize the expression. Experimental results show that the system performs well overall under large pose variations and rich expressions.
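The abstract gives no implementation details, so the skeleton below only illustrates the generic particle-filter loop used in such trackers; likelihood() stands in for the combination of the online appearance model and the cylindrical head model and is an assumed placeholder.

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_noise=0.05):
    """particles: (n, dim) pose hypotheses; weights: (n,) normalized weights."""
    n, dim = particles.shape
    # 1. resample according to the previous weights
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. propagate with a simple random-walk motion model
    particles = particles + np.random.normal(0.0, motion_noise, size=(n, dim))
    # 3. re-weight each pose hypothesis by how well the appearance model matches
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # the pose estimate is the weighted mean of the particles
    estimate = (weights[:, None] * particles).sum(axis=0)
    return particles, weights, estimate
```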

5.
The rapid development of 3D acquisition devices has greatly advanced research on 3D data technology, and results on 3D facial expression recognition based on 3D face data keep emerging. 3D facial expression recognition largely overcomes the pose and illumination variation problems of 2D recognition. This paper gives a systematic overview of 3D expression recognition, summarizing and analyzing its key techniques, namely expression feature extraction, expression coding and classification, and expression databases, and offers suggestions for further research. Current 3D facial expression recognition techniques largely meet requirements in terms of recognition rate, but their real-time performance needs further optimization. The material provides guidance for research in this field.

6.
A 3D facial reconstruction and expression modeling system which creates 3D video sequences of test subjects and facilitates interactive generation of novel facial expressions is described. Dynamic 3D video sequences are generated using computational binocular stereo matching with active illumination and are used for interactive expression modeling. An individual’s 3D video set is annotated with control points associated with face subregions. Dragging a control point updates texture and depth in only the associated subregion so that the user generates new composite expressions unseen in the original source video sequences. Such an interactive manipulation of dynamic 3D face reconstructions requires as little preparation on the test subject as possible. Dense depth data combined with video-based texture results in realistic and convincing facial animations, a feature lacking in conventional marker-based motion capture systems.

7.
People instinctively recognize facial expression as a key to nonverbal communication, which has been confirmed by many different research projects. A change in intensity or magnitude of even one specific facial expression can cause different interpretations. A systematic method for generating facial expression syntheses, while mimicking realistic facial expressions and intensities, is strongly needed in various applications. Although manually produced animation is typically of high quality, the process is slow and costly; it is therefore often unrealistic for low-polygon applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygon applications, by generating a customized face as well as by cloning facial expressions. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values in a customized 3D face and its embedded muscles, expressions for any target face(s) can be easily cloned by using the same set of parameters. Through an experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. Additionally, we discuss the method's overall usability and how it allows us to automatically adjust a customized face with embedded facial muscles while mimicking the user's facial configuration, expression, and intensity.

8.
吴晓军, 鞠光亮. 《电子学报》(Acta Electronica Sinica), 2016, 44(9): 2141-2147
A markerless facial expression capture method is proposed. First, a uniform face mesh model covering 85% of the facial features is generated from ASM (Active Shape Model) facial landmarks. Second, an expression capture method based on this face model is proposed: optical flow tracks the displacement of the feature points, aided by a particle filter that stabilizes the tracking results; the feature-point displacements drive the overall mesh change and serve as the initial value for mesh tracking, with a mesh deformation algorithm driving the mesh. Finally, the captured expression data drive different face models, using different driving methods according to the dimensionality of the model, to reproduce the expression as animation. Experimental results show that the proposed algorithm captures facial expressions well, and mapping the captured expressions onto both 2D cartoon faces and 3D virtual face models yields good animation results.
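As a rough illustration of the tracking stage, the sketch below uses OpenCV's pyramidal Lucas-Kanade optical flow to follow the ASM landmarks between frames; the particle-filter smoothing and the mesh deformation described in the abstract are omitted, and the parameter values are assumptions.

```python
import cv2
import numpy as np

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_landmarks(prev_gray, gray, points):
    """points: (n, 1, 2) float32 landmark positions in the previous frame."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None, **lk_params)
    good = status.ravel() == 1
    displacement = new_pts[good] - points[good]   # drives the mesh update
    return new_pts, displacement
```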

9.
We propose a novel approach for face tracking, resulting in a visual feedback loop: instead of trying to adapt a more or less realistic artificial face model to an individual, we construct from precise range data a specific texture and wireframe face model, whose realism allows the analysis and synthesis modules to visually cooperate in the image plane, by directly using 2D patterns synthesized by the face model. Unlike other feedback loops found in the literature, we do not explicitly handle the 3D complex geometric data of the face model, to make real-time manipulations possible. Our main contribution is a complete face tracking and pose estimation framework, with few assumptions about the face rigid motion (allowing large rotations out of the image plane), and without marks or makeup on the user's face. Our framework feeds the feature-tracking procedure with synthesized facial patterns, controlled by an extended Kalman filter. Within this framework, we present original and efficient geometric and photometric modelling techniques, and a reformulation of a block-matching algorithm to make it match synthesized patterns with real images, and avoid background areas during the matching. We also offer some numerical evaluations, assessing the validity of our algorithms, and new developments in the context of facial animation. Our face-tracking algorithm may be used to recover the 3D position and orientation of a real face and generate a MPEG-4 animation stream to reproduce the rigid motion of the face with a synthetic face model. It may also serve as a pre-processing step for further facial expression analysis algorithms, since it locates the position of the facial features in the image plane, and gives precise 3D information to take into account the possible coupling between pose and expressions of the analysed facial images.

10.
王镇, 童莹, 曹雪虹, 焦良葆. 《电视技术》(Video Engineering), 2015, 39(3): 127-132
To reduce the impact of noise on facial expression recognition, an anisotropic diffusion filter incorporating characteristics of human vision is first proposed to pre-filter the images, and an improved HOG descriptor is used to extract facial expression features. Experimental results show that the improved anisotropic diffusion filter removes noise while better preserving the weak, fine details of expression images, and that the improved HOG descriptor describes facial expression features more accurately than conventional feature extraction operators. The algorithm is therefore an effective facial expression recognition method with a degree of robustness to noise.
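The abstract does not specify how the human-vision characteristics enter the filter, so the sketch below shows only the classic Perona-Malik anisotropic diffusion that such methods build on; kappa, lambda, and the iteration count are assumed values.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooths noise while preserving edges."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping function: weak gradients diffuse, strong edges are kept
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```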

11.
Facial expressions are the most direct reflection of changes in a person's emotional state, and they vary greatly across individuals. Existing expression recognition methods distinguish expressions using statistical facial features and lack deep mining of facial detail information. According to psychologists' definition of facial action coding, the local details of a face determine the meaning of its expression. This paper therefore proposes a facial expression recognition method based on multi-scale detail enhancement. Since facial expressions are strongly affected by image details, a Gaussian pyramid is used to extract image detail information and the image is detail-enhanced to strengthen the expression information. Given the local nature of facial expressions, a hierarchical local gradient feature computation is proposed to describe the local shape around facial landmarks. Finally, a support vector machine (SVM) classifies the expressions. Experiments on the CK+ expression database show that the method not only confirms the important role of image detail in expression recognition but also achieves very good results with small-scale training data, reaching an average recognition rate of 98.19%.
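As an illustration of the detail-enhancement step, the sketch below amplifies the band-pass (Laplacian) levels of a Gaussian pyramid before reconstruction; the level count and per-level gains are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def multiscale_detail_enhance(gray, levels=3, gains=(1.5, 1.2, 1.1)):
    """Boost per-level detail extracted from a Gaussian pyramid, then rebuild."""
    gp = [gray.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    out = gp[-1]
    for i in range(levels - 1, -1, -1):
        up = cv2.pyrUp(out, dstsize=(gp[i].shape[1], gp[i].shape[0]))
        detail = gp[i] - up           # band-pass detail at this scale
        out = up + gains[i] * detail  # amplify fine structure
    return np.clip(out, 0, 255).astype(np.uint8)
```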

12.
This paper describes a new and efficient method for facial expression generation on cloned synthetic head models. The system uses abstract facial muscles called action units (AUs) based on both anatomical muscles and the facial action coding system. The facial expression generation method has real-time performance, is less computationally expensive than physically based models, and has greater anatomical correspondence than rational free-form deformation or spline-based techniques. Automatic cloning of a real human head is done by adapting a generic facial and head mesh to Cyberware laser-scanned data. The conformation of the generic head to the individual data and the fitting of texture onto it are based on a fully automatic feature extraction procedure. Individual facial animation parameters are also automatically estimated during the conformation process. The entire animation system is hierarchical; emotions and visemes (the visual mouth shapes that occur during speech) are defined in terms of the AUs, and higher-level gestures are defined in terms of AUs, emotions, and visemes as well as the temporal relationships between them. The main emphasis of the paper is on the abstract muscle model, along with limited discussion of the automatic cloning process and higher-level animation control aspects.

13.
To address the difficulty of accurately recognizing facial expressions, a network model based on ResNet50 fused with a bilinear mixed attention mechanism is proposed. To counter the incomplete and blurred feature extraction caused by conventional pooling, an adaptive pooling weight algorithm based on Average-Pooling is proposed, and particle swarm optimization is used to adaptively tune the hyperparameters of the convolutional neural network, further improving recognition accuracy. A real-time facial expression recognition system is built on the improved network. On the Fer2013 and CK+ datasets, the improved model achieves test-set accuracies of 73.51% and 99.86%, respectively.
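The abstract does not describe the adaptive pooling weights precisely, so the following sketch is only one plausible reading: each pooling window blends average and max pooling with a weight driven by local contrast. The window size and weighting rule are assumptions.

```python
import numpy as np

def adaptive_weighted_pool(feat, k=2):
    """feat: (h, w) feature map; returns a (h//k, w//k) pooled map."""
    h, w = feat.shape
    out = np.zeros((h // k, w // k), dtype=np.float32)
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            win = feat[i:i + k, j:j + k]
            # contrast-driven weight: flat windows lean toward average pooling,
            # high-contrast windows lean toward max pooling
            alpha = win.std() / (win.std() + abs(win.mean()) + 1e-8)
            out[i // k, j // k] = alpha * win.max() + (1 - alpha) * win.mean()
    return out
```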

14.
吴进, 闵育, 李聪, 张伟华. 《电讯技术》(Telecommunication Engineering), 2019, 59(10): 1115-1120
Micro-expressions are facial expressions of very short duration. To address their low recognition rate, a micro-expression recognition algorithm based on a three-dimensional convolutional neural network (3D-CNN) is proposed. Using Keras as the network framework, batch normalization and dropout are added on top of 3D-VGG blocks (3D Visual Geometry Group blocks), increasing network depth and training speed while effectively preventing overfitting. To cope with the scarcity of data, the starting frame of each read is chosen at random while the length of the frame sequence is fixed in advance; repeating this process traverses all the data and augments it at the same time. The algorithm reaches a recognition rate of up to 68.85% on the CASME II dataset, giving it a certain advantage in recognition rate.
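A minimal Keras sketch of the kind of 3D-VGG block with batch normalization and dropout described above; filter counts, input size, and the number of classes are assumed, not taken from the paper.

```python
from tensorflow.keras import layers, models

def vgg3d_block(x, filters):
    """A 3D-VGG-style block with batch normalization."""
    x = layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    return layers.MaxPooling3D(pool_size=(2, 2, 2))(x)

def build_model(frames=32, size=64, n_classes=5):
    inp = layers.Input((frames, size, size, 1))   # fixed-length frame sequence
    x = vgg3d_block(inp, 32)
    x = vgg3d_block(x, 64)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)                    # dropout against overfitting on a small dataset
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)
```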

15.
A real-time stereoscopic video transmission system based on depth information is designed, with depth-image-based rendering (DIBR) at its core and CUDA as the accelerator. The system effectively solves the frame synchronization problem of stereoscopic video systems and reduces the bandwidth burden of transmitting and processing two color video streams. The video synthesized at the receiver runs in real time with good subjective visual quality.
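A minimal sketch of the DIBR core: each pixel is shifted horizontally by the disparity derived from its depth to synthesize the second view. Hole filling and the CUDA acceleration used by the system are omitted, and the camera parameters are assumed.

```python
import numpy as np

def dibr_warp(color, depth, baseline=0.05, focal=500.0):
    """color: (h, w, 3); depth: (h, w) in meters. Returns the synthesized view."""
    h, w, _ = color.shape
    virtual = np.zeros_like(color)
    disparity = (baseline * focal / np.maximum(depth, 1e-3)).astype(np.int32)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]       # horizontal shift toward the virtual camera
            if 0 <= xv < w:
                virtual[y, xv] = color[y, x]
    return virtual
```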

16.
17.
With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human-machine applications, ranging from immersive telecommunication to the video games industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows, and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic face model. The subject-specific 3D face is finally synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of the 3D model and the synthesis of faces with different expressions from a frontal neutral face image. Our method has the advantage that it is fully automatic, robust, fast, and can generate various views of the face by rotating the 3D model. It can be used in a variety of applications for which depth accuracy is not critical, such as games, avatars, and face recognition. We have tested and evaluated our system using a standard database, namely BU-3DFE.
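The expression-transition step can be summarized by the sketch below: since both expression models share the topology of the generic model, the shapes are linearly morphed and the textures cross-dissolved. Array shapes are assumed.

```python
import numpy as np

def morph(verts_a, verts_b, tex_a, tex_b, t):
    """Transition between two expression models for t in [0, 1].
    verts_*: (n_vertices, 3) with identical topology; tex_*: (h, w, 3)."""
    verts = (1.0 - t) * verts_a + t * verts_b            # blend vertex positions
    tex = ((1.0 - t) * tex_a.astype(np.float32)
           + t * tex_b.astype(np.float32))               # cross-dissolve textures
    return verts, tex.astype(tex_a.dtype)
```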

18.
In this paper, two novel methods for facial expression recognition in facial image sequences are presented. The user manually places some of the Candide grid nodes on face landmarks in the first frame of the image sequence under examination. The grid-tracking and deformation system, based on deformable models, tracks the grid in consecutive video frames over time as the facial expression evolves, up to the frame that corresponds to the greatest facial expression intensity. The geometrical displacement of certain selected Candide nodes, defined as the difference of the node coordinates between the first frame and the frame of greatest expression intensity, is used as input to a novel multiclass Support Vector Machine (SVM) system of classifiers that recognize either the six basic facial expressions or a set of chosen Facial Action Units (FAUs). Results on the Cohn-Kanade database show a recognition accuracy of 99.7% for facial expression recognition using the proposed multiclass SVMs and 95.1% for facial expression recognition based on FAU detection.
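A compact sketch of the classification stage under assumed array shapes: node displacements between the neutral and apex frames are fed to a multiclass SVM (scikit-learn's SVC stands in for the paper's own multiclass SVM system).

```python
import numpy as np
from sklearn.svm import SVC

def train_expression_svm(neutral_coords, apex_coords, labels):
    """neutral_coords, apex_coords: (n_samples, n_nodes, 2) Candide node positions;
    labels: expression labels (e.g. the six basic expressions)."""
    X = (apex_coords - neutral_coords).reshape(len(labels), -1)  # displacement features
    clf = SVC(kernel="rbf", C=10.0, decision_function_shape="ovo")
    clf.fit(X, labels)
    return clf
```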

19.
In the field of 3D display, real-time display of maneuvering targets has long been a research hotspot; the aim is to further improve the display of maneuvering targets by reducing latency while preserving image resolution. Addressing real-time communication between the Unity3D engine and the Visual C++ 6.0 platform, and taking the actual take-off and landing process of carrier-based aircraft as the application background, a 3D display system for carrier-based aircraft is developed. The paper focuses on the method of connecting Unity3D and Visual C++ 6.0 for communication and implements a process in which data packets are sent to drive the 3D model, displaying the flight state of the carrier-based aircraft in real time. Realistic 3D renderings of the system are also presented. The system provides an effective approach for applying 3D display technology in future command and control systems.
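As a language-neutral illustration of the packet-driven idea (the actual system sends from Visual C++ 6.0 to a Unity3D client), the Python sketch below packs a flight-state sample into a fixed binary layout and sends it over UDP; the field layout and port are assumptions, not details from the paper.

```python
import socket
import struct
import time

def send_flight_state(sock, addr, t, x, y, z, roll, pitch, yaw):
    # pack timestamp, position, and attitude as little-endian doubles
    packet = struct.pack("<7d", t, x, y, z, roll, pitch, yaw)
    sock.sendto(packet, addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = ("127.0.0.1", 9000)   # assumed port for the 3D-display listener
    send_flight_state(sock, addr, time.time(), 10.0, 0.0, 55.0, 0.0, 3.5, 180.0)
```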

20.
Mobile-agent-based network performance management exploits the mobility, intelligence, flexibility, and strong extensibility of mobile agents. By performing local and global performance monitoring and performance data collection across the whole network, it provides real-time data collection, analysis, and visualization tools and ultimately delivers visual performance reports to the administrator. It enables efficient, real-time, and accurate performance management of the network, and the platform independence of mobile agents makes cross-platform network management easy, giving the approach great advantages in managing complex networks. Following a multi-mobile-agent design pattern, a performance management system is built with UML modeling tools, clearly describing the system's static and dynamic structure and overcoming many drawbacks of traditional centralized network performance management.
