Similar Documents
19 similar documents found (search time: 171 ms)
1.
A hierarchical face recognition method that matches 2D face images against 3D face models is proposed, together with a fuzzy-mathematics-based algorithm for estimating face pose angles. Multi-pose 2D images are partitioned into pose spaces, and principal component analysis (PCA) is used to build multi-pose eigenfaces. During recognition, the pose and fuzzy pose angle of the test image are first estimated, and PCA-based first-layer recognition within the estimated pose space yields candidate individuals. The 3D models of these candidates, combined with the fuzzy pose angle, are then used to synthesize virtual images, and correlation-based matching performs the second-layer recognition. Experimental results show that the method is robust to pose variation.
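The first recognition layer described above can be sketched as PCA ("eigenfaces") fitted within one pose bin, followed by nearest-neighbour matching to select candidates. This is a minimal illustration with synthetic data; the function names and sizes are assumptions, not the paper's implementation.

```python
import numpy as np

def fit_pose_pca(images, n_components):
    """Fit PCA for one pose bin; images: (n_samples, n_pixels)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD gives the principal axes without forming the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]              # (mean face, PCA basis)

def project(image, mean, basis):
    return basis @ (image - mean)

def first_layer_candidates(test, gallery, mean, basis, k=3):
    """Indices of the k gallery faces closest to the test in PCA space."""
    t = project(test, mean, basis)
    d = [np.linalg.norm(project(g, mean, basis) - t) for g in gallery]
    return np.argsort(d)[:k]

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 64))             # 10 identities, 64-pixel "images"
mean, basis = fit_pose_pca(gallery, n_components=5)
test = gallery[4] + 0.01 * rng.normal(size=64)  # noisy view of identity 4
cands = first_layer_candidates(test, gallery, mean, basis)
print(cands[0])                                 # identity 4 should rank first
```

In the full method these candidates would then be re-ranked by correlating the test image against virtual views rendered from each candidate's 3D model.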

2.
Face Recognition by Feature Fusion Using Two Projection Methods  (Cited by 2: 0 self, 2 others)
A new face recognition method is proposed that extracts features with two types of projection and fuses them with a parallel strategy. A set of features is first extracted with a one-dimensional, vector-based projection, and a second set with a two-dimensional image-projection method. The two feature vectors of each sample are combined into a complex vector, and principal component analysis in the complex vector space (CPCA) extracts the discriminant features of the face image. Experiments on the FERET face database show that recognition performance is about 10% higher than with either single feature set.
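The parallel fusion step above can be sketched as follows: two feature vectors per sample become one complex vector z = a + i·b, and PCA is then carried out in the complex space. The data here are random stand-ins and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
feat_a = rng.normal(size=(20, 8))   # features from the 1D vector projection
feat_b = rng.normal(size=(20, 8))   # features from the 2D image projection

z = feat_a + 1j * feat_b            # parallel fusion into complex vectors
z_centered = z - z.mean(axis=0)

# Complex (Hermitian) covariance; its eigenvectors span the CPCA subspace.
cov = z_centered.conj().T @ z_centered / (len(z) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # Hermitian -> real eigenvalues
order = np.argsort(eigvals)[::-1]           # largest variance first
basis = eigvecs[:, order[:4]]               # keep 4 principal components

features = z_centered @ basis               # fused discriminant features
print(features.shape)                       # (20, 4)
```

The complex combination keeps the two feature sets distinguishable (real vs. imaginary parts) while letting one PCA decorrelate them jointly.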

3.
A 3D face recognition method based on elastic matching of facial radial curves is proposed. The facial surface is represented by multiple curves: radial curves emanating from the nose tip are extracted and matched by hierarchical elastic matching and point-distance correspondence. Because different facial regions are affected by expression to different degrees, the similarity score of each curve is given a different weight, and the weighted fusion is used as the overall similarity for recognition. Test results show that the method achieves good recognition performance and is robust to expression, occlusion, and noise.
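The weighted score-fusion step can be illustrated in a few lines: each radial curve yields a similarity, and curves in expression-stable regions (e.g. near the nose) get larger weights. The values below are illustrative, not taken from the paper.

```python
# Per-curve similarities and their expression-robustness weights
# (weights sum to 1; both sets of numbers are made up for illustration).
curve_sims = [0.9, 0.7, 0.4, 0.6]
weights    = [0.4, 0.3, 0.1, 0.2]

# Weighted fusion: the overall similarity used for recognition.
total = sum(w * s for w, s in zip(weights, curve_sims))
print(round(total, 3))   # 0.73
```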

4.
Stereo Matching of Light-Spot Image Points in Light-Pen 3D Vision Measurement  (Cited by 3: 1 self, 2 others)
To address the difficulty of stereo-matching the light-spot image points between the left and right images in light-pen 3D vision measurement, the SoftPosit algorithm is introduced to recast the problem as pose estimation with unknown correspondences between 2D image points and 3D object points. Correspondence matching and pose estimation are fused into a bundled iterative optimization that matches the light-spot image points in each of the two images to the spatial spots on the light pen, thereby achieving stereo matching of the corresponding spot points between the left and right images. The method is model-based and independent of the spatial point distribution, the number of points, and image gray levels. Experimental results show that it is practical and can also be used for feature-point identification.

5.
To overcome the limitations of traditional two-dimensional risk assessment models, a three-dimensional risk assessment method based on the risk impact-probability matrix is proposed: a detectability index is added to correct the two-dimensional assessment results, addressing the contract risk assessment problem faced by suppliers in subcontract manufacturing. Two- and three-dimensional risk assessment models were built for an aviation-component subcontract and analyzed quantitatively with the Borda ranking method, verifying the advantage of the proposed approach.
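The Borda ranking step above can be sketched as follows: each risk is ranked under each of the three criteria (impact, probability, detectability), and Borda points are summed across criteria. The risk items and scores below are invented for illustration.

```python
# Hypothetical risks scored 1-5 on the three criteria of the 3D model.
risks = {
    "delivery delay":    {"impact": 4, "probability": 3, "detectability": 2},
    "quality deviation": {"impact": 5, "probability": 2, "detectability": 4},
    "cost overrun":      {"impact": 3, "probability": 4, "detectability": 3},
}

def borda(risks, criteria):
    """Sum Borda points: rank 1st on a criterion earns n-1 points, etc."""
    n = len(risks)
    score = {r: 0 for r in risks}
    for c in criteria:
        ordered = sorted(risks, key=lambda r: -risks[r][c])
        for rank, r in enumerate(ordered):
            score[r] += n - 1 - rank
    return score

scores = borda(risks, ["impact", "probability", "detectability"])
top_risk = max(scores, key=scores.get)
print(top_risk)   # "quality deviation" dominates on impact and detectability
```

Aggregating ranks rather than raw scores is what lets the third (detectability) axis correct a ranking produced by impact and probability alone.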

6.
Geometric Calibration of a Grating-Projection 3D Photogrammetric System  (Cited by 1: 0 self, 1 other)
罗剑  袁家虎 《光电工程》2005,32(11):43-48,67
The grating-projection 3D photogrammetric system obtains 3D point coordinates using time-domain structured-light projection and the stereo-vision measurement principle. To address the susceptibility of traditional calibration methods to lens distortion and the accuracy loss caused by having too few calibration constraint equations, nonlinear camera and projector models are adopted and a two-dimensional projector model is proposed. A multi-plane method calibrates the camera and projector geometric parameters required for measurement, and the Levenberg-Marquardt algorithm further optimizes the camera and projector models to improve parameter accuracy. Experimental results show that the method is simple to operate, requires no precise position or attitude adjustment, and achieves an absolute calibration accuracy of 0.2 pixel and a relative accuracy of 1/5000.

7.
To improve the recognition of human actions in video, an action recognition method based on random projection (RP) and Fisher vectors (FV) is proposed, built on a definition and analysis of the Gaussian mixture model (GMM). High-dimensional trajectory descriptors are first projected into a low-dimensional subspace by random projection; the reduced trajectory feature vectors are then spatially clustered and encoded with a GMM-FV hybrid model to raise recognition accuracy; finally, random projection is applied a second time to the Fisher-encoded vectors to lower computational complexity. Experiments on the KTH and UCF50 datasets show that, compared with existing tracking-recognition algorithms, the method reduces computational complexity, improves action recognition accuracy, and exhibits good robustness on both datasets.
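The two random-projection stages can be sketched with a Gaussian random matrix; the GMM-FV encoding in the middle is abbreviated here to a mean/std placeholder, since a full Fisher vector is out of scope. All sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_project(x, d_out, rng):
    """Project onto d_out dims with a scaled Gaussian random matrix."""
    d_in = x.shape[-1]
    r = rng.normal(size=(d_in, d_out)) / np.sqrt(d_out)
    return x @ r

desc = rng.normal(size=(500, 426))       # 500 dense-trajectory descriptors
low = random_project(desc, 64, rng)      # first RP: 426 -> 64 dims

# Placeholder "encoding": concatenated mean and std over descriptors,
# standing in for the GMM-FV encoding described in the abstract.
encoded = np.concatenate([low.mean(axis=0), low.std(axis=0)])   # 128-dim

final = random_project(encoded, 32, rng) # second RP on the encoded vector
print(final.shape)
```

Scaling by 1/sqrt(d_out) keeps expected vector norms roughly preserved, which is what makes random projection usable as a cheap dimensionality reducer before and after encoding.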

8.
王海霞, 陈峰, 赵新亮, 吕静. 《光电工程》, 2007, 34(8): 115-120
A new rotation-invariant method for 3D object recognition is proposed. Using structured-light illumination, the height distribution of the object is encoded in a 2D intensity image as deformed fringes; because the fringe image carries the height information, correlation-based recognition of the fringes has the character of intrinsic 3D recognition. Rotation invariance is achieved with a BP neural network. Computer simulation results show that, with the fundamental-frequency components of the 2D intensity images as training samples and with suitable choices of the training samples and the number of hidden-layer neurons, the structured-light-encoded BP network achieves good rotation-invariant recognition of 3D objects.

9.
Objective: To meet the need for independent single-user copyright authentication when multiple users share the copyright of a 3D model, a highly robust multiple blind watermarking algorithm for 3D point-cloud models is proposed in combination with CDMA techniques. Methods: Each user is assigned a distinct Walsh code, which encodes that user's binary watermark image, producing a multiplexed multi-channel watermark. The 3D point-cloud model is processed for affine invariance, its vertex coordinates are converted to spherical coordinates, and the angular values are sorted in ascending order; the vertex-to-centroid distances taken in that order form a 2D matrix that serves as the watermark embedding target. A two-level wavelet transform is applied to this matrix, the multiplexed watermark is embedded in the diagonal high-frequency sub-band, and the inverse wavelet transform yields the 3D point-cloud model carrying multiple watermarks. Results: The algorithm is highly robust to noise, affine, and reordering attacks, and multiple watermarks can be embedded without colliding with one another. Conclusion: The algorithm meets the requirements of independent single-user copyright authentication and copyright protection when multiple users share the copyright of a 3D model.
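The CDMA multiplexing step can be sketched on its own: each user's binary watermark is spread with a distinct (orthogonal) Walsh code, the spread signals are summed into one mixed watermark, and each user's bits are later recovered by correlating with that user's code. Sizes and bit patterns below are illustrative; the wavelet embedding itself is omitted.

```python
import numpy as np

def walsh(n):
    """Hadamard/Walsh matrix of order n (power of two); rows are codes."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh(4)                   # 4 mutually orthogonal codes, one per user
bits_u1 = np.array([1, -1, 1])     # user 1 watermark bits (as +/-1)
bits_u2 = np.array([-1, -1, 1])    # user 2 watermark bits

# Spread each bit into a code-length chip sequence, then mix the channels.
mixed = np.outer(bits_u1, codes[1]) + np.outer(bits_u2, codes[2])

# Blind recovery: correlate the mixed signal with each user's own code;
# orthogonality cancels the other user's contribution.
rec_u1 = np.sign(mixed @ codes[1])
rec_u2 = np.sign(mixed @ codes[2])
print(rec_u1.tolist(), rec_u2.tolist())
```

Because the codes are orthogonal, each user can independently verify their own watermark without knowing the other users' codes or bits, which is what enables per-user copyright authentication.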

10.
A New Method for 3D Measurement from Image Sequences  (Cited by 2: 2 self, 0 others)
Current 3D measurement methods require dedicated equipment and suffer various limitations, so a new 3D measurement method based on image sequences is proposed. Multiple images of the measured object, captured around it with a digital camera, are loaded into a computer, and image processing yields the 2D information of the features; computer-vision methods then reconstruct the features stratified, step by step, from projective space to Euclidean space, completing the 3D measurement. A set of feature markers was designed as an auxiliary measuring tool to avoid the feature-matching problem, and an image segmentation and recognition strategy was established to obtain the 2D information of the markers, with a recognition rate above 95%. A stratified camera self-calibration method based on modulus constraints yields the 3D information of the features in Euclidean space, and several optimization methods reduce the influence of errors. The method is simple to implement in hardware and undemanding of measurement conditions. Practical experiments show a relative error of 1.48% and a reprojection error of 0.3864 pixel.

11.
Face detection has an essential role in many applications. In this paper, we propose an efficient and robust method for face detection on a 3D point cloud represented by a weighted graph. This method classifies graph vertices as skin and non-skin regions based on a data mining predictive model. Then, the saliency degree of vertices is computed to identify the possible candidate face features. Finally, the matching between non-skin regions representing eyes, mouth and eyebrows and salient regions is done by detecting collisions between the polytopes representing these two kinds of regions. This method extracts faces in situations where pose variation and changes of expression are present. Its robustness is shown through different experimental results. Moreover, we study the stability of our method with respect to noise. Furthermore, we show that our method also handles 2D images.

12.
Most applications related to security and biometrics rely on skin-region detection, such as face detection, adult 3D object filtering, and gesture recognition. In this paper, we propose a robust method for skin detection on 3D coloured point clouds, and then extend this method to solve the problem of 3D face detection. To do so, we construct a weighted graph from the initial coloured 3D point cloud. We then present a linear programming algorithm using a predictive model based on a data mining approach to classify and label graph vertices as skin and non-skin regions. Moreover, we apply refinement rules on skin regions to confirm the presence of a face. We demonstrate the robustness of our method by presenting and analysing experimental results. Finally, we show that our method handles any data that can be represented by a weighted graph, such as 2D images and 3D models.

13.
A method for reconstructing 3D models of external threads from the features of multi-angle image sequences is proposed. Images of the threaded part are first captured from multiple angles on a rotating platform; feature points are then extracted from each frame, and the feature points of the image sequence undergo 3D transformation and interpolation to generate the 3D model. Experimental results show that the algorithm reconstructs the 3D model of an external thread with high accuracy.

14.
The shape-from-silhouette (SFS) method has been widely used in 3D shape reconstruction. It uses silhouettes of a series of 2D images captured from multiple viewpoints of an object to generate a 3D model that describes the visual hull of the object. The SFS method faces an inherent problem that virtual features appear all over the model. In addition, concavities on the object may wrongly be modeled as convex shapes because they are invisible in image silhouettes. The purpose of this study is to propose a method to generate a 3D model from silhouettes of multiple images and a quality improvement method to overcome the above-mentioned problems. The 3D modeling method focuses on accurate evaluation of the 3D points at which all polyhedra from different views intersect and on the removal of poor meshes in triangulation. The quality improvement method is essentially an iterative procedure that smooths the model and eliminates virtual features and artifacts while preserving the consistency of all silhouettes. The proposed method is intended for product presentations in e-commerce, in which the 3D model must be covered with the color texture of an object. Several examples are presented to illustrate the capability of the proposed method.

15.
We propose a 3D video system that uses environmental stereo cameras to display a target object from an arbitrary viewpoint. This system is composed of the following stages: image acquisition, foreground segmentation, depth field estimation, 3D modeling from depth and shape information, and arbitrary view rendering. To create 3D models from captured 2D image pairs, a real-time segmentation algorithm, a fast depth reconstruction algorithm, and a simple and efficient shape reconstruction method were developed. For viewpoint generation, the 3D surface model is rotated toward the desired position and orientation, and the texture data extracted from the original camera is projected onto this surface. Finally, a real-time system demonstrating the aforementioned algorithms was implemented. The generated 3D object can easily be manipulated, e.g., rotated or translated, to render images from different viewpoints, providing stable scenes of a minimal area that make the target space easier for viewers to understand in near real-time. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 367–378, 2007

16.
Three-dimensional (3D) reconstruction is a research focus of computer vision and is widely applied in various fields. The main steps of 3D reconstruction include image acquisition, feature point extraction and matching, camera calibration, and production of dense 3D scene models. Generally, not all input images are useful for camera calibration, because some images contain similar and redundant visual information; such images can even reduce calibration accuracy. In this paper, we propose an effective image selection method to improve the accuracy of camera calibration, and then a new 3D reconstruction algorithm that adds this image selection step to 3D reconstruction. The image selection method first uses a structure-from-motion algorithm to estimate the position and attitude of each camera. The contribution value of each image to 3D reconstruction is then calculated. Finally, images are selected according to their contribution values and their effects on the contribution values of other images. Experimental results show that our image selection algorithm improves the accuracy of camera calibration, and the proposed 3D reconstruction algorithm produces better dense 3D models than the normal algorithm without image selection.

17.
Estimation of Face Rotation Angles in Space Based on Depth Data  (Cited by 1: 0 self, 1 other)
A face-pose computation method based on 3D facial depth data is proposed. Using the depth data of the face and its one-to-one corresponding gray-level image, key facial feature points are located by applying differential-geometry principles and the corresponding curvature algorithms together with the gray-level features of the face data, and the three pose angles of the face in 3D space are then computed. Experiments show that the method accurately estimates face rotation angles under pose variation, providing a basis for further face recognition and expression analysis.

18.
A 3D model-based pose-invariant face recognition method that can recognise a human face from multiple views is proposed. First, pose estimation and 3D face model adaptation are achieved by means of a three-layer linear iterative process. Frontal-view face images are synthesised using the estimated 3D models and poses. Then the discriminant 'waveletfaces' are extracted from these synthesised frontal-view images. Finally, a corresponding nearest-feature-space classifier is implemented. Experimental results show that the proposed method can recognise faces under variable poses with good accuracy.

19.
Three-dimensional (3D) brain tumor segmentation is a clinical requirement for brain tumor diagnosis and radiotherapy planning. This is a challenging task due to variation in the type, size, location, and shape of tumors. Several methods, such as the particle swarm optimization (PSO) algorithm, form a topological relationship among the slices to convert 2D images into 3D magnetic resonance imaging (MRI) volumes, but they do not provide accurate results and depend on the number of input sections, their positions, and the shapes in the MRI images. In this article, we propose an efficient 3D brain tumor segmentation technique called modified particle swarm optimization. The segmentation results are compared with the Darwinian particle swarm optimization (DPSO) and fractional-order Darwinian particle swarm optimization (FODPSO) approaches. The experimental results show that our method achieves 3D segmentation with a 97.6% accuracy rate, outperforming the DPSO and FODPSO methods, which reach 78.1% and 70.21% respectively for the T1-C modality.

