Similar Documents
A total of 20 similar documents were retrieved (search time: 406 ms).
1.
2.
Methods that examine the entire tampered image introduce a large amount of unnecessary computation. To reduce computational complexity and further improve detection precision, a copy-move forgery detection method based on an improved saliency map and local feature matching is proposed. First, the saliency map is refined with image gradients to isolate locally salient regions that contain high-texture information. Second, the SIFT (scale invariant feature transform) algorithm is applied only to these local regions to extract feature points. Then, density-based clustering and a two-stage matching strategy are used for images with low saliency, while superpixel segmentation and salient-block feature matching are used for images with high saliency. Finally, PSNR and morphological operations are combined to localize the tampered regions. Experiments on two public datasets show an average detection time below 10 s and an average detection precision above 97%, both better than the compared methods. The results demonstrate that the method substantially reduces detection time, effectively improves detection precision, and is robust to geometric transformations and post-processing operations.
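A minimal sketch of the intra-image SIFT matching step that such copy-move detectors rely on, written with OpenCV (cv2.SIFT_create requires OpenCV >= 4.4). It covers only keypoint matching within a single image; the improved saliency map, clustering strategies, and tamper localization described in the abstract are not reproduced, and the input file name is hypothetical.

```python
# Minimal sketch: intra-image SIFT matching for copy-move candidate detection.
import cv2
import numpy as np

def copy_move_candidates(gray, ratio=0.6, min_dist=40):
    sift = cv2.SIFT_create()                      # OpenCV >= 4.4
    kps, desc = sift.detectAndCompute(gray, None)
    if desc is None or len(kps) < 3:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # k=3 so the best non-self neighbor is available after skipping the self-match
    matches = matcher.knnMatch(desc, desc, k=3)
    pairs = []
    for m in matches:
        if len(m) < 3:
            continue
        best, second = m[1], m[2]                 # m[0] is the keypoint matched to itself
        p1 = np.array(kps[best.queryIdx].pt)
        p2 = np.array(kps[best.trainIdx].pt)
        # Lowe ratio test plus a spatial gap so adjacent duplicates are ignored
        if best.distance < ratio * second.distance and np.linalg.norm(p1 - p2) > min_dist:
            pairs.append((tuple(p1), tuple(p2)))
    return pairs

if __name__ == "__main__":
    img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
    print(len(copy_move_candidates(img)), "candidate copy-move pairs")
```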

3.
An image feature point matching algorithm based on color gradients
董瑞  梁栋  唐俊  鲍文霞  何韬 《计算机工程》2007,33(16):178-180
A method for matching feature points in color images using color gradients is proposed. Laplacian matrices are constructed for the two images by combining the color-gradient information and geometric features of their feature points, and singular value decomposition is applied to both matrices. The decomposition results are used to build a correspondence matrix that reflects the degree of matching between feature points, and the feature point correspondences between the two images are established from this matrix. Extensive experiments show that the proposed algorithm achieves high matching accuracy.
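A simplified numerical sketch of the SVD-based correspondence idea the abstract builds on: form an affinity matrix between the two point sets, take its SVD, and read correspondences from the orthogonalized matrix. Here the affinity uses only point positions; the paper additionally folds color-gradient information into its Laplacian matrices, and the sigma value below is an assumption.

```python
# SVD-based point correspondence, simplified to position-only affinities.
import numpy as np

def svd_correspondences(pts_a, pts_b, sigma=30.0):
    # Gaussian affinity between every pair of points across the two images
    d2 = ((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    # Replacing the singular values by 1 gives the closest "permutation-like" matrix
    P = U @ Vt
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        # Accept only mutually dominant entries (maximum of both its row and its column)
        if i == int(np.argmax(P[:, j])):
            matches.append((i, j))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.uniform(0, 200, size=(15, 2))
    b = a + rng.normal(0, 1.0, size=a.shape)   # slightly perturbed copy of the first set
    print(svd_correspondences(a, b)[:5])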

4.
Forensic dentistry involves the identification of people based on their dental records, mainly available as radiograph images. Our goal is to automate this process using image processing and pattern recognition techniques. Given a postmortem radiograph, we search a database of antemortem radiographs in order to retrieve the closest match with respect to some salient features. In this paper, we use the contours of the teeth as the feature for matching. A semi-automatic contour extraction method is used to address the problem of fuzzy tooth contours caused by the poor image quality. The proposed method involves three stages: radiograph segmentation, pixel classification and contour matching. A probabilistic model is used to describe the distribution of object pixels in the image. Results of retrievals on a database of over 100 images are encouraging.
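A rough sketch of a contour-matching stage only, using plain Otsu thresholding and Hu-moment shape comparison (cv2.matchShapes) as a generic stand-in; the paper's semi-automatic, probabilistic contour extraction is not reproduced, and the radiograph file names are hypothetical.

```python
# Extract the largest contours from a radiograph and compare shapes by Hu moments.
import cv2

def largest_contours(path, k=5):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sorted(contours, key=cv2.contourArea, reverse=True)[:k]

def best_match_score(postmortem_path, antemortem_path):
    pm = largest_contours(postmortem_path)
    am = largest_contours(antemortem_path)
    # Lower matchShapes score means more similar tooth contours
    return min(cv2.matchShapes(c1, c2, cv2.CONTOURS_MATCH_I1, 0.0)
               for c1 in pm for c2 in am)

if __name__ == "__main__":
    print(best_match_score("postmortem.png", "antemortem_001.png"))  # hypothetical files
```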

5.
We present a new feature based algorithm for stereo correspondence. Most of the previous feature based methods match sparse features like edge pixels, producing only sparse disparity maps. Our algorithm detects and matches dense features between the left and right images of a stereo pair, producing a semi-dense disparity map. Our dense feature is defined with respect to both images of a stereo pair, and it is computed during the stereo matching process, not in a preprocessing step. In essence, a dense feature is a connected set of pixels in the left image and a corresponding set of pixels in the right image such that the intensity edges on the boundary of these sets are stronger than their matching error (which is the difference in intensities between corresponding boundary pixels). Our algorithm produces accurate semi-dense disparity maps, leaving featureless regions in the scene unmatched. It is robust, requires little parameter tuning, can handle brightness differences between images and nonlinear errors, and is fast (linear complexity).

6.
Objective: To address the problem that broken feature lines extracted during image matching degrade the matching results and their reliability, a line feature matching method for close-range images under multiple constraints is proposed. Method: First, SIFT is used to obtain corresponding points, which are refined with RANSAC, and the affine transformation matrix is computed from these correspondences. Grid points are then established, and the affine transformation, Harris interest values, and least squares are used to improve the accuracy of the dense matching results. Next, straight lines are extracted with a Freeman chain-code priority algorithm, and initial matching of the feature lines is performed according to the positional relationship between the densely matched points and the lines within the search region. Finally, the initial matching results are refined using line-segment overlap, and the endpoints of corresponding lines are determined under the epipolar constraint. Results: Line matching experiments on close-range images with rotation, scale change, and occlusion show that, compared with other line matching methods, the number of successfully matched lines is about 1.07 to 4.1 times that of classic algorithms and the correct line-matching rate improves by 0.6% to 53.3%, demonstrating good accuracy and robustness. Conclusion: The multiple constraints effectively reduce the search range for line feature matching in stereo images and improve matching speed; the method is applicable to close-range images under different types of geometric change and alleviates the problems of broken and occluded lines.
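A sketch of only the first stage of such a pipeline: SIFT correspondences between the two close-range images refined by RANSAC to estimate the affine transformation, using OpenCV's estimateAffine2D. The Freeman chain-code line extraction, overlap refinement, and epipolar endpoint determination from the abstract are not shown; file names and thresholds are assumptions.

```python
# SIFT matching plus RANSAC affine estimation between a close-range image pair.
import cv2
import numpy as np

def affine_from_sift(img_left, img_right, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(img_left, None)
    kp2, d2 = sift.detectAndCompute(img_right, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC rejects outlier correspondences while fitting the 2x3 affine matrix
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return A, inliers

if __name__ == "__main__":
    left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical files
    right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
    A, _ = affine_from_sift(left, right)
    print("affine matrix:\n", A)
```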

7.
This correspondence presents a matching algorithm for obtaining feature point correspondences across images containing rigid objects undergoing different motions. First, point features are detected using newly developed feature detectors. Then a variety of constraints are applied, starting with the simplest and following with more informed ones. First, an intensity-based matching algorithm is applied to the feature points to obtain unique point correspondences. This is followed by a sequence of newly developed heuristic tests involving geometry, rigidity, and disparity. The geometric tests match two-dimensional geometrical relationships among the feature points, the rigidity test enforces the three-dimensional rigidity of the object, and the disparity test ensures that no matched feature point in an image could be rematched with another feature if reassigned another disparity value associated with another matched pair or an assumed match on the epipolar line. The computational complexity is proportional to the numbers of detected feature points in the two images. Experimental results with indoor and outdoor images are presented, which show that the algorithm yields only correct matches for scenes containing rigid objects.

8.
9.
10.
Existing image inpainting schemes commonly suffer from disordered structures and blurred texture details, mainly because the inpainting network has difficulty fully exploiting information from the undamaged regions to infer the content of the damaged regions during reconstruction. To this end, this paper proposes an image inpainting network driven by multi-level attention propagation. The network compresses high-level features extracted from the full-resolution image into multi-scale compact features and then drives multi-level attention propagation over these compact features in order of scale, so that high-level features, including both structure and detail, are fully propagated through the network. To further achieve fine-grained inpainting, a compound-granularity discriminator is also proposed, imposing global semantic constraints together with dense, non-location-specific local constraints on the inpainting process. Extensive experiments show that the proposed method produces higher-quality inpainting results.

11.
Objective: Existing copy-move forgery detection algorithms can only identify pairs of similar regions in an image but cannot accurately determine which of them is the forged region. To solve this, an automatic detection and localization method for forged regions based on estimating the JPEG (joint photographic experts group) double-compression offset is proposed. Method: First, the scale invariant feature transform (SIFT) algorithm extracts the image's feature points and descriptors, which are initially matched with a nearest-neighbor algorithm; the matches are then refined using the hue-saturation-intensity (HSI) color features of the feature points to eliminate mismatches caused by inconsistent color information. Next, the random sample consensus (RANSAC) algorithm estimates the affine transformation parameters between matched pairs and removes outliers, and a region correlation map is constructed to determine the complete copy-paste regions. Finally, the source and forged regions are distinguished by the JPEG double-compression offsets estimated separately for each region. Results: Compared with classic SIFT- and SURF (speeded up robust features)-based detection methods, the proposed method achieves a high detection rate while effectively reducing the false alarm rate. When the quality factor of the second JPEG compression is higher than that of the first, the detection rate of forged regions exceeds 96%. Conclusion: The method effectively localizes copy-move forged regions in JPEG images and is robust to geometric transformations of the copied region and to common post-processing operations.
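A sketch of only the color-consistency filtering step: a candidate pair of matched keypoints is kept only if the hue/saturation statistics around the two points agree. OpenCV's HSV space is used here as a stand-in for the HSI features in the paper; the SIFT matching, RANSAC estimation, and JPEG double-compression offset estimation are not reproduced, and the file name, patch radius, and threshold are assumptions.

```python
# Hue/saturation consistency check for a candidate copy-move match pair.
import cv2
import numpy as np

def color_consistent(bgr_img, pt1, pt2, radius=4, max_diff=12.0):
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV).astype(np.float32)

    def mean_hs(pt):
        x, y = int(round(pt[0])), int(round(pt[1]))
        patch = hsv[max(y - radius, 0):y + radius + 1,
                    max(x - radius, 0):x + radius + 1, :2]
        return patch.reshape(-1, 2).mean(axis=0)

    # Mean hue/saturation of small patches around both keypoints must agree
    return bool(np.all(np.abs(mean_hs(pt1) - mean_hs(pt2)) < max_diff))

if __name__ == "__main__":
    img = cv2.imread("tampered.jpg")                 # hypothetical file name
    print(color_consistent(img, (120.0, 80.0), (300.0, 210.0)))
```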

12.
Objective: An effective algorithm is proposed for locating the matching scale and region of an image: by matching the feature points of the current camera frame against the feature points in the corresponding scale and partial region of a template image, the camera tracks the template image in real time, addressing the accuracy and efficiency problems of matching in 3D tracking. Method: In the preprocessing stage, a multi-scale representation of the template image is built and the image at each scale is partitioned into regions; within each region, ORB (oriented FAST and rotated BRIEF) feature points and descriptors are extracted, yielding a hierarchical, partitioned organization of the template's feature points. In the real-time tracking stage, for the current camera frame, the corresponding scale range is first located and the template regions within that range that overlap most with the current frame are determined; the feature points of the current frame are then matched against those in the corresponding scale and regions of the template, and finally the camera pose is computed from the matched point pairs. Results: Experiments on template images of different resolutions from a public dataset (Stanford mobile visual search dataset) and additional images show that the algorithm performs stably, with a registration error of about one pixel, and an overall frame rate stable at roughly 20 to 30 frames/s. Conclusion: Compared with several classic algorithms, the new method locates the matching scale and region more effectively; this local feature point matching clearly improves registration accuracy and computational efficiency over existing methods, performs even better when the template image has a high resolution, and is particularly well suited to mobile augmented reality applications.
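A sketch of the preprocessing stage only: build a scale pyramid of the template, split each level into grid cells, and store ORB keypoints and descriptors per (level, cell). The scale/region lookup and pose estimation performed at tracking time are not shown; the grid size, number of levels, scale factor, and file name are assumptions.

```python
# Hierarchical, grid-partitioned ORB feature index for a template image.
import cv2

def build_template_index(template_gray, levels=4, grid=4, scale=0.5):
    orb = cv2.ORB_create(nfeatures=500)
    index = {}
    img = template_gray
    for lv in range(levels):
        h, w = img.shape[:2]
        ch, cw = h // grid, w // grid
        for r in range(grid):
            for c in range(grid):
                cell = img[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
                kps, desc = orb.detectAndCompute(cell, None)
                index[(lv, r, c)] = (kps, desc)
        # Move to the next, coarser pyramid level
        img = cv2.resize(img, (int(w * scale), int(h * scale)),
                         interpolation=cv2.INTER_AREA)
    return index

if __name__ == "__main__":
    tpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
    idx = build_template_index(tpl)
    print(sum(len(k) for k, _ in idx.values()), "keypoints indexed")
```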

13.
For automatic target recognition in synthetic aperture radar (SAR) images, a feature point matching algorithm is proposed after an analysis of feature extraction from SAR imagery. Following the Birkhoff-von Neumann theorem, the algorithm first relaxes the generalized permutation matrix constraint to a generalized doubly stochastic matrix constraint. It then incorporates the constraints into the objective function using Lagrange multipliers and a barrier function, converting the point set matching problem into a nonlinear optimization problem. Finally, deterministic annealing and softassign techniques are used to solve this problem, and the resulting matching cost, corrected by the ratio of the numbers of feature points, is used for target recognition. Experimental results show that the algorithm is highly effective.
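A numerical sketch of the softassign idea referenced in the abstract: a match-cost matrix is turned into an (approximately) doubly stochastic matrix via Sinkhorn row/column normalization, and deterministic annealing sharpens it toward a permutation as beta grows. The Lagrangian and barrier-function details of the paper are omitted, and the annealing schedule values are assumptions.

```python
# Softassign with Sinkhorn normalization and deterministic annealing.
import numpy as np

def softassign(cost, beta0=0.5, beta_max=50.0, rate=1.5, sinkhorn_iters=30):
    beta = beta0
    M = np.ones_like(cost)
    while beta < beta_max:
        M = np.exp(-beta * cost)
        for _ in range(sinkhorn_iters):
            M /= M.sum(axis=1, keepdims=True)   # normalize rows
            M /= M.sum(axis=0, keepdims=True)   # normalize columns
        beta *= rate                            # annealing schedule
    return M

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cost = rng.uniform(size=(5, 5))
    cost[np.arange(5), np.arange(5)] = 0.0      # make the identity the best assignment
    print(np.round(softassign(cost), 2))
```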

14.
To address the fact that the local binary pattern (LBP) ignores local structural information when extracting texture features, a facial expression recognition algorithm is proposed that adaptively fuses a saliency structure tensor with LBP through weighted fusion. The algorithm performs saliency detection on the whole image to obtain a global saliency map, which suppresses fine textures and noise. Two saliency-based texture features are then extracted from the saliency map, and the information-entropy contribution of each feature determines its weight in the fused feature vector. A support vector machine (SVM) classifies the expression images. Experimental results show that the two adaptively weighted and fused texture features describe facial characteristics well and effectively improve the expression recognition rate.
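A sketch of a plain LBP-histogram plus SVM expression classifier, without the saliency structure tensor or the entropy-based adaptive weighting described in the abstract. It requires scikit-image and scikit-learn; the LBP parameters are assumptions and the training data in the usage stub is random.

```python
# LBP-histogram features fed to an SVM classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1.0):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                              # uniform patterns plus a "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_expression_svm(gray_faces, labels):
    X = np.array([lbp_histogram(f) for f in gray_faces])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, labels)
    return clf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = [rng.integers(0, 256, size=(64, 64)).astype(np.uint8) for _ in range(20)]
    labels = np.array([i % 2 for i in range(20)])   # dummy labels, just to exercise the code
    clf = train_expression_svm(faces, labels)
    print(clf.predict([lbp_histogram(faces[0])]))
```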

15.
This work defines a new strategy for extracting and stereo matching buildings in very high resolution multispectral IKONOS images with a base-to-height ratio of about 0.53, where the intrinsic and extrinsic parameters of the acquisition system are unavailable. These images contain dense urban scenes including various kinds of roads, cars, vegetation, and buildings. We focus on buildings: some have distinct shapes or colours while others have similar colours or shapes, which generates many “false matches”. To address this, we propose a soft-computing approach to extract regions of interest (buildings) and match them. It comprises two main steps: a region segmentation and thresholding step using a specific fuzzy thresholding algorithm, and a neural Hopfield matching stage based on new constraints that include geometric and photometric region properties. The strategy is nearly fully automatic, fast, and simple, and tests on several kinds of dense urban stereo images give satisfactory results.

16.
Objective: Fabric recognition is an important computer-aided technique for improving the competitiveness of the textile industry. Compared with generic images, fabric images usually exhibit only subtle differences in texture and shape. Current fabric recognition algorithms consider only image features and do not incorporate the visual and tactile characteristics of the fabric material, so they fail to reflect the material's intrinsic attributes, leading to low recognition accuracy. Taking common garment fabrics as an example, this paper proposes a fabric image recognition algorithm that combines material attributes with tactile sensing to address this low accuracy. Method: For an input fabric sample, a geometric measurement scheme for the fabric image is established, and the three key factors affecting the fabric's material attributes, namely recovery, stretchability, and bendability, are quantified and parameterized to obtain geometric measures of the material attributes. Tactile measurements of the fabric are acquired with sensors, and a convolutional neural network (CNN) extracts low-level features from the resulting tactile images. The geometric measures of the material attributes are matched with the extracted low-level features, a fabric recognition model is trained with the CNN to learn the different parameters of the material attributes, and the model recognizes the fabric and outputs the result. Results: The method was validated on a constructed set of common garment fabric samples; compared with methods for the same task, it achieves a higher recognition rate, with an average of 89.5%. Conclusion: The proposed fabric image recognition method based on material attributes and tactile sensing accurately recognizes common garment fabrics, effectively improves recognition accuracy, and meets practical application requirements well.
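A generic sketch of a small CNN classifier over tactile images, standing in for the feature extraction and recognition model mentioned in the abstract; the geometric material-attribute measures and their fusion with the CNN features are not modeled here. It requires PyTorch, and the layer sizes, input resolution, and class count are assumptions.

```python
# Minimal CNN classifier for single-channel tactile images.
import torch
import torch.nn as nn

class FabricCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),   # assumes 64x64 input images
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = FabricCNN()
    dummy = torch.randn(4, 1, 64, 64)                 # batch of fake tactile images
    print(model(dummy).shape)                         # torch.Size([4, 10])
```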

17.
This paper presents a histogram-based template matching method that copes with the large scale difference between target and template images. Most of the previous template matching methods are sensitive to the scale difference between target and template images because the features extracted from the images change according to the scale of the images. To overcome this limitation, we introduce the concept of dominant gradients and describe an image with a feature that is tolerant to scale changes. To this end, we first extract the dominant gradients of a template image and represent the template image as grids of histograms of the dominant gradients. Then, the arbitrary regions of a target image with various locations and scales are matched with the template image via histogram matching. Experimental results show that the proposed method is more robust to scale difference than previous template matching techniques.
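A compact sketch of grid-of-orientation-histograms template matching: both the template and each candidate window are described by per-cell gradient orientation histograms and compared by histogram intersection. The dominant-gradient selection in the paper is simplified here to plain orientation histograms, and the grid and bin counts are assumptions. In use, match_score would be evaluated over candidate windows of various positions and scales, keeping the highest score.

```python
# Grid orientation histograms plus histogram-intersection matching.
import cv2
import numpy as np

def grid_orientation_hist(gray, grid=4, bins=8):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)          # angle in radians, 0..2*pi
    h, w = gray.shape
    feats = []
    for r in range(grid):
        for c in range(grid):
            ys = slice(r * h // grid, (r + 1) * h // grid)
            xs = slice(c * w // grid, (c + 1) * w // grid)
            hist, _ = np.histogram(ang[ys, xs], bins=bins, range=(0, 2 * np.pi),
                                   weights=mag[ys, xs])
            feats.append(hist / (hist.sum() + 1e-8))
    return np.concatenate(feats)

def match_score(template_gray, window_gray):
    a = grid_orientation_hist(template_gray)
    b = grid_orientation_hist(window_gray)
    return np.minimum(a, b).sum()               # histogram intersection, higher is better
```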

18.
Objective: Copy image retrieval methods based on the bag-of-words model are currently the most effective. However, quantization of local features loses information, which weakens the discriminative power of visual words and increases visual word mismatches, degrading copy image retrieval. To address the mismatch problem, a copy image retrieval method based on neighborhood context is proposed. The method disambiguates visual words through the contextual relationships of local features, improving their discriminative power and thereby the retrieval performance. Method: First, for each local feature point, neighboring feature points are selected as its context according to distance and scale relationships; the selected points are called neighbor feature points. A context descriptor is then built for the local feature from the information of its neighbor feature points and their relationships to it. Candidate local feature matches are verified by computing the similarity of their context descriptors. Finally, the similarity between images is measured by the number of correctly matched feature points, and a set of candidate images is returned according to this similarity. Results: In experiments on the Copydays dataset compared with a baseline method, with 100 k distractor images the mAP improves by 63% relative to the baseline; when the number of distractor images grows from 100 k to 1 M, the baseline's mAP drops by 9% while the proposed method's drops by only 3%. Conclusion: The proposed copy image retrieval method is highly robust to image editing operations such as rotation, image overlay, scale change, and cropping, and can be effectively applied to areas such as image forgery prevention and image deduplication.
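A simplified sketch of the neighborhood-context idea: each keypoint's context is built from its k spatially nearest keypoints of comparable scale, encoded as a histogram of relative directions, and two matched keypoints are accepted only if their context histograms are similar. The descriptor form and all parameter values are assumptions, and the bag-of-words retrieval pipeline itself is not shown.

```python
# Neighborhood-context descriptor and match verification.
import numpy as np

def context_descriptor(idx, pts, scales, k=8, bins=12, scale_ratio=2.0):
    p, s = pts[idx], scales[idx]
    d = np.linalg.norm(pts - p, axis=1)
    d[idx] = np.inf
    # Keep only neighbors whose scale is within a factor of the keypoint's scale
    d[(scales > s * scale_ratio) | (scales < s / scale_ratio)] = np.inf
    nn = np.argsort(d)[:k]
    ang = np.arctan2(pts[nn, 1] - p[1], pts[nn, 0] - p[0])
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def contexts_agree(desc_a, desc_b, thresh=0.6):
    # Histogram intersection between the two context descriptors
    return np.minimum(desc_a, desc_b).sum() >= thresh
```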

19.
To address the difficulty of recognizing weakly textured, stacked workpieces on industrial sites, an improved geometric template matching algorithm is proposed that uses the holes on the workpiece surface as features, taking hinges as the example workpiece. First, the color image is converted to grayscale with a weighted average and edges are detected with the Canny algorithm. Next, the rotating calipers algorithm computes the minimum-area bounding rectangle of each contour; after applying geometric constraints, the contours corresponding to holes are obtained, and the randomized incremental algorithm computes each hole contour's minimum enclosing circle to obtain the center coordinates of the hole features. The proposed improved geometric template matching algorithm then recognizes workpieces according to the geometric constraints among the hole features and rejects misidentified workpieces according to whether edges exist between hole features. Experimental results show that the algorithm performs well on weakly textured, stacked workpieces with holes, achieving a recognition recall of 98.3% and a false detection rate of 0.9%, providing a method for recognizing weakly textured workpieces with holes.
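A sketch of the hole-feature extraction step using OpenCV: Canny edges, contour detection, minimum-area bounding rectangles as a rough geometric filter, and minimum enclosing circles to obtain hole centers. The hinge-specific geometric constraints used for the final template matching are not reproduced, and the thresholds and file name are assumptions.

```python
# Hole-feature extraction: Canny -> contours -> min-area rect filter -> enclosing circle.
import cv2

def hole_centers(bgr_img, min_area=50, max_aspect=1.5):
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area:
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(cnt)     # rotating-calipers min-area rectangle
        if min(w, h) == 0 or max(w, h) / min(w, h) > max_aspect:
            continue                                 # holes should be roughly round
        (cx, cy), radius = cv2.minEnclosingCircle(cnt)
        centers.append((cx, cy, radius))
    return centers

if __name__ == "__main__":
    img = cv2.imread("hinge_scene.png")              # hypothetical file name
    print(hole_centers(img))
```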

20.
Content-aware image retargeting is a technique that can flexibly display images with different aspect ratios while preserving salient regions in the images. Recently many image retargeting techniques have been proposed. To compare the image quality produced by different retargeting methods quickly and reliably, an objective metric simulating the human visual system (HVS) is presented in this paper. Unlike traditional objective assessment methods that work in a bottom-up manner (i.e., assembling pixel-level features in a local-to-global way), we propose to use the reverse order (a top-down manner) that organizes image features from global to local viewpoints, leading to a new objective assessment metric for retargeted images. A scale-space matching method is designed to facilitate extraction of global geometric structures from retargeted images. By traversing the scale space from coarse to fine levels, local pixel correspondence is also established. The objective assessment metric is then based on both the global geometric structures and the local pixel correspondence. To evaluate color images, the CIE L*a*b* color space is used. Experimental results measure the performance of objective assessment with the proposed metric and show good consistency between the proposed objective metric and subjective assessment by human observers.
