Similar documents
20 similar documents found (search time: 31 ms)
1.
This paper aims to solve the problem of matching images containing repetitive patterns. Although repetitive patterns are widespread in real-world images, such images are difficult to match due to local ambiguities, even when the viewpoint changes are not large; it remains an open and challenging problem. To address it, this paper proposes to match pairs of interest points and then derive point correspondences from the matched pairs under a low-distortion constraint, which requires that the distortions of point groups be small across images. Matching pairs of interest points reduces the local ambiguities induced by repetitive patterns to some extent, since information from a much larger region is used. Meanwhile, owing to our newly defined compatibility measure between one correspondence and a set of point correspondences, the obtained correspondences are highly reliable. Experiments demonstrate the effectiveness of our method and its superiority to existing methods.

2.
3.
4.
Deriving the visual connectivity across large image collections is a computationally expensive task. Unlike current image-oriented match graph construction methods, which build on pairwise image matching, we present a novel and scalable feature-oriented image matching algorithm for large collections. Our method improves the match graph construction procedure in three ways. First, instead of building trees repeatedly, we put the feature points of the input image collection into a single kd-tree and select the leaves as our anchor points. Then we construct an anchor graph from which each feature can intelligently find a small portion of related candidates to match. Finally, we design a new form of adjacency matrix for fast feature similarity measurement and return all the matches across different photos in the whole dataset directly. Experiments show that our feature-oriented correspondence algorithm can explore visual connectivity between images with a significant improvement in speed.
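The single-tree idea sketched in this abstract can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: it pools every image's descriptors into one kd-tree and keeps cross-image nearest-neighbour candidates; the anchor-graph and adjacency-matrix stages are omitted, and the function name and threshold are invented for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def anchor_matches(descs_per_image, k=2, max_dist=0.5):
    """Pool descriptors from all images into one kd-tree and
    return cross-image nearest-neighbour candidate matches."""
    # Stack all descriptors, remembering which image each came from.
    all_desc = np.vstack(descs_per_image)
    owner = np.concatenate([np.full(len(d), i) for i, d in enumerate(descs_per_image)])
    tree = cKDTree(all_desc)
    matches = []
    for idx, d in enumerate(all_desc):
        dists, nbrs = tree.query(d, k=k + 1)  # +1 because the point finds itself
        for dist, j in zip(dists[1:], nbrs[1:]):
            # Keep only near neighbours that belong to a different image.
            if owner[j] != owner[idx] and dist < max_dist:
                matches.append((idx, int(j), float(dist)))
    return matches
```

Building one tree over the pooled descriptors avoids the repeated per-image-pair tree construction that makes pairwise match graphs expensive.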

5.
Objective: Images containing repetitive patterns cause ambiguity for local feature descriptors, so matching algorithms based on local features are highly prone to mismatches on such images. Moreover, a study of existing matching algorithms that introduce global feature descriptors shows that global features also depend on keypoint dominant orientations computed from local information, so these methods likewise struggle to achieve satisfactory results on images with repetitive patterns. To solve this problem, an image matching algorithm based on paired feature points is proposed. Method: The method uses the direction vector between a pair of feature points as the dominant orientation of the pair, providing correct orientation information for feature description, and introduces the DAISY descriptor together with an improved global context descriptor to strengthen matching ability. Results: Comparative matching experiments on both synthetic and real images show that the proposed algorithm achieves an average matching accuracy above 88%, more than 26% higher than other classical matching algorithms. Conclusion: The experimental results show that the algorithm overcomes the deficiencies of existing methods in feature description and dominant-orientation assignment, further improves matching accuracy, and can effectively solve the matching of images with repetitive patterns.

6.
The problem of structure from motion is often decomposed into two steps: feature correspondence and three-dimensional reconstruction. This separation often causes gross errors when establishing correspondence fails. Therefore, we advocate the need to integrate visual information not only in time (i.e., across different views) but also in space, by matching regions, rather than points, using explicit photometric deformation models. We present an algorithm that integrates image-feature tracking and three-dimensional motion estimation into a closed loop, while detecting and rejecting outlier regions that do not fit the model. Due to occlusions and the causal nature of our algorithm, a drift in the estimates accumulates over time. We describe a method to perform global registration of local estimates of motion and structure by matching the appearance of feature regions stored over long time periods. We use image intensities to construct a score function that takes into account changes in brightness and contrast. Our algorithm is recursive and suitable for real-time implementation.

7.
This correspondence presents a matching algorithm for obtaining feature point correspondences across images containing rigid objects undergoing different motions. First, point features are detected using newly developed feature detectors. Then a variety of constraints are applied, starting with the simplest and proceeding to more informed ones. An intensity-based matching algorithm is first applied to the feature points to obtain unique point correspondences. This is followed by a sequence of newly developed heuristic tests involving geometry, rigidity, and disparity. The geometric tests match two-dimensional geometric relationships among the feature points, the rigidity test enforces the three-dimensional rigidity of the object, and the disparity test ensures that no matched feature point in an image could be rematched with another feature if reassigned a disparity value associated with another matched pair or an assumed match on the epipolar line. The computational complexity is proportional to the number of detected feature points in the two images. Experimental results with indoor and outdoor images are presented, showing that the algorithm yields only correct matches for scenes containing rigid objects.

8.
Feature matching for wide-baseline images is a highly challenging task in computer vision applications. Because of the large differences between such images, the initial feature matching results for wide-baseline images inevitably contain a large number of outliers. A K-nearest-neighbor consistency algorithm is proposed to quickly select highly reliable point pairs from the initial matching results. The algorithm uses an affine-invariant structural similarity to measure the structural similarity between two groups of K-nearest-neighbor feature points. It adopts a coarse-to-fine strategy, selecting inliers through two steps: K-nearest-neighbor correspondence consistency checking and K-nearest-neighbor structure consistency checking. Experimental results show that the proposed algorithm approaches or outperforms several state-of-the-art inlier selection algorithms in precision, recall, and running speed, and is applicable to wide-baseline images with large viewpoint, scale, and rotation changes.
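The coarse K-nearest-neighbor correspondence-consistency step described above can be sketched roughly as follows. The affine-invariant structure-similarity refinement is omitted, and the function name, K, and threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_consistency(pts1, pts2, matches, K=4, min_shared=2):
    """Keep a putative match (i, j) only if enough of the K nearest
    neighbours of point i in image 1 are matched to points that are
    also K-nearest neighbours of point j in image 2."""
    t1, t2 = cKDTree(pts1), cKDTree(pts2)
    corr = dict(matches)  # index in image 1 -> index in image 2
    inliers = []
    for i, j in matches:
        _, n1 = t1.query(pts1[i], k=K + 1)  # +1: the query point itself
        _, n2 = t2.query(pts2[j], k=K + 1)
        nbrs2 = set(int(x) for x in n2[1:])
        # Count neighbours of i whose correspondents land near j.
        shared = sum(1 for a in n1[1:] if corr.get(int(a)) in nbrs2)
        if shared >= min_shared:
            inliers.append((i, j))
    return inliers
```

A correct match tends to carry its neighbourhood along, while an outlier's neighbours map far away, so this cheap vote removes most gross mismatches before any finer structural check.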

9.
Multi-objective optimization based on an ant colony algorithm
池元成, 蔡国飙. 《计算机工程》(Computer Engineering), 2009, 35(15): 168-169.

10.
Given an unstructured collection of captioned images of cluttered scenes featuring a variety of objects, our goal is to simultaneously learn the names and appearances of the objects. Only a small fraction of local features within any given image are associated with a particular caption word, and captions may contain irrelevant words not associated with any image object. We propose a novel algorithm that uses the repetition of feature neighborhoods across training images and a measure of correspondence with caption words to learn meaningful feature configurations (representing named objects). We also introduce a graph-based appearance model that captures some of the structure of an object by encoding the spatial relationships among the local visual features. In an iterative procedure, we use language (the words) to drive a perceptual grouping process that assembles an appearance model for a named object. Results of applying our method to three data sets in a variety of conditions demonstrate that, from complex, cluttered, real-world scenes with noisy captions, we can learn both the names and appearances of objects, resulting in a set of models invariant to translation, scale, orientation, occlusion, and minor changes in viewpoint or articulation. These named models, in turn, are used to automatically annotate new, uncaptioned images, thereby facilitating keyword-based image retrieval.

11.
For remote sensing image registration, we find that affine transformation is suitable to describe the mapping between images. Based on the scale-invariant feature transform (SIFT), affine-SIFT (ASIFT) is capable of detecting and matching scale- and affine-invariant features. Unlike the blob feature detected in SIFT and ASIFT, a scale-invariant edge-based matching operator is employed in our new method. To find the local features, we first extract edges with a multi-scale edge detector, then the distinctive features (we call these 'feature from edge' or FFE) with computed scale are detected, and finally a new matching scheme is introduced for image registration. The algorithm incorporates principal component analysis (PCA) to ease the computational burden, and its affine invariance is embedded by discrete sampling as in ASIFT. We present our analysis based on multi-sensor, multi-temporal, and different viewpoint images. The operator shows the potential to become a robust alternative for point-feature-based registration of remote-sensing images, as subpixel registration consistency is achieved. We also show that using the proposed edge-based scale- and affine-invariant algorithm (EBSA) results in a significant speedup and fewer false matching pairs compared to the original ASIFT operator.

12.
鲍文霞  梁栋 《计算机工程》2009,35(15):13-15
Two spectral matching algorithms incorporating color vectors are proposed. The first extracts illumination-invariant image color features from the viewpoint of spatial vector relations and, combining them with the geometric features of the image feature points, constructs a proximity matrix for each of the two images to be matched; singular value decomposition of the proximity matrices is then used to build a correlation matrix reflecting the degree of matching between feature points, from which the matching result is obtained. The second algorithm takes this matching result as initial probabilities and computes the spectral matching probability matrix through a doubly stochastic matrix to obtain the final matching solution. Experimental results show that both algorithms achieve high matching accuracy.

13.
This paper presents a novel method for finding more good feature pairs between images, one of the most fundamental tasks in computer vision and pattern recognition. We first select features matched by bi-directional matching as seed points, then organize these seed points using the Delaunay triangulation algorithm. Finally, a triangle constraint is used to explore further good matches. The experimental evaluation shows that our method is robust to most geometric and photometric transformations, including rotation, scale change, blur, viewpoint change, JPEG compression, and illumination change, and significantly improves both the number of correct matches and the matching score. An application to estimating the fundamental matrix for a pair of images is also shown. Both the experiments and the application demonstrate the robust performance of our method.
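The seed-organization step (Delaunay triangulation of the bi-matched seed points, carried over to the second image via the correspondences) might look like the sketch below. The helper name is an assumption; only the construction of paired triangles is shown, not the triangle-constraint search itself.

```python
import numpy as np
from scipy.spatial import Delaunay

def seed_triangulation(pts1, matches):
    """Triangulate the seed matches in image 1 and transfer each
    triangle to image 2 through the correspondences, producing the
    paired triangles used as local matching constraints."""
    seeds1 = np.asarray([pts1[i] for i, _ in matches], float)
    tri = Delaunay(seeds1)
    pairs = []
    for a, b, c in tri.simplices:
        t1 = (matches[a][0], matches[b][0], matches[c][0])  # indices in image 1
        t2 = (matches[a][1], matches[b][1], matches[c][1])  # indices in image 2
        pairs.append((t1, t2))
    return pairs
```

Each paired triangle defines a small corresponding region in both images, so candidate matches can be validated against the local geometry of the triangle they fall in.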

14.
Compared with ordinary scene images, UAV images contain richer texture, and the "one-to-many" correspondence problem between local features and target objects is more severe, so the classical SURF algorithm is not well suited to feature point matching for UAV imagery. A SURF feature matching method aided by spatial constraints is therefore proposed and applied to UAV image mosaicking. The method extracts SURF feature points from the whole reference image and from blocks of the target image; during bidirectional feature matching, two feature point pairs are used to impose a spatial constraint, matching the feature points of each target sub-image to the reference image. Initial transformation parameters of the target image are then computed from the matched pairs to estimate the positions of the target image's feature points in the reference image, constraining the search space for candidate matches and improving matching speed and accuracy. A point-density spatial constraint is further applied to obtain evenly distributed feature point pairs. Finally, the obtained pairs are used to register and stitch the UAV images, and the mosaicking accuracy is verified with manually selected, evenly distributed feature point pairs. Experimental results show that the feature points extracted by the proposed method yield good UAV image mosaicking results.
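The search-space constraint described above (predicting each feature's location with the initial transform and matching only nearby candidates) could be sketched as follows, assuming a simple affine prediction and nearest-descriptor matching; all names and parameters are invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def constrained_matches(kp1, desc1, kp2, desc2, A, t, radius=10.0):
    """Predict where each reference feature should fall in the target
    image using an initial affine estimate (A, t), then match its
    descriptor only among target features within `radius` pixels of
    the prediction."""
    tree = cKDTree(kp2)
    matches = []
    for i, (p, d) in enumerate(zip(kp1, desc1)):
        pred = A @ p + t  # predicted location in the target image
        cand = tree.query_ball_point(pred, radius)
        if not cand:
            continue
        dists = np.linalg.norm(desc2[cand] - d, axis=1)
        matches.append((i, cand[int(np.argmin(dists))]))
    return matches
```

Restricting candidates to a small disc both speeds up matching and removes the far-away "one-to-many" distractors that repetitive UAV texture produces.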

15.
16.
Objective: Image matching is a key step in remote sensing image mosaicking. Matching techniques usually follow a two-step scheme: initial matches are established using the ratio of nearest to second-nearest distances of high-dimensional descriptors, and mismatches are then eliminated by iteratively fitting a geometric model. Although outlier filtering algorithms have greatly improved time efficiency, they retain the traditional two-step scheme, in which building the initial matches is still very time-consuming, so the overall speedup of remote sensing image mosaicking remains limited. To improve the efficiency of remote sensing image matching, this paper proposes a fast matching method based on spatial divide-and-conquer. Method: First, a small number of initial matches are generated from the large-scale features of the images, and paired divide-and-conquer space centers are constructed between the two images from these initial matches. Then, neighboring feature points within a certain range of each center are found by a range tree search to build paired divide-and-conquer point sets. Finally, the remote sensing image features are matched within each divide-and-conquer point set separately. Results: Extensive experiments on remote sensing images of different sizes and relative rotations show that, compared with conventional and other state-of-the-art methods, the proposed method reduces matching time to between 1/100 and 1/10 while maintaining high accuracy. Conclusion: Using initial seed matches to build divide-and-conquer matching centers, so that image matching is decomposed into multiple subregions, improves the efficiency of remote sensing image matching; the algorithm's good time performance is of practical value for real-time remote sensing applications.
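The divide-and-conquer matching stage could be sketched as below, assuming simple nearest-descriptor matching inside each paired cell; scipy's kd-tree stands in for the paper's range tree, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def divide_and_match(kp1, desc1, kp2, desc2, centers1, centers2, radius):
    """Match features only inside paired neighbourhoods around
    divide-and-conquer centres derived from seed matches, instead of
    searching every descriptor against the whole second image."""
    t1, t2 = cKDTree(kp1), cKDTree(kp2)
    matches = set()
    for c1, c2 in zip(centers1, centers2):
        idx1 = t1.query_ball_point(c1, radius)
        idx2 = t2.query_ball_point(c2, radius)
        if not idx1 or not idx2:
            continue
        local = cKDTree(desc2[idx2])  # descriptor tree for this cell only
        for i in idx1:
            _, j = local.query(desc1[i])
            matches.add((i, idx2[int(j)]))
    return sorted(matches)
```

Because each descriptor is compared only against the candidates in its own cell, the cost of descriptor search no longer scales with the full feature count of the second image.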

17.
18.
For any visual feature-based SLAM (simultaneous localization and mapping) solution, estimating the relative camera motion between two images requires finding "correct" correspondences between features extracted from those images. Given a set of feature correspondences, one can use an n-point algorithm with a robust estimation method to produce the best estimate of the relative camera pose. The accuracy of a motion estimate depends heavily on the accuracy of the feature correspondence. This dependency is even more significant when features are extracted from images of scenes with drastic changes in viewpoint and illumination and the presence of occlusions. To make feature matching robust to such challenging scenes, we propose a new feature matching method that incrementally chooses five pairs of matched features for a full DoF (degree of freedom) camera motion estimation. In particular, at the first stage we use our 2-point algorithm to estimate a camera motion and, at the second stage, use this estimated motion to choose three more matched features. In addition, we use a planar constraint, instead of the epipolar constraint, for more accurate outlier rejection. With this set of five matching features, we estimate a full DoF camera motion with scale ambiguity. Through experiments with three real-world data sets, our method demonstrates its effectiveness and robustness by successfully matching features (1) from images of a night market with frequent occlusions and varying illumination, (2) from images of a night market taken by a handheld camera and by Google Street View, and (3) from images of the same location taken in daytime and at night.

19.
董瑞  梁栋  唐俊  王年  鲍文霞 《微机发展》2006,16(12):16-18
A feature point matching algorithm based on color and geometric features is proposed. First, local cumulative hue histograms are extracted from the neighborhoods of the feature point sets of the two images; a proximity matrix is then constructed by combining them with the geometric features of the feature points. Singular value decomposition (SVD) is applied to the proximity matrix, and the decomposition is used to build a correlation matrix that reflects the degree of matching between feature points. Finally, the feature points of the two images are matched according to this correlation matrix. Experimental results show that the algorithm achieves high matching accuracy for both in-plane and out-of-plane rotations of real images.
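The SVD-based correspondence step is in the spirit of the Scott and Longuet-Higgins pairing method. A geometry-only sketch is given below; the abstract's algorithm additionally folds local hue histograms into the proximity matrix, which is omitted here.

```python
import numpy as np

def svd_correspondence(pts1, pts2, sigma=1.0):
    """Build a Gaussian proximity matrix between two point sets,
    take its SVD, replace the singular values by ones, and read off
    mutual row/column maxima of the resulting correlation matrix as
    matches (Scott & Longuet-Higgins style pairing)."""
    d2 = ((pts1[:, None, :] - pts2[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))       # proximity matrix
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                                # correlation matrix
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if int(np.argmax(P[:, j])) == i:      # keep mutual best pairs only
            matches.append((i, j))
    return matches
```

Replacing the singular values by ones projects the proximity matrix onto the nearest orthogonal matrix, which amplifies one-to-one pairings and suppresses competing correspondences.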

20.
The existing object recognition methods can be classified into two categories: interest-point-based and discriminative-part-based. The interest-point-based methods do not perform well if the interest points cannot be selected very carefully. The performance of the discriminative-part-based methods is not stable under viewpoint changes, because they select discriminative parts from the interest points. In addition, the discriminative-part-based methods often do not provide an incremental learning ability. To address these problems, we propose a novel method that consists of three phases. First, we use sliding windows of different scales to retrieve a number of local parts from each model object and extract a feature vector for each local part retrieved. Next, we construct prototypes for the model objects using the feature vectors obtained in the first phase; each prototype represents a discriminative part of a model object. Then, we establish the correspondence between the local parts of a test object and those of the model objects. Finally, we compute the similarity between the test object and each model object based on the correspondence established. The test object is recognized as the model object that has the highest similarity with it. The experimental results show that our proposed method outperforms or is comparable with the compared methods in terms of recognition rates on the COIL-100 dataset, Oxford buildings dataset and ETH-80 dataset, and recognizes all query images of the ZuBuD dataset. It is robust to distortion, occlusion, rotation, viewpoint and illumination change. In addition, we accelerate the recognition process using the C4.5 decision tree technique, and the proposed method has the ability to build prototypes incrementally.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号