Similar Documents
20 similar documents found (search time: 31 ms)
1.
Chen Jie, Fu Dongmei, Liu Yan. 《红外》 (Infrared), 2009, 30(12): 1-5
Infrared and visible light lie in different wavebands, so the correlation between their images is low. Traditional feature-based image registration methods (using corner points, edge points, etc.) are prone to mismatches during feature-point selection, because the feature points are sometimes very close to one another. To address this problem, this paper proposes a registration method for infrared and visible images based on image contour features. First, a target filter is set up to extract the prominent contours; the contours are then matched using the partial Hausdorff distance, and the area and centroid of each matched contour pair are computed and used as the basis for registering the two kinds of images. Experiments show that the method achieves higher registration accuracy and overcomes the difficulty of feature-point mismatching, thus solving the registration problem between infrared and visible images under rigid transformations.
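As a rough illustration of the pipeline this abstract describes, the following Python sketch (not the authors' code; the Otsu thresholding, the area threshold, and the use of cv2.estimateAffinePartial2D for the rigid transform are my assumptions) extracts prominent contours, pairs them with a partial Hausdorff distance, and estimates a transform from the matched contour centroids.

```python
import cv2
import numpy as np

def partial_hausdorff(a, b, frac=0.8):
    """Directed partial Hausdorff distance: the frac-quantile of the
    nearest-neighbour distances from contour points a to contour points b."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.quantile(d, frac)

def prominent_contours(gray, min_area=200.0):
    """'Target filter': keep only contours whose area exceeds a threshold."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cnts, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2).astype(float) for c in cnts
            if cv2.contourArea(c) >= min_area]

def register(ir_gray, vis_gray):
    ir_c, vis_c = prominent_contours(ir_gray), prominent_contours(vis_gray)
    pairs = []
    for a in ir_c:
        # pair each IR contour with the visible-light contour closest to it
        j = min(range(len(vis_c)), key=lambda k: partial_hausdorff(a, vis_c[k]))
        pairs.append((a, vis_c[j]))
    # centroids of matched contour pairs serve as registration control points
    src = np.float32([a.mean(axis=0) for a, _ in pairs])
    dst = np.float32([b.mean(axis=0) for _, b in pairs])
    M, _ = cv2.estimateAffinePartial2D(src, dst)  # rigid/similarity transform
    return M
```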

2.
A new suboptimal search strategy suitable for feature selection in very high-dimensional remote sensing images (e.g., those acquired by hyperspectral sensors) is proposed. Each solution of the feature selection problem is represented as a binary string that indicates which features are selected and which are disregarded. In turn, each binary string corresponds to a point of a multidimensional binary space. Given a criterion function to evaluate the effectiveness of a selected solution, the proposed strategy is based on the search for constrained local extremes of such a function in the above-defined binary space. In particular, two different algorithms are presented that explore the space of solutions in different ways. These algorithms are compared with the classical sequential forward selection and sequential forward floating selection suboptimal techniques, using hyperspectral remote sensing images (acquired by the airborne visible/infrared imaging spectrometer [AVIRIS] sensor) as a data set. Experimental results point out the effectiveness of both algorithms, which can be regarded as valid alternatives to classical methods, as they allow interesting tradeoffs between the qualities of selected feature subsets and computational cost.
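A minimal Python sketch of one way to search for a constrained local extreme in the binary space of feature subsets; the swap-based neighbourhood, the toy data, and the correlation-based criterion J are placeholders of my own, not the paper's algorithms.

```python
import numpy as np

def constrained_local_search(J, n_features, n_selected, seed=None):
    """Hill-climb over binary strings of fixed weight n_selected,
    maximising the criterion function J."""
    rng = np.random.default_rng(seed)
    s = np.zeros(n_features, dtype=bool)
    s[rng.choice(n_features, n_selected, replace=False)] = True
    best = J(s)
    while True:
        best_cand, best_score = None, best
        # neighbours: swap one selected with one unselected feature,
        # so the subset size (the constraint) never changes
        for i in np.flatnonzero(s):
            for j in np.flatnonzero(~s):
                cand = s.copy()
                cand[i], cand[j] = False, True
                score = J(cand)
                if score > best_score:
                    best_cand, best_score = cand, score
        if best_cand is None:
            return s, best
        s, best = best_cand, best_score

# toy criterion on synthetic data, standing in for a hyperspectral
# band-selection criterion (e.g. a class-separability measure)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=100)
J = lambda s: abs(np.corrcoef(X[:, s].sum(axis=1), y)[0, 1])
subset, score = constrained_local_search(J, n_features=30, n_selected=5, seed=1)
```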

3.
This paper presents a new stereo feature matching method that extracts the disparity measure for the recovery of depth information in 2-D stereo images. In this method, a stereo pair of images are transformed row for row into strings carrying spatially varying Walsh coefficients as attributes. The significance of the information carried by the Walsh coefficients is expressed mathematically and through experimental evaluations. The choice of the Walsh coefficients in contrast to other orthogonal transform coefficients is a direct result of their computational simplicity and their interpretative meaning in terms of the information contained in the spatial domain. The string-to-string matching technique used to bring the two strings into correspondence integrates, into a unified process, both the feature detection and the feature matching processes. The uniqueness and the ordering constraints are explicitly integrated into this string-to-string matching technique. Both the issues of Gaussian filtering and the importance of enforcing the epipolar line constraint are addressed in view of the application of the proposed method. Experimental results are given and assessed in terms of both the accuracy in stereo matching and the ensuing computational requirements.
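The following sketch (details assumed, not taken from the paper) shows how each image row could be turned into a string of attributes carrying low-order Walsh-Hadamard coefficients computed over a sliding window.

```python
import numpy as np
from scipy.linalg import hadamard

def row_walsh_string(row, win=8, n_coeffs=4):
    """For every window position in an image row, return the first n_coeffs
    Walsh-Hadamard coefficients of the windowed intensities; these serve as
    the attributes carried by each symbol of the row's 'string'."""
    H = hadamard(win).astype(float)             # win must be a power of two
    attrs = []
    for x in range(len(row) - win + 1):
        w = row[x:x + win].astype(float)
        attrs.append((H @ w / win)[:n_coeffs])  # low-order Walsh coefficients
    return np.array(attrs)                      # shape: (positions, n_coeffs)

# the left/right row-strings would then be aligned by a string-to-string
# (dynamic-programming) matcher that enforces ordering and uniqueness,
# with the distance between attribute vectors as the local matching cost
left_attrs  = row_walsh_string(np.random.randint(0, 256, 128))
right_attrs = row_walsh_string(np.random.randint(0, 256, 128))
```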

4.
5.
6.
An image matching algorithm based on a perspective-invariant binary feature descriptor (total citations: 1; self-citations: 0; citations by others: 1)
To address the poor robustness to perspective transformations common in local-feature-based image matching algorithms, a new binary feature descriptor, PIBC (perspective invariant binary code), is proposed to improve the robustness of image matching to perspective transformations. First, FAST feature points are extracted on an image pyramid, and the Harris corner response is used to remove non-maximum points and edge responses. Then, by simulating the perspective transformations between camera views taken from different angles, a binary descriptor is generated for each FAST feature point over the images under different view transformations, so that the descriptor can describe the same feature point as seen from different viewpoints. Experimental results show that the algorithm improves the perspective invariance of the descriptor while keeping a time complexity similar to that of SURF.
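An illustrative Python sketch of the detection stage described above: FAST keypoints on an image pyramid, filtered by their Harris corner response. The thresholds, pyramid parameters, and OpenCV calls are my own choices, not the paper's.

```python
import cv2
import numpy as np

def fast_harris_keypoints(gray, levels=3, scale=0.75, harris_thresh=1e-4):
    """FAST keypoints on an image pyramid, kept only if their Harris corner
    response is strong (removes non-maximal and edge-like responses)."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps, cur, s = [], gray, 1.0
    for _ in range(levels):
        harris = cv2.cornerHarris(np.float32(cur), 2, 3, 0.04)
        for kp in fast.detect(cur, None):
            x, y = int(kp.pt[0]), int(kp.pt[1])
            if harris[y, x] > harris_thresh:          # keep strong corners only
                kps.append(cv2.KeyPoint(kp.pt[0] / s, kp.pt[1] / s, kp.size / s))
        cur = cv2.resize(cur, None, fx=scale, fy=scale)
        s *= scale
    return kps

# the PIBC descriptor itself would then warp the patch around each keypoint
# with a set of homographies that simulate different camera viewpoints and
# concatenate intensity-comparison bits computed in every simulated view
```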

7.
This paper is concerned with the problem of feature point registration and scene recognition from images under weak perspective transformations which are well approximated by affine transformations and under possible occlusion and/or appearance of new objects. It presents a set of local absolute affine invariants derived from the convex hull of scattered feature points (e.g., fiducial or marking points, corner points, inflection points, etc.) extracted from the image. The affine invariants are constructed from the areas of the triangles formed by connecting three vertices among a set of four consecutive vertices (quadruplets) of the convex hull, and hence do make direct use of the area invariance property associated with the affine transformation. Because they are locally constructed, they are very well suited to handle the occlusion and/or appearance of new objects. These invariants are used to establish the correspondences between the convex hull vertices of a test image with a reference image in order to undo the affine transformation between them. A point matching approach for recognition follows this. The time complexity for registering L feature points on the test image with N feature points of the reference image is of order O(NxL). The method has been tested on real indoor and outdoor images and performs well.
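A short sketch (construction details assumed) of the local affine invariant: for four consecutive convex-hull vertices, the ratio of the areas of two triangles drawn on them is unchanged by any affine map, because an affine transform scales every area by the same factor |det A|.

```python
import numpy as np
from scipy.spatial import ConvexHull

def tri_area(a, b, c):
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def hull_quadruplet_invariants(points):
    """One area-ratio invariant per quadruplet of consecutive hull vertices."""
    hull = points[ConvexHull(points).vertices]          # ordered hull vertices
    n = len(hull)
    inv = []
    for i in range(n):
        p0, p1, p2, p3 = (hull[(i + k) % n] for k in range(4))
        # an affine map scales every area by |det A|, so the ratio cancels
        inv.append(tri_area(p0, p1, p2) / tri_area(p1, p2, p3))
    return np.array(inv)

pts = np.random.default_rng(0).random((30, 2))
A = np.array([[1.3, 0.4], [-0.2, 0.9]])                 # an affine transform
inv_ref  = hull_quadruplet_invariants(pts)
inv_test = hull_quadruplet_invariants(pts @ A.T + [2.0, -1.0])
# inv_test holds the same values as inv_ref, up to a cyclic shift of the
# starting vertex, which is how hull-vertex correspondences are established
```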

8.
An effective preprocessing algorithm for panoramic image mosaicking (total citations: 2; self-citations: 2; citations by others: 0)
Preprocessing for panorama stitching mainly involves extracting features from the sample images and matching features between adjacent images. This paper first improves the Harris corner detection operator so that feature points can be extracted from the sample images accurately and assigned feature descriptors. A feature indexing algorithm based on wavelet coefficients is then proposed, enabling fast search for corresponding feature-point pairs and matching between two images. Experimental results show that the matched points obtained by the algorithm are accurate, the search is efficient, and seamless panorama stitching can be achieved.
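As a rough sketch of these preprocessing steps (not the paper's implementation; ORB descriptors and a brute-force matcher stand in for the improved descriptors and the wavelet-coefficient indexing described above):

```python
import cv2
import numpy as np

def match_pair(img1, img2, max_corners=500):
    """Harris-scored corners on both grayscale images, descriptor matching,
    and a RANSAC homography as the geometric relation used for stitching."""
    pts1 = cv2.goodFeaturesToTrack(img1, max_corners, 0.01, 10, useHarrisDetector=True)
    pts2 = cv2.goodFeaturesToTrack(img2, max_corners, 0.01, 10, useHarrisDetector=True)
    kp1 = [cv2.KeyPoint(float(x), float(y), 31) for x, y in pts1.reshape(-1, 2)]
    kp2 = [cv2.KeyPoint(float(x), float(y), 31) for x, y in pts2.reshape(-1, 2)]
    orb = cv2.ORB_create()                       # descriptor stand-in
    kp1, d1 = orb.compute(img1, kp1)
    kp2, d2 = orb.compute(img2, kp2)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```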

9.
10.
The uniform theory of diffraction (UTD) plus an imposed edge diffraction extension is used to predict the backscatter cross sections of dihedral corner reflectors which have right, obtuse, and acute included angles. UTD allows individual backscattering mechanisms of the dihedral corner reflectors to be identified and provides good agreement with experimental cross section measurements in the azimuthal plane. Multiply reflected and diffracted fields of up to third order are included in the analysis for both horizontal and vertical polarizations. The coefficients of the uniform theory of diffraction revert to Keller's original geometrical theory of diffraction (GTD) in far-field cross section analyses, but finite cross sections can be obtained everywhere by considering mutual cancellation of diffractions from parallel edges. Analytic calculations are performed using UTD coefficients; hence accuracy required in angular measurements is more critical as the distance increases. In particular, the common "far-field" approximation that all rays to the observation point are parallel is too gross of an approximation for the angular parameters in the UTD coefficients in the far field.

11.
The use of shape as a cue for indexing into pictorial databases has been traditionally based on global invariant statistics and deformable templates, on the one hand, and local edge correlation on the other. This paper proposes an intermediate approach based on a characterization of the symmetry in edge maps. The use of symmetry matching as a joint correlation measure between pairs of edge elements further constrains the comparison of edge maps. In addition, a natural organization of groups of symmetry into a hierarchy leads to a graph-based representation of relational structure of components of shape that allows for deformations by changing attributes of this relational graph. A graduated assignment graph matching algorithm is used to match symmetry structure in images to stored prototypes or sketches. The results of matching sketches and grey-scale images against a small database consisting of a variety of fish, planes, tools, etc., are promising.
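A compact sketch of generic graduated assignment graph matching (in the Gold-Rangarajan style), which the abstract uses to match symmetry structures; the parameters and the plain adjacency-matrix compatibility term are simplifications of mine.

```python
import numpy as np

def graduated_assignment(A1, A2, beta=0.5, beta_max=10.0, rate=1.075, sinkhorn_iters=30):
    """A1, A2: adjacency (edge-attribute) matrices of the two graphs.
    Returns a soft correspondence matrix that is sharpened toward a
    near-permutation as beta grows."""
    n1, n2 = len(A1), len(A2)
    M = np.full((n1, n2), 1.0 / n2)        # uniform initial soft matching
    while beta < beta_max:
        Q = A1 @ M @ A2.T                  # edge-compatibility "benefit" term
        M = np.exp(beta * Q)
        for _ in range(sinkhorn_iters):    # alternate row/column normalisation
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
        beta *= rate
    return M
```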

12.
13.
It is challenging to design a halftone image watermarking scheme that is robust against desynchronization attacks. In this paper, we propose a feature-based digital watermarking method for halftone images with low computational complexity, good visual quality and reasonable resistance toward desynchronization attacks. Firstly, the feature points are extracted from the host halftone image using a multi-scale Harris-Laplace detector, and the local feature regions (LFRs) are constructed according to the feature scale theory. Secondly, the discrete Fourier transform (DFT) is performed on the LFRs, and the embedding positions (DFT coefficients) are selected adaptively according to the magnitude spectrum information. Finally, the digital watermark is embedded into the LFRs by quantizing the magnitudes of the selected DFT coefficients. By binding the watermark with the geometrically invariant halftone image features, watermark detection can be done without synchronization error. Simulation results show that the proposed scheme is invisible and robust against common signal processing operations such as median filtering, sharpening, noise addition, and JPEG compression, as well as desynchronization attacks such as rotation, scaling, translation (RST), cropping, local random bending, and print-scan.
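An illustrative sketch of the embedding step only, with assumed parameters: watermark bits are embedded by quantizing the magnitudes of selected DFT coefficients of a local region (quantization index modulation). The fixed mid-frequency band below replaces the paper's adaptive, magnitude-based coefficient selection.

```python
import numpy as np

def embed_bits(region, bits, delta=8.0, band=(3, 8)):
    """Embed watermark bits by quantising DFT magnitudes of a local region."""
    F = np.fft.fft2(region.astype(float))
    mag, phase = np.abs(F), np.angle(F)
    # assumed selection rule: a fixed mid-frequency band, scanned row by row
    coords = [(r, c) for r in range(*band) for c in range(*band)][:len(bits)]
    for (r, c), b in zip(coords, bits):
        q = np.floor(mag[r, c] / delta)
        mag[r, c] = (q + (0.25 if b == 0 else 0.75)) * delta  # QIM step
        mag[-r, -c] = mag[r, c]      # keep conjugate symmetry so output stays real
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

# detection would re-extract the same coefficients and decide each bit from
# which quantisation offset (0.25*delta or 0.75*delta) the magnitude is closer to
```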

14.
Motion compensated discrete cosine transform (MCDCT) coding is an efficient image sequence coding technique. To further reduce the bit-rate of the quantized DCT coefficients while preserving visual quality, we propose an adaptive edge-based quadtree motion compensated discrete cosine transform coding scheme (EQDCT). In the proposed algorithm, the motion overhead information is encoded with a quadtree structure; non-edge blocks are encoded at a lower bit-rate, while edge blocks are encoded at a higher bit-rate. The edge blocks are further classified into four classes according to the orientations and locations of the edges, and each class selects a different set of DCT coefficients to be encoded. In this way, only a few DCT coefficients need to be preserved and encoded while the visual quality of the images is maintained. In the proposed EQDCT image sequence coding scheme, the average bit-rate of each frame is reduced to 0.072 bit/pixel and the average PSNR is 32.11 dB.
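A minimal per-block sketch of the idea (the block classification rule and the coefficient sets are assumptions, not the paper's exact classes): non-edge blocks keep only a few low-frequency DCT coefficients, while edge blocks keep a larger, orientation-dependent set.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, edge_thresh=30.0):
    """Classify an 8x8 block as edge / non-edge and keep a class-dependent
    subset of its DCT coefficients."""
    C = dctn(block.astype(float), norm='ortho')
    gy, gx = np.gradient(block.astype(float))
    is_edge = np.hypot(gx, gy).mean() > edge_thresh
    mask = np.zeros_like(C, dtype=bool)
    if not is_edge:
        mask[:2, :2] = True     # non-edge block: very few coefficients survive
    elif np.abs(gx).mean() > np.abs(gy).mean():
        mask[:2, :6] = True     # vertical edges: keep more horizontal-frequency terms
    else:
        mask[:6, :2] = True     # horizontal edges: keep more vertical-frequency terms
    # reconstruction from the retained coefficients only
    return idctn(np.where(mask, C, 0.0), norm='ortho'), is_edge
```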

15.
A Markov model for blind image separation by a mean-field EM algorithm (total citations: 1; self-citations: 0; citations by others: 1)
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.

16.
17.
18.
Wavelet multi-resolution analysis allows edges to be detected at different scales. However, the wavelet transform can only capture edge information in three directions: horizontal, vertical and diagonal. In addition, the extracted edges are discontinuous. A new edge detection method that addresses these problems is proposed in this paper. Firstly, the image is extended symmetrically by applying horizontal and vertical reflections. Secondly, a shear transform is applied to the extended image using various shear matrices. Thirdly, the edges of the sheared images are detected by means of the wavelet transform. The edges detected in different directions differ from and complement each other, so they are fused with a fusion rule. Finally, a threshold is applied to refine the edges. The proposed method works efficiently, improves the continuity of the detected edges, and is able to distinguish real edges from noise.
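A rough Python sketch of this pipeline under assumed details (shear factors, Haar wavelet, max-rule fusion, percentile threshold): extend the image symmetrically, shear it, detect edges from wavelet detail coefficients, warp the edge map back, and fuse the directional results.

```python
import cv2
import numpy as np
import pywt

def sheared_wavelet_edges(gray, shears=(-0.5, 0.0, 0.5), wavelet='haar'):
    h, w = gray.shape
    # 1) symmetric extension by horizontal/vertical reflection
    ext = np.pad(gray, ((h // 2, h // 2), (w // 2, w // 2)), mode='symmetric')
    fused = np.zeros(ext.shape, dtype=np.float32)
    for s in shears:
        # 2) shear the extended image
        M = np.float32([[1, s, 0], [0, 1, 0]])
        sheared = cv2.warpAffine(ext, M, ext.shape[::-1])
        # 3) edge strength from wavelet detail coefficients
        _, (cH, cV, _) = pywt.dwt2(sheared.astype(float), wavelet)
        edges = cv2.resize(np.hypot(cH, cV), ext.shape[::-1]).astype(np.float32)
        # 4) warp back and fuse the directional edge maps (max rule)
        Minv = cv2.invertAffineTransform(M)
        fused = np.maximum(fused, cv2.warpAffine(edges, Minv, ext.shape[::-1]))
    fused = fused[h // 2:h // 2 + h, w // 2:w // 2 + w]   # drop the extension
    # 5) refine with a simple threshold
    return fused > np.percentile(fused, 90)
```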

19.
An edge detection algorithm for high-resolution images combining spectral and scale features (total citations: 1; self-citations: 0; citations by others: 1)
High-resolution remote sensing images offer a highly detailed, multi-scale representation: while object edge information is expressed effectively, the fine geometric detail inside objects often appears as noise. An edge feature detection algorithm combining spectral dissimilarity with the wavelet transform is proposed, which overcomes the edge distortion introduced by the wavelet transform and effectively suppresses noise. A normalized spectral dissimilarity model is defined from the spectral angle principle and combined with the dyadic wavelet transform; the gradient magnitude of each band is weighted by the cosine of the gradient direction, the gradient magnitude and direction of the multispectral image are then computed with a vector field model, and after thinning, multi-level edge features from fine to coarse are obtained. Compared with the results of the wavelet transform and traditional detection operators, the experiments show that the algorithm uses spectral dissimilarity information to strengthen the edge response, keeps the edge contours undistorted at all scales, and localizes edge points accurately; the weighting highlights the dominant direction of the multi-band gradient and effectively suppresses the noise produced by fine geometric detail inside objects in high-resolution images.
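A minimal sketch of the spectral-dissimilarity part only (the formulation below is an assumption, not the paper's exact model): the spectral angle between neighbouring pixel vectors gives a normalized dissimilarity that can be combined with per-band gradient magnitudes to strengthen true multispectral edges.

```python
import numpy as np

def spectral_angle(u, v, eps=1e-12):
    cosang = (u * v).sum(-1) / (np.linalg.norm(u, axis=-1) *
                                np.linalg.norm(v, axis=-1) + eps)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def spectral_dissimilarity_edges(img):
    """img: H x W x B multispectral image; returns an edge-strength map."""
    # normalised spectral angle with the right and lower neighbour
    d_right = spectral_angle(img[:, :-1, :], img[:, 1:, :]) / (np.pi / 2)
    d_down  = spectral_angle(img[:-1, :, :], img[1:, :, :]) / (np.pi / 2)
    dissim = np.zeros(img.shape[:2])
    dissim[:, :-1] = np.maximum(dissim[:, :-1], d_right)
    dissim[:-1, :] = np.maximum(dissim[:-1, :], d_down)
    # per-band gradient magnitude summed over bands (stand-in for the paper's
    # direction-cosine weighting and vector-field gradient model)
    gy, gx, _ = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy).sum(axis=-1)
    return dissim * grad        # spectral dissimilarity boosts the edge response
```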

20.
Color quantization and processing by Fibonacci lattices (total citations: 1; self-citations: 0; citations by others: 1)
Color quantization is sampling of three-dimensional (3-D) color spaces (such as RGB or Lab) which results in a discrete subset of colors known as a color codebook or palette. It is extensively used for display, transfer, and storage of natural images in Internet-based applications, computer graphics, and animation. We propose a sampling scheme which provides a uniform quantization of the Lab space. The idea is based on several results from number theory and phyllotaxy. The sampling algorithm is highly systematic and allows easy design of universal (image-independent) color codebooks for a given set of parameters. The codebook structure allows fast quantization and ordered dither of color images. The display quality of images quantized by the proposed color codebooks is comparable with that of image-dependent quantizers. Most importantly, the quantized images are more amenable to the type of processing used for grayscale ones. Methods for processing grayscale images cannot be simply extended to color images because they rely on the fact that each gray-level is described by a single number and the fact that a relation of full order can be easily established on the set of those numbers. Color spaces (such as RGB or Lab) are, on the other hand, 3-D. The proposed color quantization, i.e., color space sampling and numbering of sampled points, makes methods for processing grayscale images extendible to color images. We illustrate possible processing of color images by first introducing the basic average and difference operations and then implementing edge detection and compression of color quantized images.
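A short sketch (the parameter choices are assumptions) of building a palette from a Fibonacci spiral lattice in the (a, b) chromaticity plane, replicated over a few lightness levels, and quantizing by nearest palette colour; the single ordered index returned per pixel is what makes grayscale-style processing applicable.

```python
import numpy as np

def fibonacci_lab_palette(points_per_level=64, L_levels=(25, 50, 75), c=5.0):
    """Universal (image-independent) Lab palette: a Fibonacci spiral lattice
    in the (a, b) plane, repeated for a few lightness values L."""
    golden = (1 + 5 ** 0.5) / 2
    k = np.arange(points_per_level)
    radius = c * np.sqrt(k)                       # spiral lattice in the a-b plane
    angle = 2 * np.pi * k * golden
    a, b = radius * np.cos(angle), radius * np.sin(angle)
    return np.array([[L, ai, bi] for L in L_levels for ai, bi in zip(a, b)])

def quantize_lab(img_lab, palette):
    """Map every pixel to the index of its nearest palette colour."""
    flat = img_lab.reshape(-1, 3).astype(float)
    d = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1).reshape(img_lab.shape[:2])
```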
