Similar Documents
20 similar documents found (search time: 15 ms)
1.
In this paper, we present a new information-theoretic approach to image segmentation. We cast the segmentation problem as the maximization of the mutual information between the region labels and the image pixel intensities, subject to a constraint on the total length of the region boundaries. We assume that the probability densities associated with the image pixel intensities within each region are completely unknown a priori, and we formulate the problem based on nonparametric density estimates. Due to the nonparametric structure, our method does not require the image regions to have a particular type of probability distribution and does not require the extraction and use of a particular statistic. We solve the information-theoretic optimization problem by deriving the associated gradient flows and applying curve evolution techniques. We use level-set methods to implement the resulting evolution. The experimental results based on both synthetic and real images demonstrate that the proposed technique can solve a variety of challenging image segmentation problems. Furthermore, our method, which does not require any training, performs as well as methods based on training.
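The core quantity in the abstract above, the mutual information between region labels and pixel intensities, can be sketched numerically. The snippet below is a minimal illustration using a joint-histogram plug-in estimator (the paper itself uses nonparametric kernel density estimates and curve evolution, not histograms); all names and parameters are illustrative.

```python
import numpy as np

def mutual_information(labels, intensities, bins=16):
    """Plug-in estimate of I(L; X) between a region-label map and pixel
    intensities from their joint histogram (a stand-in for the paper's
    nonparametric kernel density estimator)."""
    edges = np.linspace(intensities.min(), intensities.max(), bins + 1)[1:-1]
    x = np.digitize(intensities.ravel(), edges)
    l = labels.ravel()
    joint = np.zeros((l.max() + 1, bins))
    np.add.at(joint, (l, x), 1)
    p = joint / joint.sum()
    pl = p.sum(axis=1, keepdims=True)
    px = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pl @ px)[nz])).sum())

# Two regions with distinct intensity statistics: the true labels carry
# about ln(2) nats of information; randomly permuted labels carry almost none.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
lab = np.concatenate([np.zeros(500, int), np.ones(500, int)])
mi_good = mutual_information(lab, img)
mi_bad = mutual_information(rng.permutation(lab), img)
```

A segmentation energy of the kind described above would prefer the first labeling, since it maximizes this quantity subject to the boundary-length constraint.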

2.
3.
Inverse and approximation problem for two-dimensional fractal sets   (total citations: 1; self-citations: 0; others: 1)
The geometry of fractals is rich enough that they have extensively been used to model natural phenomena and images. Iterated function systems (IFS) theory provides a convenient way to describe and classify deterministic fractals in the form of a recursive definition. As a result, it is conceivable to develop image representation schemes based on the IFS parameters that correspond to a given fractal image. In this paper, we consider two distinct problems: an inverse problem and an approximation problem. The inverse problem involves finding the IFS parameters of a signal that is exactly generated via an IFS. We make use of the wavelet transform and of the image moments to solve the inverse problem. The approximation problem involves finding a fractal IFS-generated image whose moments match, either exactly or in a mean squared error sense, a range of moments of the original image. The approximating measures are generated by an IFS model of a special form and provide a general basis for the approximation of arbitrary images. Experimental results verifying our approach will be presented.
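A minimal sketch of the forward direction, generating a fractal from IFS parameters, helps fix ideas before the inverse problem. The "chaos game" below renders the Sierpinski attractor from three contractive affine maps; the maps and point counts are illustrative, not taken from the paper.

```python
import numpy as np

# Three contractive maps w_i(x) = s*x + b_i; iterating randomly chosen maps
# drives any starting point onto the attractor (here, the Sierpinski triangle
# with vertices (0,0), (1,0), (0.5,1)).
maps = [
    (0.5, np.array([0.0, 0.0])),
    (0.5, np.array([0.5, 0.0])),
    (0.5, np.array([0.25, 0.5])),
]

def ifs_points(n, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([0.1, 0.1])
    pts = np.empty((n + 50, 2))
    for k in range(n + 50):
        s, b = maps[rng.integers(len(maps))]
        x = s * x + b
        pts[k] = x
    return pts[50:]   # discard the transient before the orbit reaches the attractor

pts = ifs_points(20000)
```

The inverse problem described above asks the opposite question: given only the rendered point cloud (or image), recover the three `(s, b)` pairs, which the paper attacks through wavelet transforms and image moments.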

4.
To address the low recognition rate of SAR images caused by coherent speckle noise and post-earthquake deformation, a new affine- and deformation-invariant feature, the heat kernel feature, is proposed and applied to SAR image target recognition. First, a generalized kernel fuzzy C-means method segments the SAR image and extracts the target shape. Next, the target shape is Delaunay-triangulated, the Laplace-Beltrami operator is discretized with cotangent weights, and the heat kernel feature at each point is computed from the eigenvalues and eigenvectors of the discretized operator. Then, point-to-point heat kernel distances are computed with a spectral distance formula and converted into a distance distribution that represents the heat kernel feature of the target shape. Finally, similarity between images is measured under the L1 criterion to obtain the recognition result. Experiments show that, compared with the classical Hu invariant-moment method, this method achieves a higher recognition rate on SAR images that have undergone affine transformation or deformation; heat-kernel-based SAR image recognition is therefore a more effective approach.
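The heat-kernel computation can be sketched with a plain graph Laplacian standing in for the cotangent-weight discretization of the Laplace-Beltrami operator on a Delaunay triangulation; the graph and time samples below are illustrative.

```python
import numpy as np

def heat_kernel_signature(W, ts):
    """Per-vertex heat-kernel signature from a symmetric weight matrix W:
        HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)**2,
    where (lambda_i, phi_i) are eigenpairs of the graph Laplacian L = D - W.
    (A stand-in for the cotangent-weight Laplace-Beltrami discretization.)"""
    L = np.diag(W.sum(axis=1)) - W
    lam, phi = np.linalg.eigh(L)
    return np.stack([(np.exp(-lam * t) * phi**2).sum(axis=1) for t in ts], axis=1)

# A 4-cycle graph: every vertex is symmetric, so all signatures coincide,
# which is exactly the isometry-invariance that makes the feature useful.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
hks = heat_kernel_signature(W, ts=[0.1, 1.0, 10.0])
```

On a real target shape, the per-point signatures would then feed the spectral-distance distribution described above.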

5.
Markov-type models characterize the correlation among neighboring pixels in an image in many image processing applications. Specifically, a wide-sense Markov model, which is defined in terms of minimum linear mean-square error estimates, is applicable to image restoration, image compression, and texture classification and segmentation. In this work, we address first-order (auto-regressive) wide-sense Markov images with a separable autocorrelation function. We explore the effect of sampling in such images on their statistical features, such as histogram and the autocorrelation function. We show that the first-order wide-sense Markov property is preserved, and use this result to prove that, under mild conditions, the histogram of images that obey this model is invariant under sampling. Furthermore, we develop relations between the statistics of the image and its sampled version, in terms of moments and generating model noise characteristics. Motivated by these results, we propose a new method for texture interpolation, based on an orthogonal decomposition model for textures. In addition, we develop a novel fidelity criterion for texture reconstruction, which is based on the decomposition of an image texture into its deterministic and stochastic components. Experiments with natural texture images, as well as a subjective forced-choice test, demonstrate the advantages of the proposed interpolation method over presently available interpolation methods, both in terms of visual appearance and in terms of our novel fidelity criterion.
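A first-order wide-sense Markov image with separable autocorrelation, as studied above, can be synthesized by two causal AR(1) filtering passes. The sketch below also checks the abstract's sampling-invariance point: the 2:1 subsampled image is again AR(1), with lag-1 correlation rho**2. Parameters are illustrative.

```python
import numpy as np

def ar1_texture(shape, rho_v, rho_h, seed=0):
    """Synthesize a first-order wide-sense Markov image with separable
    autocorrelation r(k, l) ~ rho_v**|k| * rho_h**|l| by filtering unit-variance
    white noise causally down the rows and then across the columns."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal(shape)
    for i in range(1, shape[0]):          # vertical AR(1) recursion
        img[i] = rho_v * img[i - 1] + np.sqrt(1 - rho_v**2) * img[i]
    for j in range(1, shape[1]):          # horizontal AR(1) recursion
        img[:, j] = rho_h * img[:, j - 1] + np.sqrt(1 - rho_h**2) * img[:, j]
    return img

tex = ar1_texture((256, 256), rho_v=0.9, rho_h=0.7)
# Empirical lag-1 correlations approximate the model parameters...
cv = np.corrcoef(tex[:-1].ravel(), tex[1:].ravel())[0, 1]
ch = np.corrcoef(tex[:, :-1].ravel(), tex[:, 1:].ravel())[0, 1]
# ...and 2:1 subsampling squares the vertical correlation coefficient.
sub = tex[::2, ::2]
cs = np.corrcoef(sub[:-1].ravel(), sub[1:].ravel())[0, 1]
```

The variance-matched recursion keeps the process stationary from the first row, which is what makes the sampled statistics directly comparable to the original's.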

6.
Precision measurement of quartz pendulum plate parameters based on visual images   (total citations: 2; self-citations: 0; others: 2)
To avoid scratching quartz pendulum plates during inspection and to improve inspection speed and accuracy, a non-contact method is proposed that measures the planar geometric parameters of quartz pendulum plates from visual images. The vision system captures an image of the plate; the Sobel operator locates initial edge positions at single-pixel accuracy; an improved Zernike-moment sub-pixel localization algorithm then locates the target edges precisely; finally, least-squares fitting of the edge points yields the plate's planar geometric parameters. Experimental results show that the method is stable and highly accurate, with localization accuracy better than 0.1 pixel, enabling precision measurement of the planar geometric parameters of quartz pendulum plates.
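The final least-squares fitting step can be sketched with the classic algebraic (Kasa) circle fit; the Sobel/Zernike edge-localization stages are replaced here by synthetic edge points, so everything below is illustrative.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: rewrite
    (x-cx)^2 + (y-cy)^2 = r^2 as the linear system
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2
    and solve it for (cx, cy, r) by ordinary least squares."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Noisy "edge points" on a circle of radius 5 centred at (2, -3),
# standing in for sub-pixel edge locations from the vision system.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
x = 2 + 5 * np.cos(t) + rng.normal(0, 0.02, t.size)
y = -3 + 5 * np.sin(t) + rng.normal(0, 0.02, t.size)
cx, cy, r = fit_circle(x, y)
```

With sub-pixel edge noise this linear fit already recovers the geometric parameters to well under a tenth of a pixel, consistent with the accuracy figure quoted above.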

7.
Orthogonal rotation-invariant moments for digital image processing   (total citations: 2; self-citations: 0; others: 2)
Orthogonal rotation-invariant moments (ORIMs), such as Zernike moments, are introduced and defined on a continuous unit disk and have been proven powerful tools in optics applications. These moments have also been digitized for applications in digital image processing. Unfortunately, digitization compromises the orthogonality of the moments and, therefore, digital ORIMs are incapable of representing subtle details in images and cannot accurately reconstruct images. Typical approaches to alleviate the digitization artifact can be divided into two categories: 1) careful selection of a set of pixels as close approximation to the unit disk and using numerical integration to determine the ORIM values, and 2) representing pixels using circular shapes such that they resemble that of the unit disk and then calculating ORIMs in polar space. These improvements still fall short of preserving the orthogonality of the ORIMs. In this paper, in contrast to the previous methods, we propose a different approach of using numerical optimization techniques to improve the orthogonality. We prove that with the improved orthogonality, image reconstruction becomes more accurate. Our simulation results also show that the optimized digital ORIMs can accurately reconstruct images and can represent subtle image details.
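A naive digital Zernike moment, computed by sampling the basis at pixel centers, can be sketched as below; this is exactly the plain digitization whose orthogonality loss the paper addresses. The sketch demonstrates the rotation invariance of the moment magnitude under an exact 90-degree grid rotation; the degrees (n, m) and image size are illustrative.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Discrete Zernike moment Z_{n,m} over the unit disk inscribed in a
    square image, by naive pixel-center sampling of the continuous basis
    V_{n,m}(rho, theta) = R_{n,m}(rho) * exp(-i*m*theta)."""
    N = img.shape[0]
    c = np.arange(N) - (N - 1) / 2.0
    X, Y = np.meshgrid(c, c)
    rho = np.hypot(X, Y) / ((N - 1) / 2.0)
    theta = np.arctan2(Y, X)
    mask = rho <= 1.0
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):      # standard radial polynomial
        R += ((-1) ** s * factorial(n - s)
              / (factorial(s) * factorial((n + abs(m)) // 2 - s)
                 * factorial((n - abs(m)) // 2 - s))) * rho ** (n - 2 * s)
    V = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * (img[mask] * V[mask]).sum() * (2.0 / (N - 1)) ** 2

# Under rotation by angle a, Z_{n,m} picks up a phase exp(-i*m*a), so its
# magnitude is invariant; a 90-degree grid rotation realizes this exactly.
rng = np.random.default_rng(2)
img = rng.random((65, 65))
z = zernike_moment(img, 3, 1)
z_rot = zernike_moment(np.rot90(img), 3, 1)
```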

8.
In light-limited situations, camera motion blur is one of the prime causes of poor image quality. Recovering the blur kernel and latent image from the blurred observation is an inherently ill-posed problem. In this paper, we introduce a hand-held multispectral camera to capture a pair of blurred image and Near-InfraRed (NIR) flash image simultaneously and analyze the correlation between the pair of images. To utilize the high-frequency details of the scene captured by the NIR-flash image, we exploit the NIR gradient constraint as a new type of image regularization, and integrate it into a Maximum-A-Posteriori (MAP) problem to iteratively perform the kernel estimation and image restoration. We demonstrate our method on synthetic and real images with both spatially invariant and spatially varying blur. The experiments strongly support the effectiveness of our method in providing both accurate kernel estimation and a superior latent image with more details and fewer ringing artifacts.

9.
In this paper, we address a complex image registration issue arising when the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images modifies locally intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast-enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weighs the two mixture components. The registration problem is formulated both as an energy minimization problem and as a maximum a posteriori estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated simultaneously, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms.
We also conduct an evaluation of our model on simulated medical data and show its ability to take into account spatial variations of intensity dependencies while keeping a good registration accuracy.

10.
The adaptive bases algorithm for intensity-based nonrigid image registration   (total citations: 11; self-citations: 0; others: 11)
Nonrigid registration of medical images is important for a number of applications such as the creation of population averages, atlas-based segmentation, or geometric correction of functional magnetic resonance imaging (fMRI) images, to name a few. In recent years, a number of methods have been proposed to solve this problem, one class of which involves maximizing a mutual information (MI)-based objective function over a regular grid of splines. This approach has produced good results but its computational complexity is proportional to the compliance of the transformation required to register the smallest structures in the image. Here, we propose a method that permits the spatial adaptation of the transformation's compliance. This spatial adaptation allows us to reduce the number of degrees of freedom in the overall transformation, thus speeding up the process and improving its convergence properties. To develop this method, we introduce several novelties: 1) we rely on radially symmetric basis functions rather than B-splines traditionally used to model the deformation field; 2) we propose a metric to identify regions that are poorly registered and over which the transformation needs to be improved; 3) we partition the global registration problem into several smaller ones; and 4) we introduce a new constraint scheme that allows us to produce transformations that are topologically correct. We compare the approach we propose to more traditional ones and show that our new algorithm compares favorably to those in current use.
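The deformation model in point 1), a field built from radially symmetric basis functions, can be sketched as below, with a Gaussian bump standing in for whichever radial basis the method actually uses; centers, coefficients, and sigma are illustrative.

```python
import numpy as np

def rbf_displacement(points, centers, coeffs, sigma):
    """Evaluate a deformation field modelled as a sum of radially symmetric
    basis functions (Gaussian bumps here, for illustration):
        u(x) = sum_k c_k * exp(-||x - mu_k||^2 / (2 * sigma^2)).
    Adding or refining centers only where registration is poor is what gives
    the spatially adaptive compliance described in the abstract."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ coeffs

# One centre pushing points to the right; its influence decays with distance.
centers = np.array([[0.0, 0.0]])
coeffs = np.array([[1.0, 0.0]])          # displacement vector carried by the bump
pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
u = rbf_displacement(pts, centers, coeffs, sigma=1.0)
```

Because each basis function has local influence, degrees of freedom can be concentrated in poorly registered regions instead of a uniform B-spline grid.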

11.
Tagged magnetic resonance imaging (MRI) is unique in its ability to noninvasively image the motion and deformation of the heart in vivo, but one of the fundamental reasons limiting its use in the clinical environment is the absence of automated tools to derive clinically useful information from tagged MR images. In this paper, we present a novel and fully automated technique based on nonrigid image registration using multilevel free-form deformations (MFFDs) for the analysis of myocardial motion using tagged MRI. The novel aspect of our technique is its integrated nature for tag localization and deformation field reconstruction using image registration and voxel based similarity measures. To extract the motion field within the myocardium during systole we register a sequence of images taken during systole to a set of reference images taken at end-diastole, maximizing the normalized mutual information between the images. We use both short-axis and long-axis images of the heart to estimate the full four-dimensional motion field within the myocardium. We also present validation results from data acquired from twelve volunteers.
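The similarity measure being maximized, normalized mutual information, can be sketched from a joint histogram. The bin count and test images are illustrative, and this plug-in estimator is only a stand-in for a production registration implementation.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Normalized mutual information NMI(A, B) = (H(A) + H(B)) / H(A, B),
    estimated from the joint gray-level histogram. NMI = 2 for identical
    images and approaches 1 as the images become independent."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -(q[q > 0] * np.log(q[q > 0])).sum()   # Shannon entropy
    return (h(pa) + h(pb)) / h(p)

rng = np.random.default_rng(3)
img = rng.random((64, 64))
nmi_self = normalized_mutual_information(img, img)              # perfect alignment
nmi_rand = normalized_mutual_information(img, rng.random((64, 64)))
```

A registration loop of the kind described above would adjust the free-form deformation parameters to push this score toward its aligned maximum.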

12.
Otsu’s method is a classic thresholding approach in image segmentation. While the two-dimensional (2D) Otsu’s method performs better than the original one in segmenting images corrupted by noise, it is sensitive to salt-and-pepper noise. In order to solve this problem, we present a robust 2D Otsu’s thresholding method in this paper. Our method builds the 2D histogram from an image smoothed by both median and average filters, in contrast to the traditional method, which uses the averaged image only. The optimal threshold vector is then determined with two one-dimensional searches along the two dimensions of the 2D histogram. In addition, we introduce a region post-processing step to deal with noise and edge pixels. Compared with the traditional 2D Otsu’s method, our method improves robustness to salt-and-pepper and Gaussian noise significantly. Experimental results on both synthetic and real images validate the effectiveness of the proposed method (MAOTSU_2D).
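The one-dimensional Otsu criterion that the 2D variant extends can be sketched directly: pick the threshold maximizing the between-class variance of the gray-level histogram. The bimodal test data and bin count are illustrative.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Classic 1-D Otsu: maximize the between-class variance
    sigma_b^2(k) = (mu_T * w0(k) - mu(k))^2 / (w0(k) * (1 - w0(k)))
    over candidate thresholds k, computed from cumulative histogram sums."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                      # class-0 probability mass
    mids = (edges[:-1] + edges[1:]) / 2
    mu = np.cumsum(p * mids)               # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0     # empty classes score zero
    return mids[np.argmax(sigma_b)]

# Bimodal data with modes at 0.2 and 0.8: the threshold lands between them.
rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.8, 0.05, 5000)])
t = otsu_threshold(img)
```

The 2D variants discussed above apply the same criterion to a joint histogram of pixel value and local (filtered) average, which is what the robust method builds from median- and mean-filtered images.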

13.
This study explores a novel approach to reconstructing multi-gray-level images based on a circular-block reconstruction method using two exact and fast moment families: Zernike (CBR-EZM) and pseudo-Zernike (CBR-EPZM). An image is first divided into a set of sub-images, which are then reconstructed independently. We also introduce the Chamfer distance (CD) to capitalize on a discrete distance instead of the Euclidean one; combining our methods with CD leads to the CBR-EZM-CD and CBR-EPZM-CD methods. Image partitioning offers significant advantages, but an undesirable circular blocking effect can occur. To mitigate this effect, we add an overlapping feature to our methods, leading to OCBR-EZM-CD and OCBR-EPZM-CD, by exploiting neighborhood information of the circular blocks. The main motivation of this novel approach is to explore new applications of Zernike and pseudo-Zernike moments. One such field is feature extraction for pattern recognition: Zernike and pseudo-Zernike moments are well known to capture only global features, but thanks to circular-block reconstruction, these moments can now be used to extract local features as well.
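The Chamfer distance mentioned above is typically computed with a two-pass distance transform; below is a sketch with the classic 3-4 weights (the specific weights and test mask are illustrative, not taken from the paper).

```python
import numpy as np

def chamfer_34(mask):
    """Two-pass 3-4 chamfer distance transform: distance, in chamfer units
    (3 per axial step, 4 per diagonal step), from each pixel to the nearest
    feature pixel (mask == True). A forward raster scan propagates from the
    top-left; a backward scan completes the remaining directions."""
    INF = 10 ** 6
    d = np.where(mask, 0, INF).astype(np.int64)
    h, w = d.shape
    for i in range(h):                     # forward pass
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 3)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + 4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + 4)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 3)
    for i in range(h - 1, -1, -1):         # backward pass
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 3)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + 4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + 4)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 3)
    return d

mask = np.zeros((7, 7), bool)
mask[3, 3] = True                          # single feature pixel at the centre
d = chamfer_34(mask)
```

Integer chamfer weights approximate Euclidean distance while needing only two raster scans, which is the discrete-distance advantage the abstract exploits.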

14.
This paper presents an accelerated stitching method for images that differ by rotation, scaling, or illumination. Harris feature points are extracted with multi-resolution analysis and feature descriptor vectors are constructed; principal component analysis then reduces the descriptor dimensionality, without losing feature information, to cut the feature-matching time. Experiments show that the method stitches images effectively.
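The PCA dimensionality-reduction step can be sketched with synthetic descriptors (the Harris/multi-resolution extraction is not reproduced); the descriptor dimensions and counts below are illustrative.

```python
import numpy as np

def pca_reduce(desc, k):
    """Project feature descriptors onto their top-k principal components,
    cutting per-match cost while retaining almost all of the variance."""
    c = desc - desc.mean(axis=0)
    _, s, vt = np.linalg.svd(c, full_matrices=False)
    return c @ vt[:k].T, s ** 2 / (len(desc) - 1)   # projections, component variances

# 500 synthetic 128-D descriptors whose variance lives in ~10 directions.
rng = np.random.default_rng(8)
basis = rng.standard_normal((10, 128))
desc = rng.standard_normal((500, 10)) @ basis + 0.01 * rng.standard_normal((500, 128))
low, var = pca_reduce(desc, 10)
```

Matching in the 10-D projected space is roughly an order of magnitude cheaper per comparison than in the original 128-D space, which is the speed-up the abstract targets.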

15.
Because the imaging spectra of infrared and visible-light images differ, there is a large modal gap between the two. Existing methods use image conversion to bridge this gap, but they usually fail to exploit the complete information in the images, which leads to unstable cross-modal person re-identification results. To solve this problem, we propose a new visible–infrared person re-identification method, the dual-path image pair joint discriminant model (DPJD), which simultaneously optimizes intra-class and inter-class distances and supervises the network to learn discriminative feature representations. We generate images of different modalities for the samples and compose same-modality and cross-modality image pairs separately, overcoming alignment inconsistencies. In addition, we propose a discriminant module based on dual paths (DMDP) to improve the generation quality and discrimination accuracy of image pairs. Experiments on the two benchmark datasets SYSU-MM01 and RegDB demonstrate its effectiveness.

16.
An unsupervised stochastic model-based approach to image segmentation is described, and some of its properties investigated. In this approach, the problem of model parameter estimation is formulated as a problem of parameter estimation from incomplete data, and the expectation-maximization (EM) algorithm is used to determine a maximum-likelihood (ML) estimate. Previously, the use of the EM algorithm in this application has encountered difficulties since an analytical expression for the conditional expectations required in the EM procedure is generally unavailable, except for the simplest models. In this paper, two solutions are proposed to solve this problem: a Monte Carlo scheme and a scheme related to Besag's (1986) iterated conditional mode (ICM) method. Both schemes make use of Markov random-field modeling assumptions. Examples are provided to illustrate the implementation of the EM algorithm for several general classes of image models. Experimental results on both synthetic and real images are provided.
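The EM iteration is easiest to see in a case where the E-step is analytic: a two-component 1-D Gaussian mixture, sketched below. For the Markov random-field image models discussed above this conditional expectation is intractable, which is precisely why the paper introduces Monte Carlo and ICM approximations; the data and initialization here are illustrative.

```python
import numpy as np

def em_gmm2(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture: alternate the analytic
    E-step (posterior responsibilities) and M-step (weighted ML updates)."""
    mu = np.array([x.min(), x.max()])      # crude but effective initialization
    sig = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        # (the 1/sqrt(2*pi) constant cancels in the normalization).
        d = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * sig ** 2)) / sig
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return pi, mu, sig

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 3000), rng.normal(5.0, 1.0, 3000)])
pi, mu, sig = em_gmm2(x)
```

In the image-segmentation setting of the abstract, the label field plays the role of the hidden component indicator, but its MRF coupling is what destroys the closed-form E-step.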

17.
A level-set approach to image blending   (total citations: 1; self-citations: 0; others: 1)
This paper presents a novel method for blending images. Image blending refers to the process of creating a set of discrete samples of a continuous, one-parameter family of images that connects a pair of input images. Image blending has uses in a variety of computer graphics and image processing applications. In particular, it can be used for image morphing, which is a method for creating video streams that depict transformations of objects in scenes based solely on pairs of images and sets of user-defined fiducial points. Image blending also has applications for video compression and image-based rendering. The proposed method for image blending relies on the progressive minimization of a difference metric which compares the level sets between two images. This strategy results in an image blend which is the solution of a pair of coupled, nonlinear, first-order partial differential equations that model multidimensional level-set propagations. When compared to interpolation, this method produces more natural appearances of motion because it manipulates the shapes of image contours rather than simply interpolating intensity values. This strategy results in a process that has the qualitative property of deforming greyscale objects in images rather than producing a simple fade from one object to another. This paper presents the mathematics that underlie this new method, a numerical implementation, and results on real images that demonstrate its effectiveness.

18.
Unified restoration of multi-band images for target detection, with experimental validation   (total citations: 1; self-citations: 0; others: 1)
To restore and sharpen multi-band images acquired during target detection, a practical, novel unified restoration and correction method is proposed for multi-band target-detection images from missile-, aircraft-, or satellite-borne platforms. The method requires no prior knowledge of the degradation process or degradation model. Using only the information in the spectral image, it constructs a minimization criterion for the point spread function with non-negativity and least-squares spatial-smoothness constraints, solves for the point spread function by optimization, and then obtains a sharp image by non-blind image restoration. The method was tested on a large number of real images. Experimental results show that it overcomes the shortcomings of current restoration methods, which lack generality, restore simulated images well but real images poorly, and are time-consuming. It effectively restores degraded infrared, visible-light, millimeter-wave, and other multi-band images, needs no prior knowledge, and produces a sharp image from a single blurred input frame, providing an effective image restoration method for target-detection systems.
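The non-negative, smoothness-constrained least-squares idea for the point spread function can be sketched in one dimension, assuming SciPy is available. Here the sharp signal is treated as known (one half of an alternating blind scheme), so this only illustrates the constrained criterion, not the paper's full blind method; all parameters are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_kernel(sharp, blurred, klen, lam=0.01):
    """Estimate a nonnegative 1-D blur kernel k by minimizing
        ||A k - blurred||^2 + lam * ||D k||^2   subject to k >= 0,
    where A performs circular convolution of the sharp signal with k and
    D is a second-difference smoothness operator. The penalty is folded
    into an augmented least-squares system solved by NNLS."""
    n = len(blurred)
    A = np.column_stack([np.roll(sharp, s) for s in range(klen)])
    D = np.diff(np.eye(klen), 2, axis=0)
    A_aug = np.vstack([A, np.sqrt(lam) * D])
    b_aug = np.concatenate([blurred, np.zeros(D.shape[0])])
    k, _ = nnls(A_aug, b_aug)
    return k

# Synthetic test: blur a known signal with a 3-tap kernel, then recover it.
rng = np.random.default_rng(6)
sharp = rng.standard_normal(200)
true_k = np.array([0.2, 0.5, 0.3])
blurred = sum(true_k[s] * np.roll(sharp, s) for s in range(3))
k_est = estimate_kernel(sharp, blurred, klen=3)
```

In the blind setting of the abstract, a step of this form for the PSF would alternate with a non-blind restoration step for the image.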

19.
Fractional-order anisotropic diffusion for image denoising   (total citations: 8; self-citations: 0; others: 8)
This paper introduces a new class of fractional-order anisotropic diffusion equations for noise removal. These equations are the Euler-Lagrange equations of a cost functional which is an increasing function of the absolute value of the fractional derivative of the image intensity function, so the proposed equations can be seen as generalizations of second-order and fourth-order anisotropic diffusion equations. We use the discrete Fourier transform to implement the numerical algorithm and give an iterative scheme in the frequency domain. One important aspect of the algorithm is that it treats the input image as periodic; to avoid the resulting boundary artifacts, we use a folded algorithm that extends the image symmetrically about its borders. Finally, we present various numerical results on denoising real images. Experiments show that the proposed fractional-order anisotropic diffusion equations yield good visual effects and better signal-to-noise ratios.
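The frequency-domain machinery, a fractional power of the Laplacian applied through the DFT with symmetric ("folded") extension to avoid periodicity seams, can be sketched in its linear, isotropic form. The paper's equations are nonlinear and anisotropic; this sketch with illustrative parameters only shows the fractional exponent and the folding trick.

```python
import numpy as np

def fractional_diffusion(img, alpha, dt, steps):
    """Linear fractional-order diffusion u_t = -(-Laplacian)^alpha u,
    integrated exactly in the frequency domain: the DFT is multiplied by
    exp(-dt * steps * |omega|**(2*alpha)). The image is first extended
    symmetrically about its borders (the 'folded' trick), so the DFT's
    implicit periodicity does not create seam artifacts."""
    h, w = img.shape
    ext = np.block([[img, img[:, ::-1]],
                    [img[::-1, :], img[::-1, ::-1]]])
    wy = np.fft.fftfreq(2 * h) * 2 * np.pi
    wx = np.fft.fftfreq(2 * w) * 2 * np.pi
    om = np.sqrt(wy[:, None] ** 2 + wx[None, :] ** 2)
    F = np.fft.fft2(ext) * np.exp(-dt * steps * om ** (2 * alpha))
    return np.real(np.fft.ifft2(F))[:h, :w]

# Denoise a noisy square with a fractional exponent between the classical
# second-order (alpha = 1) and fourth-order (alpha = 2) regimes.
rng = np.random.default_rng(7)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0, 0.2, clean.shape)
den = fractional_diffusion(noisy, alpha=0.75, dt=0.2, steps=5)
```

The exponent `alpha` interpolates continuously between the second- and fourth-order diffusions the abstract generalizes.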

20.
Techniques for fast image retrieval over large databases have attracted considerable attention due to the rapid growth of web images. One promising way to accelerate image search is to use hashing technologies, which represent images by compact binary codewords. In this way, the similarity between images can be efficiently measured in terms of the Hamming distance between their corresponding binary codes. Although plenty of methods on generating hash codes have been proposed in recent years, there are still two key points that need to be improved: 1) how to precisely preserve the similarity structure of the original data and 2) how to obtain the hash codes of previously unseen data. In this paper, we propose our spline regression hashing method, in which both the local and global data similarity structures are exploited. To better capture the local manifold structure, we introduce splines developed in Sobolev space to find the local data mapping function. Furthermore, our framework simultaneously learns the hash codes of the training data and the hash function for the unseen data, which solves the out-of-sample problem. Extensive experiments conducted on real image datasets consisting of over one million images show that our proposed method outperforms the state-of-the-art techniques.
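The Hamming-distance comparison that makes hashing-based retrieval fast can be sketched with packed binary codes: XOR, then popcount. Code length and contents below are illustrative.

```python
import numpy as np

def hamming(codes_a, codes_b):
    """Pairwise Hamming distances between binary codes packed as uint8
    bytes: XOR the byte arrays, then count the set bits. Returns a
    (len(codes_a), len(codes_b)) distance matrix."""
    x = np.bitwise_xor(codes_a[:, None, :], codes_b[None, :, :])
    return np.unpackbits(x, axis=-1).sum(axis=-1)

# One 32-bit query code and a tiny two-entry database: an identical code
# and a copy with its first three bits flipped.
bits_q = np.array([[1, 0, 1, 1] * 8], dtype=np.uint8)
bits_near = bits_q.copy()
bits_far = bits_q.copy()
bits_far[0, :3] ^= 1                       # flip 3 bits
q = np.packbits(bits_q, axis=-1)           # 32 bits -> 4 bytes
db = np.packbits(np.vstack([bits_near, bits_far]), axis=-1)
d = hamming(q, db)
```

Because each comparison is a handful of XOR and popcount operations regardless of image content, scanning millions of codes stays cheap, which is the premise of the hashing approach described above.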
