20 similar documents retrieved (search time: 15 ms)
1.
Image Fusion for Enhanced Visualization: A Variational Approach (total citations: 3; self-citations: 0; by others: 3)
Gemma Piella 《International Journal of Computer Vision》2009,83(1):1-11
We present a variational model to perform the fusion of an arbitrary number of images while preserving the salient information and enhancing the contrast for visualization. We propose to use the structure tensor to simultaneously describe the geometry of all the inputs. The basic idea is that the fused image should have a structure tensor which approximates the structure tensor obtained from the multiple inputs. At the same time, the fused image should appear ‘natural’ and ‘sharp’ to a human interpreter. We therefore propose to combine the geometry merging of the inputs with perceptual enhancement and intensity correction. This is performed through a functional minimization approach which implicitly takes into account a set of human vision characteristics.
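As a rough illustration of the geometry-merging ingredient, the sketch below accumulates a joint structure tensor over several grayscale inputs with numpy and scipy. It is a minimal sketch of one step only, not Piella's variational minimization, and the function name is ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_structure_tensor(images, sigma=2.0):
    """Accumulate a smoothed structure tensor over all input images.

    images: iterable of 2-D grayscale arrays of identical shape.
    Returns the per-pixel tensor components (Jxx, Jxy, Jyy).
    """
    Jxx = Jxy = Jyy = 0.0
    for img in images:
        gy, gx = np.gradient(img.astype(float))   # image gradients
        Jxx = Jxx + gaussian_filter(gx * gx, sigma)
        Jxy = Jxy + gaussian_filter(gx * gy, sigma)
        Jyy = Jyy + gaussian_filter(gy * gy, sigma)
    return Jxx, Jxy, Jyy
```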
2.
Perceptual grouping of segmented regions in color images (total citations: 3; self-citations: 0; by others: 3)
Image segmentation is often the first yet crucial step of an image understanding system. However, general-purpose image segmentation algorithms that do not rely on specific object models still cannot produce perceptually coherent segmentations at a level comparable to humans. Over-segmentation and under-segmentation have plagued the research community in spite of many significant advances in the field. Therefore, grouping of segmented regions plays a significant role in bridging image segmentation and high-level image understanding. In this paper, we focus on non-purposive grouping (NPG), which is built on general expectations of a perceptually desirable segmentation as opposed to any object-specific models, so that the grouping algorithm is applicable to any image understanding application. We propose a probabilistic model for the NPG problem by defining the regions as a Markov random field (MRF). A collection of energy functions is used to characterize desired single-region properties and pair-wise region properties. The single-region properties include region area, region convexity, region compactness, and color variance within one region. The pair-wise properties include the color-mean difference between two regions, edge strength along the shared boundary, color variance of the cross-boundary area, and contour continuity between two regions. The grouping process is implemented by a greedy method using a highest confidence first (HCF) principle, as sketched below. Experiments on hundreds of color photographic images show the effectiveness of the grouping algorithm using a set of fixed parameters.
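A minimal sketch of the greedy merging loop, reduced to a single pair-wise property (color-mean difference) in place of the paper's full MRF energy; the region-graph interface and the threshold tau are illustrative assumptions.

```python
import numpy as np

def greedy_merge(means, sizes, edges, tau=20.0):
    """Greedy region grouping in the highest-confidence-first spirit:
    repeatedly merge the adjacent pair with the smallest color-mean
    difference until no pair scores below tau.

    means: {region_id: RGB mean}, sizes: {region_id: pixel count},
    edges: iterable of (id_a, id_b) adjacency pairs.
    Returns a {region_id: merged_label} mapping.
    """
    means = {i: np.asarray(m, float) for i, m in means.items()}
    sizes = dict(sizes)
    edges = {frozenset(e) for e in edges}
    label = {i: i for i in means}            # final label of each region
    while True:
        best, best_cost = None, tau
        for e in edges:
            a, b = sorted(e)
            cost = np.linalg.norm(means[a] - means[b])
            if cost < best_cost:
                best, best_cost = (a, b), cost
        if best is None:
            return label
        a, b = best                          # merge b into a
        total = sizes[a] + sizes[b]
        means[a] = (sizes[a] * means[a] + sizes[b] * means[b]) / total
        sizes[a] = total
        # redirect b's edges to a and drop the collapsed self-edge
        edges = {frozenset(a if x == b else x for x in e) for e in edges}
        edges = {e for e in edges if len(e) == 2}
        for k in label:
            if label[k] == b:
                label[k] = a
        del means[b], sizes[b]
```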
3.
4.
Huang and Hsu (1981) describe an image sequence enhancement algorithm based on computing motion vectors between successive frames and using these vectors to determine the correspondence between pixels for frame averaging. In this note, we demonstrate that it may be sufficient to use only the components of the motion vectors in the gradient direction (called the normal components) to perform the enhancement.
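The normal component follows directly from the brightness-constancy constraint I_x u + I_y v + I_t = 0, which determines the flow only along the gradient direction. A minimal numpy sketch (our own helper, not the authors' code):

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Normal component of motion between two frames:
    v_n = -I_t * grad(I) / |grad(I)|^2."""
    gy, gx = np.gradient(frame0.astype(float))      # spatial gradients
    it = frame1.astype(float) - frame0.astype(float)  # temporal derivative
    mag2 = gx**2 + gy**2 + eps                      # avoid division by zero
    return -it * gx / mag2, -it * gy / mag2
```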
5.
《Expert systems with applications》2014,41(13):5892-5906
In this work, a method to enhance images based on a new artificial life model is presented. The model is inspired by the behavior of a herbivorous organism selecting its food within an environment. The organism travels through the image iteratively, selecting the most suitable food and eating parts of it in each iteration. The path the organism takes through the image is defined by a priori knowledge about the environment and how the organism should move in it. Here, we model the control and perception centers of the organism, as well as the simulation of its actions and their effects on the environment. To demonstrate the efficiency of our method, quantitative and qualitative results are presented for the enhancement of synthetic and real images with low contrast and different levels of noise. The results confirm the ability of the new artificial life model to improve the contrast of objects in the input images.
6.
Yong-Jian Zheng 《Machine Vision and Applications》1995,8(5):262-274
Feature extraction and image segmentation (FEIS) are two primary goals of almost all image-understanding systems. They are also the issues at which we look in this paper. We think of FEIS as a multilevel process of grouping and describing at each level. We emphasize the importance of grouping during this process because we believe that many features and events in real images are only perceived by combining weak evidence of several organized pixels or other low-level features. To realize FEIS based on this formulation, we must deal with such problems as how to discover grouping rules, how to develop grouping systems to integrate grouping rules, how to embed grouping processes into FEIS systems, and how to evaluate the quality of extracted features at various levels. We use self-organizing networks to develop grouping systems that take the organization of human visual perception into consideration. We demonstrate our approach by solving two concrete problems: extracting linear features in digital images and partitioning color images into regions. We present the results of experiments on real images.
7.
Remote sensing image reconstruction based on feature-level data fusion is information fusion that highlights the spatial structure and texture of target ground objects. Building on wavelet multiresolution analysis of digital images, the wavelet transform is used to enhance the edge information of target objects in a high-resolution remote sensing image, which is then fused at the feature level with a multispectral remote sensing image. In the fusion process, the R, G, and B bands of the multispectral image are first wavelet-decomposed to obtain the corresponding low-frequency images; the feature-enhanced high-resolution image is then wavelet-decomposed, and its high-frequency components are fused with each low-frequency image; finally, the fused bands are combined into an RGB color image. The method improves image clarity and resolution while preserving the spectral information of the original image. Fusion experiments verify these conclusions.
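A minimal sketch of the coefficient-swapping step with PyWavelets, assuming the multispectral bands and the high-resolution image are co-registered and equal in size; the function names are ours.

```python
import numpy as np
import pywt

def wavelet_band_fuse(ms_band, pan, wavelet='haar'):
    """Fuse one multispectral band with a high-resolution image:
    keep the band's low-frequency approximation and inject the
    high-resolution image's high-frequency detail coefficients."""
    cA_ms, _ = pywt.dwt2(ms_band.astype(float), wavelet)
    _, details_pan = pywt.dwt2(pan.astype(float), wavelet)
    return pywt.idwt2((cA_ms, details_pan), wavelet)

def fuse_rgb(ms_rgb, pan, wavelet='haar'):
    # fuse each of the R, G, B bands, then stack into a color image
    return np.dstack([wavelet_band_fuse(ms_rgb[..., c], pan, wavelet)
                      for c in range(3)])
```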
8.
Based on scale space theory and an image normalization technique, a new feature-based image watermarking scheme robust to general geometric attacks is proposed in this paper. First, the Harris–Laplace detector is utilized to extract stable feature points from the host image; then, the local feature regions (LFR) are determined adaptively according to characteristic scale theory and normalized by an image normalization technique; finally, according to predistortion compensation theory, several copies of the digital watermark are embedded into the non-overlapping normalized LFR by comparing DFT mid-frequency magnitudes. Experimental results show that the proposed scheme is not only invisible and robust against common signal processing operations such as median filtering, sharpening, noise addition, and JPEG compression, but also robust against general geometric attacks such as rotation, translation, scaling, row or column removal, shearing, local geometric distortion, and combinations thereof.
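The full scheme involves characteristic-scale selection, normalization, and DFT embedding; as a small illustration of its first stage only, the single-scale Harris corner response underlying the Harris–Laplace detector can be computed as below (a sketch, not the paper's implementation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Harris corner response det(M) - k * trace(M)^2, where M is
    the smoothed structure tensor; local maxima mark feature points."""
    gy, gx = np.gradient(img.astype(float))
    Axx = gaussian_filter(gx * gx, sigma)
    Axy = gaussian_filter(gx * gy, sigma)
    Ayy = gaussian_filter(gy * gy, sigma)
    det = Axx * Ayy - Axy**2
    tr = Axx + Ayy
    return det - k * tr**2
```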
9.
Traditional image enhancement algorithms greatly amplify image noise while enhancing contrast, so the image must also be denoised. Wavelet-based enhancement accounts for both the spatial- and frequency-domain characteristics of the image signal but does not fully consider the nonlinearity of human vision. To address this shortcoming of existing enhancement techniques, this paper analyzes how the wavelet transform affects noise and, exploiting the multiscale nature of wavelets, proposes a new multiscale wavelet-based image enhancement algorithm. The correlation between wavelet coefficients across scales and the time-frequency localization of wavelet analysis are used to separate noise from image information effectively, markedly alleviating the noise amplification that occurs during enhancement.
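A rough PyWavelets sketch of the cross-scale correlation idea: fine-scale detail coefficients are kept only where they correlate with the next coarser scale, since true edges persist across scales while noise does not. The two-level decomposition, nearest-neighbor upsampling, and threshold rule are our own simplifications.

```python
import numpy as np
import pywt

def denoise_cross_scale(img, wavelet='haar', k=1.5):
    """Zero fine-scale wavelet details that do not correlate with the
    next coarser scale, then reconstruct."""
    cA2, d2, d1 = pywt.wavedec2(img.astype(float), wavelet, level=2)

    def keep(fine, coarse):
        # upsample the coarse band and crop to the fine band's shape
        up = np.kron(coarse, np.ones((2, 2)))[:fine.shape[0], :fine.shape[1]]
        corr = fine * up
        thr = k * np.std(corr)
        return np.where(np.abs(corr) > thr, fine, 0.0)

    d1 = tuple(keep(f, c) for f, c in zip(d1, d2))
    return pywt.waverec2([cA2, d2, d1], wavelet)
```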
10.
Pierrick Bruneau, Fabien Picarougne 《Pattern recognition》2010,43(2):485-493
In this paper, we propose an approach to interactive navigation in image collections. As structured groups are more appealing to users than flat image collections, we propose an image clustering algorithm, with an incremental version that handles time-varying collections. A 3D graph-based visualization technique reflects the classification state. While this classification visualization is itself interactive, we show how user feedback may assist the classification, thus enabling a user to improve it.
11.
Rodrigo Moreno, Miguel Angel Garcia, Domenec Puig, Carme Julià 《Computer Vision and Image Understanding》2011,115(11):1536-1551
This paper presents a new method for edge-preserving color image denoising based on the tensor voting framework, a robust perceptual grouping technique used to extract salient information from noisy data. The framework is adapted to encode color information through tensors and to propagate them in a neighborhood by means of a specific voting process. This voting process is designed for edge-preserving color image denoising by taking into account perceptual color differences, region uniformity, and edginess according to a set of intuitive perceptual criteria. Perceptual color differences are estimated by means of an optimized version of the CIEDE2000 formula, while uniformity and edginess are estimated from saliency maps obtained through the tensor voting process. Measurements of removed noise, edge preservation, and undesirable introduced artifacts, in addition to visual inspection, show that the proposed method outperforms state-of-the-art image denoising algorithms on images contaminated with CCD camera noise.
12.
Chong-Yaw Wee 《Information Sciences》2007,177(12):2533-2552
This paper deals with the design and implementation of a novel image sharpness metric based on a statistical approach. The metric is derived by modelling image sharpness as a generalized eigenvalue problem, solved using Rayleigh quotient optimization, where relevant statistical information of an image is extracted and represented through a series of eigenvalues. The novelty of this paper lies in the application of eigenvalues to image sharpness metric formulation, providing robust assessment in the presence of various blur and noise conditions. First, the input image is normalized by its energy to minimize the effects of image contrast. Second, the covariance matrix is computed from the normalized image and diagonalized using Singular Value Decomposition (SVD) to obtain a series of eigenvalues. Finally, the image sharpness of the normalized image is determined by the trace of the first several eigenvalues. The performance of the proposed metric is gauged by comparison with several objective image sharpness metrics. Experimental results on synthetic and real images with known and unknown distortions show the robustness and feasibility of the proposed metric in providing relative image sharpness. In particular, the proposed metric offers a wider working range and more consistent predictions under all tested deformation conditions, although it is slightly more expensive computationally than the other metrics.
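A compact sketch of the pipeline as described (energy normalization, covariance, SVD, partial trace); the number of retained eigenvalues k is an assumed parameter.

```python
import numpy as np

def eigen_sharpness(img, k=5):
    """Sharpness score: energy-normalize the image, form the column
    covariance matrix, and sum the k largest eigenvalues from SVD."""
    x = img.astype(float)
    x = x / np.linalg.norm(x)                 # energy normalization
    x = x - x.mean(axis=0, keepdims=True)     # center the columns
    cov = x.T @ x / (x.shape[0] - 1)          # covariance matrix
    s = np.linalg.svd(cov, compute_uv=False)  # eigenvalues (cov is PSD)
    return s[:k].sum()                        # partial trace
```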
13.
This paper presents a homogeneity similarity based method, a new patch-based image denoising approach. In traditional patch-based methods, such as the NL-means method, block matching depends mainly on structure similarity. The homogeneity similarity is defined over adaptively weighted neighborhoods and can find more similar points than structure similarity, making it more effective, especially for points with less repetitive patterns such as corner and end points. Comparative results on synthetic and real image denoising indicate that our method can effectively remove noise and preserve useful information, such as edges and contrast, while avoiding artifacts. An application to medical image denoising further demonstrates that our method is practical.
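For context, a minimal NL-means-style computation for one pixel, i.e. the structure-similarity baseline that the homogeneity similarity is said to improve upon; the parameters f, t, and h are illustrative.

```python
import numpy as np

def nl_means_pixel(img, i, j, f=3, t=10, h=10.0):
    """Denoise pixel (i, j) as a weighted average of pixels whose
    surrounding patches look similar. f: patch radius, t: search
    radius, h: filter strength. Assumes (i, j) lies at least f
    pixels from the image border."""
    H, W = img.shape
    p0 = img[i - f:i + f + 1, j - f:j + f + 1]
    num = den = 0.0
    for a in range(max(f, i - t), min(H - f, i + t + 1)):
        for b in range(max(f, j - t), min(W - f, j + t + 1)):
            p = img[a - f:a + f + 1, b - f:b + f + 1]
            w = np.exp(-np.mean((p0 - p) ** 2) / h ** 2)  # patch similarity
            num += w * img[a, b]
            den += w
    return num / den
```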
14.
A good objective metric of image quality assessment (IQA) should be consistent with the subjective judgment of human beings. In this paper, a four-stage perceptual approach for full-reference IQA is presented. In the first stage, visual features are extracted by 2-D Gabor filters, which model well the receptive fields of simple cells in the primary visual cortex. In the second stage, the extracted features are post-processed by a divisive normalization transform to reflect the nonlinear mechanisms of the human visual system. In the third stage, mutual information between the visual features of the reference and distorted images is employed to measure visual quality. In the final pooling stage, the mutual information is converted into the objective quality score. Experimental results show that the proposed metric correlates highly with subjective assessment and outperforms other state-of-the-art metrics.
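The stage-three measure can be approximated from a joint histogram of the two feature maps; a minimal sketch with an assumed bin count:

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Estimate mutual information between two feature maps via
    their joint histogram: I(X;Y) = sum p(x,y) log(p(x,y)/p(x)p(y))."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                   # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # skip empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```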
15.
16.
In this paper, a new signal subspace-based approach for enhancing a speech signal degraded by environmental noise is presented. The Perceptual Karhunen–Loève Transform (PKLT) method is improved by incorporating the Variance of the Reconstruction Error (VRE) criterion to optimize the subspace decomposition model. Incorporating the VRE into the PKLT (the PKLT-VRE hybrid method) yields a good tradeoff between noise reduction and speech distortion, thanks to the combination of a perceptual criterion with optimal determination of the noisy subspace dimension. Experimental tests in adverse conditions, using objective quality measures, show that the proposed method provides higher noise reduction and lower signal distortion than existing speech enhancement techniques.
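A bare-bones signal-subspace projection in the KLT spirit, with an eigenvalue-versus-noise-variance rule as a crude stand-in for the paper's VRE-based order selection; the perceptual weighting is omitted.

```python
import numpy as np

def klt_enhance(frames, noise_var):
    """Project noisy frames onto the signal subspace: eigendecompose
    the frame covariance and keep components whose eigenvalue exceeds
    the noise variance.

    frames: (n_frames, frame_len) array of windowed speech samples.
    """
    C = np.cov(frames, rowvar=False)   # (frame_len, frame_len) covariance
    w, V = np.linalg.eigh(C)           # eigenvalues in ascending order
    Vk = V[:, w > noise_var]           # retained signal subspace
    return frames @ Vk @ Vk.T          # reconstruct enhanced frames
```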
17.
18.
Optimising image processing steps such as segmentation and feature extraction individually does not yield an optimal overall pipeline. In this paper we demonstrate how the choice of image segmentation algorithm directly affects the quality of texture measures extracted from segmented regions and the final classification ability. The difference between the best and worst performance attainable by choosing different algorithms is found to be significant. We then develop the methodology for determining the optimal pipeline for scene analysis and show our experimental results on the publicly available benchmark “MINERVA”.
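In its simplest form, scoring pipelines end-to-end rather than stage-by-stage is an exhaustive search over stage combinations; evaluate() below is an assumed user-supplied scoring callback, not an interface from the paper.

```python
from itertools import product

def best_pipeline(segmenters, extractors, classifiers, evaluate):
    """Score every segmenter/feature-extractor/classifier combination
    end-to-end and return the best-scoring pipeline."""
    return max(product(segmenters, extractors, classifiers),
               key=lambda combo: evaluate(*combo))
```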
19.
Poisson editing, introduced in 2003, has become a technique with major applications in many domains of image processing and computer graphics. This letter presents an exact and fast Fourier implementation of the Poisson editing equation proposed in (Pérez et al., 2003). The proposed algorithm handles all Poisson editing methods that are currently implemented with finite differences and multigrid methods. It also enables fast, complex editing strategies in which the edited region is obtained by an algorithm instead of a manual selection; the selected region can therefore have a complex topology without additional computational cost. In this letter the proposed method is applied to a classic local contrast enhancement principle introduced in (Caselles et al., 1999). The manual selection of the dark regions is replaced by a lower threshold, and the method becomes fast, efficient, level-line preserving, and interactive. The proposed method can be tried online on any uploaded image at http://www.ipol.im/pub/demo/lmps_selective_contrast_adjustment/.
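A minimal periodic-boundary variant of the FFT Poisson solve; the paper's exact implementation treats boundary conditions carefully, whereas this sketch shows only the spectral division at its core.

```python
import numpy as np

def poisson_solve_fft(divg):
    """Solve the periodic Poisson equation lap(u) = divg with the FFT.
    The discrete Laplacian is diagonal in the Fourier basis, with
    eigenvalues 2cos(2*pi*fy) + 2cos(2*pi*fx) - 4."""
    H, W = divg.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    denom = (2 * np.cos(2 * np.pi * fy) - 2) + (2 * np.cos(2 * np.pi * fx) - 2)
    denom[0, 0] = 1.0                  # avoid dividing by zero at DC
    U = np.fft.fft2(divg) / denom
    U[0, 0] = 0.0                      # fix the free additive constant
    return np.real(np.fft.ifft2(U))
```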
20.
The problem of scale is of fundamental interest in image processing, as the features that we visually perceive and find meaningful vary significantly depending on their size and extent. It is well known that the strength of a feature in an image may depend on the scale at which the appropriate detection operator is applied. It is also the case that many features in images exist significantly over a limited range of scales, and, of particular interest here, that the most salient scale may vary spatially over the feature. Hence, when designing feature detection operators, it is necessary to consider the requirements for both the systematic development and adaptive application of such operators over scale and image domains.
We present a new approach to the design of scalable derivative edge detectors, based on the finite element method, that addresses the issues of method and scale adaptability. The finite element approach allows us to formulate scalable image derivative operators that can be implemented using a combination of piecewise-polynomial and Gaussian basis functions. The issue of scale is addressed by partitioning the image in order to identify local key scales at which significant edge points may exist, achieved by considering empirically designed functions of local image variance.
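A simple multiscale Gaussian-derivative edge operator that keeps the per-pixel maximum of the scale-normalized gradient magnitude; this is a stand-in for the paper's FEM-based operators and variance-driven scale selection, not their method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_edge_strength(img, scales=(1, 2, 4)):
    """Gradient magnitude from Gaussian-derivative operators at several
    scales, keeping the per-pixel maximum across scales."""
    img = img.astype(float)
    best = np.zeros_like(img)
    for s in scales:
        gx = gaussian_filter(img, s, order=(0, 1))  # d/dx at scale s
        gy = gaussian_filter(img, s, order=(1, 0))  # d/dy at scale s
        best = np.maximum(best, np.hypot(gx, gy) * s)  # scale-normalized
    return best
```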