Similar Documents
20 similar documents found
1.
To improve the diffraction efficiency of computer-generated holograms, a new method for hologram generation and display is proposed, building on the Fourier hologram, which currently offers the clearest texture display in the field of computer-generated holography. The hologram is generated with the discrete cosine transform (DCT) and reconstructed with the inverse DCT. The diffraction efficiency of holograms produced by this method is 13.65% higher than that of Fourier-algorithm holograms, and the effective diffraction efficiency is 56.82% higher. Computer-simulated reconstruction shows that the result is a new type of hologram with a clear display and higher diffraction efficiency.
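The generate-with-DCT, reconstruct-with-inverse-DCT step described above can be sketched numerically. This is an illustrative round trip only, not the authors' hologram pipeline; the orthonormal DCT-II matrices are built by hand so the example needs nothing beyond NumPy:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: rows are cosine basis vectors.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def dct2(img):
    # Separable 2-D DCT: transform rows, then columns.
    return dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T

def idct2(coef):
    # Inverse 2-D DCT: an orthonormal matrix's transpose is its inverse.
    return dct_matrix(coef.shape[0]).T @ coef @ dct_matrix(coef.shape[1])
```

Because the transform matrices are orthonormal, `idct2(dct2(x))` recovers `x` exactly, which is the property the reconstruction step relies on.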

2.
A Multi-Focus Image Fusion Method Based on Wavelet Transform
An improved multi-focus image fusion method based on the wavelet transform is proposed. The method applies a multi-scale wavelet decomposition to the source images to obtain high-frequency and low-frequency components. High-frequency fusion coefficients are obtained by neighborhood-variance-weighted averaging, and low-frequency fusion coefficients by a rule based on local-region gradient information; the fused image is then obtained by the inverse wavelet transform. Using root-mean-square error, information entropy, and peak signal-to-noise ratio as evaluation criteria, the method is compared with traditional fusion methods. Experimental results show that the fused images are clearly improved in both effect and quality.
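A minimal sketch of the decompose-fuse-reconstruct pipeline above, using a one-level Haar transform written in NumPy. Note the simplifications: the abstract's neighborhood-variance weighting (high bands) and local-gradient rule (low band) are replaced here by max-magnitude selection and plain averaging, respectively:

```python
import numpy as np

def haar2d(x):
    # One-level 2-D Haar decomposition (even-sized input assumed).
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-low (approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = a + d; x[1::2] = a - d
    return x

def fuse(img1, img2):
    c1, c2 = haar2d(img1), haar2d(img2)
    fused = [(c1[0] + c2[0]) / 2.0]            # low band: average
    for b1, b2 in zip(c1[1:], c2[1:]):         # high bands: larger magnitude wins
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return ihaar2d(*fused)
```

With two identical inputs the pipeline is the identity, which is a quick sanity check on the transform pair.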

3.
Infrared and Visible Image Fusion Combining NSCT and Compressed Sensing
Objective: Infrared imaging sensors respond only to radiation from the target scene and detect thermal targets well, but their scene imagery has low clarity; visible-light images respond only to scene reflection, producing clear scene imagery in which targets are hard to observe. Fusing the two yields an image with good target indication plus the clear scene information of the visible image, which helps improve target-recognition accuracy and lowers the technical difficulty of developing high-resolution image sensors. Method: A new infrared and visible image fusion method is studied that combines the advantages of the non-subsampled contourlet transform (NSCT) and compressed sensing. The two source images are first decomposed by NSCT into one low-frequency subband and multiple high-frequency subbands at different directions and scales. For the two low-frequency subbands, measurement vectors are obtained via compressed-sensing theory, fused by a maximum-variance rule, and then sparsely reconstructed; the high-frequency subbands are fused by a maximum-regional-energy rule. Finally, the fused image is obtained by the inverse NSCT. Results: To verify the effectiveness of the method, it is compared with several other methods, and the fusion results are evaluated both subjectively and objectively. The entropy, spatial frequency, and variance of the new method's results are clearly superior to those of the other methods, with mid-range running time. Subjectively, the fused results display the target well while clearly preserving the scene information. Conclusion: Experimental results show that the method has good target-detection capability, is simple, and is highly adaptable, with applications in aviation, remote-sensing imagery, target recognition, and many other fields.

4.
Objective: Medical ultrasound images are often degraded by speckle noise, which affects subsequent diagnosis and treatment. To despeckle medical ultrasound images while preserving edge details and structural features, a quantum-derived partial differential equation (PDE) despeckling method is proposed, drawing on the fundamentals of quantum mechanics. Method: To address the limited adaptive despeckling ability of the traditional P-M equation's anisotropic diffusion, quantum theory is introduced to improve the diffusion coefficient and strengthen the algorithm's adaptivity. An anisotropic diffusion model is then constructed, yielding a quantum-derived PDE despeckling method for medical ultrasound images. Results: In experiments on images corrupted with simulated speckle and on real medical ultrasound images, comparing objective metrics such as signal-to-noise ratio (SNR), edge-preservation index, and structural similarity (SSIM), the method removes speckle more effectively than other despeckling methods while better preserving edge details and structural features. Conclusion: The method effectively solves the problem of preserving image detail while despeckling medical ultrasound images, and the introduction of quantum theory offers a new direction for subsequent research on medical ultrasound imagery.

5.
A near-infrared dorsal hand vein image segmentation algorithm is proposed, based on optimizing a function of entropy and gradient over local regions of interest. The algorithm first denoises the image using compressed-sensing theory. Next, regions of interest containing vein information are extracted by the bandelet transform; within these regions, the entropy-and-gradient function is constrained and optimized to separate veins from the background. Finally, the segmentation results of all regions are merged to complete the vein image segmentation. Experiments show that, compared with other algorithms, this one preserves more complete vein features when segmenting near-infrared vein images. It also offers a useful reference for segmenting finger-vein and palm-vein images with texture features.

6.
An Image Fusion Method Based on NSCT and Fuzzy Logic
An image fusion method based on the non-subsampled contourlet transform (NSCT) and fuzzy logic is proposed. NSCT decomposition better preserves image edge information and contour structure and improves shift invariance. After multi-scale geometric decomposition of the source images, fuzzy-logic fusion rules are used to handle the uncertainty in how much information from each source enters the fused image: according to the characteristics of each frequency band and of the images to be fused, the high-frequency bands are fused by a rule based on information entropy and fuzzy logic, while the low-frequency coefficients are obtained by a fuzzy-logic rule based on brightness, gradient, variance, information entropy, and other features; the fused image is then obtained by the inverse NSCT. Experimental results show that the fused images have good visual quality and objective evaluation metrics, and that the metrics agree well with the visual impression.

7.
This paper proposes two new no-reference image quality metrics that can be adopted by state-of-the-art image/video denoising algorithms for auto-denoising. The first metric is based on the assumption that the noise should be independent of the original image. A direct measurement of this dependence is, however, impractical due to the relatively low accuracy of existing denoising methods; the proposed metric thus tackles homogeneous regions and highly structured regions separately. Nevertheless, this metric is only stable when the noise level is relatively low. Most denoising algorithms reduce noise by (weighted) averaging of repeated noisy measurements, so a second metric is proposed for high-level noise, based on the fact that more noisy measurements are required as the noise level increases: the number of measurements needed before convergence is related to the quality of the noisy image. The patch-matching-based metric iteratively finds and adds noisy image measurements for averaging until there is no visible difference between two successively averaged images. Both metrics are evaluated on the LIVE2 (Sheikh et al., LIVE image quality assessment database release 2, 2013) and TID2013 (Ponomarenko et al., Color image database TID2013: peculiarities and preliminary results, 2005) datasets using standard Spearman and Kendall rank-order correlation coefficients (ROCC), showing that they subjectively outperform current state-of-the-art no-reference metrics. Quantitative evaluation with respect to different levels of synthetic noise also demonstrates consistently higher performance than state-of-the-art no-reference metrics when used for image denoising.

8.
In this paper, a novel approach is proposed for jointly registering and fusing a multisensor ensemble of images. Based on the idea that both groupwise registration and fusion can be treated as estimation problems, the approach first simultaneously models the mapping from the fused image to the source images and the joint intensity of all images with motion parameters, and then combines these models into a maximum-likelihood function. The relevant parameters are determined with an expectation-maximization algorithm. To evaluate the performance of the approach, representative image registration and fusion approaches are compared on different multimodal image datasets. Registration performance is compared using the average pixel displacement from the true registered position. Fusion performance is evaluated with three quality measures: the metric Qab/f, mutual information, and average gradient. The experimental results show that the proposed approach improves on conventional approaches.

9.
Locally Adaptive PDE Image Restoration Based on the Lp Norm
A new locally adaptive PDE image-processing model based on the Lp norm is proposed, improving on Tony Chan's TV variational model and Zhang Hongying's p-Laplace model. The TV model imposes a global constraint on the image, whereas the new diffusion equation applies different constraints at different image locations, giving it local adaptivity and better preserving edge information during diffusion; the model is then applied to image restoration (denoising and deblurring). Experimental results show that the new model's overall performance is better than the existing models of Chan and Zhang.
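The Lp-norm model above generalizes Perona-Malik-style anisotropic diffusion. As a hedged reference point, here is one explicit step of the classic P-M scheme with a constant contrast parameter `K`, not the paper's locally adaptive Lp diffusivity:

```python
import numpy as np

def pm_step(u, K=0.1, dt=0.2):
    # One explicit Perona-Malik diffusion step with reflecting borders.
    p = np.pad(u, 1, mode="edge")
    n = p[:-2, 1:-1] - u   # differences toward the four neighbours
    s = p[2:, 1:-1] - u
    e = p[1:-1, 2:] - u
    w = p[1:-1, :-2] - u
    g = lambda d: 1.0 / (1.0 + (np.abs(d) / K) ** 2)  # edge-stopping diffusivity
    return u + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
```

The flux form with reflecting borders conserves the image mean while repeated steps shrink the variance of flat noisy regions; large gradients see a small diffusivity and are preserved, which is the edge-keeping behavior the locally adaptive model refines.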

10.

Image restoration is an important and interesting problem in the field of image processing because it improves the quality of input images, which facilitates postprocessing tasks. Salt-and-pepper noise has a simpler structure than other noises, such as Gaussian and Poisson noise, but it is a very common type of noise caused by many electronic devices. In this article, we propose a two-stage filter to remove high-density salt-and-pepper noise from images. The proposed denoising method applies to images ranging from low-density to high-density corruption. In the experiments, we assessed image quality after denoising using the peak signal-to-noise ratio and the structural similarity metric, and compared our method against similar state-of-the-art denoising methods to prove its effectiveness for salt-and-pepper noise removal. From the findings, one can conclude that the proposed method can successfully remove super-high-density noise with noise levels above 90%.
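A toy two-stage filter in the spirit of the abstract (detection, then restoration); the actual published algorithm is more elaborate. Stage 1 flags extreme-valued pixels as noise candidates; stage 2 replaces each with the median of uncorrupted neighbours, growing the window until at least one is found:

```python
import numpy as np

def remove_salt_pepper(img, lo=0, hi=255):
    # Stage 1: pixels at the extreme values are treated as noise candidates.
    noisy = (img == lo) | (img == hi)
    out = img.astype(float).copy()
    H, W = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        # Stage 2: median of clean neighbours, enlarging the window if needed.
        for r in range(1, max(H, W)):
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            clean = img[y0:y1, x0:x1][~noisy[y0:y1, x0:x1]]
            if clean.size:
                out[y, x] = np.median(clean)
                break
    return out.astype(img.dtype)
```

The growing window is what lets this style of filter survive high noise densities: when every immediate neighbour is also corrupted, the search radius expands until a clean sample appears.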


11.
Objective: In animation production, line-art drawing and coloring are time-consuming and labor-intensive, so much research aims to automate the production process. Data-driven automation is developing rapidly, but no public line-art dataset is available for use. To address the difficulty of obtaining real line-art image data and the distortion produced by existing line-art extraction methods, an automatic line-art extraction model based on a cycle-consistent generative adversarial network is proposed. Method: The model builds on the cycle-consistent GAN architecture to handle training with unpaired data. Input images and their boundary maps at different scales are fed into a mask-guided convolution unit to adaptively select intermediate network features. To further improve extraction quality, a boundary-consistency loss function is proposed to keep the generated result consistent with the input image in gradient variation. Results: On the public anime color-image dataset Danbooru2018, line-art images extracted by the model have less noise and clearer lines than those of existing extraction methods, and are close to line art drawn by real manga artists. In the experiments, 30 users aged 20 to 25 scored the line-art images extracted by this method and four others; across the 30 test cases, the images extracted by this method were judged best in 84% of cases. Conclusion: Introducing a mask-guided unit into the cycle-consistent GAN extracts line art from color images more sensibly, and the user scoring against existing methods shows that this method outperforms the compared methods for anime line-art extraction. Moreover, the model does not require large amounts of real line-art training data; only about 1,000 real line-art images were collected for the experiments. The model provides data support for subsequent research on anime drawing and coloring, and also offers a new solution for image edge extraction.

12.

The paper presents a novel method to measure the performance of entropy-based image thresholding techniques using a new Sum of Absolute value of Differences (SAD) metric in the absence of ground-truth images. The metric is further applied to estimate the parameters of the generalized Renyi, Tsallis, and Masi entropy measures, and the optimal threshold, automatically from the image histogram. This leads to a new entropy-based image thresholding algorithm with three variants, one for each generalized entropy. The SAD metric and the proposed method are first validated on the ground-truth images of the HYTA dataset. The SAD metric is compared with the misclassification-error metric and the Jaccard and SSIM indices and is found to exhibit consistent behavior. It is further observed that the proposed method with the SAD metric produces the same or fewer misclassification errors than the older algorithms. Inspired by these results, a large-scale performance analysis of 8 image thresholding algorithms over diverse datasets containing 621 images is carried out. The investigation reveals that the variants of the new algorithm with the Tsallis, Renyi, and Masi entropies segment images better than the others.
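For orientation, a sketch of classical maximum-entropy (Kapur) thresholding, which is the Shannon limiting case of the generalized Renyi/Tsallis/Masi entropies named above; the paper's SAD-driven parameter estimation is not reproduced here:

```python
import numpy as np

def kapur_threshold(img, bins=256):
    # Pick the threshold maximizing the summed Shannon entropies
    # of the foreground and background histogram halves.
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[:t][p[:t] > 0] / w0          # class-conditional distributions
        p1 = p[t:][p[t:] > 0] / w1
        h = -(p0 * np.log(p0)).sum() - (p1 * np.log(p1)).sum()
        if h > best_h:
            best_h, best_t = h, t
    return best_t
```

On a bimodal histogram the entropy sum peaks when the threshold falls in the gap between the two modes, which is the behavior the generalized-entropy variants tune with their extra parameter.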


13.
Neutrosophic set (NS), a part of neutrosophy theory, studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. NS is a recently proposed formal framework, but it needs to be specified from a technical point of view for a given application or field. After defining some concepts and operations, we apply NS to image segmentation. The image is transformed into the NS domain, which is described using three membership sets: T, I, and F. Entropy in NS is defined and employed to evaluate the indeterminacy. Two operations, α-mean and β-enhancement, are proposed to reduce the set indeterminacy. Finally, the proposed method performs image segmentation using γ-means clustering. We have conducted experiments on a variety of images. The experimental results demonstrate that the proposed approach can segment images automatically and effectively; in particular, it can segment both "clean" images and images contaminated with noise at different levels.

14.
Objective: To eliminate the visually undesirable staircase effect produced by low-order color image denoising models and to improve edge preservation during denoising, a high-order color image denoising model driven by Riemannian geometry is proposed, in which first-order gradient information guides the diffusion driven by high-order information so as to improve edge detection and preservation. Method: Within the Riemannian-geometry framework, the low-order color image denoising model is analyzed, and the corresponding second-order differential form is derived from the area element. The Frobenius norm of the second-derivative matrix is used to construct a high-order variational energy functional for color images, yielding a high-order diffusion model for color image denoising. To preserve edges during diffusion, Gaussian-smoothed first-order gradient information guides the high-order diffusion, giving a multi-channel coupled high-order nonlinear color image denoising model. Analysis shows that the model couples single-channel and multi-channel, low-order and high-order information during diffusion. Results: Experiments denoise one-dimensional color signals, synthetic color images, and standard color test images at different noise levels, using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as objective metrics and comparing against related color-image diffusion denoising models. Across noise levels, the model's average PSNR is 2.33% higher and its average SSIM 0.4% higher than those of the related models. Conclusion: The model effectively removes Gaussian white noise at different levels from color images, largely eliminates the visual staircase effect, produces piecewise-linear smooth color images, and preserves image edge information well.

15.
Objective: To address weakened target information, unclear background detail, blurred edges, and low fusion efficiency in image fusion, and to make full use of the useful features of the source images, a two-scale image fusion method based on saliency analysis and spatial consistency is proposed, combining two-scale decomposition with visual-saliency-based fusion weights. Method: A mean filter decomposes each source image at two scales, producing base-layer and detail-layer images. The base layers are fused by a weighted-average rule; for the detail layers, an initial weight map is obtained by saliency analysis and then refined with a guided filter, and the final weight map directs the weighting. The fused image is obtained by two-scale reconstruction. Results: Reflecting the differing characteristics of traditional and deep-learning methods, the method is evaluated subjectively and objectively on TNO and other public datasets. Subjectively, the method effectively extracts and fuses the important information in the source images, producing fused images of high quality with a naturally clear visual appearance. Objectively, experiments verify its effectiveness: in quantitative comparison with various fusion results, it achieves the best average scores on average gradient, edge strength, spatial frequency, feature mutual information, and cross-entropy; compared with deep-learning methods, the mean values of entropy, average gradient, edge strength, spatial frequency, feature mutual information, and cross-entropy improve by 6.87%, 91.28%, 91.45%, 85.10%, 0.18%, and 45.45%, respectively. Conclusion: Experimental results show that the method not only markedly enhances targets, background detail, and edges, but also exploits the useful features of the source images quickly and effectively.
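The two-scale pipeline above can be sketched as follows. Assumptions: the mean filter is a hand-rolled box filter, and the saliency-plus-guided-filter weight map is simplified to a max-magnitude rule on the detail layers:

```python
import numpy as np

def box_filter(img, r=3):
    # Mean filter over a (2r+1) x (2r+1) window via 2-D cumulative sums.
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # leading zero row/column
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def fuse_two_scale(a, b, r=3):
    base_a, base_b = box_filter(a, r), box_filter(b, r)
    det_a, det_b = a - base_a, b - base_b      # detail = image minus base layer
    base = (base_a + base_b) / 2.0             # base layers: averaged
    det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + det                          # two-scale reconstruction
```

Fusing an image with itself returns the image unchanged, a basic consistency property of the decompose-fuse-reconstruct loop.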

16.
To make full use of the information in various remote-sensing images, this paper briefly reviews previous image fusion techniques and analyzes the advantages and disadvantages of each method, then presents a new pyramid image fusion method based on the local variance of directional gradients. Each source image is decomposed into a pyramid to obtain a multi-resolution description, a directional-gradient pyramid is built, the decomposition levels of the source images are fused by a local-variance criterion, and the inverse pyramid transform yields the fused image. In addition, the method is compared with other fusion methods using information entropy and other criteria; experimental results show that it achieves better fusion quality.

17.
The reduction of Rician noise in MR images without degrading the underlying image features has attracted much attention and has strong potential in several application domains, including medical image processing. Interpretation of MR images is difficult due to their tendency to acquire Rician noise during acquisition. In this work, we propose a novel selective non-local means (NLM) algorithm for noise suppression in MR images that preserves image features as much as possible. We use morphological gradient operators to separate the high-frequency areas of the image from the smooth areas, and then apply the novel selective NLM filter with optimal parameter values for the different frequency regions to remove the noise. A selective weight matrix is also proposed to preserve image features against smoothing. The results of experiments with the proposed adapted selective filter prove the soundness of the method. We compare our results with those of many well-known techniques in the literature, such as NLM with optimized parameters, wavelet-based denoising, and anisotropic diffusion filtering, and discuss the improvements achieved.

18.
We consider the problem of exact histogram specification for digital (quantized) images. The goal is to transform the input digital image into an output (also digital) image that follows a prescribed histogram. Classical histogram modification methods are designed for real-valued images where all pixels have different values, so exact histogram specification is straightforward. Digital images typically have numerous pixels which share the same value. If one imposes the prescribed histogram to a digital image, usually there are numerous ways of assigning the prescribed values to the quantized values of the image. Therefore, exact histogram specification for digital images is an ill-posed problem. In order to guarantee that any prescribed histogram will be satisfied exactly, all pixels of the input digital image must be rearranged in a strictly ordered way. Further, the obtained strict ordering must faithfully account for the specific features of the input digital image. Such a task can be realized if we are able to extract additional representative information (called auxiliary attributes) from the input digital image. This is a real challenge in exact histogram specification for digital images. We propose a new method that efficiently provides a strict and faithful ordering for all pixel values. It is based on a well designed variational approach. Noticing that the input digital image contains quantization noise, we minimize a specialized objective function whose solution is a real-valued image with slightly reduced quantization noise, which remains very close to the input digital image. We show that all the pixels of this real-valued image can be ordered in a strict way with a very high probability. Then transforming the latter image into another digital image satisfying a specified histogram is an easy task. Numerical results show that our method outperforms by far the existing competing methods.
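A minimal sketch of the strict-ordering idea: pixels are sorted by value, ties are broken by an auxiliary attribute (here a simple local mean, standing in for the paper's variationally derived real-valued image), and the prescribed values are assigned in rank order so the target histogram is met exactly:

```python
import numpy as np

def exact_hist_spec(img, target_hist):
    # target_hist[v] = number of output pixels that must take value v.
    assert sum(target_hist) == img.size
    flat = img.ravel().astype(float)
    # Auxiliary attribute: 5-point local mean breaks ties between equal pixels.
    p = np.pad(img.astype(float), 1, mode="edge")
    local = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + img) / 5.0
    # lexsort: primary key = pixel value, secondary key = local mean.
    order = np.lexsort((local.ravel(), flat))
    out = np.empty(img.size, dtype=int)
    values = np.repeat(np.arange(len(target_hist)), target_hist)
    out[order] = values            # darkest-ranked pixel gets smallest value
    return out.reshape(img.shape)
```

Because the prescribed values are assigned one-to-one along the ordering, the output histogram equals the target exactly regardless of ties in the input; the quality of the result rests entirely on how faithful the tie-breaking attribute is, which is the problem the paper's variational approach addresses.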

19.
Information Fusion, 2003, 4(3): 167-183
The aim of this paper is twofold: (i) to define appropriate metrics that measure the effects of input sensor noise on the performance of signal-level image fusion systems, and (ii) to employ these metrics in a comparative study of the robustness of typical image fusion schemes whose inputs are corrupted with noise. System performance metrics are thus proposed for measuring both absolute and relative degradation in fused image quality when fusing noisy input modalities. A third metric, which considers fusion of noise patterns, is also developed and used to evaluate the perceptual effect of noise corrupting homogeneous image regions (i.e., areas with no salient features). These metrics are employed to compare the performance of different image fusion methodologies and feature selection/information fusion strategies operating under noisy input conditions. Altogether, the performance of seventeen fusion schemes is examined, and their robustness to noise is considered at various input signal-to-noise ratios for three types of sensor noise characteristics.

20.

Generally, anatomical CT/MRI modalities exhibit brain tissue anatomy with high spatial resolution, whereas PET/SPECT modalities show metabolic features with low spatial resolution. The integration of these two classes therefore significantly improves several clinical applications and computer-aided diagnoses. In the proposed scheme, a fast local Laplacian filter (FLLF) is first applied to the source images to enhance edge information and suppress noise artifacts. Second, the RGB images are converted to the YUV color space to separate the Y component. Then, to capture the spatial and spectral features of the source images, the NSST is applied to decompose the input (grayscale and/or Y-component) images into one low-frequency subband (LFS) and several high-frequency subbands (HFS). Third, an improved salience measure and matching factor (SMF) method using a local-features-based fuzzy-weighted matrix (FW-SMF) is introduced to fuse the LFS coefficients. Owing to its fast convergence with fewer iterations and robust pixel-selection procedure, the PA-PCNN model is adopted to fuse the HFS coefficients. Fourth, the final fused image is obtained by applying the inverse NSST and converting back from YUV. Visual and statistical analysis of various experiments proves that the proposed scheme not only integrates the spatial and texture feature details of the source images but also enhances the visual quality and contrast of the fused image compared to the existing state of the art.



Copyright©北京勤云科技发展有限公司  京ICP备09084417号