Similar Documents
20 similar documents found (search time: 171 ms)
1.
Building on a close study of sparse representation and dictionary learning theory, this paper establishes an image denoising model and proposes a new denoising algorithm. The algorithm learns the dictionary with the homotopy method, exploiting its fast convergence and high signal-recovery accuracy. The OMP algorithm then computes the sparse representation coefficients of the noisy image over the learned dictionary, and denoising is performed through the sparse denoising model. Experiments show that the proposed algorithm denoises well under different noise conditions, and a convergence-speed comparison with the K-SVD algorithm clearly demonstrates the advantage of homotopy-based dictionary learning.
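The OMP coefficient step the abstract relies on can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: the homotopy-learned dictionary is not reproduced, so a random unit-norm dictionary stands in, and the synthetic "noisy patch" is an assumption for illustration.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to approximate y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
x_true = np.zeros(128)
x_true[[3, 17]] = [2.0, -1.5]                       # 2-sparse ground truth
y = D @ x_true + 0.01 * rng.standard_normal(64)     # "noisy patch"
x_hat = omp(D, y, k=2)
print(np.sort(np.flatnonzero(x_hat)))               # recovered support
```

In a full denoiser this loop runs over every (overlapping) image patch, and the denoised image is formed by averaging the reconstructed patches.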

2.
UAV aerial images tend to pick up noise during capture or transmission. To address this, a median filtering algorithm based on quantum image decomposition is proposed. The algorithm first represents the classical image as a quantum image and decomposes it into sub-images using quantum theory; an improved fast median filter then denoises each sub-image, and the denoised sub-images are recombined into the final result. Experiments show the method effectively removes salt-and-pepper and Gaussian noise from UAV aerial images: compared with the traditional median filter the signal-to-noise ratio improves by about 17%, and by about 28% compared with recursive median filtering, with some gain in efficiency as well.
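The quantum decomposition is the paper's contribution and is not reproduced here; the sketch below shows only the classical median-filter step on a toy constant image with synthetic salt-and-pepper noise (all sizes and noise levels are illustrative assumptions).

```python
import numpy as np

def median_filter(img, size=3):
    """Plain median filter; border pixels are handled by reflection padding."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

rng = np.random.default_rng(1)
clean = np.full((32, 32), 128.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.10              # ~10% impulse noise
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
denoised = median_filter(noisy)
print(np.abs(noisy - clean).mean(), "->", np.abs(denoised - clean).mean())
```

The median is what makes the filter robust to impulses: as long as fewer than half the pixels in a window are corrupted, the output equals the uncorrupted value.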

3.
Fourth-order partial differential equation (PDE) methods denoise images well but are computationally heavy and slow. To make the approach faster and more practical, an efficient parallel fourth-order PDE denoising algorithm is proposed. Built on an MPI parallel environment, it analyzes the parallelism available in solving the difference equations obtained by discretizing the fourth-order PDE, partitions the noisy image into overlapping strips, and denoises the strips in parallel, greatly reducing the running time.
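The MPI machinery itself is omitted here; the sketch below shows only the overlapping-strip ("halo") decomposition the abstract describes, in pure numpy. A halo of 2 rows is an assumption, chosen because a fourth-order stencil reads two neighbours in each direction.

```python
import numpy as np

def split_strips(img, n, halo):
    """Partition image rows into n strips, each padded with `halo` overlap rows."""
    h = img.shape[0]
    bounds = np.linspace(0, h, n + 1, dtype=int)
    strips = []
    for r0, r1 in zip(bounds[:-1], bounds[1:]):
        a, b = max(0, r0 - halo), min(h, r1 + halo)
        # (own row range, padded data, offset of own rows inside padded data)
        strips.append((r0, r1, img[a:b], r0 - a))
    return strips

def merge_strips(strips, processed):
    """Drop the halo rows and stitch each worker's interior rows back together."""
    return np.vstack([p[off:off + (r1 - r0)]
                      for (r0, r1, _, off), p in zip(strips, processed)])

img = np.arange(100.0).reshape(10, 10)
strips = split_strips(img, n=3, halo=2)
# identity "processing" stands in for one PDE time step on each strip
out = merge_strips(strips, [s[2] for s in strips])
print(np.array_equal(out, img))
```

In the real algorithm each rank would run the fourth-order difference update on its padded strip and exchange the halo rows with its neighbours between time steps.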

4.
Infrared images of military targets have a low signal-to-noise ratio, and the NAS-RIF blind restoration algorithm is sensitive to noise when restoring blurred images. This paper proposes an improved NAS-RIF blind restoration algorithm based on Contourlet multiscale denoising and detail regularization. First, the image is pre-denoised with the Contourlet transform. Then, optimal threshold segmentation extracts a reliable support region of the target, and a regularization term that preserves target edges is added to the cost function to retain detail. Finally, the conjugate gradient (CG) algorithm optimizes the cost function to maintain convergence speed. Two sets of experiments show that, for low-SNR aero-optically degraded infrared images, the improved algorithm restores images better than the original NAS-RIF method while its convergence speed remains essentially unchanged.

5.
《现代电子技术》2016,(20):159-162
After denoising, noisy images, especially those with high-density noise, lose much of their detail (high-frequency content). To address this, a method based on dictionary learning and high-frequency enhancement is proposed. The noisy image is first processed by a denoising algorithm. Sample images are then put through simulated noising and denoising to obtain denoised sample images, and subtracting these from the originals yields sample difference images. Training on the difference images and the denoised sample images produces a pair of high- and low-resolution dictionaries, which are used to reconstruct the high frequencies lost during denoising. Experiments show the proposed algorithm outperforms classical denoising algorithms in both subjective visual quality and objective evaluation.

6.
Clustering-based sparse image denoising
Non-local means and sparse denoising have both received wide attention in recent years. Non-local means takes a weighted average of pixels whose neighborhoods are similar, while sparse denoising represents the noise-free part of the image over an overcomplete dictionary. Combining the two ideas, this paper proposes a clustering-based sparse denoising method: similar image patches are clustered, and an l1/l2-norm regularization constraint forces patches in the same cluster to share the same sparsity structure over the overcomplete dictionary, thereby achieving denoising. For the dictionary, DCT and biorthogonal wavelet dictionaries are used, preserving both the smooth and the detail components of the original image. Experiments show the method denoises better than traditional sparse denoising.
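One concrete ingredient here is the overcomplete DCT dictionary. The sketch below builds it in the standard way (a Kronecker product of 1-D DCT atoms, as popularized by the K-SVD literature); the clustering and l1/l2 joint-sparsity step are not reproduced, and the atom counts are illustrative.

```python
import numpy as np

def dct_dictionary(patch_size, n_atoms_1d):
    """Overcomplete 2-D DCT dictionary as a Kronecker product of 1-D DCT atoms."""
    k = np.arange(patch_size)
    f = np.arange(n_atoms_1d)
    D1 = np.cos(np.pi * np.outer(k, f) / n_atoms_1d)   # 1-D cosine atoms
    D1 -= D1.mean(axis=0) * (f > 0)                    # zero-mean AC atoms, keep DC
    D1 /= np.linalg.norm(D1, axis=0)                   # unit-norm columns
    return np.kron(D1, D1)                             # (p*p) x (n*n) dictionary

D = dct_dictionary(8, 11)
print(D.shape)   # 64-dimensional patches, 121 atoms: overcomplete
```

Because the Kronecker product of unit vectors is a unit vector, every 2-D atom stays unit-norm, which is what sparse-coding routines such as OMP expect.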

7.
Low-dose computed tomography (CT) introduces considerable noise during image acquisition, seriously degrading image quality. A low-dose CT denoising algorithm based on a residual attention mechanism and a composite perceptual loss is proposed. A generative adversarial network performs the denoising; multiscale feature extraction and residual attention modules are introduced into the network to fuse information at different scales, improve the network's ability to distinguish noise features, and avoid losing image detail during denoising. A composite perceptual loss function is also adopted to speed up convergence and make the denoised image perceptually closer to the original. Experiments show that, compared with existing algorithms, the proposed method effectively suppresses noise in low-dose CT images and recovers more texture detail; relative to the low-dose input, the peak signal-to-noise ratio (PSNR) improves by 31.72% and the structural similarity (SSIM) by 13.15%, meeting more demanding medical imaging diagnostic requirements.

8.
陈柘  陈海 《国外电子元器件》2014,(2):168-170,173
An image sparse-decomposition denoising method based on a mixed dictionary is proposed. Wavelet-packet functions and discrete cosine functions together form the mixed dictionary; the matching pursuit algorithm sparsely decomposes the image to extract the sparse component of the noisy image, which is then used to reconstruct the image and thereby remove the noise. Experiments comparing against single-dictionary sparse-decomposition denoising show that the mixed-dictionary algorithm effectively extracts the sparse structure of the image and improves both the subjective and objective quality of the reconstruction.
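Matching pursuit over a concatenated dictionary can be sketched briefly. As an assumption for illustration, spike atoms stand in for the paper's localized wavelet-packet atoms, alongside DCT atoms; the test signal mixing a smooth cosine with a single spike is also synthetic.

```python
import numpy as np

n = 64
k = np.arange(n)
# Mixed dictionary: DCT atoms (smooth structure) + spike atoms (localized structure,
# standing in for the wavelet-packet atoms of the paper)
dct = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
dct /= np.linalg.norm(dct, axis=0)
D = np.hstack([dct, np.eye(n)])

def matching_pursuit(D, y, n_iter):
    """Plain matching pursuit: repeatedly subtract the best-matching atom."""
    residual, x = y.copy(), np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = D.T @ residual
        j = int(np.argmax(np.abs(c)))
        x[j] += c[j]
        residual -= c[j] * D[:, j]
    return x, residual

y = 3.0 * dct[:, 2] + 2.0 * np.eye(n)[:, 40]     # smooth component + one spike
x, r = matching_pursuit(D, y, n_iter=10)
print(np.linalg.norm(r))                          # residual shrinks toward zero
```

The point of the mixed dictionary shows up here: neither sub-dictionary alone can represent this signal with two atoms, but the concatenation can, so the residual decays rapidly.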

9.
Denoising is an important technique in image processing, but typical denoising algorithms smooth away edge information. To suppress noise effectively while protecting edges, a new denoising algorithm built on the multiwavelet transform is proposed, combining the strengths of the multiwavelet transform and anisotropic (Perona-Malik) diffusion. The multiwavelet transform decomposes the textured image into high-frequency and low-frequency subbands, and different anisotropic diffusion schemes are then applied according to the characteristics of each subband. Experiments show the algorithm denoises well, improves the peak signal-to-noise ratio (PSNR) and mean squared error (MSE), and better preserves the image's texture and detail.
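The multiwavelet decomposition is not reproduced here; the sketch below shows only the Perona-Malik diffusion ingredient on a synthetic noisy step edge (boundaries are treated as periodic via `np.roll`, an assumption adequate for this toy example).

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Perona-Malik anisotropic diffusion: smooth flat areas, keep strong edges."""
    u = img.astype(float).copy()
    # edge-stopping function g(|grad u|) = exp(-(|grad u| / kappa)^2)
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # differences toward the four neighbours (periodic border via np.roll)
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

rng = np.random.default_rng(2)
step = np.zeros((32, 32)); step[:, 16:] = 100.0            # a sharp edge
noisy = step + rng.normal(0, 5.0, step.shape)
smoothed = perona_malik(noisy)
print(np.abs(noisy - step).mean(), "->", np.abs(smoothed - step).mean())
```

Small gradients (noise) have g near 1 and get diffused; the 100-gray-level edge has g near 0, so almost no flux crosses it and the edge survives.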

10.
A fractional-order multiscale variational PDE model and adaptive algorithm for SAR image denoising
Preserving edges and texture is crucial when suppressing speckle noise in synthetic aperture radar (SAR) images. This paper first models the image using fractional-order derivatives and negative-exponent Sobolev spaces, establishing a fractional-order multiscale variational partial differential equation (PDE) model; it then gives an adaptive method for selecting the model parameters, and on that basis proposes a region- and scale-adaptive denoising algorithm. Numerical experiments show the new method performs well at removing noise, suppressing the staircase effect, and preserving edges and texture detail.

11.
Modified curvature motion for image smoothing and enhancement
We formulate a general modified mean curvature based equation for image smoothing and enhancement. The key idea is to consider the image as a graph in some R^n, and apply a mean curvature type motion to the graph. We consider some special cases relevant to grey-scale and color images.

12.
张健  李白燕 《激光技术》2014,38(6):863-866
To improve the quality of image segmentation, a graph-theoretic minimum-cut algorithm is studied. Image pixels are first mapped to graph nodes, with node weights computed as the ratio of a balance factor to the number of shared nearest neighbors. A minimum cut of the image graph is then built by minimizing an energy function; the gray values within each segment are extracted as that segment's feature vector, and a minimum spanning tree partitions the graph. Next, a decision function determines whether neighboring regions should be merged or kept separate, and finally the overall algorithm flow is given. Results show the algorithm can segment out the target information, is robust, and has a small peak memory footprint.

13.
To avoid the fixed structuring elements used by existing morphology on graph spaces, the concept of similarity weights on graphs is introduced, an adaptive structuring graph is defined, and the properties of the structuring graph are proved. On this basis an adaptive morphological operator on graph spaces is proposed, and its completeness is verified theoretically. The new operator accounts for both the local features of image pixels and the global features of connected pixels. Experiments show that the new operator not only outperforms existing color morphology in preserving the integrity and correlation of color information, but can also adaptively choose thresholds and structuring elements according to image characteristics, making it promising for finer-grained image processing.

14.
This paper proposes a metric-learning method based on a triplet-sampling graph convolutional network for semi-supervised remote sensing image retrieval. The method consists of two parts: a triplet graph convolutional network (TGCN) and graph-based triplet sampling (GTS). TGCN combines three parallel weight-sharing convolutional neural networks with a graph convolutional network to extract initial image features and learn graph embeddings of the images; by learning features and embeddings jointly, TGCN obtains a graph structure effective for semi-supervised retrieval. The proposed GTS algorithm then evaluates the image-similarity information implicit in the graph structure to select suitable hard triplets, and the set of hard triplets is used to train the model efficiently and quickly. The combined method was tested on two remote sensing datasets. Results show two advantages: TGCN learns effective graph embedding features and a metric space from the images and the graph structure, and GTS effectively evaluates the similarity information implicit in the graph to select suitable hard triplets, significantly improving semi-supervised remote sensing image retrieval.
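The details of GTS are in the paper, not the abstract, so the sketch below shows only generic hardest-triplet mining in an embedding space (the "hard triplet" idea the abstract names); the clusters, labels, and function name are illustrative assumptions.

```python
import numpy as np

def hardest_triplets(emb, labels):
    """For each anchor, pick the farthest positive and the nearest negative."""
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)   # pairwise distances
    same = labels[:, None] == labels[None, :]
    triplets = []
    for a in range(len(emb)):
        pos = np.where(same[a] & (np.arange(len(emb)) != a))[0]
        neg = np.where(~same[a])[0]
        if len(pos) and len(neg):
            p = pos[np.argmax(d[a, pos])]   # hardest positive: far, same class
            n = neg[np.argmin(d[a, neg])]   # hardest negative: close, other class
            triplets.append((a, int(p), int(n)))
    return triplets

rng = np.random.default_rng(3)
labels = np.array([0, 0, 0, 1, 1, 1])
emb = rng.standard_normal((6, 8)) + labels[:, None] * 3.0   # two loose clusters
trips = hardest_triplets(emb, labels)
print(trips)
```

Training on such triplets concentrates the loss on the pairs the current metric gets most wrong, which is why hard mining speeds up metric learning.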

15.
An improved graph-spectral threshold segmentation algorithm
Image segmentation is a typical ill-structured problem, and graph spectral partitioning theory, applied to segmentation as a new pattern-analysis tool, has attracted wide attention. Existing graph-spectral thresholding computes graph weights with a power-exponential function of Euclidean distance, which makes the computation expensive. This paper first proposes a new weight computation that replaces the power-exponential function with a rational Cauchy function of Euclidean distance, then applies it to a threshold segmentation algorithm based on a graph-spectral partitioning measure, yielding an improved graph-spectral thresholding method. Experiments show the method has a small computational cost and produces satisfactory results on images where the target and background differ markedly in proportion.
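The paper's exact weight formula is not given in the abstract; as an assumption, a standard rational Cauchy form w = 1 / (1 + (d / sigma)^2) is sketched next to the conventional power-exponential (Gaussian-type) weight it replaces.

```python
import numpy as np

def gaussian_weight(d, sigma=10.0):
    """Conventional power-exponential weight used by existing graph-spectral methods."""
    return np.exp(-(d / sigma) ** 2)

def cauchy_weight(d, sigma=10.0):
    """Rational Cauchy-type weight: same qualitative shape, no exponential call."""
    return 1.0 / (1.0 + (d / sigma) ** 2)

d = np.linspace(0.0, 50.0, 6)   # Euclidean gray-level distances
print(np.round(gaussian_weight(d), 3))
print(np.round(cauchy_weight(d), 3))
```

Both weights start at 1 for identical gray levels and decay monotonically, so the Cauchy form can drop into the same partitioning measure while avoiding the exponential evaluation per edge.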

16.
We present an extension of the random walker segmentation to images with uncertain gray values. Such gray-value uncertainty may result from noise or other imaging artifacts or, more generally, from measurement errors in the image acquisition process. The purpose is to quantify the influence of the gray-value uncertainty on the result when using random walker segmentation. In random walker segmentation, a weighted graph is built from the image, where the edge weights depend on the image gradient between the pixels. For given seed regions, the probability is evaluated for a random walk on this graph starting at a pixel to end in one of the seed regions. Here, we extend this method to images with uncertain gray values. To this end, we consider the pixel values to be random variables (RVs), thus introducing the notion of stochastic images. We end up with stochastic weights for the graph in random walker segmentation and a stochastic partial differential equation (PDE) that has to be solved. We discretize the RVs and the stochastic PDE by the method of generalized polynomial chaos, combining the recent developments in numerical methods for the discretization of stochastic PDEs and an interactive segmentation algorithm. The resulting algorithm allows for the detection of regions where the segmentation result is highly influenced by the uncertain pixel values. Thus, it gives a reliability estimate for the resulting segmentation, and it furthermore allows determining the probability density function of the segmented object volume.
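The stochastic polynomial-chaos extension is this paper's contribution; the sketch below shows only the deterministic random walker baseline it builds on, on a toy 1-D signal, solving the Dirichlet problem for the graph Laplacian (edge-weight form and parameters are illustrative assumptions).

```python
import numpy as np

def random_walker_1d(intensity, seeds, beta=1.0):
    """Random walker on a 1-D 'image': probability of reaching a label-0 seed first.
    seeds maps pixel index -> label (0 or 1)."""
    n = len(intensity)
    # edge weights from the intensity gradient: w = exp(-beta * (I_i - I_j)^2)
    w = np.exp(-beta * np.diff(intensity) ** 2)
    L = np.zeros((n, n))                      # combinatorial graph Laplacian
    for i, wi in enumerate(w):
        L[i, i] += wi; L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi; L[i + 1, i] -= wi
    unseeded = [i for i in range(n) if i not in seeds]
    b = np.zeros(len(unseeded))
    for r, i in enumerate(unseeded):
        for s, lab in seeds.items():
            if lab == 0:
                b[r] += -L[i, s]              # boundary term from label-0 seeds
    x = np.linalg.solve(L[np.ix_(unseeded, unseeded)], b)
    prob = np.zeros(n)
    for s, lab in seeds.items():
        prob[s] = 1.0 if lab == 0 else 0.0
    prob[unseeded] = x
    return prob

# two flat regions separated by a strong gradient in the middle
intensity = np.array([0.0, 0, 0, 0, 5, 5, 5, 5])
prob = random_walker_1d(intensity, seeds={0: 0, 7: 1}, beta=2.0)
print(np.round(prob, 3))
```

The strong gradient makes the middle edge weight nearly zero, so walkers rarely cross it: the left half gets probability near 1 and the right half near 0, which is exactly the segmentation.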

17.
The problem of semi-automatic segmentation has attracted much interest over the last few years. The Random Walker algorithm [1] has proven to be quite a popular solution to this problem, as it is able to deal with several components and models the image using a convenient graph structure. We propose two improvements to the image graph used by the Random Walker method. First, we propose a new way of computing the edge weights. Traditionally, such weights are based on the similarity between two neighbouring pixels, using their greyscale intensities or colours. We substitute a new definition of weights based on the probability distributions of colours. This definition is much more robust than traditional measures, as it allows for textured objects, and objects that are composed of multiple perceptual components. Second, the traditional graph has a vertex set which is the set of pixels and edges between each pair of neighbouring pixels. We substitute a smaller, irregular graph based on Mean Shift oversegmentation. This new graph is typically several orders of magnitude smaller than the original image graph, which can lead to major savings in computing time. We show results demonstrating the substantial improvement achieved when using the proposed image graph.

18.
In this paper, we assess three standard approaches to building irregular pyramid partitions for image retrieval in the bag-of-bags of words model that we recently proposed. These three approaches are: kernel \(k\)-means to optimize multilevel weighted graph cuts, normalized cuts and graph cuts, respectively. The bag-of-bags of words (BBoW) model is an approach based on irregular pyramid partitions over the image. An image is first represented as a connected graph of local features on a regular grid of pixels. Irregular partitions (subgraphs) of the image are further built by using graph partitioning methods. Each subgraph in the partition is then represented by its own signature. The BBoW model, with the aid of graphs, extends the classical bag-of-words model by embedding color homogeneity and limited spatial information through irregular partitions of an image. Compared with existing methods for image retrieval, such as spatial pyramid matching, the BBoW model does not assume that similar parts of a scene always appear at the same location in images of the same category. The extension of the proposed model to pyramids gives rise to a method we name irregular pyramid matching. The experiments on the Caltech-101 benchmark demonstrate that applying kernel \(k\)-means to the graph clustering process produces better retrieval results, as compared with other graph partitioning methods such as graph cuts and normalized cuts for BBoW. Moreover, this proposed method achieves comparable results and outperforms SPM in 19 object categories on the whole Caltech-101 dataset.

19.
Building irregular pyramids by dual-graph contraction
Many image analysis tasks lead to, or make use of, graph structures that are related through the analysis process with the planar layout of a digital image. The author presents a theory that allows the building of different types of hierarchies on top of such image graphs. The theory is based on the properties of a pair of dual-image graphs that the reduction process should preserve, e.g. the structure of a particular input graph. The reduction process is controlled by decimation parameters, i.e. a selected subset of vertices, called survivors, and a selected subset of the graph's edges, the parent-child connections. It is formally shown that two phases of contractions transform a dual-image graph to a dual-image graph built by the surviving vertices. Phase one operates on the original (neighbourhood) graph, and eliminates all nonsurviving vertices. Phase two operates on the dual (face) graph, and eliminates all degenerated faces that have been created in phase one. The resulting graph preserves the structure of the survivors; it is minimal and unique with respect to the selected decimation parameters. The result is compared with two modified specifications already in use for building stochastic and adaptive irregular pyramids.

20.
Traditional finger-vein recognition methods often suffer from low recognition rates or heavy computation. This paper proposes a finger-vein recognition method based on a lightweight graph convolutional network. A finger-vein image is first described by a weighted graph whose vertex features and weighted edge set are determined, respectively, by the local orientation-energy features of the image and the correlations among those features. The graph data is passed through graph convolution layers based on Chebyshev polynomials and fast pooling layers assisted by graph coarsening, after which a fully connected layer integrates the features for classification. Experiments show the method is far more efficient than traditional algorithms, reaches a 96.80% recognition rate on a finger-vein database built in our laboratory, and generalizes well across different databases.
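The Chebyshev-polynomial graph convolution the abstract mentions is a standard primitive (Defferrard-style spectral filtering); a minimal sketch on a toy 4-cycle graph follows. The filter coefficients and the one-hot vertex feature are illustrative assumptions, and lmax = 2 is the usual bound used for a normalized Laplacian.

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """Apply the graph filter sum_k theta_k * T_k(L_scaled) x via the Chebyshev recurrence."""
    # rescale the Laplacian spectrum into [-1, 1]: L_scaled = 2L/lmax - I with lmax = 2
    L_scaled = L - np.eye(L.shape[0])
    t_prev, t_curr = x, L_scaled @ x                 # T_0 x and T_1 x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        # Chebyshev recurrence: T_k = 2 L_scaled T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2 * L_scaled @ t_curr - t_prev
        out += theta[k] * t_curr
    return out

# normalized Laplacian of a 4-cycle graph
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
deg = A.sum(1)
L = np.eye(4) - A / np.sqrt(np.outer(deg, deg))
x = np.array([1.0, 0.0, 0.0, 0.0])                   # a one-hot "vertex feature"
y = chebyshev_filter(L, x, theta=[0.5, 0.3, 0.2])
print(np.round(y, 3))
```

An order-K filter mixes information from at most K-hop neighbourhoods without ever eigendecomposing L, which is what makes the network in the abstract lightweight.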


Copyright©北京勤云科技发展有限公司  京ICP备09084417号