Similar Documents
19 similar documents found
1.
Adaptive dictionary learning exploits the structural self-similarity of an image by using the image itself as the training set, so that similar patches within the image admit sparse representations under the learned dictionary. This paper incorporates into the adaptive dictionary learning process the idea, taken from global dictionary learning, of drawing additional information from an external image library, and proposes a single-image super-resolution algorithm based on adaptive multi-dictionary learning that gathers additional information from both the low-resolution image itself and an image library. The algorithm clusters the patches of the low-resolution image pyramid, classifies the patches of the image library according to the clustering result, and trains a separate dictionary for each class from the samples it contains, so that the optimal dictionary for representing each reconstructed patch can be determined. Experiments show that, compared with ScSR, SISR, NLIBP, CSSS, and mSSIM, the proposed algorithm achieves better super-resolution reconstruction.
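A minimal sketch of the per-cluster dictionary idea described above, assuming scikit-learn and scikit-image; the test image, pyramid depth, patch size, cluster count, and sparsity level are illustrative choices, not the paper's settings.

```python
# Cluster patches from a two-level LR pyramid, train one dictionary per cluster,
# and pick the dictionary of the nearest cluster when coding a new patch.
import numpy as np
from skimage import data, transform
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

img = data.camera().astype(float) / 255.0
pyramid = [img, transform.rescale(img, 0.5, anti_aliasing=True)]
patches = np.vstack([
    extract_patches_2d(level, (8, 8), max_patches=2000, random_state=0).reshape(-1, 64)
    for level in pyramid
])
patches -= patches.mean(axis=1, keepdims=True)           # remove DC component

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(patches)
dictionaries = []
for k in range(4):                                       # one dictionary per cluster
    cluster_patches = patches[kmeans.labels_ == k]
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                       random_state=0).fit(cluster_patches)
    dictionaries.append(dico.components_)

# For a new patch, use the dictionary of its nearest cluster
test_patch = patches[:1]
k = kmeans.predict(test_patch)[0]
code = sparse_encode(test_patch, dictionaries[k], algorithm='omp', n_nonzero_coefs=5)
reconstruction = code @ dictionaries[k]
```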

2.
周琳  杨娜 《红外技术》2015,(4):277-282
To improve the quality of image super-resolution reconstruction, an offline dual-dictionary learning algorithm is adopted. First, a sparse dictionary model is built for the image patches and the number of atoms in the dictionary is determined. The image is then sparse-coded with offline dictionary learning, and the sparse coding is unified into a single optimization framework. Next, the dictionary is decomposed into multiple sub-dictionaries, and the column vectors formed by the pixels of each image patch are expanded over the sub-dictionaries. Finally, the heterogeneous data at different resolutions in super-resolution reconstruction are homogenized with the dual dictionaries, a residual-control condition is established, and the implementation of the algorithm is given. Simulations show that the proposed algorithm produces clear reconstructions, with the highest peak signal-to-noise ratio and the lowest BIQI.

3.
Image Inpainting Based on Non-locally Learned Dictionaries (Cited by: 2)
李民  程建  李小文  乐翔 《电子与信息学报》2011,33(11):2672-2678
This paper proposes a new learning-based image inpainting algorithm. Unlike the classical sparse representation model, it jointly sparse-codes non-local self-similar patches, trains an efficient learned dictionary, and enforces the same sparsity pattern across self-similar patches. The method ensures that self-similar patches remain similar after projection into the sparse domain, preserves the correlation between them, establishes their joint sparse association more effectively, and uses this association as a prior to guide inpainting. The algorithm trains the initial overcomplete dictionary from a large set of natural image samples, exploiting the prior knowledge of the sample images while fully taking into account the information in the image being processed, and is therefore highly adaptive. Experiments on natural images, covering large- and small-region inpainting as well as text removal, show that the method achieves good restoration results.

4.
干宗良 《电视技术》2012,36(14):19-23
This paper briefly reviews super-resolution reconstruction algorithms based on sparse dictionary constraints and proposes a low-complexity adaptive sparse-constraint super-resolution algorithm based on K-means clustering. The proposed algorithm reduces computational complexity in two ways: it trains class-specific dictionaries and reconstructs each patch with the dictionary of its class, which reduces the size of the dictionary used per patch; and it analyzes patch features to adaptively select the reconstruction method. Experimental results show that the proposed fast method greatly reduces reconstruction time while delivering reconstruction quality comparable to the original algorithm.

5.
For super-resolution reconstruction of a single low-resolution image, an improved sparse-representation-based algorithm is proposed. The input low-resolution patch and the corresponding generated high-resolution patch are jointly used to solve for the sparse coefficients over a coupled high/low-resolution dictionary pair, and the coefficients are then combined with the high-resolution dictionary to refine the output high-resolution patch. Simulations show that the proposed algorithm effectively improves the quality of the reconstructed image.
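A sketch of the coupled-dictionary reconstruction step, assuming scikit-learn and scikit-image. The LR/HR dictionary pair is learned jointly here from a single training image for illustration; the paper's training data and its refinement step are not reproduced, and all parameters are illustrative.

```python
# Learn a joint LR/HR dictionary, then code an LR patch over the LR half and
# synthesize the HR patch with the HR half.
import numpy as np
from skimage import data, transform
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

hr = data.camera().astype(float) / 255.0
lr_up = transform.resize(transform.rescale(hr, 0.5, anti_aliasing=True), hr.shape)

# Same shape + same random_state => patches are taken at matching positions
hr_patches = extract_patches_2d(hr,    (8, 8), max_patches=3000, random_state=0).reshape(-1, 64)
lr_patches = extract_patches_2d(lr_up, (8, 8), max_patches=3000, random_state=0).reshape(-1, 64)

# Joint training: concatenated LR/HR patches share one sparse code
joint = np.hstack([lr_patches, hr_patches])
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0).fit(joint)
D_lr, D_hr = dico.components_[:, :64], dico.components_[:, 64:]

# Reconstruction: code an LR patch over D_lr, then synthesize with D_hr
y = lr_patches[:1]
alpha = sparse_encode(y, D_lr, algorithm='omp', n_nonzero_coefs=5)
x_hat = alpha @ D_hr                                      # estimated HR patch
```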

6.
To address the problem that single-dictionary sparse-representation super-resolution algorithms cannot guarantee compatibility between adjacent image patches, which lowers the quality of the sparsely reconstructed image, an improved sparse representation method based on patch-pair learning is proposed. The method applies principal component analysis to the feature patches of the training samples, recovers high-resolution patches from the sparse coefficients of the input low-resolution patches, and finally combines the sparse representation of the low-resolution patches with the high-resolution patch dictionary to reconstruct the high-resolution patches. Experiments show that the proposed algorithm effectively recovers images of better quality, with an improved peak signal-to-noise ratio.

7.
詹曙  方琪  杨福猛  常乐乐  闫婷 《电子学报》2016,44(5):1189-1195
To address the poor reconstruction quality or excessive dictionary training time of current dictionary-learning-based image super-resolution methods, this paper proposes a super-resolution reconstruction algorithm with improved dictionary learning in coupled feature spaces. The algorithm first clusters the training patches with a Gaussian mixture model, then uses an improved K-SVD algorithm with a modified dictionary update rule to quickly obtain dictionary pairs and mapping matrices in the coupled high- and low-resolution feature spaces. During reconstruction, the best-matching dictionary pair and mapping matrix are selected adaptively according to the likelihood of the test sample under each class. Finally, image non-local similarity is combined with iterative back-projection to post-process the reconstructed image and obtain the best result. Experimental results demonstrate the effectiveness of the method.
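A sketch of the class-selection step, assuming scikit-learn: a Gaussian mixture model is fitted on LR feature patches, and at test time the class with the highest likelihood is chosen. The per-class "mapping matrix" below is a plain least-squares regressor from LR features to HR patches, standing in for the paper's K-SVD-trained dictionary pair; class count and patch size are illustrative.

```python
import numpy as np
from skimage import data, transform
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.mixture import GaussianMixture

hr = data.camera().astype(float) / 255.0
lr_up = transform.resize(transform.rescale(hr, 0.5, anti_aliasing=True), hr.shape)
X = extract_patches_2d(lr_up, (6, 6), max_patches=4000, random_state=0).reshape(-1, 36)
Y = extract_patches_2d(hr,    (6, 6), max_patches=4000, random_state=0).reshape(-1, 36)

gmm = GaussianMixture(n_components=5, covariance_type='diag', random_state=0).fit(X)
labels = gmm.predict(X)

# One linear mapping per class (least squares), standing in for the dictionary pair
mappings = [np.linalg.lstsq(X[labels == k], Y[labels == k], rcond=None)[0]
            for k in range(5)]

x_test = X[:1]
k_best = np.argmax(gmm.predict_proba(x_test))             # most likely class
y_hat = x_test @ mappings[k_best]                         # HR patch estimate
```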

8.
Classical super-resolution algorithms based on sparse representation and dictionary learning perform well in both reconstruction quality and computational complexity. However, a dictionary trained on external samples lacks correlation with the image to be reconstructed, which makes such algorithms less robust. To overcome this drawback, a new self-learning-based algorithm for high-quality single-image super-resolution is proposed. The algorithm needs no external training images: dictionary learning and image reconstruction are carried out entirely on samples built from the image to be reconstructed itself, which strengthens the correlation between the trained dictionaries and that image. Specifically, in the dictionary training stage the input image is first pre-processed with high-frequency restoration based on two-dimensional empirical mode decomposition to enhance the high-frequency content of the sample source; the training set is then built, and the K-singular value decomposition (K-SVD) algorithm is used to obtain a self-learned main dictionary and a self-learned residual dictionary, which together form a dual dictionary. In the reconstruction stage, the dual-dictionary structure is combined with self-learning: the main dictionary first recovers the principal high-frequency component, and the residual dictionary then recovers the residual high-frequency information. Experimental results show that, in both subjective visual quality and objective quality metrics, the proposed algorithm has clear advantages over traditional interpolation and classical dictionary learning algorithms.
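A rough sketch of the main-plus-residual dual-dictionary idea, assuming scikit-learn and scikit-image. Both dictionaries are trained on patches taken from the input image itself (no external set), but the BEMD high-frequency pre-processing and the full reconstruction pipeline are omitted; all parameters are illustrative.

```python
import numpy as np
from skimage import data, transform
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

img = data.camera().astype(float) / 255.0
smooth = transform.resize(transform.rescale(img, 0.5, anti_aliasing=True), img.shape)
H = extract_patches_2d(img - smooth, (8, 8), max_patches=3000,
                       random_state=0).reshape(-1, 64)    # high-frequency patches

# Stage 1: main dictionary recovers the principal high-frequency component
D_main = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                     random_state=0).fit(H).components_
A1 = sparse_encode(H, D_main, algorithm='omp', n_nonzero_coefs=2)
H1 = A1 @ D_main

# Stage 2: residual dictionary recovers what the main stage missed
R = H - H1
D_res = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                    random_state=0).fit(R).components_
A2 = sparse_encode(R, D_res, algorithm='omp', n_nonzero_coefs=2)
H_hat = H1 + A2 @ D_res                                   # main + residual estimate
```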

9.
A Clustering-Based Sparse Image Denoising Method (Cited by: 2)
Among image denoising methods, the non-local means algorithm and sparse denoising have attracted wide attention in recent years. Non-local means computes a weighted average of pixels with similar neighborhoods, while sparse denoising represents the noise-free part of the image sparsely over an overcomplete dictionary. Building on both ideas, this paper proposes a clustering-based sparse denoising method that combines their advantages: similar image patches are clustered, and by imposing an l1/l2-norm regularization constraint, the patches within a cluster are forced to share the same sparsity structure over the overcomplete dictionary, which achieves the denoising. For the dictionary, a DCT dictionary and a biorthogonal wavelet dictionary are used, which preserve both the smooth and the detail components of the original image. Experimental results show that the method denoises better than traditional sparse denoising methods.
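A sketch of two ingredients mentioned above, assuming scikit-learn and scikit-image: an overcomplete 2D DCT dictionary and clustering of noisy patches before coding. The l1/l2 group-sparsity constraint itself is not reproduced; here each patch is coded independently with OMP, and the noise level, cluster count, and sparsity are illustrative.

```python
import numpy as np
from skimage import data
from skimage.util import random_noise
from sklearn.cluster import KMeans
from sklearn.decomposition import sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

def overcomplete_dct(patch_size=8, atoms_per_dim=11):
    """Separable overcomplete 2D DCT dictionary with atoms_per_dim**2 atoms."""
    D1 = np.zeros((patch_size, atoms_per_dim))
    for k in range(atoms_per_dim):
        v = np.cos(np.arange(patch_size) * k * np.pi / atoms_per_dim)
        if k > 0:
            v -= v.mean()
        D1[:, k] = v / np.linalg.norm(v)
    return np.kron(D1, D1).T                 # rows are atoms, as sklearn expects

noisy = random_noise(data.camera() / 255.0, var=0.01)
patches = extract_patches_2d(noisy, (8, 8), max_patches=2000, random_state=0).reshape(-1, 64)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(patches)

D = overcomplete_dct()
denoised = np.empty_like(patches)
for k in range(6):                            # code the patches cluster by cluster
    idx = labels == k
    codes = sparse_encode(patches[idx], D, algorithm='omp', n_nonzero_coefs=4)
    denoised[idx] = codes @ D
```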

10.
陈利霞  李子  袁华  欧阳宁 《电视技术》2015,39(17):16-20
To address the fact that image fusion algorithms based on sparse representation over a single trained dictionary ignore local image features, an image fusion algorithm based on patch-classified sparse representation is proposed. According to differences in local features, image patches are classified into three structural types, smooth, edge, and texture, and separate redundant dictionaries are trained for the edge and texture classes. Smooth patches are fused by arithmetic averaging; edge and texture patches are fused by sparse representation over their corresponding dictionaries, and the residual of the sparse representation of the edge patches is fused with a wavelet transform. Experiments show that, compared with single-dictionary sparse representation, the algorithm noticeably improves both the subjective and the objective quality metrics of the fused image and also runs faster.
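A sketch of one plausible smooth / edge / texture classification rule based on local gradient statistics, assuming scikit-image and scikit-learn; the energy threshold and the structure-tensor eigenvalue criterion are illustrative choices, not the paper's.

```python
import numpy as np
from skimage import data, filters
from sklearn.feature_extraction.image import extract_patches_2d

img = data.camera().astype(float) / 255.0
gx, gy = filters.sobel_h(img), filters.sobel_v(img)

Px = extract_patches_2d(gx, (8, 8), max_patches=5000, random_state=0)
Py = extract_patches_2d(gy, (8, 8), max_patches=5000, random_state=0)

labels = []
for px, py in zip(Px, Py):
    energy = np.mean(px**2 + py**2)
    if energy < 1e-3:                               # weak gradients -> smooth
        labels.append('smooth')
    else:
        # dominant orientation: ratio of structure-tensor eigenvalues
        J = np.array([[np.sum(px * px), np.sum(px * py)],
                      [np.sum(px * py), np.sum(py * py)]])
        lam = np.linalg.eigvalsh(J)                 # ascending eigenvalues
        labels.append('edge' if lam[1] > 10 * (lam[0] + 1e-12) else 'texture')
```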

11.
Low-Dose X-ray CT Reconstruction via Dictionary Learning (Cited by: 1)
Although diagnostic medical imaging provides enormous benefits for the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous, and other diseases. How to reduce radiation dose while maintaining diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning has proven to be an effective way to obtain sparse representations. On the other hand, it is important to consider the statistical properties of the projection data in the low-dose CT case. Recently, we developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it experimentally. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function within a statistical iterative reconstruction framework. The dictionary can either be predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated on low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach may produce better images with lower noise and more detailed structural features in the selected cases, although there is no proof that this holds for all kinds of structures.
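A rough sketch of one alternation of the idea above, assuming scikit-image and scikit-learn: a gradient-like step on the projection-domain data fidelity followed by a patch-wise dictionary sparsification of the current image. skimage's radon/iradon stand in for the real CT system model, the dictionary is pre-trained on an FBP image, and the view count, noise level, step size, and sparsity are illustrative, not the authors' settings.

```python
import numpy as np
from skimage import data, transform
from skimage.transform import radon, iradon
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

phantom = transform.resize(data.shepp_logan_phantom(), (128, 128))
theta = np.linspace(0., 180., 60, endpoint=False)             # few-view, "low-dose" setting
sino = radon(phantom, theta=theta)
sino += np.random.default_rng(0).normal(0, 0.5, size=sino.shape)   # simulated noise

x = iradon(sino, theta=theta, filter_name='ramp')             # FBP initialization
patches = extract_patches_2d(x, (8, 8), max_patches=3000, random_state=0).reshape(-1, 64)
D = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                random_state=0).fit(patches).components_

for _ in range(3):                                            # a few alternations
    # (1) data-fidelity step: push the reprojection of x toward the measured sinogram
    residual = radon(x, theta=theta) - sino
    x = x - 0.1 * iradon(residual, theta=theta, filter_name=None)
    # (2) sparsity step: replace every patch by its sparse approximation over D
    P = extract_patches_2d(x, (8, 8)).reshape(-1, 64)
    codes = sparse_encode(P, D, algorithm='omp', n_nonzero_coefs=6)
    x = reconstruct_from_patches_2d((codes @ D).reshape(-1, 8, 8), x.shape)
```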

12.
This paper proposes a new image super-resolution reconstruction algorithm based on support vector regression (SVR) and sparse representation. SVR predicts outputs well from input data. Image statistics show that image patches can be represented well as sparse linear combinations of atoms from an overcomplete dictionary. For a low-resolution input image, super-resolution can be viewed as estimating the pixel values at the corresponding positions of the high-resolution image. Unlike traditional support vector regression approaches, the features used here are the sparse representations of different types of image patches. Studies show that sparse representations used as features are somewhat robust to noise. Experimental results show that, compared with traditional support vector regression, the method offers an advantage in reconstruction quality.
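A sketch of SVR on sparse-representation features, assuming scikit-learn and scikit-image: each LR patch is sparse-coded over a learned dictionary, and the code is used as the feature vector from which SVR regresses the corresponding HR center pixel. Patch size, dictionary size, and SVR parameters are illustrative.

```python
import numpy as np
from skimage import data, transform
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.svm import SVR

hr = data.camera().astype(float) / 255.0
lr_up = transform.resize(transform.rescale(hr, 0.5, anti_aliasing=True), hr.shape)

lr_patches = extract_patches_2d(lr_up, (7, 7), max_patches=2000, random_state=0).reshape(-1, 49)
hr_patches = extract_patches_2d(hr,    (7, 7), max_patches=2000, random_state=0).reshape(-1, 49)
targets = hr_patches[:, 24]                       # HR value at the patch centre

D = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                random_state=0).fit(lr_patches).components_
features = sparse_encode(lr_patches, D, algorithm='omp', n_nonzero_coefs=5)

svr = SVR(kernel='rbf', C=1.0, epsilon=0.01).fit(features[:1500], targets[:1500])
pred = svr.predict(features[1500:])               # predicted HR centre pixels
```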

13.
Computed tomography is a common method of medical examination, but excessive radiation during the examination may cause secondary harm to the patient. A lung computed tomography (CT) image reconstruction method based on Sparse Bayesian Learning (SBL) is therefore proposed: first, the lung image is measured with a Gaussian random measurement matrix, and a wavelet-transform-based sparse ...

14.
Redundant-dictionary-learning-based image noise reduction methods exploit the sparse prior of patches and have been shown to achieve state-of-the-art results; however, they do not exploit the non-local similarity of image patches. In this paper we exploit both the structural similarity and the sparse prior of image patches and propose a new image noise reduction method based on dictionary learning and similarity regularization. Formulating noise reduction as an optimization problem over multiple variables, we optimize the variables alternately to obtain the denoised image. Experiments comparing the proposed method with its counterparts on benchmark natural images show its superiority in both the visual results and the numerical measures.

15.

Terahertz computed tomography (THz CT) offers advantages in penetrating nonmetallic and nonpolar materials, visualizing 3D internal structure, and so on. To obtain satisfactory reconstructions, complete measurements from many different views are necessary. However, this process is time-consuming, and in practice THz CT projections are usually incomplete, which generates artifacts in the final reconstructed images. To address this issue, a dictionary-learning-based THz CT reconstruction (DLTR) model is proposed in this study. Specifically, image patches are extracted from images reconstructed by other state-of-the-art methods to train the initial dictionary with the K-SVD algorithm. The dictionary is then adaptively updated during THz CT reconstruction, and the updated dictionary is in turn used to further update the reconstructed images. To verify the accuracy and quality of the DLTR method, filtered back-projection (FBP), the simultaneous algebraic reconstruction technique (SART), and total variation (TV) reconstruction are chosen for comparison. The experimental results show that the DLTR method suppresses noise and preserves structures well.


16.
Restricted visualization of the surgical field is one of the most critical challenges for minimally invasive surgery (MIS). Current intraoperative visualization systems are promising; however, they can hardly meet the requirements of high-resolution, real-time 3D visualization of the surgical scene to support the recognition of anatomic structures for safe MIS procedures. In this paper, we present a new approach for real-time 3D visualization of organ deformations based on optical imaging patches with a limited field of view and a single preoperative scan of magnetic resonance imaging (MRI) or computed tomography (CT). The reconstruction is motivated by our empirical observation that the spherical harmonic coefficients corresponding to distorted surfaces of a given organ lie in lower-dimensional subspaces of a structured dictionary that can be learned from a set of representative training surfaces. We provide both theoretical and practical designs for achieving these goals. Specifically, we discuss details of the selection of limited optical views and the registration of partial optical images with a single preoperative MRI/CT scan. The proposed design is evaluated with both finite element modeling data and ex vivo experiments. The ex vivo test is conducted on fresh porcine kidneys using 3D MRI scans with 1.2 mm resolution and a portable laser scanner with an accuracy of 0.13 mm. Results show that the proposed method achieves a sub-3 mm spatial resolution in terms of Hausdorff distance when using only one preoperative MRI scan and the optical patch from a single-sided view of the kidney. The reconstruction frame rate is between 10 frames/s and 39 frames/s, depending on the complexity of the test model.
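A sketch of the underlying representation only: fitting spherical-harmonic coefficients to a closed, star-shaped surface by least squares, assuming NumPy and SciPy's scipy.special.sph_harm (deprecated in newer SciPy in favor of sph_harm_y). The maximum degree, sampling, and the toy surface are illustrative, and the learned structured dictionary over such coefficient vectors is not shown.

```python
import numpy as np
from scipy.special import sph_harm

L = 6                                                       # maximum SH degree
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)                      # azimuth angles
phi = np.arccos(rng.uniform(-1, 1, 500))                    # polar angles

# Toy "organ" surface: radius as a smooth function of direction
r = 1.0 + 0.2 * np.cos(2 * phi) + 0.1 * np.sin(3 * theta) * np.sin(phi)

# Design matrix of spherical harmonics evaluated at the sample directions
basis = np.column_stack([sph_harm(m, n, theta, phi)
                         for n in range(L + 1) for m in range(-n, n + 1)])
coeffs, *_ = np.linalg.lstsq(basis, r.astype(complex), rcond=None)
r_fit = (basis @ coeffs).real                               # reconstructed radii
```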

17.
A Super-Resolution Reconstruction Algorithm Based on Classified Sparse Representation of Image Patches (Cited by: 6)
练秋生  张伟 《电子学报》2012,40(5):920-925
Current super-resolution reconstruction algorithms based on sparse representation of image patches use a single dictionary for all patches and cannot reflect the differences between patch types. To address this shortcoming, this paper proposes a method based on classified sparse representation of image patches. The method first uses local image features to classify patches into three types, smooth, edge, and irregular structure, with edge patches further subdivided by orientation. It then trains a corresponding pair of low- and high-resolution dictionaries for the edge and irregular-structure classes by sparse representation. During reconstruction, smooth patches are handled by simple bicubic interpolation, while edge and irregular-structure patches are reconstructed from their corresponding high- and low-resolution dictionaries via orthogonal matching pursuit. Experimental results show that, compared with single-dictionary sparse representation, the algorithm clearly improves the reconstruction quality of image edges and substantially increases reconstruction speed.
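A compact sketch of the per-class dispatch, assuming scikit-learn: smooth patches keep their bicubic estimate, while non-smooth patches are OMP-coded over a low-resolution dictionary and synthesized with the coupled high-resolution dictionary. D_lr and D_hr are assumed to come from a coupled-dictionary training step like the sketch under entry 5 above; the function name, variance threshold, and sparsity level are illustrative.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def reconstruct_patch(lr_patch_bicubic, D_lr, D_hr, smooth_thresh=1e-3, k=5):
    """lr_patch_bicubic: flattened, bicubically upscaled LR patch."""
    if np.var(lr_patch_bicubic) < smooth_thresh:           # smooth block: keep bicubic
        return lr_patch_bicubic
    alpha = sparse_encode(lr_patch_bicubic[None, :], D_lr,
                          algorithm='omp', n_nonzero_coefs=k)
    return (alpha @ D_hr)[0]                               # edge / irregular block
```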

18.
In radiotherapy (RT), organ motion caused by breathing hampers accurate patient positioning and the determination of radiation dose and target volume. Most motion-compensation techniques under trial require cooperation from the patient and expensive equipment. Estimating the motion between two computed tomography (CT) three-dimensional scans at the extremes of the breathing cycle and including this information in RT planning has received little attention, mainly because it is a tedious manual task. This paper proposes a method to compute, in a fully automatic fashion, the spatial correspondence between these sets of volumetric CT data. Given the large ambiguity in this problem, the method gradually reduces this uncertainty through two main phases: a similarity-parametrization data analysis phase and a projection-regularization phase. Results on a real study show high accuracy in establishing the spatial correspondence between both sets. Embedding this method in RT planning tools is foreseen, after making some suggested improvements and proving the validity of the two-scan approach.

19.
A geometric snake model for segmentation of medical imagery (Cited by: 29)
We employ the new geometric active contour models, previously formulated, for edge detection and segmentation of magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound medical imagery. Our method is based on defining feature-based metrics on a given image, which in turn leads to a novel snake paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus, the snake is attracted very quickly and efficiently to the desired feature.
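A sketch of a geodesic (geometric) active contour in this spirit, assuming a recent scikit-image: an edge-based map plays the role of the potential well whose minimum attracts the evolving curve. The test image, iteration count, balloon force, and initial level set are illustrative, not the authors' formulation.

```python
from skimage import data, img_as_float
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

image = img_as_float(data.coins())                 # stand-in for MRI/CT/ultrasound data
gimage = inverse_gaussian_gradient(image)          # edge-based stopping/potential map

init_ls = disk_level_set(image.shape, radius=100)  # initial contour
seg = morphological_geodesic_active_contour(gimage, num_iter=200,
                                            init_level_set=init_ls,
                                            smoothing=1, balloon=-1,
                                            threshold='auto')
```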
