Similar Documents
19 similar documents found (search time: 296 ms)
1.
For neighborhood embedding-based image super-resolution reconstruction, a method is proposed for partitioning the training set into layers, which effectively reduces the long search time incurred when each patch to be reconstructed is matched against the training set. The image regions to be reconstructed are also classified: flat regions are reconstructed with ordinary bicubic interpolation, while regions rich in detail are reconstructed by neighborhood embedding. Finally, iterative back-projection (IBP) is applied to the reconstructed image as a global post-processing step to further improve quality. Experimental results show that images reconstructed with this method improve considerably in both subjective and objective quality, while reconstruction time is greatly reduced.
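The IBP global post-processing step mentioned above can be sketched as follows. The box-average degradation model and nearest-neighbour back-projection kernel are illustrative assumptions, not necessarily the paper's exact operators.

```python
import numpy as np

def downsample(img, s):
    # Assumed degradation model: s x s box averaging.
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s):
    # Nearest-neighbour upsampling used as the back-projection operator.
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def ibp(sr, lr, scale=2, iters=10, step=1.0):
    """Iterative back-projection: repeatedly correct the SR estimate so
    that simulating the degradation on it reproduces the observed LR image."""
    hr = sr.astype(float).copy()
    for _ in range(iters):
        err = lr - downsample(hr, scale)   # residual in LR space
        hr += step * upsample(err, scale)  # project residual back to HR
    return hr
```

With this particular pair of operators the correction is exact after one pass; real kernels (e.g. Gaussian blur plus decimation) need several iterations.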

2.
In neighborhood embedding super-resolution algorithms, both training and reconstruction are carried out in a feature space, so feature selection has a large impact on performance. In addition, most neighborhood embedding methods use the trained sample library directly without testing it, making neighbor selection essentially "blind". Considering the importance of feature selection and the need to avoid blind neighbor selection, this paper proposes a new neighborhood embedding super-resolution algorithm. First, a global estimate of the input image is obtained with an expert vector field model. Second, the residual image is reconstructed by neighborhood embedding. During residual reconstruction, the image is divided into patches and features are extracted with linear filters; the training images are then split into two groups, the first used to build high- and low-resolution reconstruction sample libraries and the second to test those libraries and build a neighbor-selection library; finally, the number of neighbors for each input patch is chosen adaptively and the patch is reconstructed from the sample library. Simulation results show that, compared with other neighborhood embedding methods, the proposed algorithm recovers more detail and sharper edges, and the reconstructed high-resolution images have better subjective and objective quality.
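The core neighborhood embedding step shared by these algorithms — expressing a low-resolution patch as an affine combination of its nearest neighbors, then reusing the weights on the high-resolution counterparts — can be sketched as a minimal LLE-style solve. This is a generic sketch, not the paper's full pipeline.

```python
import numpy as np

def embedding_weights(x, neighbors, reg=1e-3):
    """Solve for LLE-style reconstruction weights of patch x over its
    K nearest low-resolution neighbors (rows of `neighbors`),
    constrained to sum to 1."""
    Z = neighbors - x                          # shift neighbors to the query patch
    G = Z @ Z.T                                # local Gram matrix (K x K)
    G = G + reg * np.trace(G) * np.eye(len(G)) # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                         # enforce sum-to-one
```

The high-resolution patch is then estimated as `w @ hr_neighbors`, with the same weights applied to the high-resolution counterparts of the selected neighbors.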

3.
马祥 《现代电子技术》2012,35(18):105-107
To improve the residual compensation step in face image super-resolution reconstruction, a general residual compensation framework is proposed based on linear combinations of content-similar image patches: no search step is required, and patches at the same content location in the training-set face images are used directly in the computation. The global reconstruction step of the framework can use any reconstruction method. Experimental results show that residual compensation within this framework recovers the detail of the initially reconstructed face image better than classical neighborhood embedding residual compensation. Because this is a general residual compensation method, it can be inferred that any algorithm using neighborhood-based residual compensation could adopt this framework to further improve its reconstruction results.

4.
To improve the time efficiency of traditional neighborhood embedding-based image super-resolution reconstruction, a new method is adopted that uses patch orientation information for neighbor selection and training-set classification. The method first classifies the training set by patch orientation, then selects, from the corresponding sub-training set, patches whose orientation is similar to that of the patch to be reconstructed as its neighbors; iterative back-projection is applied as a global post-processing step to further improve reconstruction quality, and the improved method is validated with numerical experiments. The results show that the method speeds up super-resolution reconstruction by more than a factor of 10 while also improving reconstruction quality, giving it good practical value.
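A direction-based patch classification like the one described above might look like the following sketch. The gradient-orientation histogram used here is an assumption standing in for the paper's unspecified direction measure.

```python
import numpy as np

def patch_direction(patch, n_bins=8):
    """Classify a patch by its dominant gradient orientation,
    quantized into n_bins classes (a simple stand-in for the
    direction-based training-set partition described above)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                      # gradient magnitude per pixel
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # orientation folded into [0, pi)
    # magnitude-weighted orientation histogram; dominant bin = class label
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return int(np.argmax(hist))
```

Patches in the training set are bucketed by this label; at reconstruction time, neighbors are searched only within the bucket matching the input patch, which is where the reported speed-up comes from.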

5.
Neighborhood embedding is a learning-based super-resolution approach, but it suffers from complex image feature computation and difficult classification and search. This paper proposes a neighborhood embedding super-resolution algorithm based on second-order gradient ratio features: the features are simple, classification and search complexity is low, and the image library is compact, making the algorithm well suited to hardware implementation. Experimental results show that, compared with traditional super-resolution algorithms, the high-resolution images reconstructed by this algorithm have richer texture, sharper edges, and better subjective and objective quality.

6.
《红外技术》2017,(11):1032-1037
To address the difficulty of acquiring large-scale infrared images with the small array sizes of uncooled infrared focal plane detectors, a single-image infrared super-resolution method based on low-rank matrix recovery and neighborhood embedding is proposed. A low-rank matrix recovery algorithm learns the latent low-rank component of the similarity matrix; neighborhood embedding is applied to the recovered low-rank component to obtain an initial super-resolution estimate, and a global reconstruction constraint then yields the final result. Extensive simulations show that the reconstructed images achieve good super-resolution results both quantitatively and qualitatively: the method preserves the consistency of smooth regions in the reconstructed high-resolution image while retaining detail and the integrity of edge contours.
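A basic building block of low-rank matrix recovery is singular value thresholding, which shrinks singular values so that only the dominant low-rank structure survives. This is a minimal sketch of that operator, not the paper's full recovery algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M
    by tau (clipping at zero), keeping only the dominant low-rank
    structure. A core step in low-rank matrix recovery solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # soft-threshold the spectrum
    return (U * s) @ Vt            # rebuild with the shrunken spectrum
```

In recovery algorithms this operator is applied repeatedly inside an iteration that alternates between the low-rank component and a sparse residual.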

7.
In neighborhood embedding-based face image super-resolution reconstruction, training and reconstruction are both carried out in a feature space, so feature selection strongly affects algorithm performance. Moreover, the model places no constraint on the reconstruction weights, so negative weights can appear and cause overfitting, degrading the quality of the reconstructed face image. Considering the importance of feature selection and of constraining the sign of the weights, this paper proposes a face super-resolution reconstruction algorithm using non-negative weighted neighborhood embedding with two-dimensional principal component analysis (2D-PCA) features. The face image is first divided into patches, local visual primitives are obtained by K-means clustering, and the patches are classified by these primitives. 2D-PCA then extracts features for each class of face image patches, and high- and low-resolution sample libraries are built. Finally, a new non-negative weight solver computes the weights during reconstruction. Simulation results show that, compared with other neighborhood embedding face super-resolution methods, the proposed algorithm effectively improves weight stability, reduces overfitting, and produces face images with better subjective and objective quality.
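One simple way to enforce non-negative, sum-to-one reconstruction weights is projected gradient descent. This is a hypothetical stand-in for the paper's solver, shown only to make the constraint concrete.

```python
import numpy as np

def nonneg_weights(x, neighbors, iters=500, lr=0.01):
    """Projected-gradient sketch of the non-negative weight solve:
    minimize ||x - w @ neighbors||^2 subject to w >= 0, sum(w) = 1."""
    K = len(neighbors)
    w = np.full(K, 1.0 / K)                        # start at the simplex center
    for _ in range(iters):
        grad = 2 * (w @ neighbors - x) @ neighbors.T
        w = np.maximum(w - lr * grad, 0.0)         # gradient step, clip to >= 0
        w /= w.sum()                               # renormalize onto the simplex
    return w
```

Clipping negative entries is exactly what rules out the negative weights that the unconstrained least-squares solve can produce.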

8.
张长伦  余沾  王恒友  何强 《电子学报》2018,46(10):2400-2409
To address the low reconstruction quality and long reconstruction time of traditional compressed sensing reconstruction algorithms, this paper proposes a fast reconstruction algorithm based on separable dictionary training. A class of images is chosen as the training set and a generalized low-rank matrix factorization model is built for it; the model is solved with the alternating direction method of multipliers (ADMM), yielding a set of separable dictionaries; these dictionaries are then used for image reconstruction, which reduces to simple linear operations and is therefore fast. Experimental results show that, compared with traditional reconstruction algorithms, the proposed algorithm achieves markedly better reconstruction for images of the same class as the training set, still delivers good reconstruction quality for images of other types, and greatly reduces reconstruction time.

9.
Reversible data hiding in encrypted images both keeps the cover content confidential and conveys additional information. This paper proposes a high-capacity reversible data hiding algorithm for encrypted images based on block capacity labels (BCL). The scheme preprocesses the image before encryption: the image is first divided into a reference-pixel region and a predicted-pixel region; the predicted-pixel region is then split into non-overlapping blocks, each block's BCL is determined by the proposed algorithm, and the BCLs are embedded after the image is encrypted, producing the encrypted image. In the secret data embedding stage, secret data are embedded according to the BCLs and the data hiding key. Experiments on the BOWS-2 dataset give an average embedding capacity of 3.8068 bpp; compared with existing methods, the approach achieves higher secret data embedding capacity and allows perfect reconstruction of the original image.

10.
Bayesian network-based local semantic modeling for natural scene classification
This paper proposes a local semantic modeling method based on Bayesian networks. The network structure captures the directional properties of region neighborhoods and the adjacency relations among region semantics. On top of this local semantic model, a semantic description of the scene image is built to perform natural scene classification. The Bayesian network parameters are learned from a set of annotated image samples. For an image to be classified, the model fuses each region's features with information from its adjacent regions to infer the region's semantic probabilities; iterating the network to convergence yields the semantic labels and probabilities of all regions in the image; a global description of the image is then formed on this basis to classify the scene. The method exploits contextual relations among objects within the scene, compensating for the limitations of local semantic modeling based only on low-level features. Experiments on a dataset of six classes of natural scene images show that the proposed local semantic modeling and image description method is effective.

11.
Research on motion estimation and intra prediction algorithms in H.264
李绍滋  苏松志  成运  孙岩  郭锋 《电子学报》2008,36(Z1):175-180
Because some algorithms fix the search direction too early, they are prone to falling into local optima and losing search accuracy; to address this, a new motion estimation algorithm based on search direction prediction is proposed. Experimental results show that, compared with a single search pattern, the algorithm achieves higher search accuracy and speed. Intra prediction, a key factor in intra-frame coding efficiency, has also been widely studied; this paper therefore proposes a fast intra prediction algorithm, introduces a new "template" macroblock comparison idea for macroblock partitioning, and improves on existing sampling and search-window algorithms. Experimental results show that the algorithm increases encoding speed by 80% on average without reducing image quality.

12.
Underwater image processing plays an important role in fields such as submarine terrain scanning, submarine communication cable laying, underwater vehicles, and underwater search and rescue. However, acquiring underwater images is difficult: water selectively absorbs part of the light passing through it, causing color degradation, and floating particles scatter the light, producing blurred details and low contrast. Using image processing technology to restore the real appearance of underwater images therefore has high practical value. To solve these problems, this paper combines a color correction method with a deblurring network to improve underwater image quality. First, to address the insufficient number and diversity of underwater image samples, a network combining depth image reconstruction and underwater image generation is proposed to simulate underwater images based on style transfer. Second, for color distortion, a dynamic threshold color correction method is proposed based on global image information combined with the loss law of light propagation in water. Finally, to remove the blur caused by scattering and further improve overall clarity, the color-corrected image is reconstructed by a multi-scale recursive convolutional neural network. Experimental results show that images closer to the underwater style can be obtained with shorter training time. Compared with several recent underwater image processing methods, the proposed method has clear advantages in multiple underwater scenes, restoring color information, removing blur, and enhancing detail.
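A classic baseline for the color-distortion step is gray-world correction, which rescales each channel so the channel means match. The paper's dynamic threshold method is more elaborate; this is only an illustrative stand-in.

```python
import numpy as np

def gray_world(img):
    """Gray-world color correction: scale each channel so all channel
    means agree, compensating wavelength-dependent absorption
    (a simple stand-in for the paper's dynamic threshold method).
    `img` is an HxWx3 float array in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gain = means.mean() / means              # per-channel gain toward gray
    return np.clip(img * gain, 0.0, 1.0)
```

In water the red channel is absorbed fastest, so its gain comes out largest, which is exactly the compensation the gray-world assumption provides.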

13.
易拓源  户盼鹤  刘振 《信号处理》2023,39(2):323-334
Image super-resolution is an important tool in ISAR deception jamming for producing highly realistic false targets against wide-band ISAR when model samples are incomplete. A generative adversarial network (GAN) can super-resolve ISAR images through an end-to-end mapping; however, when the resolution of test inputs differs greatly from that of the training inputs, spurious scattering points appear in the super-resolved image and distort the target. Since cycle-consistent generative adversarial networks (CycleGAN) adapt better to differences in input samples, this paper proposes a super-resolution sample generation method for ISAR deception jamming based on an improved CycleGAN, modifying the network in three respects: the loss function, the optimization procedure, and the discriminator structure. These changes speed up network convergence and generalize better to ISAR images whose input resolution differs substantially. The effectiveness of the method is verified with anechoic chamber measurement data: compared with the GAN method, for test inputs whose resolution differs greatly from the training inputs, the scattering point positions of the generated super-resolution samples match the real data better.

14.
吴秀秀  肖珊  张煜 《电子学报》2015,43(2):383-386
Lung 4D-CT data are important in lung cancer treatment, but their longitudinal (Z-direction) resolution is low, and the interpolation needed to display images at the correct proportions blurs them. This paper proposes a super-resolution reconstruction technique based on Active Demons registration to improve the resolution of lung 4D-CT images. Low-resolution images at the same location in different respiratory phases are treated as different "frames": Active Demons registration first estimates the motion between frames, and a projection onto convex sets (POCS) super-resolution algorithm then reconstructs a high-resolution lung image. Experimental results show that, compared with cubic spline interpolation and back-projection, our method yields clearer lung images with markedly enhanced structure.
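The POCS data-consistency step can be sketched as a projection that forces each simulated low-resolution pixel back into a band around its observation. The box-average observation model below is an assumption; the motion compensation from the registration step is omitted.

```python
import numpy as np

def pocs_projection(hr, lr_obs, scale=2, delta=0.0):
    """One POCS data-consistency projection: adjust the HR estimate so
    each simulated LR pixel (box average of its scale x scale block)
    agrees with the corresponding observation within +/- delta."""
    h, w = lr_obs.shape
    out = hr.astype(float).copy()           # expects hr of shape (h*scale, w*scale)
    sim = out.reshape(h, scale, w, scale).mean(axis=(1, 3))
    err = lr_obs - sim
    # move only as far as needed to land inside the +/- delta band
    corr = np.sign(err) * np.maximum(np.abs(err) - delta, 0.0)
    out += np.repeat(np.repeat(corr, scale, axis=0), scale, axis=1)
    return out
```

In a full POCS reconstruction this projection is applied cyclically across all low-resolution frames (after motion compensation) until the estimate lies in the intersection of all constraint sets.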

15.
The use of time-of-flight (TOF) information during positron emission tomography (PET) reconstruction has been found to improve the image quality. In this work we quantified this improvement using two existing methods: 1) a very simple analytical expression only valid for a central point in a large uniform disk source and 2) efficient analytical approximations for postfiltered maximum likelihood expectation maximization (MLEM) reconstruction with a fixed target resolution, predicting the image quality in a pixel or in a small region of interest based on the Fisher information matrix. Using this latter method the weighting function for filtered backprojection reconstruction of TOF PET data proposed by C. Watson can be derived. The image quality was investigated at different locations in various software phantoms. Simplified as well as realistic phantoms, measured both with TOF PET systems and with a conventional PET system, were simulated. Since the time resolution of the system is not always accurately known, the effect on the image quality of using an inaccurate kernel during reconstruction was also examined with the Fisher information-based method. First, we confirmed with this method that the variance improvement in the center of a large uniform disk source is proportional to the disk diameter and inversely proportional to the time resolution. Next, image quality improvement was observed in all pixels, but in eccentric and high-count regions the contrast-to-noise ratio (CNR) increased less than in central and low- or medium-count regions. Finally, the CNR was seen to decrease when the time resolution was inaccurately modeled (too narrow or too wide) during reconstruction. Although the maximum CNR is not very sensitive to the time resolution error, using an inaccurate TOF kernel tends to introduce artifacts in the reconstructed image.
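The proportionalities confirmed above (variance gain proportional to disk diameter, inversely proportional to time resolution) follow from localizing each event along its line of response to within dx = c·dt/2. A quick back-of-the-envelope calculation with assumed example values (not taken from the abstract):

```python
# Estimated TOF variance-reduction factor for a uniform disk, using the
# commonly quoted approximation gain ~ D / dx, where dx = c * dt / 2 is
# the spatial localization implied by the coincidence timing resolution.
C = 3e8             # speed of light, m/s
diameter = 0.40     # disk diameter, m (assumed example value)
dt = 600e-12        # coincidence timing resolution, s (assumed example value)

dx = C * dt / 2     # localization along the line of response, m
gain = diameter / dx
```

Halving the timing resolution dt halves dx and doubles the gain, which is the inverse proportionality the abstract reports.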

16.
We present a new supervised learning model designed for the automatic segmentation of the left ventricle (LV) of the heart in ultrasound images. We address the following problems inherent to supervised learning models: 1) the need of a large set of training images; 2) robustness to imaging conditions not present in the training data; and 3) complex search process. The innovations of our approach reside in a formulation that decouples the rigid and nonrigid detections, deep learning methods that model the appearance of the LV, and efficient derivative-based search algorithms. The functionality of our approach is evaluated using a data set of diseased cases containing 400 annotated images (from 12 sequences) and another data set of normal cases comprising 80 annotated images (from two sequences), where both sets present long axis views of the LV. Using several error measures to compute the degree of similarity between the manual and automatic segmentations, we show that our method not only has high sensitivity and specificity but also presents variations with respect to a gold standard (computed from the manual annotations of two experts) within interuser variability on a subset of the diseased cases. We also compare the segmentations produced by our approach and by two state-of-the-art LV segmentation models on the data set of normal cases, and the results show that our approach produces segmentations that are comparable to these two approaches using only 20 training images and increasing the training set to 400 images causes our approach to be generally more accurate. Finally, we show that efficient search methods reduce up to tenfold the complexity of the method while still producing competitive segmentations. 
In the future, we plan to include a dynamical model to improve the performance of the algorithm, to use semisupervised learning methods to further reduce the dependence on rich and large training sets, and to design a shape model less dependent on the training set.

17.
In many classification tasks, multiple images forming an image set may be available for an object rather than a single image. For image set classification, the crucial issues are how to represent the image sets simply and efficiently and how to deal with outliers. In this paper, we develop a novel method, called image set-based classification using collaborative exemplar representation, which achieves data compression by finding exemplars with a clear physical meaning and removes the outliers that would significantly degrade classification performance. Specifically, for each gallery set, we explicitly select exemplars that appropriately describe the image set. A probe set is then represented collaboratively over all the gallery sets formed by the exemplars. After resolving the representation coefficients, the distance between the query set and each gallery set can be evaluated for classification. Experimental results show that our method outperforms state-of-the-art methods on three public face datasets, while for object classification our result is very close to the best reported.
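Collaborative representation over all gallery exemplars can be sketched as a ridge regression followed by per-class residual comparison. The exemplar-selection and outlier-removal steps described above are omitted; this is a minimal sketch of the classification rule only.

```python
import numpy as np

def crc_classify(query, gallery, lam=0.01):
    """Collaborative representation sketch: represent the query jointly
    over all gallery exemplars (ridge regression), then assign the class
    whose exemplars give the smallest reconstruction residual.
    `gallery` maps class label -> (n_exemplars, dim) array of exemplars."""
    D = np.vstack(list(gallery.values()))   # all exemplars stacked row-wise
    labels = np.concatenate([[k] * len(v) for k, v in gallery.items()])
    # ridge solve: alpha = (D D^T + lam I)^-1 D q
    A = D @ D.T + lam * np.eye(len(D))
    alpha = np.linalg.solve(A, D @ query)
    residuals = {}
    for k in gallery:
        mask = labels == k
        # reconstruct the query using only this class's coefficients
        residuals[k] = np.linalg.norm(query - alpha[mask] @ D[mask])
    return min(residuals, key=residuals.get)
```

Because all classes compete in one joint solve, coefficients concentrate on the exemplars that genuinely explain the query, and the per-class residual then separates the classes.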

18.
Automatic selection of neighborhood size and intrinsic dimension in image feature extraction
This paper first linearly reconstructs the nearest-neighbor distances used in image feature extraction, then optimizes their distribution over the whole manifold, obtaining an expression for the local low-dimensional features of the image data with the optimal linear reconstruction weights as variables. The minimum stability measure of this function then yields an automatic selection strategy for the neighborhood size and intrinsic dimension used in image feature extraction. Experiments show that the method automatically selects the intrinsic dimension and neighborhood size of image data, is simple to compute, achieves a high matching rate, and has low computational complexity.

19.
李菊  李军 《激光杂志》2020,41(1):96-99
Holographic reconstruction is a core part of holographic imaging technology and holography. This paper proposes a single-hologram reconstruction method based on a U_net network. Holograms are first recorded with an optical holographic imaging system built around a Mach-Zehnder interferometer; the recorded holograms form the training set, which is used to train the U_net network, and the trained network serves as a mathematical model with which holograms can be reconstructed. Simulation and experimental results show that the U_net-based single-hologram reconstruction method computes faster, requires less measurement data, can reconstruct an image from a single hologram, and produces reconstructions free of interference from the DC term and the twin image.
