Related articles
 20 related articles retrieved (search time: 31 ms)
1.
安平  陈欣  陈亦雷  黄新彭  杨超 《信号处理》2022,38(9):1818-1830
Light field (LF) imaging captures both the spatial and angular information of light rays in a scene and has a wide range of applications. However, its resolution is limited by the imaging hardware and by the trade-off between spatial and angular resolution, and low spatial resolution severely degrades LF image quality and restricts its applications. This paper exploits light field properties to enhance image detail and proposes an end-to-end light field super-resolution method based on the fusion of viewpoint image (VI) and epipolar plane image (EPI) features, which super-resolves all viewpoint images simultaneously. The low-resolution LF images are stacked along the horizontal and vertical EPI directions; since these 3D viewpoint-image stacks inherently contain EPI information, a dual-branch 3D convolutional network with progressively decreasing channels processes the 4D light field input. Features of the viewpoint images and the EPIs are thus extracted and fused jointly, fully exploiting the texture information and geometric consistency of the light field. Experiments on both real and synthetic light field datasets show that, compared with mainstream methods, the proposed approach achieves better objective metrics, preserves geometric consistency better in terms of subjective quality, and does so with fewer model parameters and faster execution.
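
As a rough illustration of the dual-branch idea described above (not the authors' published code), the sketch below runs two independent 3D convolutional branches over horizontally and vertically stacked view arrays and fuses their features; all layer widths and the fusion step are placeholder assumptions.

```python
# Hypothetical sketch of a dual-branch 3D convolutional feature extractor for
# light-field super-resolution; layer widths and the fusion layer are
# assumptions, not the architecture published in the paper.
import torch
import torch.nn as nn

class DualBranchLFNet(nn.Module):
    def __init__(self, feats=32):
        super().__init__()
        # Each branch consumes a 3D stack of views: (B, 1, views, H, W).
        self.branch_h = nn.Sequential(
            nn.Conv3d(1, feats, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feats, feats, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.branch_v = nn.Sequential(
            nn.Conv3d(1, feats, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feats, feats, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # Fuse the two branches and map back to one residual image per view.
        self.fuse = nn.Conv3d(2 * feats, 1, kernel_size=3, padding=1)

    def forward(self, stack_h, stack_v):
        f = torch.cat([self.branch_h(stack_h), self.branch_v(stack_v)], dim=1)
        return self.fuse(f)

# Toy usage: a 5-view stack of 32x32 low-resolution views per direction.
if __name__ == "__main__":
    net = DualBranchLFNet()
    h = torch.randn(1, 1, 5, 32, 32)
    v = torch.randn(1, 1, 5, 32, 32)
    print(net(h, v).shape)  # torch.Size([1, 1, 5, 32, 32])
```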

2.
An image super-resolution algorithm based on regularized sparse representation   (Cited by: 8; self-citations: 8; other citations: 0)
朱波  李华  高伟  宋宗玺 《光电子.激光》2013,(10):2024-2030
To recover a high-resolution (HR) image from a single low-resolution (LR) image, a super-resolution (SR) restoration algorithm based on regularized sparse representation and machine learning is proposed. A convex variational SR model based on sparse representation is constructed, and two sparse regularization constraints are introduced to improve the restoration quality: a graph-Laplacian constraint, which classifies well and finds the training samples structurally closest to each input LR patch, and a constraint on redundant training samples that keeps image edges sharp. Each input LR patch is coded with the regularized sparse representation, the corresponding HR patch is obtained from the learned dictionaries, and the full HR image is assembled from these patches. Experimental results show that the peak signal-to-noise ratio (PSNR) of the restored HR images is up to about 2 dB higher than that of bicubic interpolation, with visually clear results and sharp edges.
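
A minimal numpy sketch of the patch-wise sparse-coding step that this family of methods relies on; the dictionaries and regularization weight below are toy values, and the paper's graph-Laplacian term is omitted.

```python
# Minimal ISTA sparse-coding sketch for one image patch; the dictionaries and
# lambda are toy values, and the paper's extra graph-Laplacian regularizer is
# not included.
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D_lr = rng.standard_normal((64, 256))       # low-res dictionary (8x8 patches)
D_lr /= np.linalg.norm(D_lr, axis=0)        # unit-norm atoms
D_hr = rng.standard_normal((256, 256))      # coupled high-res dictionary (16x16)
y = rng.standard_normal(64)                 # vectorized LR patch
alpha = ista(D_lr, y)                       # sparse code from the LR dictionary
hr_patch = D_hr @ alpha                     # reuse the code with the HR dictionary
print(hr_patch.shape, np.count_nonzero(alpha))
```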

3.
孙超  吕俊伟  刘峰  周仁来 《激光与红外》2017,47(12):1559-1564
To address the low spatial resolution and limited imaging quality of infrared images, an infrared image super-resolution method based on transfer learning is proposed. The method builds on a convolutional-neural-network super-resolution model for natural images and improves it as follows: the network is made deeper for more thorough training, several small convolution kernels are cascaded so that more image information can be exploited, and the network is trained to predict the residual (difference) image, which shortens training time and speeds up convergence. Using transfer learning, the network trained on natural-image super-resolution is then retrained on a small set of high-quality infrared images, so that its weights, after fine-tuning, transfer to infrared image super-resolution. Experimental results show that the CNN-based super-resolution model transfers effectively to infrared images, and the improved network achieves better super-resolution performance on both natural and infrared images, verifying the effectiveness and superiority of the proposed method.

4.
贾宇  温习  王晨晟 《激光与红外》2020,50(10):1283-1288
Single-frame infrared image super-resolution reconstruction, a key technique for raising the resolution of infrared imagery, has been studied extensively in recent years. To improve the resolving power of infrared images, a single-image resolution-enhancement method based on a residual-dense generative adversarial network is proposed. Its novelty over previous GAN-based resolution-enhancement methods lies in two aspects. First, the network architecture is improved: a residual dense network is designed as the generator, making full use of the effective features of the low-resolution image, and a contiguous memory mechanism is introduced into the generator to exploit the dense residual blocks. Second, the Wasserstein-GAN objective is adopted as the loss function and the discriminator is modified accordingly, which stabilizes training. Extensive experiments on a high-resolution infrared image dataset show that the method outperforms state-of-the-art approaches in both objective and subjective evaluation.
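
For reference, a minimal sketch of a Wasserstein-GAN critic/generator training step of the kind the abstract refers to, with weight clipping as in the original WGAN; both networks here are small stand-ins, not the paper's residual-dense architecture.

```python
# Toy WGAN training step: the critic maximizes D(real) - D(fake) and the
# generator maximizes D(fake). Weight clipping enforces the Lipschitz
# constraint as in the original WGAN; both networks are placeholder stand-ins.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.LeakyReLU(0.2),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
generator = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))

opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

real_hr = torch.randn(4, 1, 32, 32)          # stand-in HR batch
lr_input = torch.randn(4, 1, 32, 32)         # stand-in LR batch (pre-upsampled)

# Critic update: Wasserstein loss with weight clipping.
fake_hr = generator(lr_input).detach()
loss_c = critic(fake_hr).mean() - critic(real_hr).mean()
opt_c.zero_grad(); loss_c.backward(); opt_c.step()
for p in critic.parameters():
    p.data.clamp_(-0.01, 0.01)

# Generator update: push critic scores of generated images up.
loss_g = -critic(generator(lr_input)).mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_c), float(loss_g))
```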

5.
崔立尉  高宏伟 《光电子.激光》2023,34(10):1097-1104
Image super-resolution is widely used in fields such as medicine and security. To address the inability of traditional super-resolution reconstruction (SR) methods to recover edge features, a reconstruction scheme combining prior information with a densely connected network model is proposed. It exploits different combinations of residual features that take the input statistics into account and introduces a multi-attention module; by cooperating with the backbone network, performance is improved without adding extra modules. The proposed model outperforms existing state-of-the-art (SOTA) models with complex structures. To avoid drastic drift of the input identity features, a network module with an attention mechanism guided by prior information is proposed to identify the true low-resolution (LR) counterparts; this model is advantageous in handling motion noise and similar degradations. Experiments show that the proposed approach outperforms other mainstream methods in both evaluation metrics and subjective visual analysis.

6.
Face image super-resolution reconstruction based on Log-WT   (Cited by: 1; self-citations: 0; other citations: 1)
Most existing learning-based face super-resolution algorithms are highly sensitive to illumination changes, and to shadows in particular. To address this weakness, an illumination-invariant image representation, the logarithm-wavelet transform (Log-WT), is proposed, and a new face super-resolution algorithm is built on it. The method first uses the Log-WT to extract illumination-independent intrinsic features of the low-resolution image, then models the relationship between high- and low-resolution images with manifold learning and adds face-specific prior constraints, thereby achieving super-resolution reconstruction and image enhancement simultaneously. Simulation results show that the algorithm effectively overcomes the sensitivity of traditional methods to illumination: it raises image resolution while suppressing illumination effects and is particularly effective at removing shadows. Applying the method to face recognition noticeably improves the recognition rate.
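
A minimal sketch of the log-then-wavelet feature extraction implied by the name Log-WT, using a Haar wavelet as an assumption; the paper's exact wavelet and normalization are not specified here.

```python
# Illumination-insensitive feature sketch: take the logarithm of the image
# (which turns multiplicative illumination into an additive offset) and then a
# 2D wavelet transform. Haar is an assumed choice, not necessarily the paper's.
import numpy as np
import pywt

def log_wt_features(image, wavelet="haar"):
    log_img = np.log1p(image.astype(np.float64))        # compress illumination
    cA, (cH, cV, cD) = pywt.dwt2(log_img, wavelet)      # one-level 2D DWT
    # Keep only detail subbands: the approximation band carries most of the
    # (additive) illumination offset, while the details carry structure.
    return np.stack([cH, cV, cD], axis=0)

if __name__ == "__main__":
    face = np.random.rand(32, 32)                        # stand-in LR face patch
    feats = log_wt_features(face)
    print(feats.shape)                                   # (3, 16, 16)
```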

7.
Image super-resolution reconstruction (SRR) can raise spatial resolution beyond what the existing imaging system provides, and projection onto convex sets (POCS) is one of the mainstream SRR approaches. Two improvements to the POCS algorithm are proposed: (1) a steering-kernel-regression interpolation of the input is used as the initial estimate of the POCS reconstruction, improving the quality of the starting image; and (2) the point spread function (PSF) used in the POCS projections is changed from a Gaussian kernel to a steering kernel, which reduces edge ringing in the reconstructed image. Simulations of the proposed algorithm show that edges in the reconstructed images are clearly improved.
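
A simplified numpy sketch of the POCS-style data-consistency projection underlying this approach; the Gaussian PSF, step sizes and single-frame setting are assumptions (the paper replaces the Gaussian PSF with a steering kernel).

```python
# Simplified single-frame POCS-style data-consistency iteration: blur and
# decimate the current HR estimate, compare with the observed LR image, and
# project the residual back through the PSF. The Gaussian PSF and step sizes
# are assumptions; the paper substitutes a steering kernel for this PSF.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pocs_sr(lr, scale=2, n_iter=30, sigma=1.0, delta=0.01):
    hr = zoom(lr, scale, order=3)                       # initial estimate (bicubic-like)
    for _ in range(n_iter):
        sim_lr = gaussian_filter(hr, sigma)[::scale, ::scale]   # forward model
        residual = lr - sim_lr
        residual[np.abs(residual) <= delta] = 0.0       # inside the convex set: no change
        hr += zoom(residual, scale, order=1) / (scale * scale)  # back-project correction
        hr = np.clip(hr, 0.0, 1.0)                      # amplitude constraint set
    return hr

if __name__ == "__main__":
    lr = np.random.rand(16, 16)
    print(pocs_sr(lr).shape)                            # (32, 32)
```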

8.
This paper describes a simple framework allowing us to leverage state-of-the-art single image super-resolution (SISR) techniques for light fields, while taking specific light field geometrical constraints into account. The idea is to first compute a representation compacting most of the light field energy into as few components as possible. This is achieved by aligning the light field using optical flow and then decomposing the aligned light field using singular value decomposition (SVD). The principal basis captures the information that is coherent across all the views, while the other bases contain the high angular frequencies. Super-resolving this principal basis using an SISR method allows us to super-resolve all the information that is coherent across the entire light field. In this paper, to demonstrate the effectiveness of the approach, we use the very deep super-resolution (VDSR) method, one of the leading SISR algorithms, to restore the principal basis. The information restored in the principal basis is then propagated to restore all the other views using the computed optical flow. This framework allows the proposed light field super-resolution method to inherit the benefits of the SISR method used. Experimental results show that the proposed method is competitive with, and most of the time superior to, recent light field super-resolution methods in terms of both PSNR and SSIM quality metrics, with a lower complexity. Moreover, the subjective results demonstrate that our method restores sharper light fields, which makes it possible to generate refocused images of higher quality.
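
A rough numpy illustration of the decomposition step described above: aligned views are flattened into a matrix, decomposed by SVD, and only the principal component is upscaled with a stand-in single-image upscaler (plain bicubic zoom here rather than VDSR).

```python
# Sketch of the SVD-based light-field compaction: each aligned view becomes a
# column of a matrix, the principal rank-1 component is super-resolved with a
# stand-in upscaler (bicubic zoom instead of VDSR), and the result is recombined.
import numpy as np
from scipy.ndimage import zoom

def super_resolve_principal_basis(views, scale=2):
    n, h, w = views.shape
    M = views.reshape(n, h * w).T                   # (pixels, views)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    principal = (U[:, 0] * s[0]).reshape(h, w)      # basis image shared by all views
    principal_hr = zoom(principal, scale, order=3)  # stand-in for an SISR network
    # Re-expand: each HR view = upscaled principal basis weighted by its
    # coefficient, plus the remaining (not super-resolved) components upscaled.
    residual = (U[:, 1:] * s[1:]) @ Vt[1:, :]
    out = []
    for k in range(n):
        res_hr = zoom(residual[:, k].reshape(h, w), scale, order=1)
        out.append(principal_hr * Vt[0, k] + res_hr)
    return np.stack(out)

if __name__ == "__main__":
    lf = np.random.rand(9, 16, 16)                  # 3x3 aligned views (toy data)
    print(super_resolve_principal_basis(lf).shape)  # (9, 32, 32)
```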

9.
李方彪  何昕  魏仲慧  何家维  何丁龙 《红外与激光工程》2018,47(2):203003-0203003(8)
Generative adversarial networks have shown great potential for constrained image generation, which makes them well suited to image super-resolution reconstruction. However, super-resolved images reconstructed with GANs tend to be over-smoothed and to lack high-frequency detail. Because single-frame super-resolution cannot exploit the spatio-temporal correlation between frames of an image sequence, a multi-frame infrared image super-resolution method based on generative adversarial networks (M-GANs) is proposed. First, motion compensation is applied to the low-resolution image sequence; next, weight-representation convolutional layers perform a weight-transformation computation on the motion-compensated frames; finally, the result is fed into the adversarial reconstruction network, which outputs the reconstructed high-resolution image. Experimental results show that the method outperforms representative super-resolution methods in both subjective and objective evaluation.

10.
In recent years, hyperspectral image super-resolution has attracted the attention of many researchers and has become a hot topic in computer vision. However, it is difficult to obtain high-resolution images due to limitations of imaging hardware, and many existing hyperspectral image super-resolution methods do not achieve good results. In this paper, we propose a hyperspectral image super-resolution method that combines a deep residual convolutional neural network (DRCNN) with spectral unmixing. Firstly, the spatial resolution of the image is enhanced by learning prior knowledge from natural images: the DRCNN reconstructs high-spatial-resolution hyperspectral images by concatenating multiple residual blocks, each containing two convolutional layers. Secondly, the spectral features of the low-resolution and high-resolution hyperspectral images are linked by spectral unmixing, which yields the endmember matrix and the abundance matrix; the final reconstruction is obtained by multiplying the endmember matrix by the abundance matrix. In addition, to improve the visual quality of the reconstructed image, total variation regularization is imposed on the abundance matrix to strengthen the relationship between neighboring pixels. Experimental results on remote sensing data with ground truth show that the proposed method performs well and preserves both spatial and spectral information without requiring auxiliary images.
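
A toy sketch of the unmixing-based reconstruction step (endmember matrix times abundance matrix), using non-negative matrix factorization as a stand-in unmixing routine; the paper's DRCNN stage and total-variation constraint are omitted.

```python
# Toy linear-mixing reconstruction: unmix a hyperspectral cube into an
# endmember matrix E and an abundance matrix A with NMF (a stand-in for the
# paper's unmixing step), then rebuild the cube as E @ A.
import numpy as np
from sklearn.decomposition import NMF

bands, h, w, n_endmembers = 30, 16, 16, 4
cube = np.random.rand(bands, h, w)              # stand-in hyperspectral image
X = cube.reshape(bands, h * w)                  # pixels as columns

nmf = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500)
E = nmf.fit_transform(X)                        # endmember signatures (bands x p)
A = nmf.components_                             # abundances (p x pixels)

recon = (E @ A).reshape(bands, h, w)            # reconstructed cube
rmse = np.sqrt(np.mean((recon - cube) ** 2))
print(E.shape, A.shape, round(float(rmse), 4))
```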

11.
Super-resolution reconstruction, which restores one or more low-resolution images to improve spatial resolution, has important scientific significance and application value in image processing. Building on the SCSR algorithm and the VDSR network, and to further improve reconstruction quality, an image super-resolution reconstruction algorithm combining a multi-residual network with multi-feature SCSR (MRMFSCSR) is proposed. Firstly, at the sparse-reconstruction stage, image blocks are treated according to their characteristics: contour features of non-flat blocks are extracted with the NSCT transform and texture features of flat blocks with the Gabor transform, and reconstructed high-resolution (HR) images are then obtained from sparse models. Secondly, the VDSR network is improved and a feature-fusion idea is introduced to design the multi-residual network structure (MR); the HR image from the sparse-reconstruction stage is fed into the MR structure to refine the high-frequency residual details. Finally, a higher-quality super-resolution image is obtained than with either the SCSR or the VDSR algorithm.

12.
In recent years, the light field (LF) has attracted wide interest as a new imaging modality. The large data volume of LF images poses a great challenge to LF image coding, and LF images captured by different devices show significant differences in the angular domain. In this paper we propose a view-prediction framework that handles LF image coding with various sampling densities. All LF images are represented as view arrays. We first partition the views into a reference view (RV) set and an intermediate view (IV) set. The RVs are rearranged into a pseudo sequence and compressed directly by a video encoder; the other views are then predicted from the RVs. To exploit the four-dimensional signal structure, we propose the linear approximation prior (LAP) to reveal the correlation among LF views and efficiently remove LF data redundancy. Based on the LAP, a distortion-minimization interpolation (DMI) method predicts the IVs. To robustly handle LF images with different sampling densities, we propose an iteratively updating depth-image-based rendering (IU-DIBR) method to extend the DMI: auxiliary views are generated to cover the target region, and the DMI then computes reconstruction coefficients for the IVs. Different view-partition patterns are also explored. Extensive experiments on different types of LF images validate the efficiency of the proposed method.

13.
A super-resolution reconstruction algorithm based on classified sparse representation of image patches   (Cited by: 6; self-citations: 0; other citations: 6)
练秋生  张伟 《电子学报》2012,40(5):920-925
Existing super-resolution reconstruction algorithms based on sparse representation of image patches represent all patches with a single dictionary and therefore cannot reflect the differences between patch types. To address this shortcoming, a method based on classified sparse representation of image patches is proposed. Local image features are first used to classify patches into smooth, edge, and irregular-structure types, with edge patches further subdivided by direction. Sparse representation is then used to train separate coupled low- and high-resolution dictionaries for the edge and irregular-structure classes. At reconstruction time, smooth patches are handled with simple bicubic interpolation, while edge and irregular-structure patches are reconstructed from their corresponding high- and low-resolution dictionaries via orthogonal matching pursuit. Experimental results show that, compared with single-dictionary sparse representation, the algorithm clearly improves reconstruction quality on image edges and is significantly faster.

14.
谢冰  万淑慧  殷云华 《红外与激光工程》2022,51(3):20210468-1-20210468-10
In vision-based autonomous UAV navigation, accurately recognizing waypoints is key to guiding the aircraft precisely toward them. However, once the UAV reaches waypoint-recognition range, weather conditions and imaging effects such as defocus and diffraction often leave the onboard image sensor with blurred, low-spatial-resolution aerial images, which directly degrades subsequent waypoint recognition. To address this problem, an aerial image super-resolution reconstruction algorithm with improved sparse-representation regularization is proposed. First, within a sparse-representation regularization framework, autoregressive and non-local-similarity constraints are used to build the regularization terms of the objective function. Second, exploiting the fact that local image variance effectively separates edge regions from smooth regions, the regularization parameters are selected adaptively to obtain the objective function of the super-resolution reconstruction model. Finally, the Majorization-Minimization (MM) algorithm is used to solve the resulting convex optimization problem and produce the reconstructed high-resolution image. Experimental results show that, compared with traditional regularized SR reconstruction, the algorithm effectively raises the spatial resolution of aerial images, and the reconstructed images contain richer detail, which aids waypoint recognition.

15.
Video super-resolution aims at restoring the spatial resolution of a reference frame from consecutive low-resolution (LR) input frames. Existing implicit alignment-based video super-resolution methods commonly use convolutional LSTM (ConvLSTM) to handle sequential input frames. However, vanilla ConvLSTM processes input features and hidden states independently and has limited ability to handle inter-frame temporal redundancy in the low-resolution domain. In this paper, we propose a multi-stage spatio-temporal adaptive network (MS-STAN). A spatio-temporal adaptive ConvLSTM (STAC) module is proposed to handle input features in the low-resolution domain: it exploits the correlation between input features and hidden states in the ConvLSTM unit and modulates the hidden states adaptively, conditioned on fused spatio-temporal features. A residual stacked bidirectional (RSB) architecture is further proposed to fully exploit the processing ability of the STAC unit. The proposed STAC and RSB architecture improve the vanilla ConvLSTM's ability to exploit inter-frame correlations, thus improving reconstruction quality. Furthermore, unlike existing methods that aggregate features from the temporal branch only once at a fixed stage of the network, the proposed network is organized in a multi-stage manner, so the temporal correlation in features at different stages can be fully exploited. Experimental results on the Vimeo-90K-T and UMD10 datasets show that the proposed method performs comparably to current video super-resolution methods. The code is available at https://github.com/yhjoker/MS-STAN.

16.
应自炉  商丽娟  徐颖  刘健 《信号处理》2018,34(6):668-679
To mitigate single-frame image resolution degradation while keeping the number of network parameters small, an image super-resolution reconstruction algorithm based on a compact multi-path convolutional neural network is proposed. The multi-path structure makes full use of the low-resolution image information, and a residual-learning strategy learns the residual between the low- and high-resolution images to reconstruct the high-resolution output. When the number of convolution kernels is limited, networks built with ReLU reconstruct poorly, so the max-feature-map activation is introduced instead; it strengthens generalization, makes the network more compact, and lets it retain competitive features for super-resolution reconstruction. Experimental results show that the method reconstructs well, with clearly improved sharpness and edge acuity, and outperforms current mainstream super-resolution methods in both objective evaluation and subjective visual quality, laying a theoretical basis for portable high-performance super-resolution reconstruction.
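
For clarity, a small sketch of the max-feature-map (MFM) activation mentioned above: the feature maps are split into two halves along the channel dimension and an element-wise maximum is taken, halving the channel count. This is a common MFM formulation; the paper's exact variant may differ.

```python
# Max-Feature-Map (MFM) activation: split channels into two halves and take the
# element-wise maximum, so 2C input channels become C output channels. This is
# the common MFM formulation; the paper's exact variant may differ.
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)   # split along the channel dimension
        return torch.maximum(a, b)

if __name__ == "__main__":
    layer = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), MaxFeatureMap())
    y = layer(torch.randn(1, 1, 24, 24))
    print(y.shape)                        # torch.Size([1, 16, 24, 24])
```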

17.
金哲彦  徐之海  冯华君  李奇 《红外与激光工程》2020,49(5):20190463-20190463-9
A dual-resolution camera offers a wide field of view and high resolution at the same time; its fixed coaxial optics avoid the many problems caused by the moving parts of zoom lenses, making it valuable for target tracking in deep-space exploration and for smart terminals such as mobile phones. Existing deep-learning-based dual-resolution zoom algorithms are slow, do not actually add information, adapt poorly to different network structures, and can fabricate restored image details. To address these issues, a solution based on depth information is explored. The feasibility of using image focus sharpness as depth information in the dual-resolution zoom algorithm is demonstrated, the accuracy and effect of focus-based depth estimation are examined, and both the deep-learning approach and the traditional depth-information-based approach are tested. The resulting algorithm does not affect normal imaging speed, reduces memory usage by 35% and algorithmic complexity by 60%, produces reliable super-resolution information, and improves image quality scores by 10% to 50%.

18.
李宁  王军敏  司文杰  耿则勋 《红外与激光工程》2021,50(12):20210233-1-20210233-7
For synthetic aperture radar (SAR) target classification, a multi-view method based on a maximum-entropy criterion is proposed. A classical image-similarity measure is used to build a correlation matrix between SAR images acquired from different views, and the nonlinear correlation information entropy is then computed for different view combinations. The nonlinear correlation information entropy characterizes the statistical relationship among multiple variables, so its value reflects their intrinsic correlation. Following the maximum-entropy principle, the optimal view subset is selected, in which the SAR images have the strongest intrinsic correlation. Classification is then based on joint sparse representation of the views with maximum entropy: the joint model solves several sparse-representation problems simultaneously and improves reconstruction accuracy when the views are correlated. Using the representation coefficients obtained for the different views, the reconstruction error of the selected views is computed per class, and the final decision follows the minimum-error rule. The method effectively analyzes the correlation among multi-view SAR image samples and exploits it through joint sparse representation, thereby improving classification accuracy. Experiments on the MSTAR dataset, with comparisons against several other methods under a variety of test conditions, demonstrate the effectiveness of the maximum-entropy view selection and the superior SAR target classification performance of the proposed method.
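
As an illustration, here is one common formulation of nonlinear correlation information entropy based on the eigenvalues of the correlation matrix; the paper may use a different similarity measure and normalization, and the pairwise similarity used below (absolute Pearson correlation of vectorized images) is only a stand-in.

```python
# One common formulation of nonlinear correlation information entropy (NCIE):
# eigen-decompose the correlation matrix R of N variables, normalize the
# eigenvalues by N, and compute 1 minus their entropy with base-N logarithms.
# The similarity measure used to build R here is only a stand-in for the
# paper's image-similarity metric.
import numpy as np

def ncie(R):
    n = R.shape[0]
    lam = np.linalg.eigvalsh(R)                 # eigenvalues, which sum to n
    p = np.clip(lam / n, 1e-12, None)
    return 1.0 + np.sum(p * np.log(p)) / np.log(n)   # 1 for fully correlated, 0 for uncorrelated

def view_correlation_matrix(views):
    flat = views.reshape(views.shape[0], -1)
    return np.abs(np.corrcoef(flat))            # stand-in pairwise similarity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((16, 16))
    # Three correlated "views" plus one unrelated view.
    views = np.stack([base + 0.05 * rng.random((16, 16)) for _ in range(3)]
                     + [rng.random((16, 16))])
    R = view_correlation_matrix(views)
    print(round(ncie(R), 3), round(ncie(R[:3, :3]), 3))  # correlated subset scores higher
```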

19.
Multiset canonical correlation analysis (MCCA) is a powerful technique for multi-view joint dimensionality reduction that maximizes the linear correlations among the projections. However, most existing MCCA-related methods fail to discover the intrinsic discriminating structure among data spaces and the correspondence between multiple views. To address these problems, we incorporate the collaborative representation structure of the data points in each view and construct a view-consistent collaborative multiset correlation projection (C2MCP) framework, in which the structures among different views are guaranteed to be consistent and preserved in the low-dimensional subspaces. Furthermore, by taking within-class and between-class collaborative reconstruction into account to improve discriminative power in the supervised setting, we propose a novel algorithm, view-consistent collaborative discriminative multiset correlation projection (C2DMCP), which explicitly considers both between-set cumulative correlations and the discriminative structure of multi-view data. The feasibility and effectiveness of the proposed method have been verified on three benchmark databases, i.e., ETH-80, AR, and Extended Yale B, with promising results.

20.
Thanks to their strong ability to generate high-quality images, generative adversarial networks have attracted wide attention in computer vision research such as image fusion and image super-resolution. Existing GAN-based remote sensing image fusion methods only use the network to learn a mapping between images and do not exploit domain knowledge specific to pan-sharpening. This paper proposes an optimized GAN-based remote sensing image fusion method that incorporates the spatial structure information of the panchromatic image. Gradient operators extract the spatial structure of the panchromatic image, the extracted features are fed into both the discriminator and a generator with a multi-stream fusion architecture, and corresponding optimization objectives and fusion rules are designed to improve the quality of the fused image. Experiments on images acquired by the WorldView-3 satellite show that the proposed method produces high-quality fused images and outperforms most state-of-the-art remote sensing image fusion methods in both subjective visual quality and objective evaluation metrics.
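
A small sketch of the gradient-based structure extraction step described above, using Sobel operators via scipy; the paper does not specify which gradient operator it uses, so this choice is an assumption.

```python
# Extract a spatial-structure map from a panchromatic image with Sobel
# gradients; the normalized gradient magnitude can then be fed as an extra
# input channel to the generator and discriminator. Sobel is an assumed choice.
import numpy as np
from scipy.ndimage import sobel

def pan_gradient_map(pan):
    gx = sobel(pan, axis=1)                 # horizontal gradient
    gy = sobel(pan, axis=0)                 # vertical gradient
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)         # normalized structure map

if __name__ == "__main__":
    pan = np.random.rand(64, 64)            # stand-in panchromatic band
    grad = pan_gradient_map(pan)
    print(grad.shape, float(grad.max()))    # (64, 64) 1.0
```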

