Similar Documents
20 similar documents found.
1.
Infrared and visible image fusion aims to synthesize a single fused image containing salient targets and abundant texture details even under extreme illumination conditions. However, existing image fusion algorithms fail to take the illumination factor into account in the modeling process. In this paper, we propose an illumination-aware progressive image fusion network, termed PIAFusion, which adaptively maintains the intensity distribution of salient targets and preserves texture information in the background. Specifically, we design an illumination-aware sub-network to estimate the illumination distribution and calculate the illumination probability. Moreover, we utilize the illumination probability to construct an illumination-aware loss to guide the training of the fusion network. The cross-modality differential aware fusion module and halfway fusion strategy completely integrate common and complementary information under the constraint of the illumination-aware loss. In addition, a new benchmark dataset for infrared and visible image fusion, i.e., Multi-Spectral Road Scenarios (available at https://github.com/Linfeng-Tang/MSRS), is released to support network training and comprehensive evaluation. Extensive experiments demonstrate the superiority of our method over state-of-the-art alternatives in terms of target maintenance and texture preservation. In particular, our progressive fusion framework can integrate meaningful information from source images round the clock according to illumination conditions. Furthermore, the application to semantic segmentation demonstrates the potential of PIAFusion for high-level vision tasks. Our code will be available at https://github.com/Linfeng-Tang/PIAFusion.
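As a rough illustration of the idea behind an illumination-aware loss (not the authors' implementation; the function name and the use of a single scalar day-probability are assumptions), the illumination probability can weight how closely the fused image should follow each source:

```python
# Hypothetical sketch: p_day is the probability (from an illumination
# sub-network) that the scene is well-lit. In daylight the fused image is
# pulled toward the visible image; at night, toward the infrared image.
def illumination_aware_loss(fused, visible, infrared, p_day):
    """All image arguments are flat lists of pixel intensities in [0, 1]."""
    assert len(fused) == len(visible) == len(infrared)
    w_vis, w_ir = p_day, 1.0 - p_day
    loss = 0.0
    for f, v, i in zip(fused, visible, infrared):
        loss += w_vis * abs(f - v) + w_ir * abs(f - i)
    return loss / len(fused)
```

With `p_day = 1.0` the loss reduces to a pure L1 distance to the visible image, and with `p_day = 0.0` to a pure L1 distance to the infrared image, which is the round-the-clock behavior the abstract describes.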

2.
The development of deep learning has accelerated research on image semantic segmentation. At present, most of the most effective semantic segmentation methods are based on fully convolutional neural networks (FCNN). Although existing methods can segment an image as a whole effectively, they cannot clearly identify the edge information of overlapping and occluded objects, nor can they effectively fuse high- and low-level image features. To address these problems, building on an FCNN approach to image semantic segmentation, ...

3.
4.
Objective: The quality of target semantic feature extraction directly affects the accuracy of image semantic segmentation, and traditional single-scale feature extraction yields low segmentation accuracy. We therefore propose a workpiece semantic segmentation method based on multi-scale feature fusion: a convolutional neural network extracts multi-scale local semantic features of the target, and the semantic information at different scales is fused at the pixel level, so that the network fully captures contextual information, obtains better feature representations, and effectively segments workpiece targets. Method: A visual task is defined on a commonly used multi-class workpiece image set. A residual network module produces single-scale semantic feature maps of the target, the proposed multi-scale feature extraction scheme then yields local semantic information at different scales, and the segmentation map is obtained by information fusion. After multiple training iterations, a workpiece segmentation model for the visual task is obtained, and the trained weights and hyperparameters are saved. Result: Qualitative and quantitative comparisons with the traditional single-scale feature extraction method show that the trained segmentation model segments all targets in the test set fairly accurately; relative to single-scale extraction, the mean intersection over union (mIoU) improves by 4.52% on the validation set and by 4.84% on the test set. When a test sample contains few target categories with clear edges, the method produces even more precise segmentation. Conclusion: The proposed multi-scale feature fusion strengthens the network's ability to extract target features, giving the trained segmentation model better test-set performance than traditional single-scale extraction and validating the effectiveness of the method.
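The mIoU metric reported above can be computed per class from predicted and ground-truth label maps; this is a minimal sketch over flat label lists, not the paper's evaluation code:

```python
def miou(pred, target, num_classes):
    """Mean intersection-over-union from two flat lists of class labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:                        # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)
```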

5.
With recent advances in Earth observation techniques, the availability of multi-sensor data acquired over the same geographical area has increased greatly, making it possible to jointly depict the underlying land-cover phenomenon using different sensor data. In this paper, a novel multi-attentive hierarchical fusion net (MAHiDFNet) is proposed to realize feature-level fusion and classification of hyperspectral image (HSI) and Light Detection and Ranging (LiDAR) data. More specifically, a triple-branch HSI-LiDAR Convolutional Neural Network (CNN) backbone is first developed to simultaneously extract the spatial, spectral, and elevation features of land-cover objects. On this basis, a hierarchical fusion strategy is adopted to fuse the oriented feature embeddings. In the shallow feature fusion stage, we propose a novel modality attention (MA) module to generate modality-integrated features. By fully considering the correlation and heterogeneity between different sensor data, feature interaction and integration is realized by the proposed MA module. At the same time, self-attention modules are also adopted to highlight modality-specific features. In the deep feature fusion stage, the obtained modality-specific and modality-integrated features are fused to construct the hierarchical feature fusion framework. Experiments on three real HSI-LiDAR datasets demonstrate the effectiveness of the proposed framework. The code will be made public at https://github.com/SYFYN0317/-MAHiDFNet.

6.
Infrared (IR) image segmentation technology plays a pivotal role in many urgent fields, such as traffic surveillance, nondestructive detection and autonomous driving. In recent years, active contour model (ACM) has been one of the most commonly used tools for image segmentation, but the precision sharply decreases when dealing with IR images with intensity inhomogeneity. To solve this problem, a new ACM based on global and local multi-feature fusion (GLMF) is proposed in this paper. First of all, the multi-feature fitting maps inside and outside the contour are calculated using the strategy of global and local information fusion. Then, a hybrid signed pressure function (SPF) is designed by combining multiple fitting error maps, which are measured by the similarity between the multi-feature fitting map and the original feature map. Next, a level set formulation (LSF) is constructed using the proposed hybrid SPF and the level set function is thus evolved. Finally, the contour of IR foreground object with visual saliency can be extracted using the zero level set of the converged level set function. Both qualitative and quantitative experiments based on IR datasets verify that the presented ACM has remarkable advantages in terms of accuracy and robustness when compared with other typical ACMs. Our codes are available at https://github.com/MinjieWan/Global-and-Local-Multi-Feature-Fusion-Based-Active-Contour-Model-for-Infrared-Image-Segmentation.
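For readers unfamiliar with signed pressure functions, here is the classic single-feature, region-based form (a generic sketch, not the paper's hybrid multi-feature SPF): it is positive where a pixel is closer to the inside mean and negative where it is closer to the outside mean, which is what drives the contour to expand or shrink.

```python
def signed_pressure(intensity, c_in, c_out):
    """Classic region-based SPF, normalized to [-1, 1].
    c_in / c_out are the mean intensities inside and outside the contour."""
    mid = (c_in + c_out) / 2.0
    raw = [i - mid for i in intensity]
    scale = max(abs(r) for r in raw) or 1.0   # avoid division by zero
    return [r / scale for r in raw]
```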

7.
Coronary computed tomography angiography (CTA) image segmentation plays an important role in many practical applications, such as assisting physicians in assessing vessel blockage and diagnosing vascular disease. To address the heavy noise in CTA images and the coarse segmentation results of classic deep learning algorithms such as FCN, U-Net and V-Net, a global-feature and multi-level feature aggregation network is proposed. This new network consists of a global feature module, a feature fusion and V-shaped refinement multi-level feature aggregation module, and a deep...

8.
Objective: Automatic analysis of retinal vessel health is an important reference for rapid, non-invasive diagnosis of diabetes, cardio-cerebrovascular disease, and various ophthalmic diseases. The complex vascular network and uneven background brightness of retinal images make accurate automatic extraction of vessel regions difficult. This work achieves high-precision retinal vessel segmentation with a U-net deep neural network with a symmetric fully convolutional structure. Method: Combining the hierarchical symmetric structure of U-net with the dense connectivity of Dense-net, an improved deep network model for precise retinal vessel extraction is proposed. Whitening preprocessing first weakens the uneven brightness of the original color fundus image and enhances the contrast of vessel regions; random rotation and Gamma transforms then augment the data; finally, each image is randomly split into smaller patches to reduce the model's parameter count and ease training. Result: Evaluated with multiple metrics, the trained model reaches a sensitivity of 0.740 9, specificity of 0.992 9, accuracy of 0.970 7, and AUC (area under the curve) of 0.917 1 on the DRIVE dataset. A comprehensive comparison with current mainstream methods shows that the algorithm performs well on all metrics. Conclusion: For the difficult problem of high-precision automatic vessel extraction from retinal images, an improved symmetric fully convolutional network with dense connections is proposed. The results show that the model achieves good performance on retinal vessel segmentation and has good research and application value.
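The sensitivity, specificity, and accuracy figures quoted above follow directly from pixel-level confusion counts (vessel pixels as positives); a minimal sketch:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # fraction of vessel pixels found
    specificity = tn / (tn + fp)            # fraction of background kept clean
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```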

9.
Objective: Semantic segmentation is a challenging computer vision task whose core is assigning a semantic class label to each pixel of an image. A lack of rich multi-scale information and sufficient spatial information, however, severely degrades segmentation results. To further improve segmentation, starting from the extraction of rich multi-scale information and adequate spatial information, this paper proposes an encoder-decoder semantic segmentation model. Method: ResNet-101 serves as the backbone to extract feature maps. A multi-scale information fusion module appended at the end of the backbone extracts highly discriminative, multi-scale-rich feature maps in the deep layers, while a spatial information capture module introduced in the shallow layers extracts rich spatial information. The spatially rich feature maps from the capture module and the discriminative, multi-scale-rich feature maps from the fusion module are merged into a new information-rich feature set, refined by multi-kernel convolution blocks, and finally upsampled with data-dependent upsampling (DUpsampling) to produce the segmentation result. Result: Extensive experiments on two public datasets (Cityscapes and PASCAL VOC 2012) validate each designed module and the whole model. Compared with ten recent methods: on Cityscapes, the mean intersection over union (mIoU) improves by 0.52%, 3.72%, and 4.42% over RefineNet, DeepLabv2-CRF, and LRR (Laplacian reconstruction and refinement), respectively; on PASCAL VOC 2012, mIoU improves by 6.23%, 7.43%, and 8.33% over Piecewise, DPN (deep parsing network), and GCRF (Gaussian conditional random field network), respectively. Conclusion: The proposed model extracts richer multi-scale and spatial information, yielding more accurate segmentation, and can be applied to medical image analysis, autonomous driving, UAVs, and other fields.

10.
To address the difficulty traditional deep neural networks have in extracting targeted, effective features when segmenting vessel-wall images, a vessel-wall segmentation method combining dense connectivity with an adaptive weighted loss is proposed. First, a densely connected segmentation network learns more boundary and contour representations to promote feature reuse and fusion. Then, an improved adaptive weighted loss and a boundary-compactness loss constrain training: the adaptive weighted loss automatically adjusts the loss proportions produced by different regions to guide the network toward the optimum, while the boundary-compactness constraint makes full use of boundary information to raise segmentation accuracy. Finally, validation experiments on the MERGE vessel-wall dataset of 2,544 MRI slices show that the improved method effectively extracts feature information from vessel-wall images, reaching Dice scores of 93.65% for the lumen and 95.81% for the outer-wall contour; ablation experiments further confirm the effectiveness of each proposed module and of the network, enabling higher-precision segmentation.
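One plausible reading of "automatically adjusting the loss proportions of different regions" (an assumption on my part, not the paper's exact formulation) is to re-weight each region's loss in proportion to how badly it currently segments, so harder regions dominate training:

```python
def adaptive_weighted_loss(region_losses):
    """Hypothetical adaptive weighting: regions with larger current losses
    receive proportionally larger weights (weights sum to 1)."""
    total = sum(region_losses)
    if total == 0:
        return 0.0
    weights = [l / total for l in region_losses]
    return sum(w * l for w, l in zip(weights, region_losses))
```

Note the quadratic effect: a region with triple the loss contributes nine times as much to the weighted total as an equally sized easy region.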

11.
In the image fusion field, the design of deep learning-based fusion methods is far from routine. It is invariably fusion-task specific and requires careful consideration. The most difficult part of the design is choosing an appropriate strategy to generate the fused image for the specific task at hand. Thus, devising a learnable fusion strategy is a very challenging problem in the image fusion community. To address this problem, a novel end-to-end fusion network architecture (RFN-Nest) is developed for infrared and visible image fusion. We propose a residual fusion network (RFN), based on a residual architecture, to replace the traditional fusion approach. A novel detail-preserving loss function and a feature-enhancing loss function are proposed to train the RFN. Fusion model learning is accomplished by a novel two-stage training strategy. In the first stage, we train an auto-encoder based on an innovative nest connection (Nest) concept. Next, the RFN is trained using the proposed loss functions. Experimental results on public-domain datasets show that, compared with existing methods, our end-to-end fusion network delivers better performance than the state of the art in both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion-rfn-nest.

12.
Retinal vessel images carry highly complex feature information, and existing algorithms suffer from weak microvessel segmentation and mis-segmentation of pathological information. To address this, a vessel segmentation model fusing DenseNet and U-Net is proposed. First, contrast-limited histogram equalization and filtering enhance the vessels; next, locally adaptive Gamma correction raises image brightness and reduces artifact interference; then, multi-scale morphological filtering locally enhances microvessel features; finally, a U-shaped densely connected module performs the segmentation. On the DRIVE dataset, the algorithm reaches an average accuracy, sensitivity, and specificity of 96.74%, 81.50%, and 98.20%, respectively.
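The Gamma step above rests on the standard pointwise gamma transform; here is a minimal global version (the locally adaptive variant is the paper's contribution and is not reproduced here):

```python
def gamma_correct(pixels, gamma):
    """Pointwise gamma transform on intensities in [0, 1]:
    gamma < 1 brightens dark regions, gamma > 1 darkens them."""
    return [p ** gamma for p in pixels]
```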

13.
Deep convolutional neural networks are widely used in medical image segmentation. Current network improvements generally introduce multi-scale fusion structures, which increase model complexity and, while raising accuracy, lower training efficiency. To address this, a new WU-Net method for lung nodule image segmentation is proposed. It modifies U-Net by introducing an improved residual connection module into the original downsampling encoder path, while a newly proposed dep module improves the information path for feature extraction and fusion. WU-Net and other models were trained and validated on the LUNA16 dataset. In nodule-scale experiments, the Dice coefficient and intersection-over-union reach 96.72% and 91.78%; after introducing 10% negative samples, the F1 score reaches 92.41%, 1.23% higher than UNet3+. In lung-parenchyma-scale experiments, Dice and IoU reach 83.33% and 66.79%, improvements of 1.35% and 2.53% over RU-Net. WU-Net also segments fastest among the compared models, 39.6% faster than U-Net. The results show that WU-Net improves lung nodule segmentation while accelerating model training.
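For reference, the Dice coefficient reported above is, for binary masks (a generic definition, not the paper's code):

```python
def dice(pred, target):
    """Dice coefficient between two binary masks given as 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 2 * inter / (sum(pred) + sum(target))
```

Dice and IoU are monotonically related (Dice = 2*IoU / (1 + IoU)), which is why papers often report both.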

14.
To address the presence of multi-scale targets in scene-image semantic segmentation and the feature extractor's lack of global contextual information, a dual-path segmentation algorithm is designed that embeds an improved self-attention mechanism and adaptively fuses multi-scale features. In the spatial path, a two-branch simple downsampling module performs 4x downsampling to extract high-resolution edge details, making the network's segmentation of target boundaries more precise. In the semantic path, an embedded context-capture module and an adaptive feature fusion module provide the decoding stage with rich multi-scale, high-level semantic context, and a class-balancing strategy further improves segmentation. Experiments show that the model reaches mIoU (mean intersection over union) scores of 59.4% on CamVid and 60.1% on Aeroscapes, demonstrating good segmentation performance.

15.
To address the difficulty traditional deep network models have in precisely extracting building edge-contour features and in adaptively handling buildings of different sizes, a multiscale-feature fusion deep neural network with dilated convolution (MDNNet) is proposed for automatic building segmentation in remote sensing images. First, dilated convolution is introduced into ResNet101 to enlarge the receptive field while retaining more feature-map resolution; second, a multi-scale feature fusion module obtains building features at multiple scales and fuses them; finally, a feature decoding module restores the feature maps to the original input size, achieving precise building segmentation. Experiments on the WHU remote sensing dataset show that the proposed model effectively overcomes interference from roads, trees, and shadows, preserves building boundary details in the segmentation results, and improves accuracy, with a pixel accuracy (PA) of 0.864, mean intersection-over-union (mIoU) of 0.815, and recall of 0.862.
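Dilated convolution, the mechanism MDNNet uses to enlarge the receptive field without reducing resolution or adding parameters, is easiest to see in one dimension (an illustrative sketch, unrelated to the paper's code):

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1-D convolution with dilated kernel taps: taps are spaced
    `dilation` samples apart, so a k-tap kernel spans (k-1)*dilation + 1
    input samples."""
    span = (len(kernel) - 1) * dilation
    return [sum(k * signal[i + j * dilation] for j, k in enumerate(kernel))
            for i in range(len(signal) - span)]
```

With `dilation=1` this is ordinary convolution; with `dilation=2` the same two-tap kernel already sees a window of three samples.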

16.
Retinal vessel segmentation is a common medical image segmentation task whose targets are small and numerous. Previous networks extract thick vessels well but easily miss thin vessels, and the extraction of these thin vessels affects network performance and even diagnostic results. To extract more, and more continuous, thin vessels while still extracting thick vessels accurately, this paper uses a symmetric encoder-decoder network as the base network and a new convolution module, DR-Conv, intended to prevent overfitting while improving the network's learning capacity. To counter the information loss caused by max-pooling layers, the wavelet transform is used to decompose the image and the inverse wavelet transform to restore it, and a hybrid loss function combines the properties of different losses to compensate for the limited optimization ability of any single loss. The network was evaluated on three public retinal vessel datasets and compared with recent methods; the experimental results show that it achieves superior performance.
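The wavelet-based replacement for pooling can be illustrated with a one-level 1-D Haar transform: unlike max pooling, it is exactly invertible, so no information is lost during downsampling (a generic sketch, not the paper's code):

```python
def haar_1d(x):
    """One level of the (orthogonality-unnormalized) Haar transform:
    pairwise averages keep the low-frequency content at half resolution,
    pairwise differences keep the detail that pooling would discard."""
    lo = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    hi = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return lo, hi

def inverse_haar_1d(lo, hi):
    """Exactly reverses haar_1d, reconstructing the original signal."""
    out = []
    for l, h in zip(lo, hi):
        out += [l + h, l - h]
    return out
```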

17.
Because fundus vessel structures are complex and variable and the contrast between vessels and background is low, fundus vessel segmentation is very difficult, and tiny vessels in particular are hard to segment. U-Net, based on a deep fully convolutional network, can effectively extract global and local information from vessel images, but its output is a grayscale image binarized with a hard threshold, which causes loss of vessel regions and overly thin vessels. To address these problems, a method is proposed that combines U-Net with a pulse-coupled neural network (P...

18.
Low-light image segmentation has long been a difficult problem in image segmentation: the low contrast and high blur caused by poor illumination make such images much harder to segment than ordinary ones. To improve semantic segmentation accuracy in low-light environments, a noise-guided semantic segmentation model that filters illumination style in low-light scenes (SFIS) is proposed based on the characteristics of low-light images. The model uses the signal-to-noise ratio as prior knowledge to guide the self-attention operations in the long-range branch and the fusion of long-/short-range branch features, applying interactions of different ranges to image regions with different noise levels. An illumination filter is further designed to extract illumination-style information from the overall style of the image. By alternately training the illumination filter and the segmentation model, the illumination-style gap between different lighting conditions is gradually reduced, so that the segmentation network learns illumination-invariant features. The proposed model outperforms previous work on the LLRGBD dataset, reaching an mIoU of 66.8% on the real dataset LLRGBD-real, which shows that the proposed long/short-range branch module and illumination filter module effectively improve semantic segmentation in low-light environments.
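A crude per-patch signal-to-noise estimate of the kind such SNR-guided models use as a prior might look like this (the specific statistic is my assumption for illustration, not the paper's definition):

```python
def snr_prior(patch):
    """Rough SNR estimate for one image patch: mean signal level over mean
    absolute residual. Low values flag noisy regions where long-range
    attention would be preferred over local convolution."""
    mean = sum(patch) / len(patch)
    noise = sum(abs(p - mean) for p in patch) / len(patch)
    return mean / noise if noise else float('inf')
```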

19.
The multispectral imaging (MSI) technique is often used to capture images of the fundus by illuminating it with different wavelengths of light. However, these images are taken at different points in time, so eyeball movements can cause misalignment between consecutive images. The multispectral image sequence reveals important information in the form of retinal and choroidal blood vessel maps, which can help ophthalmologists analyze the morphology of these blood vessels in detail. This in turn can lead to high diagnostic accuracy for several diseases. In this paper, we propose a novel semi-supervised end-to-end deep learning framework called "Adversarial Segmentation and Registration Nets" (ASRNet) for the simultaneous estimation of blood vessel segmentation and the registration of multispectral images via an adversarial learning process. ASRNet consists of two subnetworks: (i) a segmentation module S that fulfills the blood vessel segmentation task, and (ii) a registration module R that estimates the spatial correspondence of an image pair. Based on the segmentation-driven registration network, we train the segmentation network using a semi-supervised adversarial learning strategy. Our experimental results show that the proposed ASRNet can achieve state-of-the-art accuracy in segmentation and registration tasks performed with real MSI datasets.

20.
To address the inability of current indoor-scene semantic segmentation networks to fuse the RGB and depth information of images well, an improved indoor-scene semantic segmentation network is proposed. To let the network selectively fuse depth and RGB features, the idea of an attention mechanism is introduced and a feature fusion module is designed; according to the characteristics of the depth and RGB feature maps, the module adjusts network parameters in a learned way to fuse depth and RGB features more effectively. Multi-scale joint training is also used to accelerate convergence and improve segmentation accuracy. Validated on the SUN RGB-D and NYUDv2 datasets, the proposed network achieves higher segmentation precision than current mainstream semantic segmentation networks such as the RGB-D fully convolutional network with depth-sensitive fully connected conditional random fields (DFCN-DCRF), the depth-aware convolutional network (Depth-aware CNN), and the multi-path refinement network (RefineNet), with mean intersection-over-union (mIoU) reaching 46.6% and 48.0%, respectively.
