Similar Documents
20 similar documents found.
1.
To address the low utilization of feature information and the limited generalization ability of traditional encoder-decoder medical image segmentation networks, this paper proposes a multi-scale semantic perceptual attention network (MSPA-Net) built on the encoder-decoder pattern. First, a dual-channel multi-information domain attention module (DMDA) is added to the decoding path to improve feature extraction. Second, a dense atrous convolution module (DAC) is inserted at the skip connections to enlarge the convolutional receptive field. Finally, drawing on feature-fusion ideas, an adjustable multi-scale feature fusion module (AMFF) and a dual self-learning recycle connection module (DCM) are designed to improve the network's generalization and robustness. The network is validated on the CVC-ClinicDB, ETIS-LaribPolypDB, COVID-19 CHEST X-RAY, Kaggle_3m, ISIC2017, and Fluorescent Neuronal Cells datasets, where the similarity coefficient reaches 94.96%, 92.40%, 99.02%, 90.55%, 92.32%, and 75.32%, respectively. The proposed network therefore generalizes well, outperforms existing networks overall, and effectively segments general medical images.
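The DMDA module itself is not specified in this abstract; as a rough, hedged illustration of the underlying idea of channel-wise attention reweighting, here is a squeeze-and-excitation style sketch in NumPy (the weight shapes and reduction ratio are assumptions, not the paper's design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative only).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    two fully-connected weights of the excitation step.
    Returns the feature map reweighted per channel.
    """
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,) weights in (0, 1)
    return feat * excite[:, None, None]

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))
w1 = rng.normal(size=(2, 8))
w2 = rng.normal(size=(8, 2))
out = channel_attention(feat, w1, w2)
```

Because the gate values lie in (0, 1), each channel is attenuated rather than amplified in this simplified form; real modules usually learn the weights end to end.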

2.
Image deblurring techniques play important roles in many image processing applications. As blur varies spatially across the image plane, robust and effective methods are needed to handle the spatially-variant blur problem. In this paper, a Saliency-based Deblurring (SD) approach is proposed, combining saliency detection for salient-region segmentation with a corresponding compensation method for image deblurring. We also propose a PDE-based deblurring method that introduces an anisotropic Partial Differential Equation (PDE) model for latent image prediction and employs an adaptive optimization model in the kernel estimation and deconvolution steps. Experimental results demonstrate the effectiveness of the proposed algorithm.
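The abstract does not give the paper's PDE model; as a hedged sketch of what an anisotropic PDE for latent-image prediction typically looks like, here is one explicit Perona-Malik diffusion step in NumPy (the edge-stopping function and parameters are illustrative assumptions):

```python
import numpy as np

def anisotropic_diffusion_step(img, kappa=0.1, dt=0.2):
    """One explicit Perona-Malik anisotropic diffusion step.

    Smooths flat regions while preserving strong edges -- the role an
    anisotropic PDE plays in latent-image prediction. Note np.roll wraps
    at the borders, which is acceptable for a sketch.
    """
    dn = np.roll(img, -1, axis=0) - img
    ds = np.roll(img, 1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

# noisy step edge: diffusion removes the noise but keeps the edge
step = np.zeros((8, 8)); step[:, 4:] = 1.0
noisy = step + 0.01 * np.random.default_rng(1).normal(size=step.shape)
smoothed = noisy.copy()
for _ in range(5):
    smoothed = anisotropic_diffusion_step(smoothed)
```

The edge-stopping function g drives the diffusion coefficient toward zero across large gradients, which is why the step edge survives while the noise is averaged away.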

3.
This paper proposes a scene text detection method to address the challenges of detecting text in complex natural scenes. The method adopts dual attention and multi-scale feature fusion: a dual-attention fusion mechanism strengthens the correlations between text feature channels and improves overall detection performance. Considering the semantic information lost when deep feature maps are down- and up-sampled, a dilated convolution multi-scale feature fusion pyramid (dilated convolution multi-scale feature fusion pyramid structure, MFPN) is proposed, whose dual fusion mechanism enhances semantic features and counteracts the effect of scale variation. To resolve the semantic conflicts and limited multi-scale feature representation caused by fusing information of different densities, a multi-scale feature fusion module (multi-scale feature fusion module, MFFM) is introduced. In addition, a feature refinement module (feature refinement module, FRM) is introduced for small text that is easily masked by conflicting information. Experiments show that the method is effective for text detection in complex scenes, achieving F-measures of 85.6%, 87.1%, and 86.3% on the CTW1500, ICDAR2015, and Total-Text datasets, respectively.
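The MFPN's internals are not detailed in the abstract; as a minimal sketch of its basic ingredient, here is a "same"-padded dilated 3x3 convolution and a naive multi-rate fusion by averaging in NumPy (the fusion-by-average and the rates are assumptions for illustration):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2D convolution with a dilated 3x3 kernel, stride 1."""
    d = dilation
    xp = np.pad(x, d)  # pad by the effective radius of a dilated 3x3 kernel
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * xp[i * d:i * d + x.shape[0],
                                     j * d:j * d + x.shape[1]]
    return out

def multi_scale_fusion(x, kernel, rates=(1, 2, 4)):
    """Fuse feature maps produced at several dilation rates by averaging."""
    return sum(dilated_conv2d(x, kernel, r) for r in rates) / len(rates)

x = np.random.default_rng(2).normal(size=(16, 16))
k = np.full((3, 3), 1 / 9)
fused = multi_scale_fusion(x, k)
```

Larger dilation rates enlarge the receptive field without adding parameters, which is the property the pyramid exploits against scale variation.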

4.
In action recognition, fully learning and exploiting the correlation between a video's spatial and temporal features is crucial to the final result. Traditional action recognition methods ignore this spatio-temporal correlation and fine-grained features, which lowers recognition accuracy; this paper therefore proposes a human action recognition method based on a convolutional gated recurrent unit (convolutional GRU, ConvGRU) and attentional feature fusion (AFF). First, an Xception network serves as the spatial feature extractor for video frames, augmented with a spatial-temporal excitation (STE) module and a channel excitation (CE) module to strengthen temporal action modeling while extracting spatial features. In addition, the traditional long short-term memory (LSTM) network is replaced with a ConvGRU network, which uses convolutions to further mine the spatial features of video frames while extracting temporal features. Finally, the output classifier is improved by introducing a feature fusion module based on improved multi-scale channel attention (MCAM-AFF), strengthening the recognition of fine-grained features and raising the model's accuracy. Experimental results show recognition accuracies of 95.66% on the UCF101 dataset and 69.82% on the HMDB51 dataset. The algorithm captures more complete spatio-temporal features and compares favorably with current mainstream models.
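A ConvGRU replaces the dense multiplications of a GRU with convolutions so the hidden state stays a spatial map. As a hedged single-channel sketch (kernel shapes and the absence of biases are simplifying assumptions, not the paper's configuration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, k):
    """3x3 'same' zero-padded convolution standing in for dense multiplies."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def convgru_step(x, h, kz, kr, kh):
    """One ConvGRU update, following the standard GRU gating:

    z = sigmoid(conv(x) + conv(h))        update gate
    r = sigmoid(conv(x) + conv(h))        reset gate
    h~ = tanh(conv(x) + conv(r * h))      candidate state
    h' = (1 - z) * h + z * h~
    """
    z = sigmoid(conv2d_same(x, kz[0]) + conv2d_same(h, kz[1]))
    r = sigmoid(conv2d_same(x, kr[0]) + conv2d_same(h, kr[1]))
    h_tilde = np.tanh(conv2d_same(x, kh[0]) + conv2d_same(r * h, kh[1]))
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(3)
x = rng.normal(size=(6, 6))
h0 = np.zeros((6, 6))
kz, kr, kh = (rng.normal(scale=0.1, size=(2, 3, 3)) for _ in range(3))
h1 = convgru_step(x, h0, kz, kr, kh)
```

Because the state update is convolutional, spatial structure is preserved across time steps, which is the motivation the abstract gives for swapping out the LSTM.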

5.
Obtaining a good-quality image requires exposure to light for an appropriate amount of time. If there is camera or object motion during the exposure time, the image is blurred. To remove the blur, some recent image deblurring methods effectively estimate a point spread function (PSF) by additionally acquiring a noisy image, and restore a clear latent image with the PSF. Since the ground-truth PSF varies with location, a blockwise approach to PSF estimation has been proposed. However, the block used to estimate a PSF is a rigidly demarcated rectangle, which generally differs from the shape of the actual region over which the PSF can properly be assumed constant. We exploit the fact that a PSF is substantially related to the local disparity between two views. This paper presents a disparity-based method for space-variant image deblurring that employs disparity information in image segmentation and, for each region, estimates a PSF and restores a latent image. The segmentation method first over-segments the blurred image into sufficiently many regions based on color, and then merges adjacent regions with similar disparities. Experimental results show the effectiveness of the proposed method.
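The merge step described at the end of the abstract can be sketched as a union-find over region labels, joining 4-adjacent regions whose mean disparities are close (the threshold and the toy labeling below are illustrative assumptions, not the paper's values):

```python
import numpy as np

def merge_by_disparity(labels, disparity, thresh=1.0):
    """Merge adjacent over-segmented regions with similar mean disparity.

    labels: (H, W) integer over-segmentation; disparity: (H, W) floats.
    Returns a relabeled map where merged regions share one label.
    """
    ids = np.unique(labels)
    mean_disp = {i: disparity[labels == i].mean() for i in ids}
    parent = {i: i for i in ids}           # union-find over region ids
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    # adjacency from horizontal and vertical neighbor pairs
    pairs = {(a, b) for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()) if a != b}
    pairs |= {(a, b) for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()) if a != b}
    for a, b in pairs:
        if abs(mean_disp[a] - mean_disp[b]) < thresh:
            parent[find(a)] = find(b)
    return np.vectorize(find)(labels)

labels = np.array([[0, 0, 1, 1],
                   [2, 2, 3, 3]])
disp = np.array([[5.0, 5.0, 5.2, 5.2],
                 [9.0, 9.0, 5.1, 5.1]])
merged = merge_by_disparity(labels, disp)
```

On this toy input, regions 0, 1, and 3 share nearly the same disparity and are merged, while region 2 (a different depth layer, hence a different blur) stays separate.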

6.
刘亚灵  郭敏  马苗 《光电子.激光》2021,32(12):1271-1277
To address the limitation that attention is applied only along the time-frequency dimensions in sound event detection, and the insufficient feature extraction caused by a single type of convolutional layer, this paper proposes a convolutional recurrent neural network (CRNN) model based on multi-scale attention feature fusion to improve sound event detection performance. First, a multi-scale attention module is proposed to attend to both local time-frequency units and global channel features…

7.
To address the missed and false detections of arbitrarily-shaped text in natural scenes, a scene text detection method based on dual attention fusion and dilated residual feature enhancement is proposed. To strengthen the latent relationships between text feature channels, a dual attention fusion (DAF) module is proposed, and multi-level feature fusion is performed by combining a bidirectional feature pyramid with the dual attention fusion module. In addition, to counter the semantic loss that can occur when deep feature maps are reduced in dimension, a dilated…

8.
Existing dynamic-scene image deblurring methods suffer from inaccurate feature extraction and under-use of effective features. This paper proposes a dynamic-scene image deblurring network based on dual-branch feature extraction and cyclic refinement. The network comprises three parts: a feature extraction network, a cyclic refinement network (CRN), and image reconstruction (IR). The feature extraction network extracts the detail features and contour features (CF) of the blurred image, with residual units as its basic building block. The cyclic refinement network refines the feature maps by alternately fusing contour features and detail features (DF), yielding the refinement features (RF) of the blurred image. Finally, in the image reconstruction stage, the contour and detail features are reused: with a residual learning strategy, the contour, detail, and refined features are fused stage by stage, and a clear image is reconstructed through a nonlinear mapping. On the widely used dynamic-scene blur dataset GOPRO, the method achieves an average peak signal-to-noise ratio (PSNR) of 31.86 and an average structural similarity (SSIM) of 0.947…
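The PSNR figure quoted above is the standard definition; for reference, a minimal NumPy implementation (assuming images scaled to [0, peak]):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# uniform error of 0.1 -> MSE = 0.01 -> 10 * log10(1 / 0.01) = 20 dB
ref = np.zeros((4, 4))
deg = ref + 0.1
value = psnr(ref, deg)
```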

9.
In current object tracking, existing segmentation-based algorithms do not fully exploit a target's long-range dependency information or the distinct characteristics of each feature level; their foreground-background discrimination is weak and their multi-scale estimation of the target is insufficient. To address this, an adaptive feature fusion module and a mixed-domain attention module are proposed to improve the network's multi-scale estimation of the target and its foreground-background discrimination. Integrating them into a current video-segmentation-based algorithm yields a new tracking algorithm whose results on the major public datasets demonstrate state-of-the-art performance.

10.
In this paper, we propose a solution for transforming spatially variant blurry images into photo-realistic sharp ones. Image deblurring is a valuable and challenging task in computer vision. However, existing learning-based methods cannot produce images with clear edges and fine details, which poses significant challenges for the generative loss functions used in existing methods. Instead of only designing architectures and loss functions for generators, we propose a generative adversarial network (GAN) framework based on an edge adversarial mechanism and a partial weight sharing network. To propel the entire network to learn image edge information consciously, we propose an edge reconstruction loss function and an edge adversarial loss function to constrain the generator and the discriminator, respectively. We further introduce a partial weight sharing structure in which sharp features from clean images encourage the recovery of image details in deblurred images. The proposed partial weight sharing structure improves image details effectively. Experimental results show that our method generates photo-realistic sharp images from real-world blurred images and outperforms state-of-the-art methods.
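The exact edge reconstruction loss is not given in the abstract; a common, hedged way to realize the idea is an L1 distance between gradient-magnitude (Sobel) edge maps of the prediction and the target, sketched here in NumPy as an assumption rather than the paper's formulation:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel filters ('same', zero-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1)
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def edge_reconstruction_loss(pred, target):
    """L1 distance between edge maps: penalizes blurred-out edges."""
    return np.abs(sobel_edges(pred) - sobel_edges(target)).mean()

target = np.zeros((8, 8)); target[:, 4:] = 1.0          # sharp vertical edge
blurred = np.apply_along_axis(                          # 1D box blur per row
    lambda r: np.convolve(r, np.ones(3) / 3, mode="same"), 1, target)
```

The loss is zero for a perfect reconstruction and grows as edges are smeared, which is exactly the behavior an edge-aware generator constraint needs.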

11.
Recently, Convolutional Neural Networks (CNNs) have achieved great success in Single Image Super-Resolution (SISR). In particular, recursive networks are now widely used. However, existing recursion-based SISR networks can only make use of multi-scale features in a layer-wise manner. In this paper, a Deep Recursive Multi-Scale Feature Fusion Network (DRMSFFN) is proposed to address this issue. Specifically, we propose a Recursive Multi-Scale Feature Fusion Block (RMSFFB) to make full use of multi-scale features. Besides, a Progressive Feature Fusion (PFF) technique is proposed to take advantage of the hierarchical features from the RMSFFB in a global manner. At the reconstruction stage, we use a deconvolutional layer to upscale the feature maps to the desired size. Extensive experimental results on benchmark datasets demonstrate the superiority of the proposed DRMSFFN over state-of-the-art methods in both quantitative and qualitative evaluations.

12.
To extract decisive features from gesture images and solve the problem of information redundancy in existing gesture recognition methods, we propose a new multi-scale feature extraction module named densely connected Res2Net (DC-Res2Net) and design a feature fusion attention module (FFA). First, based on the new dimension residual network (Res2Net), the DC-Res2Net uses channel grouping to extract fine-grained multi-scale features, and dense connections are adopted to extract stronger features at different scales. Then, we apply a selective kernel network (SK-Net) to enhance the representation of effective features. Afterwards, the FFA is designed to remove redundant information from features by fusing low-level location features with high-level semantic features. Finally, experiments were conducted to validate our method on the OUHANDS, ASL, and NUS-II datasets. The results demonstrate the superiority of DC-Res2Net and FFA, which extract more decisive features and remove redundant information while ensuring high recognition accuracy and low computational complexity.
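Res2Net's channel grouping can be sketched very compactly: channels are split into groups, and each group after the first is processed together with the previous group's output, so later groups see progressively larger receptive fields. In this hedged toy version a scalar `mix` stands in for the per-group convolution (an assumption purely for illustration):

```python
import numpy as np

def res2net_groups(feat, mix=0.5):
    """Res2Net-style hierarchical channel grouping (simplified sketch).

    feat: (C, H, W) with C divisible by 4. Each group after the first is
    combined with the previous group's output before 'processing'.
    """
    groups = np.split(feat, 4, axis=0)        # 4 channel groups
    outs = [groups[0]]                        # first group passes through
    for g in groups[1:]:
        outs.append(mix * (g + outs[-1]))     # reuse previous group's output
    return np.concatenate(outs, axis=0)

feat = np.ones((8, 2, 2))
out = res2net_groups(feat)
```

The dense connections described in the abstract would additionally feed every earlier group's output into each later group; the chain above shows only the basic hierarchical reuse.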

13.
Owing to the characteristics of metal surface defect images, accurate and effective segmentation is a major challenge in image processing. To obtain the type, size, and position of defects, this paper proposes a metal defect image segmentation network that incorporates attention mechanisms. The network has two paths: the semantic information path obtains feature maps mainly through a convolutional network built from residual blocks, fusing attention step by step during sampling to enhance the contrast between features and background; the bypass path designs an attention module to obtain the weights of position information…

14.
Blur is one of the most common distortion types in image acquisition. Image deblurring has been widely studied as an effective technique for improving the quality of blurred images. However, little work has been done on the perceptual evaluation of image deblurring algorithms and deblurred images. In this paper, we conduct both subjective and objective studies of image defocus deblurring. A defocus deblurred image database (DDID) is first built using state-of-the-art image defocus deblurring algorithms, and a subjective test is carried out to collect human ratings of the images. The performances of the deblurring algorithms are then evaluated based on the subjective scores. Observing that existing image quality metrics are limited in predicting the quality of defocus deblurred images, we propose a quality enhancement module based on the Gray Level Co-occurrence Matrix (GLCM), which mainly measures the loss of texture naturalness caused by deblurring. Experimental results on the DDID database demonstrate the effectiveness of the proposed method.
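For reference, a GLCM simply counts how often gray level i co-occurs with gray level j at a fixed pixel offset; texture statistics such as contrast are then read off the normalized matrix. A minimal NumPy sketch (single offset, no symmetry, illustrative only):

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Normalized gray level co-occurrence matrix for one pixel offset.

    img: integer image with values in [0, levels). Entry (i, j) is the
    relative frequency of level i co-occurring with level j at `offset`.
    """
    dr, dc = offset
    h, w = img.shape
    a = img[max(0, -dr):h - max(0, dr), max(0, -dc):w - max(0, dc)]
    b = img[max(0, dr):h - max(0, -dr), max(0, dc):w - max(0, -dc)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # unbuffered accumulation
    return m / m.sum()

img = np.array([[0, 0, 1],
                [0, 1, 1]])
p = glcm(img, levels=2)
```

On this tiny image the horizontal pairs are (0,0), (0,1), (0,1), (1,1), so p[0,1] = 0.5; deblurring artifacts change such co-occurrence statistics, which is what a GLCM-based naturalness measure picks up.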

15.
Existing deraining networks remove rain incompletely in different environments and lose much image detail. This paper proposes an attention-based multi-branch feature-cascade image deraining network. The model combines several attention mechanisms to form different types of branch networks; spatial detail and contextual feature information are passed bottom-up through the whole network and fused in cascade, while a stage-wise attention fusion mechanism built between the branches reduces the loss of image information during feature extraction and preserves feature information to the greatest extent, making the deraining task more efficient. Experimental results show that the algorithm surpasses the compared algorithms on objective metrics and visibly improves subjective quality: it is stronger and more accurate at deraining, removes rain streaks of different densities, and better preserves detail in the image background.

16.
In this paper, we propose a single image deblurring algorithm to remove spatially variant defocus blur based on an estimated blur map. Firstly, we estimate the blur map from a single image by utilizing edge information and K nearest neighbors (KNN) matting interpolation. Secondly, the local kernels are derived by segmenting the blur map according to the blur amount of local regions and image contours. Thirdly, we adopt a BM3D-based non-blind deconvolution algorithm to restore the latent image. Finally, ringing artifacts and noise are detected and removed to obtain a high-quality in-focus image. Experimental results on real defocus blurred images demonstrate that our proposed algorithm outperforms some state-of-the-art approaches.
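The BM3D-based non-blind deconvolution step is beyond a short sketch; as a simplified stand-in that shows what non-blind deconvolution with a known kernel does, here is frequency-domain Wiener deconvolution in NumPy (the Wiener filter and the noise-to-signal regularizer are substitutions of my own, not the paper's method):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a known blur kernel.

    blurred: observed image; psf: blur kernel on the same grid (circular
    convolution assumed); nsr: assumed noise-to-signal ratio regularizer.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

rng = np.random.default_rng(4)
sharp = rng.uniform(size=(16, 16))
psf = np.zeros((16, 16))
psf[0, 0] = psf[0, 1] = psf[1, 0] = psf[1, 1] = 0.25   # 2x2 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
```

The nsr term keeps the filter bounded where the kernel's spectrum vanishes; BM3D-based schemes replace this crude regularization with collaborative filtering of the intermediate estimate.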

17.
18.
A new region feature is proposed that emphasizes the salience of a target region and its neighbors. In a region segmentation-based multisensor image fusion scheme, the presented feature can be extracted from each segmented region to determine its fusion weight. Experimental results demonstrate that the proposed feature has a wide application scope and provides much more information for each region: it can be used not only in image fusion but also in other image processing applications.

19.
Non-uniform motion deblurring has been a challenging problem in the field of computer vision. Currently, deep learning-based deblurring methods have made promising achievements. In this paper, we propose a new joint strong-edge and multi-stream adaptive fusion network for non-uniform motion deblurring. The edge map and the blurred image are jointly used as network inputs, and an Edge Extraction Network (EEN) guides the Deblurring Network (DN) in image recovery, supplying the important edge information. The Multi-stream Adaptive Fusion Module (MAFM) adaptively fuses the edge information with features from the encoder and decoder to reduce feature redundancy and avoid image artifacts. Furthermore, the Dense Attention Feature Extraction Module (DAFEM) is designed to focus on the severely blurred regions of blurry images to obtain important recovery information. In addition, an edge loss function is added to measure the difference in edge features between the generated and clear images, further recovering the edges of the deblurred images. Experiments show that our method outperforms currently published methods in terms of PSNR, SSIM, and VIF, and generates images with less blur and sharper edges.

20.
Image inpainting aims to fill in the missing regions of damaged images with plausible content. Existing inpainting methods tend to produce ambiguous artifacts and implausible structures. To address these issues, our method fully utilizes the information of known regions to provide style and structural guidance for the missing regions. Specifically, the Adaptive Style Fusion (ASF) module reduces artifacts by transferring visual style features from known regions to missing regions. The Gradient Attention Guidance (GAG) module generates accurate structures by aggregating semantic information along gradient boundary regions. In addition, the Multi-scale Attentional Feature Extraction (MAFE) module extracts global contextual information and enhances the representation of image features. Extensive experimental results on three datasets demonstrate that our proposed method achieves superior visual plausibility and structural consistency compared to state-of-the-art inpainting methods.
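The ASF module's details are not given in the abstract; a standard building block for transferring visual style between regions is adaptive instance normalization (AdaIN), which re-normalizes each content channel to the style channels' statistics. A hedged NumPy sketch of that basic operation (offered as background, not as the paper's module):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: transfer channel-wise mean/std.

    content, style: (C, H, W) feature maps. Each content channel is
    normalized, then rescaled to the matching style channel's statistics.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

rng = np.random.default_rng(5)
content = rng.normal(loc=0.0, scale=1.0, size=(3, 8, 8))
style = rng.normal(loc=2.0, scale=0.5, size=(3, 8, 8))
stylized = adain(content, style)
```

After the transfer, the output carries the content's spatial structure but the style's first- and second-order statistics, which is the sense in which known-region "style" can guide a missing region.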


Copyright©北京勤云科技发展有限公司  京ICP备09084417号