Similar Documents
 20 similar documents found.
1.

Denoising of hyperspectral images (HSIs) is an important preprocessing step that enhances the performance of their analysis and interpretation. In reality, a remotely sensed HSI experiences disturbance from different sources and is therefore affected by multiple noise types. However, most existing denoising methods concentrate on removing a single noise type and ignore their mixed effect, so a method developed for a particular noise type does not perform satisfactorily on other noise types. To address this limitation, a denoising method is proposed here that effectively removes multiple frequently encountered noise patterns from HSIs, including their combinations. The proposed dual-branch deep-neural-network architecture works on wavelet-transformed bands. The first branch of the network uses deep convolutional skip-connected layers with residual learning to extract local and global noise features. The second branch includes a layered autoencoder together with subpixel upsampling that performs repeated convolution in each layer to extract prominent noise features from the image. Two hyperspectral datasets are used in the experiments to evaluate the performance of the proposed method for denoising Gaussian, stripe, and mixed noise. Experimental results demonstrate the superior performance of the proposed network compared with other state-of-the-art denoising methods, with PSNR 36.74, SSIM 0.97, and overall accuracy 94.03%.
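A minimal PyTorch sketch of such a dual-branch residual denoiser is given below. Layer counts, channel widths, and the per-sub-band processing are illustrative assumptions, not the authors' released architecture.

```python
# Illustrative sketch only: a dual-branch denoiser applied to one wavelet
# sub-band at a time. Depths/widths are assumptions, not the paper's config.
import torch
import torch.nn as nn

class ResidualBranch(nn.Module):
    """Convolutional branch with skip connections and residual learning."""
    def __init__(self, channels=64, depth=5):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(depth))
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feat = self.act(self.head(x))
        for conv in self.body:
            feat = self.act(conv(feat)) + feat      # skip connection
        return self.tail(feat)                      # predicted noise component

class AutoencoderBranch(nn.Module):
    """Layered autoencoder with sub-pixel (PixelShuffle) upsampling."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 4, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, x):                           # expects H, W divisible by 4
        return self.decoder(self.encoder(x))        # predicted noise component

class DualBranchDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = ResidualBranch()
        self.branch_b = AutoencoderBranch()

    def forward(self, wavelet_band):                # N x 1 x H x W sub-band
        noise = self.branch_a(wavelet_band) + self.branch_b(wavelet_band)
        return wavelet_band - noise                 # residual learning: clean = noisy - noise
```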


2.
Multimedia Tools and Applications - Single image dehazing algorithms have recently been attracting more and more attention from researchers because of their flexibility and practicality. However,...

3.
Neural Computing and Applications - Wearable technology offers a prospective solution to the increasing demand for activity monitoring in pervasive healthcare. Feature extraction and selection are...

4.
As is well known, activity level measurement and the fusion rule are two crucial factors in image fusion. In most existing fusion methods, whether in the spatial domain or in a transform domain such as wavelet, activity level measurement is essentially implemented by designing local filters to extract high-frequency details, and the calculated clarity information of different source images is then compared using elaborately designed rules to obtain a clarity/focus map. Consequently, the focus map contains the integrated clarity information, which is of great significance to various image fusion problems such as multi-focus image fusion and multi-modal image fusion. However, it is usually difficult to accomplish these two tasks well enough to achieve satisfactory fusion performance. In this study, we address this problem with a deep learning approach, aiming to learn a direct mapping between the source images and the focus map. To this end, a deep convolutional neural network (CNN) trained on high-quality image patches and their blurred versions is adopted to encode the mapping. The main novelty of this idea is that the activity level measurement and the fusion rule can be jointly generated by learning a CNN model, which overcomes the difficulty faced by existing fusion methods. Based on this idea, a new multi-focus image fusion method is proposed in this paper. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance in terms of both visual quality and objective assessment. The computational speed of the proposed method with parallel computing is fast enough for practical use. The potential of the learned CNN model for other types of image fusion problems is also briefly exhibited in the experiments.
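The sketch below shows only how a learned focus map drives pixel-wise fusion; the CNN that scores patch clarity is omitted, and the threshold and consistency filter are illustrative choices rather than the paper's exact rule.

```python
# Hedged sketch: fuse two multi-focus sources given a CNN-predicted focus map.
import numpy as np
from scipy.ndimage import median_filter

def fuse_multifocus(img_a: np.ndarray, img_b: np.ndarray,
                    focus_score: np.ndarray) -> np.ndarray:
    """img_a, img_b: HxW source images; focus_score: HxW map in [0, 1],
    interpreted as the probability that img_a is the sharper source."""
    decision = (focus_score > 0.5).astype(np.float64)   # initial binary focus map
    decision = median_filter(decision, size=9)          # small-region consistency cleanup
    return decision * img_a + (1.0 - decision) * img_b  # pixel-wise selection
```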

5.
To address the problems that manually tuning convolutional neural network (CNN) parameters is time-consuming and inaccurate, and that parameter settings strongly affect algorithm performance, a variable convolutional autoencoder (CAE) algorithm based on teaching-learning-based optimization (TLBO) is proposed. The algorithm designs a variable-length individual encoding strategy to rapidly construct CAE structures and stacks the CAEs into a CNN; in addition, the structural information of elite individuals is fully exploited to guide the search toward more promising regions, ...

6.
This paper describes a novel method to enhance underwater images by image dehazing. Scattering and color change are the two major sources of distortion in underwater imaging. Scattering is caused by large suspended particles, as in turbid water containing abundant particles. Color change, or color distortion, corresponds to the varying degrees of attenuation encountered by light of different wavelengths traveling in water, rendering ambient underwater environments dominated by a bluish tone. Our key contributions are a new underwater model that compensates for the attenuation discrepancy along the propagation path and a fast joint trigonometric filtering dehazing algorithm. The enhanced images are characterized by a reduced noise level, better exposure of the dark regions, and improved global contrast, while the finest details and edges are significantly enhanced. In addition, our method achieves quality comparable to or higher than that of state-of-the-art methods when assessed with the latest image evaluation systems.

7.

To the best of our knowledge, the physical-model-based approach to dehazing is still an ill-posed problem, and image enhancement approaches also suffer from texture-preservation issues. The Retinex-based approach has proved effective for image dehazing, provided its parameter is tuned properly. Therefore, in this paper, the particle swarm optimization (PSO) algorithm is first applied to optimize the parameter, and the hazy image is converted into the hue, saturation, intensity (HSI) color space for color compensation. On the other hand, multi-scale local detail enhancement and bilateral filtering are designed to overcome dehazing artefacts and preserve edges, which further improves the overall visual effect of the images. Experimental results on natural and synthetic images, using qualitative analysis and frequently used quantitative evaluation metrics, illustrate the convincing defogging effect of the proposed method. For instance, on the natural image road, our method achieves higher e (0.63), γ (3.21), and H (7.81), and a lower σ (0.04). On the synthetic image poster, higher PSNR (18.17) and SSIM (0.78) are also obtained compared with the other approaches explored in this paper. Results on other underwater and aerial images in this study further demonstrate its defogging effectiveness.


8.
Applied Intelligence - With the rapid advancement in network technologies, the need for cybersecurity has gained increasing momentum in recent years. As a primary defense mechanism, an intrusion...

9.
Multimedia Tools and Applications - Background modeling is a major prerequisite for a variety of multimedia applications like video surveillance, traffic monitoring, etc. Numerous approaches have...

10.
Using the dark channel prior—a kind of statistic of haze-free outdoor images—to remove haze from a single input image is simple and effective. However, due to the use of a soft matting algorithm, the method suffers from massive consumption of both memory and time, which largely limits its scalability to large images. In this paper, we present a hierarchical approach to accelerate dark-channel-based image dehazing. The core of our approach is a novel, efficient scheme for solving the soft matting problem involved in image dehazing, using adaptively subdivided quadtrees built in image space. Acceleration is achieved by transforming the problem of solving the N-variable linear system required in soft matting into the problem of solving a much smaller m-variable linear system, where N is the number of pixels and m is the number of corners in the quadtree. Our approach significantly reduces both space and time cost while still maintaining visual fidelity, and largely extends the practicability of dark-channel-based image dehazing to large images.
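For reference, a minimal sketch of the dark channel prior that this accelerated pipeline builds on is shown below; the quadtree-based soft-matting solver itself is not reproduced here, and the patch size is an assumption.

```python
# Sketch: dark channel = per-patch minimum over all color channels.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """image: HxWx3 hazy image scaled to [0, 1]; returns the HxW dark channel."""
    min_over_channels = image.min(axis=2)
    return minimum_filter(min_over_channels, size=patch)
```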

11.
In this paper, we propose a new fast dehazing method for single images based on filtering. The basic idea is to compute an accurate atmosphere veil that is not only smooth but also respects the depth information of the underlying image. We first obtain an initial atmospheric scattering light estimate through median filtering, then refine it by guided joint bilateral filtering to generate a new atmosphere veil that removes the abundant texture information and recovers the depth edge information. Finally, we solve for the scene radiance using the atmospheric attenuation model. Compared with existing state-of-the-art dehazing methods, our method achieves a better dehazing effect in distant scenes and in places where depth changes abruptly. Our method is fast, with linear complexity in the number of pixels of the input image; furthermore, because it can be performed in parallel, it can be further accelerated on a GPU, making it applicable to real-time requirements.
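A hedged sketch of this kind of filtering pipeline follows: a median filter gives the initial atmosphere veil, and the scene radiance is recovered from the attenuation model. The guided joint bilateral refinement step is omitted, and the window size and veil strength `p` are illustrative assumptions.

```python
# Illustrative sketch of median-filter-based veil estimation + radiance recovery.
import numpy as np
from scipy.ndimage import median_filter

def dehaze_fast(image: np.ndarray, airlight: float, p: float = 0.95,
                window: int = 31) -> np.ndarray:
    """image: HxWx3 hazy image in [0, 1]; airlight: estimated atmospheric light."""
    whiteness = image.min(axis=2)                       # per-pixel minimum color channel
    veil = p * median_filter(whiteness, size=window)    # initial atmosphere veil
    veil = np.clip(np.minimum(veil, whiteness), 0.0, None)
    # Attenuation model: I = J * (1 - V/A) + V  =>  J = (I - V) / (1 - V/A)
    transmission = np.clip(1.0 - veil / airlight, 0.1, 1.0)
    return np.clip((image - veil[..., None]) / transmission[..., None], 0.0, 1.0)
```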

12.
Chen Yuantao, Liu Linwu, Tao Jiajun, Chen Xi, Xia Runlong, Zhang Qian, Xiong Jie, Yang Kai, Xie Jingbo. Multimedia Tools and Applications, 2021, 80(3): 4237-4261

Automatic image annotation is an effective computer operation that predicts the annotation of an unknown image by automatically learning potential relationships between the semantic concept space and the visual feature space in the annotated image dataset. Usually, automatic image labeling involves two stages: learning and labeling. Existing image annotation methods that employ convolutional features from deep learning have a number of limitations, including complex training and the high space/time cost of the annotation procedure. Accordingly, this paper proposes an innovative method in which the visual features of the image are represented by the intermediate-layer features of a deep network, while semantic concepts are represented by mean vectors of positive samples. Firstly, the convolutional result is output directly as low-level visual features through the middle layers of a pre-trained deep learning model, and the image is represented by sparse coding. Secondly, the positive mean vector method is used to construct a visual feature vector for each text vocabulary item, creating a visual feature vector database. Finally, the visual feature similarity between the testing image and every vocabulary item is calculated, and the vocabulary items with the largest similarity are used for annotation. Experiments on the datasets demonstrate the effectiveness of the proposed method; in terms of F1 score, its performance on the Corel5k and IAPR TC-12 datasets is superior to that of MBRM, JEC-AF, JEC-DF, and 2PKNN with end-to-end deep features.
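The positive-mean-vector step can be sketched as below. Feature extraction (mid-layer CNN features plus sparse coding) is assumed to have been done already, and the use of cosine similarity and a top-k cutoff are illustrative choices.

```python
# Sketch: represent each vocabulary word by the mean feature of its positive
# images, then annotate a test image by feature similarity to those means.
import numpy as np

def build_label_vectors(features, labels):
    """features: NxD visual features; labels[i]: set of words for image i."""
    vocab = sorted({w for ws in labels for w in ws})
    return {w: features[[w in ws for ws in labels]].mean(axis=0) for w in vocab}

def annotate(test_feature, label_vectors, top_k=5):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = {w: cosine(test_feature, v) for w, v in label_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```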


13.
To handle the nonlinear, multi-stage characteristics of batch processes and their three-dimensional data form, a convolutional-autoencoder fault monitoring method based on batch imaging is proposed. First, each batch of data is treated as a grayscale image, in which the variation of the data within a batch appears as texture variation; a convolutional autoencoder (CAE) extracts features directly from the three-dimensional batch-process data, avoiding the information loss caused by unfolding three-dimensional data into two dimensions, fully considering the global batch information without stage division, and effectively capturing the dynamic changes in the correlations among process variables. Meanwhile, the convolution operations extract local feature information, and the autoencoder network handles the nonlinearity, realizing unsupervised feature learning. Then, a one-class support vector machine (OCSVM) is used to describe the feature distribution, construct a new monitoring statistic, and determine the control limit, thereby realizing fault monitoring. Finally, the accuracy and effectiveness of the proposed method are verified by applying it to the Pensim simulation platform and to real production data from recombinant human granulocyte colony-stimulating factor fermentation.
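The monitoring stage can be sketched as follows: CAE features of normal batches train a one-class SVM, a control limit is set from the training scores, and new batches are flagged when their score falls below it. The CAE feature extractor itself is omitted, and the nu/gamma values and quantile-based limit are illustrative assumptions.

```python
# Sketch of OCSVM-based monitoring on CAE features of normal batches.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_monitor(normal_features: np.ndarray, confidence: float = 0.99):
    model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_features)
    scores = model.decision_function(normal_features)
    limit = np.quantile(scores, 1.0 - confidence)   # control limit on the statistic
    return model, limit

def is_faulty(model, limit, batch_features: np.ndarray) -> np.ndarray:
    return model.decision_function(batch_features) < limit
```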

14.
Forest Fire Image Classification Based on a Sparse-Autoencoder Deep Neural Network
To address the difficulty of distinguishing forest fire from visually similar targets, a new forest fire image classification method based on a sparse-autoencoder deep neural network is proposed. The unsupervised feature-learning algorithm sparse autoencoding learns feature parameters from unlabeled image patches to train the deep neural network; the learned features are then used to extract features from the full-size classification images, followed by convolution and mean pooling; finally, softmax regression is applied to the convolved and pooled features to train the final softmax classifier. Experimental results show that, compared with a traditional BP neural network, the new method distinguishes forest fire from similar objects such as red flags and red leaves more effectively.
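A sketch of the classification stage is given below: filters learned by the sparse autoencoder are convolved with each image, the responses are mean-pooled over a coarse grid, and a softmax classifier is trained on the pooled features. The sparse-AE training itself is omitted, and the filter size, pooling grid, and use of scikit-learn's logistic regression as the softmax classifier are assumptions.

```python
# Sketch: sparse-AE filters -> convolution -> mean pooling -> softmax classifier.
import numpy as np
from scipy.signal import correlate2d
from sklearn.linear_model import LogisticRegression

def pooled_features(image: np.ndarray, filters: np.ndarray, grid: int = 3) -> np.ndarray:
    """image: HxW grayscale; filters: K x k x k patch features from the sparse AE."""
    feats = []
    for f in filters:
        response = np.maximum(correlate2d(image, f, mode="valid"), 0.0)  # convolve + rectify
        h, w = response.shape
        for i in range(grid):                                            # mean pooling over a grid
            for j in range(grid):
                cell = response[i * h // grid:(i + 1) * h // grid,
                                j * w // grid:(j + 1) * w // grid]
                feats.append(cell.mean())
    return np.asarray(feats)

softmax = LogisticRegression(max_iter=1000)  # multinomial softmax regression on pooled features
```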

15.
A Survey of Deep Convolutional Neural Network Models for Image Classification
Image classification is an important task in computer vision, and traditional image classification methods have certain limitations. With the development of artificial intelligence, deep learning has matured, and classifying images with deep convolutional neural networks has become a research hotspot; the network architectures are increasingly diverse, and their performance far exceeds that of traditional methods. Focusing on the structure of deep convolutional neural network models for image classification, and following the course of model development and model optimization, this paper divides deep convolutional neural networks into four categories: classical models, attention-mechanism models, lightweight models, and neural architecture search models. The construction methods and characteristics of each category are comprehensively reviewed, and the performance of the models is compared and analyzed. Although architectural design has become increasingly sophisticated and optimization methods increasingly powerful, with classification accuracy continually improving while parameter counts decrease and training and inference speed up, deep convolutional neural network models still have limitations. This paper identifies the open problems and possible future research directions: these models mainly perform image classification in a supervised manner and are constrained by dataset quality and scale, so unsupervised and semi-supervised models will be one of the key future directions; their speed and resource consumption remain unsatisfactory, and deployment on mobile devices is challenging; model optimization methods and metrics for evaluating models need further study; and manually designing network structures is time- and labor-intensive, so neural architecture search will be the future direction for model design.

16.
Multimedia Tools and Applications - This paper briefly explains the application of deep learning-based methods to biometric applications. This work attempts to solve the problem of limited...

17.

Single image dehazing is performed using the atmospheric scattering model (ASM), which is based on transmission and atmospheric light. Thus, accurate estimation of the transmission is essential for high-quality single image dehazing, which is a prime focus of current research. The proposed work presents a fast and accurate method for single image dehazing. The method is twofold: (i) an adaptive dehazing control factor is proposed to estimate an accurate transmission, based on the difference between the maximum and minimum color channels of the hazy image; and (ii) a mathematical model is presented to compute the probability of a pixel being at a short distance, which is utilized to locate the haziest region of the image and compute the value of the atmospheric light. The proposed method obtains visually compelling results and recovers the information content (such as structural similarity, color, and visibility) accurately. The computational speed and accuracy of the proposed method are demonstrated through quantitative and qualitative comparison with state-of-the-art dehazing methods.
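For reference, the ASM inversion this method relies on can be sketched as below: given an estimated transmission map and atmospheric light, the haze-free image is recovered as J = (I - A) / t + A. The adaptive control factor that produces the transmission map is not reproduced here, and the lower bound t0 is an assumption.

```python
# Sketch: recover scene radiance from the atmospheric scattering model.
import numpy as np

def recover_radiance(hazy: np.ndarray, transmission: np.ndarray,
                     atmospheric_light: np.ndarray, t0: float = 0.1) -> np.ndarray:
    """hazy: HxWx3 image in [0, 1]; transmission: HxW map; atmospheric_light: length-3."""
    t = np.clip(transmission, t0, 1.0)[..., None]
    restored = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(restored, 0.0, 1.0)
```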


18.
Li Bin, Gong Xiaofeng, Wang Chen, Wu Ruijuan, Bian Tong, Li Yanming, Wang Zhiyuan, Luo Ruisen. Applied Intelligence, 2021, 51(10): 7384-7401
Applied Intelligence - The imbalanced data classification problem widely exists in commercial activities and social production. It refers to scenarios with a considerable gap in sample amount...

19.
In this article, we propose a novel approach based on convolutional features and a sparse autoencoder (AE) for scene-level land-use (LU) classification. This approach starts by generating an initial feature representation of the scenes under analysis from a deep convolutional neural network (CNN) pre-trained on a large amount of labelled data from an auxiliary domain. These convolutional features are then fed as input to a sparse AE to learn a new, more suitable representation in an unsupervised manner. After this pre-training phase, we propose two different scenarios for building the classification system. In the first scenario, we add a softmax layer on top of the AE encoding layer and then fine-tune the resulting network in a supervised manner using the target training images available at hand; the test images are then classified based on the posterior probabilities provided by the softmax layer. In the second scenario, we view the classification problem from a reconstruction perspective: we train several class-specific AEs (one AE per class) and classify the test images based on the reconstruction error. Experimental results on the University of California (UC) Merced and Banja-Luka LU public data sets confirm the superiority of the proposed approach compared with state-of-the-art methods.
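The second, reconstruction-based scenario can be sketched as below: one small autoencoder per class is trained on that class's convolutional features, and a test feature is assigned to the class whose AE reconstructs it with the lowest error. The hidden size and the omitted training loop are illustrative assumptions.

```python
# Sketch: classify by per-class autoencoder reconstruction error.
import torch
import torch.nn as nn

def make_autoencoder(dim: int, hidden: int = 128) -> nn.Module:
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, dim))

def classify_by_reconstruction(feature: torch.Tensor, class_aes: dict) -> str:
    """feature: 1D convolutional-feature vector; class_aes: {class_name: trained AE}."""
    errors = {c: torch.mean((ae(feature) - feature) ** 2).item()
              for c, ae in class_aes.items()}
    return min(errors, key=errors.get)     # class with the smallest reconstruction error
```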

20.
杨帅东, 谌海云, 许瑾, 汪敏. 控制与决策 (Control and Decision), 2023, 38(9): 2496-2504
Because UAV visual tracking covers a wide field of view in complex environments, problems such as flight vibration, target occlusion, and similar distractor targets often cause the tracked target to drift. Therefore, the fully convolutional Siamese tracking algorithm with regression computation (SiamRPN) is improved, and a UAV visual tracking algorithm that strengthens deep feature correlation (SiamDFT) is proposed. First, the network width of the last three convolutional layers of the fully convolutional network is doubled to make full use of the target's appearance information when extracting features from the template frame and the detection frame. Second, an attention information fusion module and a feature depth-wise convolution module are proposed for the detection frame and the template frame, respectively; these two deep feature correlation computations effectively suppress background information and strengthen the association between pixel pairs, efficiently accomplishing the classification and regression tasks. Then, depth-wise cross-correlation is used to compute similarity, and distance-IoU is introduced to localize the target. Experimental results show that SiamDFT achieves a precision of 79.8% and a success rate of 58.3% in short-term UAV tracking scenarios, and 73.4% and 55.2% in long-term UAV tracking scenarios; real-world tests further verify the effectiveness of the proposed algorithm.
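The depth-wise cross-correlation used for the similarity computation can be sketched as below: the template feature acts as a per-channel kernel slid over the detection-frame feature (groups equal to the channel count). Tensor shapes describe a single image pair and are illustrative.

```python
# Sketch: depth-wise cross-correlation between template and detection features.
import torch
import torch.nn.functional as F

def depthwise_xcorr(detection_feat: torch.Tensor, template_feat: torch.Tensor) -> torch.Tensor:
    """detection_feat: 1 x C x H x W; template_feat: 1 x C x h x w.
    Returns a 1 x C x (H-h+1) x (W-w+1) response map."""
    c = detection_feat.size(1)
    kernel = template_feat.view(c, 1, *template_feat.shape[2:])  # one kernel per channel
    return F.conv2d(detection_feat, kernel, groups=c)
```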
