Similar Literature
 20 similar articles found (search time: 31 ms)
1.
ABSTRACT

Anomaly detection (AD) has been one of the most active topics in hyperspectral imagery (HSI) over the last decade. The goal of AD is to label as targets the pixels with significant spectral or spatial differences from their neighbours. In this paper, we propose a method that uses both the spectral and spatial information of HSI based on the human visual system (HVS). Inspired by the functionality of the retina and the visual cortex, a multiscale multiresolution analysis is applied to some principal components of the hyperspectral data to extract features from different spatial levels of the image. Then the global and local relations between features are considered, inspired by the visual attention mechanism and the inferotemporal (IT) part of the visual cortex. The effect of the attention mechanism is implemented using a logarithmic function, which highlights small variations in pixel grey levels in the global features, while a maximum operation over the local features imitates the function of IT. Finally, information-theoretic weighting of the global and local detection maps generates the final anomaly map. The result of the proposed method is compared with state-of-the-art methods such as SSRAD, FLD, PCA, RX, KPCA, and AED on two well-known real hyperspectral datasets, the San Diego airport and Pavia city scenes, and on a synthetic hyperspectral dataset. The results demonstrate that the proposed method effectively improves AD capabilities, enhancing the detection rate while reducing the false alarm rate and the computational complexity.
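
As an illustration of the fusion step described above, the following NumPy sketch combines a log-enhanced global map, a max-pooled local map, and entropy-based (information-theoretic) weights. All variable names and the entropy weighting details are assumptions for illustration, not the authors' exact formulation.

import numpy as np

def entropy(m, bins=64):
    """Shannon entropy of a detection map, used as its information weight."""
    hist, _ = np.histogram(m, bins=bins, density=True)
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def fuse_maps(global_feats, local_feats):
    """global_feats, local_feats: arrays of shape (n_scales, H, W)."""
    g = np.log1p(np.abs(global_feats)).mean(axis=0)   # attention-like log enhancement
    l = np.max(local_feats, axis=0)                   # IT-inspired max over local features
    wg, wl = entropy(g), entropy(l)
    return (wg * g + wl * l) / (wg + wl + 1e-12)      # final anomaly map

# Example with random stand-ins for multiscale features of a 100x100 scene
rng = np.random.default_rng(0)
anomaly_map = fuse_maps(rng.normal(size=(3, 100, 100)), rng.normal(size=(3, 100, 100)))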

2.

Hyperspectral image (HSI) denoising has been widely used to improve HSI quality. Recently, learning-based HSI denoising methods have shown their effectiveness, but most of them are trained on synthetic datasets and lack generalization capability on real test HSI. Moreover, there is still no public paired real HSI denoising dataset for learning HSI denoising networks and quantitatively evaluating HSI methods. In this paper, we mainly focus on how to produce realistic datasets for learning and evaluating HSI denoising networks. On the one hand, we collect a paired real HSI denoising dataset, which consists of short-exposure noisy HSIs and the corresponding long-exposure clean HSIs. On the other hand, we propose an accurate HSI noise model that matches the distribution of real data well and can be employed to synthesize realistic datasets. On the basis of the noise model, we present an approach to calibrate the noise parameters of a given hyperspectral camera. Besides, based on the observation that the mean image over all spectral bands has a high signal-to-noise ratio, we propose a guided HSI denoising network with guided dynamic nonlocal attention, which calculates dynamic nonlocal correlation on the guidance information, i.e., the mean image of the spectral bands, and adaptively aggregates spatial nonlocal features for all spectral bands. Extensive experimental results show that a network trained with only synthetic data generated by our noise model performs as well as one trained with paired real data, and that our guided HSI denoising network outperforms state-of-the-art methods in both quantitative metrics and visual quality.
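
For readers who want to experiment with synthetic data generation of this kind, the sketch below implements a common signal-dependent (Poisson-Gaussian) camera noise model in NumPy. The paper's calibrated noise model may differ; the gain k and read-noise sigma_r values are placeholders that would come from a calibration step.

import numpy as np

def synthesize_noisy_hsi(clean, k=0.01, sigma_r=0.002, rng=None):
    """clean: float array (H, W, B) in [0, 1]; returns one noisy realisation."""
    rng = rng or np.random.default_rng()
    shot = rng.poisson(np.clip(clean, 0, None) / k) * k   # signal-dependent shot noise
    read = rng.normal(0.0, sigma_r, size=clean.shape)     # signal-independent read noise
    return np.clip(shot + read, 0.0, 1.0)

noisy = synthesize_noisy_hsi(np.full((64, 64, 31), 0.5))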


3.

Camouflaged people, such as soldiers on the battlefield, and camouflaged objects in natural environments are hard to detect because of the strong resemblance between the hidden target and the background, which makes seeing these hidden objects a challenging task. Due to the nature of hidden objects, identifying them requires a significant level of visual perception. To overcome this problem, we present a new end-to-end framework based on a multi-level attention network. We design a novel inception module to extract multi-scale receptive-field features, aiming to enhance feature representation. Furthermore, we use a dense feature pyramid to take advantage of multi-scale semantic features. Finally, to better locate and distinguish the camouflaged target from the background, we develop a multi-attention module that generates more discriminative feature representations and combines semantic information with spatial information from different levels. Experiments on the camouflaged people dataset show that our approach outperforms all state-of-the-art methods.
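
A minimal PyTorch sketch of an inception-style block with several receptive-field sizes, in the spirit of the multi-scale module described above, is shown here; the branch widths and kernel sizes are illustrative, not the authors' exact design.

import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(in_ch, branch_ch, kernel_size=1))
        self.fuse = nn.Sequential(nn.BatchNorm2d(branch_ch * 4), nn.ReLU(inplace=True))

    def forward(self, x):
        # concatenate the parallel branches along the channel axis
        y = torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)
        return self.fuse(y)

features = MultiScaleBlock(64, 128)(torch.randn(1, 64, 56, 56))  # -> (1, 128, 56, 56)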


4.
ABSTRACT

Hyperspectral image (HSI) classification is one of the most challenging tasks in hyperspectral remote sensing due to the unique characteristics of HSI data, which consist of a huge number of bands with strong correlations in the spectral and spatial domains. Moreover, limited training samples make it even more challenging. To address these problems, we present a spatial feature extraction technique using a deep convolutional neural network (CNN) for HSI classification. As the optimizer plays an important role in the learning process of a deep CNN model, we study the effect of seven different optimizers on our deep CNN model for HSI classification: SGD, Adagrad, Adadelta, RMSprop, Adam, AdaMax, and Nadam. Extensive experimental results on four hyperspectral remote sensing datasets demonstrate the superiority of the presented deep CNN model with the Adam optimizer for HSI classification.
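
The optimizer comparison can be reproduced in outline as below. The tiny CNN and the synthetic patches are stand-ins for the authors' deep CNN and hyperspectral training cubes; only the optimizer choices follow the abstract.

import torch
import torch.nn as nn

def make_model(bands=30, classes=16):
    return nn.Sequential(
        nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, classes))

optimizers = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "Adagrad":  lambda p: torch.optim.Adagrad(p, lr=0.01),
    "Adadelta": lambda p: torch.optim.Adadelta(p),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=0.001),
    "Adam":     lambda p: torch.optim.Adam(p, lr=0.001),
    "AdaMax":   lambda p: torch.optim.Adamax(p, lr=0.002),
    "Nadam":    lambda p: torch.optim.NAdam(p, lr=0.002),
}

x, y = torch.randn(32, 30, 9, 9), torch.randint(0, 16, (32,))
for name, make_opt in optimizers.items():
    model, loss_fn = make_model(), nn.CrossEntropyLoss()
    opt = make_opt(model.parameters())
    for _ in range(5):                       # a few steps per optimizer for illustration
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"{name}: final loss {loss.item():.3f}")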

5.

In hyperspectral image (HSI) analysis, high-dimensional data may contain noisy, irrelevant, and redundant information. Feature selection is one useful way to mitigate the negative effects of such information. Unsupervised feature selection is a data preprocessing technique for dimensionality reduction that selects a subset of informative features without using any label information. Unlike linear models, an autoencoder can select informative features nonlinearly. The adjacency matrix of an HSI can be constructed to capture the underlying relationships between data points, and a latent representation of the original data can be obtained via matrix factorization; a further feature representation can be learnt by the autoencoder. For the same data matrix, different feature representations should consistently share the underlying information. Motivated by this, we propose a latent representation learning based autoencoder feature selection (LRLAFS) model, in which latent representation learning is used to steer feature selection for the autoencoder. To solve the proposed model, we develop an alternating optimization algorithm. Experimental results on three HSI datasets confirm the effectiveness of the proposed model.
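
The exact LRLAFS objective is not reproduced here; the following PyTorch sketch shows a simpler, widely used autoencoder feature-selection variant with an L2,1 (feature-wise) sparsity penalty on the first encoder layer, and bands ranked by the column norms of that layer's weights.

import torch
import torch.nn as nn

class AEFS(nn.Module):
    def __init__(self, n_features, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

    def row_sparsity(self):
        w = self.encoder[0].weight            # (n_hidden, n_features)
        return w.norm(dim=0).sum()            # L2,1 norm over feature columns

X = torch.rand(500, 103)                      # e.g. 500 pixels with 103 spectral bands
model = AEFS(103)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(X) - X) ** 2).mean() + 1e-3 * model.row_sparsity()
    loss.backward()
    opt.step()

scores = model.encoder[0].weight.detach().norm(dim=0)   # importance per original band
selected = scores.topk(30).indices                       # keep the 30 most informative bands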


6.
张凯琳  阎庆  夏懿  章军  丁云 《计算机应用》2020,40(4):1030-1037
To address the difficulty of acquiring training data for hyperspectral images (HSI), a new semi-supervised HSI classification framework is adopted, which trains a deep neural network with limited labelled data and abundant unlabelled data. Meanwhile, because the distribution of hyperspectral samples is imbalanced, the classification difficulty varies greatly from sample to sample; the original cross-entropy loss cannot capture this distribution characteristic, so the classification performance is unsatisfactory. To solve this problem, a multi-class objective function based on the focal loss is proposed within the semi-supervised framework. Finally, considering the influence of the spatial information of HSI on classification, a Markov random field (MRF) is combined with the spatial features of samples to further improve the classification results. Experiments comparing the proposed method with several typical algorithms on two commonly used HSI datasets show that the proposed method produces better classification results than the compared methods.
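
A minimal PyTorch sketch of a multi-class focal loss of the kind used in place of the plain cross entropy is shown below; gamma and the optional class weights alpha are the usual focal-loss hyper-parameters, not values reported by the authors.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """logits: (N, C); targets: (N,) integer class labels."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log prob of the true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt                      # down-weight easy samples
    if alpha is not None:                                       # optional class weights (C,)
        loss = alpha[targets] * loss
    return loss.mean()

loss = focal_loss(torch.randn(8, 9), torch.randint(0, 9, (8,)))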

7.
Objective: The spectral bands of cholangiocarcinoma hyperspectral images are rich but redundant, which degrades the accuracy of segmentation methods based on deep neural networks. Although some networks based on channel attention can focus on important channels, their representation of channel information is insufficient. This paper therefore develops a new channel attention mechanism for deep networks to improve segmentation accuracy. Method: A Fourier-transform multi-frequency channel attention mechanism (frequency selecting channel attention, FSCA) is proposed. FSCA applies a 2D Fourier transform to the input features, extracts a subset of frequency features, and passes them through two fully connected layers to obtain a channel weight vector; the channel weights are multiplied with the corresponding channel features to produce an output that fuses channel attention information. A focal loss is introduced to handle the imbalance between cancerous and non-cancerous regions, and, combined with an Inception module, an Inception-FSCA segmentation network for cholangiocarcinoma hyperspectral images is constructed. Result: In experiments on a collected cholangiocarcinoma hyperspectral dataset, the Inception-FSCA network achieves accuracy, precision, sensitivity, specificity, and Kappa coefficient of 0.9780, 0.9654, 0.9586, 0.9852, and 0.9456, respectively, outperforming five comparison methods. Compared with segmentation of synthesized false-colour images, the corresponding metrics on the hyperspectral images are higher by 0.0584, 0.1058, 0.0875, 0.0390, and 0.1493. Conclusion: The proposed Fourier-transform multi-frequency channel attention mechanism exploits channel information more effectively, and the Inception-FSCA-based segmentation network improves segmentation of cholangiocarcinoma hyperspectral images, showing research and application value for computer-aided diagnosis of cholangiocarcinoma.
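
One possible PyTorch reading of the FSCA idea is sketched below: a 2D FFT per channel, a small set of low-frequency magnitudes as the channel descriptor, and two fully connected layers producing the channel weights. Which frequencies are kept and the layer sizes are assumptions, not the published design.

import torch
import torch.nn as nn

class FSCA(nn.Module):
    def __init__(self, channels, n_freq=4, reduction=4):
        super().__init__()
        self.n_freq = n_freq
        self.fc = nn.Sequential(
            nn.Linear(channels * n_freq * n_freq, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (N, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho") # 2D FFT of each channel
        mag = spec.abs()[:, :, :self.n_freq, :self.n_freq]   # keep a few low frequencies
        w = self.fc(mag.flatten(1))             # (N, C) channel weight vector
        return x * w.unsqueeze(-1).unsqueeze(-1)

out = FSCA(32)(torch.randn(2, 32, 16, 16))      # -> (2, 32, 16, 16)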

8.
With recent advances in Earth observation techniques, the availability of multi-sensor data acquired over the same geographical area has increased greatly, making it possible to jointly depict the underlying land-cover phenomenon using different sensor data. In this paper, a novel multi-attentive hierarchical fusion net (MAHiDFNet) is proposed to realize feature-level fusion and classification of hyperspectral image (HSI) and Light Detection and Ranging (LiDAR) data. More specifically, a triple-branch HSI-LiDAR convolutional neural network (CNN) backbone is first developed to simultaneously extract the spatial, spectral, and elevation features of the land-cover objects. On this basis, a hierarchical fusion strategy is adopted to fuse the resulting feature embeddings. In the shallow feature fusion stage, we propose a novel modality attention (MA) module to generate modality-integrated features; by fully considering the correlation and heterogeneity between different sensor data, feature interaction and integration are realized by the proposed MA module, while self-attention modules highlight the modality-specific features. In the deep feature fusion stage, the obtained modality-specific and modality-integrated features are fused to construct the hierarchical feature fusion framework. Experiments on three real HSI-LiDAR datasets demonstrate the effectiveness of the proposed framework. The code will be made public at https://github.com/SYFYN0317/-MAHiDFNet.
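
The following PyTorch sketch illustrates a modality-attention style fusion of per-modality embeddings (spatial, spectral, elevation): a small scoring layer predicts a weight per modality and the integrated feature is their weighted sum. It illustrates the idea only, not the authors' MA module.

import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                  # feats: list of (N, dim) embeddings
        stacked = torch.stack(feats, dim=1)    # (N, M, dim)
        w = torch.softmax(self.score(stacked), dim=1)   # (N, M, 1) modality weights
        return (w * stacked).sum(dim=1)        # (N, dim) modality-integrated feature

spatial, spectral, elevation = (torch.randn(4, 128) for _ in range(3))
fused = ModalityAttention(128)([spatial, spectral, elevation])   # -> (4, 128)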

9.
Because hyperspectral images contain rich spectral, spatial, and radiometric information, with nearly continuous spectra and the unity of image and spectrum, they can be used for accurate target classification and recognition in fields such as geological exploration, precision agriculture, ecological environment monitoring, urban remote sensing, and military target detection. Spatial-spectral feature extraction for hyperspectral images is one of the research hotspots and frontier topics in remote sensing. Traditional spatial-spectral feature extraction methods require little computation and few samples for HSI classification and offer good theoretical interpretability and strong robustness to noise, but their classification accuracy is limited by the source of the features. Deep-learning-based spatial-spectral feature extraction methods demand more computation and more samples, yet the stronger representational power of deep spatial-spectral features can greatly improve classifier performance. To facilitate deeper and more effective exploration of spatial-spectral feature extraction for hyperspectral images, this paper systematically reviews the related research progress. First, it outlines the principles of traditional spatial-spectral feature extraction methods, including spatial texture and morphological feature extraction, spatial neighbourhood information acquisition, and spatial post-processing, and organizes, analyses, and summarizes a large body of existing work. Then, from the perspective of deep spatial-spectral feature extraction, it introduces the structural characteristics and research progress of currently popular convolutional neural networks, graph convolutional neural networks, and cross-scene multi-source data models, and analyses and evaluates the advantages and remaining problems of deep-learning-based models for spatial-spectral feature extraction. Finally, it offers suggestions and an outlook on future developments in this field.

10.
Wang  Cong  Zhang  Man  Su  Zhixun  Yao  Guangle 《Multimedia Tools and Applications》2020,79(27-28):19595-19614

Rainy images severely degrade visibility and invalidate many computer vision algorithms, so it is necessary to remove rain streaks from single images. In this paper, we propose a novel network for single-image de-raining that includes two modules: (a) a multi-scale kernels de-raining layer and (b) a multi-scale feature maps de-raining layer. Specifically, since spatial contextual information is important for single-image de-raining, we develop a multi-scale kernels de-raining layer, which uses kernels with receptive fields of different sizes to capture contextual information; these features are fused to learn the primary rain-streak structures. Moreover, we show by statistical pixel histograms that convolution layers at different scales see similar rain-streak structures, so they can be processed by the same operation. We therefore handle the rain-streak information at different scales with multi-scale kernels de-raining layers that share parameters, an operation we call the multi-scale feature maps de-raining layer. Finally, we employ dense connections between the multi-scale feature maps de-raining layers to maximize the information flow among features from different levels. Quantitative and qualitative experimental results demonstrate the superiority of the proposed method over several state-of-the-art de-raining methods, while the parameter count of our method is greatly reduced thanks to the proposed parameter-sharing strategy across scales.
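
The parameter-sharing idea can be sketched as follows: one convolutional de-raining block is applied to the feature map at several spatial scales, so the parameter count does not grow with the number of scales. Block sizes and the residual fusion are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedScaleDerain(nn.Module):
    def __init__(self, ch=32, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.shared = nn.Sequential(               # one set of weights for every scale
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, feat):                       # feat: (N, C, H, W)
        h, w = feat.shape[-2:]
        outs = []
        for s in self.scales:
            x = F.avg_pool2d(feat, s) if s > 1 else feat
            x = self.shared(x)                     # same parameters at each scale
            if s > 1:
                x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
            outs.append(x)
        return feat + torch.stack(outs).mean(0)    # residual fusion of the scales

y = SharedScaleDerain()(torch.randn(1, 32, 64, 64))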


11.

Classification of remotely sensed hyperspectral images (HSI) is a challenging task due to the large number of spectral bands and the limited amount of available remotely sensed HSI data. Using 3D-CNN and 2D-CNN layers to extract spectral and spatial features gives good test results, and the recently introduced HybridSN model is the best to date compared with other state-of-the-art models for classification of remotely sensed hyperspectral images. However, the test performance of the HybridSN model decreases significantly as the training data or the number of training epochs decreases. In this paper, we apply cyclic learning rates to the training of the HybridSN model, which yields a significant increase in its test performance with 10%, 20%, and 30% training data and a limited number of training epochs. Further, we introduce a new cyclic function (ncf) whose training and test performance is comparable to existing cyclic learning rate policies. More precisely, with 10% training data and a limited number of training epochs, the proposed HybridSN(ncf) model has higher average accuracy than the HybridSN model by 19.47%, 1.81%, and 8.33% for the Indian Pines, Salinas Scene, and University of Pavia datasets, respectively.
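
The proposed cyclic function (ncf) is not given in the abstract, so the sketch below shows the general cyclic-learning-rate mechanism with PyTorch's built-in triangular policy, which is the kind of schedule an ncf variant would replace. The small linear model stands in for HybridSN.

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
sched = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=1e-4, max_lr=1e-2, step_size_up=200, mode="triangular")

for step in range(1000):
    opt.zero_grad()
    loss = model(torch.randn(16, 10)).pow(2).mean()
    loss.backward()
    opt.step()
    sched.step()          # learning rate oscillates between base_lr and max_lr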


12.
13.
Hyperspectral remote-sensing images have few training samples, high spectral dimensionality, and differences between spatial and spectral features, which lead to unreasonable feature extraction, unstable classification accuracy, and long training times in land-cover classification. To address these problems, a hyperspectral image (HSI) classification algorithm based on a 3D dense fully convolutional network (3D-DSFCN) is proposed. The algorithm extracts spectral and spatial features separately through 3D convolution kernels in dense modules, replaces the pooling and fully connected layers of traditional networks with a feature-mapping module, and finally performs classification with a softmax classifier. Experimental results show that the 3D-DSFCN-based HSI classification method improves land-cover classification accuracy and enhances the classification stability of low-frequency labels.
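
A minimal PyTorch sketch of a 3D densely connected convolutional block of the kind described above is given here; each layer takes the channel-wise concatenation of all preceding feature maps. The kernel sizes and growth rate are illustrative, not the 3D-DSFCN configuration.

import torch
import torch.nn as nn

class Dense3DBlock(nn.Module):
    def __init__(self, in_ch=1, growth=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
                nn.BatchNorm3d(growth), nn.ReLU(inplace=True)))
            ch += growth

    def forward(self, x):                 # x: (N, C, bands, H, W)
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connectivity
        return torch.cat(feats, dim=1)

# one 9x9 spatial patch with 100 spectral bands
out = Dense3DBlock()(torch.randn(2, 1, 100, 9, 9))   # -> (2, 1 + 3*8, 100, 9, 9)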

14.

Denoising of hyperspectral images (HSIs) is an important preprocessing step to enhance the performance of their analysis and interpretation. In reality, a remotely sensed HSI experiences disturbances from different sources and is therefore affected by multiple noise types. However, most existing denoising methods concentrate on the removal of a single noise type, ignoring their mixed effect, so a method developed for a particular noise type does not perform satisfactorily for others. To address this limitation, a denoising method is proposed here that effectively removes multiple frequently encountered noise patterns from HSI, including their combinations. The proposed dual-branch deep-neural-network-based architecture works on wavelet-transformed bands. The first branch of the network uses deep convolutional skip-connected layers with residual learning to extract local and global noise features. The second branch includes a layered autoencoder together with subpixel upsampling that performs repeated convolution in each layer to extract prominent noise features from the image. Two hyperspectral datasets are used in the experiments to evaluate the performance of the proposed method for denoising of Gaussian, stripe, and mixed noise. Experimental results demonstrate the superior performance of the proposed network compared with other state-of-the-art denoising methods, with PSNR 36.74, SSIM 0.97, and overall accuracy 94.03%.
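
A small sketch of the wavelet front end implied above: each spectral band is decomposed with a 2D DWT, and the resulting sub-bands are what the two network branches would operate on. The wavelet choice ('haar') and the data layout are assumptions.

import numpy as np
import pywt

def wavelet_subbands(hsi, wavelet="haar"):
    """hsi: (H, W, B) array -> dict of sub-band stacks, each (H/2, W/2, B)."""
    bands = [pywt.dwt2(hsi[:, :, b], wavelet) for b in range(hsi.shape[2])]
    cA = np.stack([b[0] for b in bands], axis=-1)
    cH, cV, cD = (np.stack([b[1][i] for b in bands], axis=-1) for i in range(3))
    return {"approx": cA, "horizontal": cH, "vertical": cV, "diagonal": cD}

subbands = wavelet_subbands(np.random.rand(64, 64, 31))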


15.
Ma  You  Liu  Zhi  Chen Chen  C. L. Philip 《Applied Intelligence》2022,52(3):2801-2812

Classification of hyperspectral images (HSIs) has recently attracted a great deal of attention due to its wide range of practical applications in numerous fields. Spatial-spectral fusion features are widely used in HSI classification to obtain better performance, but most existing methods fuse the spatial and spectral information by a simple linear addition with a combination hyper-parameter; a more suitable fusion approach is needed. To solve this problem, we propose a novel HSI classification approach based on hybrid spatial-spectral features in a broad learning system (HSFBLS). First, we employ an adaptive weighted mean filter to obtain the spatial feature. The weights of the spatial and spectral channels are computed in a hybrid module by two BLSs and united with a weighted linear function. Then we fuse the spectral-spatial features with a sparse autoencoder to obtain weighted fusion features, which serve as the feature nodes for classifying HSI data in the BLS. Through this two-stage fusion of spatial and spectral information, the classification accuracy increases compared with a simple combination. Very satisfactory classification results on typical HSI datasets demonstrate the effectiveness of the proposed HSFBLS. Moreover, HSFBLS also greatly reduces training time compared with time-consuming deep networks.


16.
In hyperspectral image (HSI) processing, the inclusion of both spectral and spatial features, e.g. morphological and shape features, has shown great success in the classification of hyperspectral data. Nevertheless, two main issues remain: (1) the multiple features are often treated equally, so the complementary information among them is neglected; and (2) the features are often degraded by a mixture of various kinds of noise, decreasing the classification accuracy. To address these issues, a novel robust discriminative multiple features extraction (RDMFE) method for HSI classification is proposed. RDMFE projects the multiple features into a common low-rank subspace, where the specific contributions of the different types of features are fully exploited. With the low-rank constraint, RDMFE is able to uncover the intrinsic low-dimensional subspace structure of the original data. To make the projected features more discriminative, we make the learned representations optimal for classification. With its intrinsic information-preserving and discrimination capabilities, the learned projection matrix works well in HSI classification tasks. Experimental results on three real hyperspectral datasets confirm the effectiveness of the proposed method.

17.
To address the shortcomings of traditional salient object detection methods in detecting multiple salient objects at different scales, a salient object detection algorithm with deep reuse of multi-scale features is proposed. The network consists of vertically stacked bidirectional dense feature aggregation modules and horizontally stacked multi-resolution semantic complementation modules. First, the bidirectional dense feature aggregation module extracts semantic features at different resolutions from a ResNet backbone; then adaptive fusion is performed along top-down and bottom-up paths in turn to obtain multi-scale representation features at different levels; finally, the multi-resolution semantic complementation module fuses the multi-scale features of two adjacent levels to eliminate crosstalk between features at different levels and enhance the consistency of the prediction. Experiments on five benchmark datasets show that the method achieves up to 0.939, 0.921, and 0.028 in Fmax, Sm, and MAE, respectively, with a detection speed of 74.6 fps, giving better detection performance than the compared algorithms.

18.
黄有达  周大可  杨欣 《计算机应用研究》2021,38(7):2175-2178,2187
To address the insufficient accuracy of 3D face reconstruction and dense alignment algorithms, a powerful network is designed by introducing a densely connected multi-scale feature fusion module and a residual attention mechanism. Before the encoder, the densely connected multi-scale feature fusion module produces multi-scale fused features so that the encoder receives richer information; in the decoder, the residual attention mechanism strengthens the network's focus on important features while suppressing unnecessary noise. Experimental results show that the algorithm achieves notable improvements over other algorithms; compared with PRNet, it obtains performance gains of 7.7%-12.1% on all metrics with fewer parameters.

19.

A fault detection and diagnosis (FDD) framework is one of the safety aspects important to the industrial sector for ensuring high-quality production and processes. However, developing an FDD system for chemical process systems is difficult because of, e.g., highly nonlinear correlations among the variables, highly complex processes, and an enormous number of sensors to be monitored. These issues have encouraged the development of various approaches to increase the effectiveness and robustness of the FDD framework, such as wavelet transform analysis, which has the advantage of extracting significant features in both the time and frequency domains. This has motivated us to propose an extension of the multi-scale KFDA method, modified through the implementation of Parseval's theorem and the application of the ANFIS method to improve fault classification performance. With Parseval's theorem, fault features can be observed via the energy spectrum and the quantity of DWT analysis data can be effectively reduced. The features extracted by the multi-scale KFDA method are used for fault diagnosis and classification, where multiple ANFIS models are developed for each designated fault pattern to increase classification accuracy and reduce the diagnosis error rate. The fault classification performance of the proposed framework has been evaluated using the benchmark Tennessee Eastman process. The results indicate that the proposed multi-scale KFDA-ANFIS framework improves classification accuracy to an average of 87.02%, compared with multi-scale PCA-ANFIS (78.90%) and FDA-ANFIS (70.80%).
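
The Parseval-based feature idea can be illustrated briefly: for an orthogonal wavelet, the signal energy equals the sum of squared DWT coefficients, so a per-scale energy distribution is a compact fault feature. The process signal below is synthetic, and the wavelet and level are assumptions.

import numpy as np
import pywt

def dwt_energy_spectrum(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])   # energy per scale (Parseval)
    return energies / energies.sum()                        # normalised energy spectrum

x = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
print(dwt_energy_spectrum(x))    # e.g. a feature vector fed to the ANFIS classifiers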


20.
Objective: Unsupervised hyperspectral unmixing algorithms based on nonnegative matrix factorization (NMF) generally suffer from objective functions that are sensitive to noise, and from poor endmember extraction and abundance estimation under low signal-to-noise ratios. A hyperspectral mixed-pixel decomposition algorithm based on robust NMF is therefore proposed. Method: Building on traditional NMF-based unmixing, the objective function is improved by using the more robust L1 norm as the reconstruction error term, increasing the algorithm's tolerance to noise and yielding a new unsupervised unmixing objective. Because the new objective is non-convex, gradient descent is used to alternately solve for the endmember matrix and the abundance matrix, completing the optimization and producing the endmember and abundance estimates. Result: The performance of the algorithm is analysed qualitatively and quantitatively on simulated and real hyperspectral data. On the simulated dataset, the proposed algorithm is compared with five representative unsupervised unmixing algorithms; relative to the best of the compared algorithms at a typical SNR of 20 dB, the spectral angle distance (SAD) changes by 10.5% and the signal-to-reconstruction error (SRE) by 9.3%. On the real dataset, the quality of endmember extraction is verified against spectral signatures from a spectral library, and the abundance estimates are qualitatively analysed against the true land-cover distribution. Conclusion: The proposed robust-NMF-based unsupervised hyperspectral unmixing algorithm achieves better endmember extraction and abundance estimation accuracy under low SNR, giving better unmixing results.
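
A hedged NumPy sketch of the robust unmixing idea is given below: minimise the L1 reconstruction error ||X - WH||_1 with nonnegative endmember matrix W and abundance matrix H by alternating projected (sub)gradient steps. The authors' exact update rules and abundance constraints are not reproduced.

import numpy as np

def l1_nmf(X, n_end, iters=500, lr=1e-3, rng=None):
    rng = rng or np.random.default_rng(0)
    W = rng.random((X.shape[0], n_end))        # bands x endmembers
    H = rng.random((n_end, X.shape[1]))        # endmembers x pixels
    for _ in range(iters):
        R = np.sign(X - W @ H)                 # subgradient of the L1 error term
        W = np.maximum(W + lr * (R @ H.T), 0)  # descent step (grad = -R @ H.T), then project
        H = np.maximum(H + lr * (W.T @ R), 0)
        H /= H.sum(axis=0, keepdims=True) + 1e-12   # simple sum-to-one normalisation
    return W, H

X = np.abs(np.random.default_rng(1).random((50, 400)))   # 50 bands, 400 pixels
W, H = l1_nmf(X, n_end=4)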

