Hyperspectral image (HSI) denoising is widely used to improve HSI quality. Learning-based HSI denoising methods have recently shown their effectiveness, but most are trained on synthetic datasets and generalize poorly to real test HSIs. Moreover, there is still no public paired real HSI denoising dataset for training denoising networks and quantitatively evaluating HSI denoising methods. In this paper, we focus on producing realistic datasets for learning and evaluating HSI denoising networks. On the one hand, we collect a paired real HSI denoising dataset consisting of short-exposure noisy HSIs and the corresponding long-exposure clean HSIs. On the other hand, we propose an accurate HSI noise model that matches the distribution of real data well and can be employed to synthesize realistic datasets. Based on the noise model, we present an approach to calibrate the noise parameters of a given hyperspectral camera. Furthermore, observing that the mean image over all spectral bands has a high signal-to-noise ratio, we propose a guided HSI denoising network with guided dynamic nonlocal attention, which computes dynamic nonlocal correlations on the guidance information, i.e., the mean image of the spectral bands, and adaptively aggregates spatial nonlocal features for all spectral bands. Extensive experimental results show that a network trained only on synthetic data generated by our noise model performs as well as one trained on paired real data, and that our guided HSI denoising network outperforms state-of-the-art methods in both quantitative metrics and visual quality.
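The abstract does not spell out the noise model's form, but a common physics-based choice for camera calibration is signal-dependent shot (Poisson) noise plus additive Gaussian read noise. The sketch below, under that assumption and with hypothetical parameter values for `K` and `sigma_read`, shows how such a model would synthesize noisy training data from a clean HSI, and why the band-mean image makes a high-SNR guidance signal:

```python
import numpy as np

def synthesize_noisy_hsi(clean, K=0.05, sigma_read=0.01, rng=None):
    """Add signal-dependent shot noise plus Gaussian read noise to a
    clean HSI (H x W x B, values in [0, 1]).

    K (system gain) and sigma_read are per-camera parameters that a
    calibration procedure would estimate; the values here are
    illustrative, not the paper's.
    """
    rng = np.random.default_rng(rng)
    # Shot noise: photon counts follow a Poisson distribution.
    photons = rng.poisson(clean / K)
    # Read noise: additive zero-mean Gaussian on the sensor readout.
    noisy = photons * K + rng.normal(0.0, sigma_read, clean.shape)
    return noisy.astype(np.float32)

# Averaging B independent noise realizations shrinks the noise std by
# roughly sqrt(B), which is why the band-mean image works as guidance.
clean = np.random.default_rng(0).random((8, 8, 31)).astype(np.float32)
noisy = synthesize_noisy_hsi(clean, rng=1)
guidance = noisy.mean(axis=2)  # band-mean image used as guidance
```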
Related articles

Camouflaged people, such as soldiers on the battlefield, and even camouflaged objects in natural environments are hard to detect because of the strong resemblance between the hidden target and the background, which makes seeing these hidden objects a challenging task. Due to the nature of hidden objects, identifying them requires a significant level of visual perception. To overcome this problem, we present a new end-to-end framework built on a multi-level attention network. We design a novel inception module that extracts multi-scale receptive-field features to enhance the feature representation. Furthermore, we use a dense feature pyramid to take advantage of multi-scale semantic features. Finally, to better locate and distinguish the camouflaged target from the background, we develop a multi-attention module that generates more discriminative feature representations and combines semantic information with spatial information from different levels. Experiments on the camouflaged people dataset show that our approach outperforms all state-of-the-art methods.
In hyperspectral image (HSI) analysis, high-dimensional data may contain noisy, irrelevant, and redundant information. Feature selection is one useful way to mitigate the negative effect of such information. Unsupervised feature selection is a data preprocessing technique for dimensionality reduction that selects a subset of informative features without using any label information. Unlike linear models, the autoencoder is formulated to select informative features nonlinearly. The adjacency matrix of the HSI can be constructed to capture the underlying relationships between data points, and the latent representation of the original data can be obtained via matrix factorization. Besides, a new feature representation can also be learned from the autoencoder. For the same data matrix, different feature representations should consistently share the same underlying information. Motivated by these observations, we propose a latent representation learning based autoencoder feature selection (LRLAFS) model, in which latent representation learning steers feature selection for the autoencoder. To solve the proposed model, we develop an alternating optimization algorithm. Experimental results on three HSI datasets confirm the effectiveness of the proposed model.
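To make the autoencoder-based selection idea concrete, here is a deliberately simplified sketch: a linear autoencoder is trained to reconstruct the data, and input features are ranked by the l2-norm of their rows in the encoder weight matrix. This is a toy stand-in for the LRLAFS objective (it omits the latent representation learning term and any sparsity regularization, and the function name is ours):

```python
import numpy as np

def select_features_by_weight_norm(X, k, hidden=4, lr=0.1, epochs=200, seed=0):
    """Toy linear autoencoder feature selection: train X ~= X @ W1 @ W2
    by gradient descent and rank input features by the l2-norm of
    their rows in the encoder W1. Features the encoder barely uses
    get small rows and are dropped."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.01, (d, hidden))   # encoder
    W2 = rng.normal(0, 0.01, (hidden, d))   # decoder
    for _ in range(epochs):
        H = X @ W1
        R = H @ W2 - X                       # reconstruction residual
        gW2 = H.T @ R / n                    # gradient w.r.t. decoder
        gW1 = X.T @ (R @ W2.T) / n           # gradient w.r.t. encoder
        W1 -= lr * gW1
        W2 -= lr * gW2
    scores = np.linalg.norm(W1, axis=1)      # per-feature importance
    return np.argsort(scores)[::-1][:k]
```

On data where only a couple of features carry variance, the ranking recovers them; the full LRLAFS model additionally couples this with a factorization-based latent representation, which this sketch does not attempt.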
Rainy images severely degrade visibility and invalidate many computer vision algorithms. Hence, it is necessary to remove rain streaks from a single image. In this paper, we propose a novel network for single-image de-raining with two modules: (a) a multi-scale kernels de-raining layer and (b) a multi-scale feature maps de-raining layer. Specifically, since spatial contextual information is important for single-image de-raining, we develop a multi-scale kernels de-raining layer that uses kernels with receptive fields of different sizes to capture contextual information; these features are fused to learn the primary rain-streak structures. Moreover, we show via statistical pixel histograms that feature maps at different scales exhibit similar rain-streak structures, so they can be processed by the same operation. We therefore handle rain-streak information at different scales using multi-scale kernels de-raining layers with shared parameters, an operation we call the multi-scale feature maps de-raining layer. Finally, we employ dense connections between multi-scale feature maps de-raining layers to maximize the information flow among features from different levels. Quantitative and qualitative experimental results demonstrate the superiority of the proposed method over several state-of-the-art de-raining methods, while the parameter count is greatly reduced thanks to the proposed parameter-sharing strategy across scales.
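The two-module design above can be illustrated with a minimal NumPy sketch: module (a) convolves one feature map with kernels of different sizes and fuses the responses, and module (b) applies the same layer, with the same shared kernels, to a downsampled copy of the feature map. This is an illustrative single-channel sketch, not the paper's trained network:

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_kernels_layer(feat, kernels):
    """(a) Convolve with kernels of different receptive-field sizes
    and fuse the responses (here: by averaging)."""
    return np.mean([conv2d(feat, k) for k in kernels], axis=0)

def multi_scale_feature_maps_layer(feat, kernels):
    """(b) Apply the SAME layer (shared `kernels`) to the feature map
    at full and half resolution, then fuse: this is the
    parameter-sharing-across-scales idea in miniature."""
    full = multi_scale_kernels_layer(feat, kernels)
    half = multi_scale_kernels_layer(feat[::2, ::2], kernels)
    up = np.kron(half, np.ones((2, 2)))[:feat.shape[0], :feat.shape[1]]
    return 0.5 * (full + up)

feat = np.random.default_rng(0).random((8, 8))
kernels = [np.full((3, 3), 1 / 9), np.full((5, 5), 1 / 25)]  # two scales
out = multi_scale_feature_maps_layer(feat, kernels)
```

Because the half-resolution path reuses `kernels` rather than introducing new ones, the parameter count stays constant as scales are added, which is the source of the parameter reduction claimed in the abstract.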
Classification of remotely sensed hyperspectral images (HSIs) is a challenging task due to the large number of spectral bands and the limited availability of labeled remotely sensed HSI data. Using 3D-CNN and 2D-CNN layers to extract spectral and spatial features yields good test results, and the recently introduced HybridSN model for the classification of remotely sensed hyperspectral images is the best to date compared with other state-of-the-art models. However, the test performance of HybridSN decreases significantly as the training data or the number of training epochs decreases. In this paper, we apply cyclic learning to the training of HybridSN, which significantly increases its test performance with 10%, 20%, and 30% training data and a limited number of training epochs. Further, we introduce a new cyclic function (ncf) whose training and test performance is comparable to existing cyclic learning rate policies. More precisely, with 10% training data and a limited number of training epochs, the proposed HybridSN(ncf) model achieves higher average accuracy than HybridSN by 19.47%, 1.81%, and 8.33% on the Indian Pines, Salinas Scene, and University of Pavia datasets, respectively.
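The abstract does not give the form of the proposed ncf policy, so as background here is the standard triangular cyclic learning rate schedule that such policies build on: the learning rate ramps linearly between a base and a maximum value over each cycle, instead of decaying monotonically.

```python
import math

def triangular_clr(step, base_lr=1e-4, max_lr=1e-2, step_size=100):
    """Triangular cyclic learning rate: ramp linearly from base_lr up
    to max_lr and back down over every 2 * step_size iterations.
    The paper's ncf is a different (new) cyclic function whose exact
    form the abstract does not state; this is the generic baseline.
    """
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)   # 1 at cycle ends, 0 mid-cycle
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

At `step = 0` the schedule returns `base_lr`, peaks at `max_lr` at `step = step_size`, and returns to `base_lr` at `step = 2 * step_size`, then repeats.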
Denoising of hyperspectral images (HSIs) is an important preprocessing step that enhances subsequent analysis and interpretation. In reality, a remotely sensed HSI experiences disturbances from different sources and is therefore affected by multiple noise types. However, most existing denoising methods concentrate on removing a single noise type, ignoring their mixed effect, so a method developed for a particular noise type does not perform satisfactorily on others. To address this limitation, we propose a denoising method that effectively removes multiple frequently encountered noise patterns from HSIs, including their combinations. The proposed dual-branch deep neural network operates on wavelet-transformed bands. The first branch uses deep convolutional skip-connected layers with residual learning to extract local and global noise features. The second branch includes a layered autoencoder together with subpixel upsampling that performs repeated convolutions in each layer to extract prominent noise features from the image. Two hyperspectral datasets are used in the experiments to evaluate the proposed method on Gaussian, stripe, and mixed noise. Experimental results demonstrate the superior performance of the proposed network compared with other state-of-the-art denoising methods, with a PSNR of 36.74, an SSIM of 0.97, and an overall accuracy of 94.03%.
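The network above consumes wavelet-transformed bands; to show what that input looks like, here is a single-level 2-D Haar decomposition of one spectral band into the four standard subbands. The wavelet family is an assumption on our part (the abstract only says "wavelet transformed bands"):

```python
import numpy as np

def haar_dwt2(band):
    """Single-level 2-D Haar DWT of one spectral band (even-sized
    H x W array), returning the LL, LH, HL, HH subbands at half
    resolution. LL holds the smooth content; the detail subbands
    concentrate stripe and high-frequency noise energy."""
    a = (band[0::2, :] + band[1::2, :]) / 2   # vertical average
    d = (band[0::2, :] - band[1::2, :]) / 2   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    HL = (a[:, 0::2] - a[:, 1::2]) / 2
    LH = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH
```

Applying this per band before the network separates noise into subbands where it is easier to attack; a constant region yields a constant LL and zero detail subbands, which the test below checks.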
Hyperspectral image (HSI) classification has attracted a great deal of attention recently due to its wide range of practical applications in numerous fields. Fused spatial-spectral features are widely used in HSI classification to improve performance, but existing methods mostly fuse the spatial and spectral information by a simple linear addition with a combined hyper-parameter; a more suitable fusion method is needed. To solve this problem, we propose a novel HSI classification approach based on hybrid spatial-spectral features in a broad learning system (HSFBLS). First, we employ an adaptive weighted mean filter to obtain spatial features. The weights of the spatial and spectral channels are then computed in a hybrid module by two BLSs and united with a weighted linear function. Next, we fuse the spatial-spectral features with a sparse autoencoder, using the weighted fusion features as the feature nodes to classify HSI data in the BLS. Through this two-stage fusion of spatial and spectral information, classification accuracy increases compared with a simple combination. Very satisfactory classification results on typical HSI datasets illustrate the effectiveness of the proposed HSFBLS. Moreover, HSFBLS also greatly reduces training time compared with time-consuming deep networks.
A fault detection and diagnosis (FDD) framework is an important safety aspect for the industrial sector, ensuring high-quality production and processes. However, developing an FDD system for chemical process systems is difficult: the correlations within the variables are highly nonlinear, the processes are highly complex, and an enormous number of sensors must be monitored. These issues have encouraged various approaches to increase the effectiveness and robustness of the FDD framework, such as wavelet transform analysis, which has the advantage of extracting significant features in both the time and frequency domains. This motivated us to propose an extension of the multi-scale KFDA method, modified with the implementation of Parseval's theorem and the application of the ANFIS method to improve fault classification performance. In this work, applying Parseval's theorem allows fault features to be observed via the energy spectrum and effectively reduces the quantity of DWT analysis data. The features extracted by the multi-scale KFDA method are used for fault diagnosis and classification, where multiple ANFIS models are developed, one per designated fault pattern, to increase classification accuracy and reduce the diagnosis error rate. The fault classification performance of the proposed framework is evaluated on the benchmark Tennessee Eastman process. The results indicate that the proposed multi-scale KFDA-ANFIS framework achieves an average classification accuracy of 87.02%, improving over multi-scale PCA-ANFIS (78.90%) and FDA-ANFIS (70.80%).
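The Parseval step can be sketched concretely: for an orthonormal DWT, Parseval's theorem guarantees that the energies of the subband coefficients sum exactly to the energy of the original signal, so a signal of any length collapses to a few per-subband energies without losing that invariant. The sketch below uses an orthonormal Haar DWT; the exact wavelet and feature set in the paper may differ:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal 1-D Haar DWT (even-length input).
    Orthonormality is what makes Parseval's theorem hold exactly."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def subband_energies(x, levels=3):
    """Multi-scale energy spectrum of a sensor signal: the detail
    energy at each level plus the final approximation energy. By
    Parseval's theorem these energies sum to sum(x**2), so they form
    a compact fault-feature vector (length levels + 1)."""
    energies = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))
    return np.array(energies)
```

This is the sense in which the energy spectrum both exposes fault signatures at specific scales and reduces the DWT data volume: a window of N samples becomes `levels + 1` numbers while preserving total energy.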