1.
In the field of hyperspectral image processing, anomaly detection (AD) is a thoroughly investigated task whose goal is to find objects in the image that are anomalous with respect to the background. In many operational scenarios, detection, classification and identification of anomalous spectral pixels have to be performed in real time to quickly furnish information for decision-making. In this framework, many studies concern the design of computationally efficient AD algorithms for hyperspectral images in order to assure real-time or nearly real-time processing. In this work, a sub-class of anomaly detection algorithms is considered, i.e., those aimed at detecting small rare objects that are anomalous with respect to their local background. Among such techniques, one of the most established is the Reed–Xiaoli (RX) algorithm, which is based on a local Gaussian assumption for the background clutter and estimates its parameters locally by means of the pixels inside a window around the pixel under test (PUT). In the literature, the RX decision rule has been employed to develop computationally efficient algorithms tested in real-time systems. Initially, a recursive block-based parameter estimation procedure was adopted, which makes the RX processing and the detection performance differ from those of the original RX. More recently, an update strategy has been proposed that relies on line-by-line processing without altering the RX detection statistic. In this work, the above-mentioned real-time-oriented RX techniques have been improved using a linear algebra-based strategy that efficiently updates the inverse covariance matrix, thus avoiding its computation and inversion for each pixel of the hyperspectral image. The proposed strategy is discussed in detail, pointing out the benefits it introduces in the two analyzed architectures in terms of the overall number of elementary operations required. The results show the benefits of the new strategy with respect to the original architectures.
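As an illustration of the kind of linear-algebra update strategy this abstract describes, the sketch below applies the Sherman–Morrison rank-one formula to refresh an inverse scatter matrix as the local RX window slides by one pixel. This is a toy reconstruction, not the paper's architecture: the window-mean update and covariance normalization are deliberately neglected, and all names and sizes are illustrative.

```python
import numpy as np

def sherman_morrison_update(S_inv, x, sign=+1.0):
    """Inverse of a rank-one modified matrix, (S + sign * x x^T)^{-1},
    via the Sherman-Morrison formula: no full re-inversion needed."""
    Sx = S_inv @ x
    return S_inv - sign * np.outer(Sx, Sx) / (1.0 + sign * (x @ Sx))

def rx_score(pixel, mean, S_inv):
    """RX statistic: Mahalanobis-like distance of a pixel from the local
    background (scaled here by the scatter rather than covariance matrix)."""
    d = pixel - mean
    return float(d @ S_inv @ d)

# Toy example: the local window slides by one pixel, so one background
# sample leaves and one enters; two rank-one updates replace a fresh
# matrix inversion. The mean update is neglected for brevity.
rng = np.random.default_rng(0)
bg = rng.normal(size=(100, 20))          # 100 background pixels, 20 bands
mean = bg.mean(axis=0)
Z = bg - mean
S_inv = np.linalg.inv(Z.T @ Z)           # inverse scatter matrix
x_new = rng.normal(size=20)
S_inv = sherman_morrison_update(S_inv, Z[0], sign=-1.0)          # downdate: leaving pixel
S_inv = sherman_morrison_update(S_inv, x_new - mean, sign=+1.0)  # update: entering pixel
print(rx_score(rng.normal(size=20), mean, S_inv))
```

Each slide of the window thus costs two matrix-vector products instead of a cubic-time inversion, which is the source of the per-pixel savings the abstract reports.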
2.
Journal of Real-Time Image Processing - Dynamic range compression has become an important function in modern digital video cameras, used to improve the visual quality of color images suffering from low...
3.
This paper presents a new near-lossless compression algorithm for hyperspectral images based on distributed source coding. The algorithm operates on blocks that have the same location and size in each band. Because the importance varies from block to block along the spectral orientation, an adaptive rate allocation algorithm that weights the energy of each block under the target rate constraints is introduced. A simple linear prediction model is employed to construct the side information of each block for Slepian–Wolf coding. The relationship between the quantization step size and the allocated rate of each block is determined under the condition of correct reconstruction with the side information at the Slepian–Wolf decoder. Slepian–Wolf coding is then performed on the quantized version of each block. Experimental results show that the performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms, making it appropriate for on-board compression.
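The coset-binning mechanism behind Slepian–Wolf coding can be illustrated compactly. The sketch below is a generic distributed-source-coding toy, not the paper's coder: `step` and `rate_bits` stand in for the quantization step size and allocated rate whose relationship the abstract discusses.

```python
import numpy as np

def dsc_encode(block, step, rate_bits):
    """Quantize a block, then keep only the coset index (the low bits of the
    bin index) -- the binning step at the heart of distributed source coding."""
    q = np.round(block / step).astype(np.int64)
    return q % (1 << rate_bits)

def dsc_decode(cosets, side_info, step, rate_bits):
    """Pick, within each sample's coset, the quantization bin closest to the
    side information (e.g., a linear prediction from neighboring bands)."""
    m = 1 << rate_bits
    q_side = np.round(side_info / step).astype(np.int64)
    delta = ((cosets - q_side + m // 2) % m) - m // 2   # nearest congruent offset
    return (q_side + delta) * step

# Reconstruction is correct while the side-information error stays below
# roughly (2**rate_bits) * step / 2: the step/rate relation the abstract
# refers to.
x = np.array([10.2, -3.7, 8.1])
side = x + np.array([0.4, -0.3, 0.2])                   # imperfect prediction
print(dsc_decode(dsc_encode(x, 0.5, 3), side, 0.5, 3))  # recovers x within step/2
```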
4.
In order to support immediate decision-making in critical circumstances such as military reconnaissance and disaster rescue, real-time onboard implementation of target detection is greatly desired. In this paper, a real-time thresholding method (RT-THRES) is proposed to obtain constant false alarm rate (CFAR) thresholds for target detection in real-time circumstances. RT-THRES utilizes a Gaussian mixture model (GMM) to track and fit the distribution of the target detector's outputs. The GMM is an extension of the Gaussian probability density function that can approximate any distribution smoothly. In this method, the GMM models the detector's output, and the detection threshold is then calculated to achieve CFAR detection. Conventional GMM parameter estimation by Expectation-Maximization (EM) requires all data samples in the dataset to be involved in the procedure, and the parameters must be re-estimated whenever new data samples become available. The GMM is therefore difficult to apply in real-time processing, where newly observed data samples arrive progressively. To improve the GMM's applicability in time-critical circumstances, an optimization strategy is proposed that introduces the incremental GMM (IGMM), which allows the GMM parameters to be estimated online and incrementally. Experiments on a real hyperspectral image and a synthetic dataset suggest that RT-THRES can track and model the detection outputs' distribution accurately, which ensures the accuracy of the computed CFAR thresholds. Moreover, with the optimization strategy the computational cost of RT-THRES remains relatively low.
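To make the CFAR thresholding concrete, here is a minimal sketch that fits a GMM to detector outputs and solves for the threshold at a given false alarm probability. It uses offline EM from scikit-learn rather than the paper's incremental IGMM, so it is an illustration of the idea only, with assumed component counts and search bounds.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def cfar_threshold(scores, pfa, n_components=3):
    """Fit a GMM to detector outputs and solve 1 - CDF(tau) = pfa for tau
    by bisection on the (monotone) mixture tail probability."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(scores.reshape(-1, 1))
    w = gmm.weights_
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_.ravel())

    def tail(tau):                      # P(score > tau) under the mixture
        return float(np.sum(w * norm.sf(tau, loc=mu, scale=sd)))

    lo, hi = float(scores.min()), float(scores.max() + 10 * sd.max())
    for _ in range(100):                # bisection on the decreasing tail
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tail(mid) > pfa else (lo, mid)
    return 0.5 * (lo + hi)

scores = np.abs(np.random.default_rng(0).normal(size=5000))  # stand-in detector outputs
print(cfar_threshold(scores, pfa=1e-3))
```

The incremental variant would update `w`, `mu`, and `sd` sample by sample instead of refitting, which is what keeps the method viable as new pixels stream in.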
5.
Band selection plays an important role in identifying the most useful and valuable information contained in the hyperspectral images for further data analysis such as classification, clustering, etc. Memetic algorithm (MA), among other metaheuristic search methods, has been shown to achieve competitive performances in solving the NP-hard band selection problem. In this paper, we propose a formal probabilistic memetic algorithm for band selection, which is able to adaptively control the degree of global exploration against local exploitation as the search progresses. To verify the effectiveness of the proposed probabilistic mechanism, empirical studies conducted on five well-known hyperspectral images against two recently proposed state-of-the-art MAs for band selection are presented.
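A bare-bones memetic search for band subsets might look like the sketch below: genetic recombination for global exploration plus an occasional one-swap hill climb for local exploitation. The fixed local-search probability `p_local` is a placeholder for the paper's adaptive probabilistic mechanism, and the fitness function is user supplied; everything here is an illustrative assumption.

```python
import numpy as np

def one_swap_hill_climb(subset, fitness, n_bands, rng, tries=10):
    """Local search: try replacing one band at a time, keep improvements."""
    best, best_f = subset, fitness(subset)
    for _ in range(tries):
        cand = best.copy()
        cand[rng.integers(len(cand))] = rng.integers(n_bands)
        if len(set(cand)) == len(cand) and fitness(cand) > best_f:
            best, best_f = cand, fitness(cand)
    return best

def memetic_band_selection(fitness, n_bands, k, pop_size=20, gens=50, p_local=0.3):
    """Tiny memetic algorithm over k-band subsets: elitist GA + local search."""
    rng = np.random.default_rng(0)
    pop = [rng.choice(n_bands, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.choice(len(elite), size=2, replace=False)
            union = np.union1d(elite[a], elite[b])       # crossover on the union
            child = rng.choice(union, size=k, replace=False)
            if rng.random() < p_local:                   # the "memetic" step
                child = one_swap_hill_climb(child, fitness, n_bands, rng)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical usage with a stand-in fitness (variance of selected bands):
data = np.random.default_rng(1).normal(size=(500, 100))  # pixels x bands
best = memetic_band_selection(lambda s: data[:, s].var(), n_bands=100, k=10)
print(sorted(best.tolist()))
```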
6.
The Journal of Supercomputing - The high computational cost of the superpixel segmentation algorithms for hyperspectral remote sensing images makes them ideal candidates for parallel computation....
7.
Hyperspectral images contain numerous wavelength channels, and processing them requires a high-performance computing platform. Target and anomaly detection in hyperspectral images has attracted attention because of its practicality in many real-time detection fields, although wider applicability is limited by computing conditions and low processing speed. Field programmable gate arrays (FPGAs) offer the possibility of on-board hyperspectral data processing with high speed, low power consumption, reconfigurability and radiation tolerance. In this paper, we develop a novel FPGA-based technique for an efficient real-time target detection algorithm in hyperspectral images. Collaborative representation-based detection (CRD) is an efficient target detection algorithm in hyperspectral imagery, directly based on the concept that target pixels can be approximately represented by the given target spectral signatures, while background pixels cannot. To achieve high processing speed on the FPGA platform, the CRD algorithm first reduces the dimensionality of the hyperspectral image. The Sherman–Morrison formula is utilized to calculate the matrix inversion and reduce the complexity of the overall CRD algorithm. The achieved results demonstrate that the proposed system obtains a shorter processing time for the CRD algorithm than a 3.40 GHz CPU.
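The collaborative representation at the core of CRD reduces to a ridge-regularized least-squares fit, sketched below. This is a simplified illustration, not the FPGA design: the dictionary, regularizer, and residual rule are generic stand-ins, and the Sherman–Morrison handling of the Gram-matrix inverse is only noted in a comment.

```python
import numpy as np

def crd_score(y, D, lam=1e-2):
    """Collaborative representation: approximate pixel y as a ridge-regularized
    combination of dictionary spectra D (bands x atoms); return the residual.
    On hardware, the inverse of G is maintained incrementally with the
    Sherman-Morrison formula instead of being recomputed per pixel."""
    G = D.T @ D + lam * np.eye(D.shape[1])
    alpha = np.linalg.solve(G, D.T @ y)
    return float(np.linalg.norm(y - D @ alpha))

# A pixel that matches the target signatures yields a small residual.
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 8))                 # 50 bands, 8 target spectra
target = D @ rng.random(8)                   # lies in the span of D
print(crd_score(target, D), crd_score(rng.normal(size=50), D))
```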
8.
The paper presents a multi-processor architecture for real-time and low-power image and video enhancement applications. Unlike other state-of-the-art parallel architectures, the proposed solution is composed of heterogeneous tiles. The tiles have computational and memory capabilities, support different algorithmic classes, and are connected by a novel Network-on-Chip (NoC) infrastructure. The proposed packet-switched data transfer scheme avoids communication bottlenecks when multiple tiles work concurrently. The functional performance of the NoC-based multi-processor architecture is assessed by presenting the results achieved when the platform is programmed to support different enhancement algorithms for still images or videos. The implementation complexity of the NoC-based multi-tile platform, integrated in 65 nm CMOS technology, is reported and discussed.
9.
The characteristics of SpecTIR and AVIRIS 16-bit hyperspectral images are analyzed, and the requirements for compression of such images are formulated. The aspects of using the hierarchical compression algorithm for hyperspectral image storage are studied. Spectral component approximation algorithms are considered that allow both an increased compression ratio and retrieval of particular components. Interpolation algorithms are considered, and a rank interpolator is proposed for hyperspectral image compression. Real 16-bit hyperspectral images are used in computational experiments to investigate the efficiency of the proposed algorithms. The best parameters of these algorithms are found experimentally, and general recommendations are given on how to tune the proposed hierarchical compression algorithm to hyperspectral image storage problems.
10.
This paper describes a new methodology to detect small anomalies in high-resolution hyperspectral imagery, which involves successively: (1) a multivariate statistical analysis (principal component analysis, PCA) of all spectral bands; (2) a geostatistical filtering of noise and regional background in the first principal components using factorial kriging; and finally (3) the computation of a local indicator of spatial autocorrelation to detect local clusters of high or low reflectance values and anomalies. The approach is illustrated using 1 m resolution data collected in and near northeastern Yellowstone National Park. Ground validation data for tarps and for disturbed soils on mine tailings demonstrate the ability of the filtering procedure to reduce the proportion of false alarms (i.e., pixels wrongly classified as target) and its robustness under low signal-to-noise ratios. In almost all scenarios, the proposed approach outperforms traditional anomaly detectors (i.e., the RX detector, which computes the Mahalanobis distance between the vector of spectral values and the vector of global means), and fewer false alarms are obtained when using a novel statistic S2 (average absolute deviation of p-values from 0.5 across all spectral bands) to summarize information across bands. Image degradation through addition of noise or reduction of spectral resolution tends to blur the detection of anomalies, increasing false alarms, in particular for the identification of the least pure pixels. Results from a mine tailings site demonstrate that the approach performs reasonably well for a highly complex landscape with multiple targets of various sizes and shapes. By leveraging both spectral and spatial information, the technique requires little or no input from the user, and hence can be readily automated.
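Steps (1) and (3) of this methodology can be sketched in a few lines: a PCA of the bands followed by a local Moran's I map on the first component. The geostatistical (factorial kriging) filtering of step (2) is omitted, and the 4-neighbour weighting is an assumption, so treat this as a schematic of the pipeline rather than the authors' procedure.

```python
import numpy as np

def local_morans_i(img, eps=1e-12):
    """Local indicator of spatial autocorrelation (local Moran's I) on a
    2-D band, using the mean of the 4-neighbourhood as the spatial lag."""
    z = (img - img.mean()) / (img.std() + eps)
    lag = np.zeros_like(z)
    lag[1:, :] += z[:-1, :]; lag[:-1, :] += z[1:, :]
    lag[:, 1:] += z[:, :-1]; lag[:, :-1] += z[:, 1:]
    counts = np.full(z.shape, 4.0)                      # border pixels have
    counts[0, :] -= 1; counts[-1, :] -= 1               # fewer neighbours
    counts[:, 0] -= 1; counts[:, -1] -= 1
    return z * (lag / counts)

# Assumed pipeline: PCA over bands, then Moran's I on the first component.
H, W, B = 64, 64, 30
cube = np.random.default_rng(1).normal(size=(H, W, B))
X = cube.reshape(-1, B) - cube.reshape(-1, B).mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = (X @ Vt[0]).reshape(H, W)
anomaly_map = local_morans_i(pc1)   # extreme values flag local clusters
```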
11.
A challenging problem in spectral unmixing is how to determine the number of endmembers in a given scene. One of the most popular ways to determine the number of endmembers is to estimate the virtual dimensionality (VD) of the hyperspectral image using the well-known Harsanyi–Farrand–Chang (HFC) method. Due to the complexity and high dimensionality of hyperspectral scenes, this task is computationally expensive. Reconfigurable field-programmable gate arrays (FPGAs) are promising platforms that allow hardware/software codesign and have the potential to provide powerful onboard computing capability and flexibility at the same time. In this paper, we present the first FPGA design for the HFC-VD algorithm. The proposed method has been implemented on a Virtex-7 XC7VX690T FPGA and tested using real hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer over the Cuprite mining district in Nevada and the World Trade Center in New York. Experimental results demonstrate that our hardware version of the HFC-VD algorithm significantly outperforms an equivalent software version, which makes our reconfigurable system appealing for onboard hyperspectral data processing. Most importantly, our implementation exhibits real-time performance with regard to the time the hyperspectral instrument takes to collect the image data.
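For readers unfamiliar with the HFC virtual-dimensionality test, a minimal software sketch follows: eigenvalues of the correlation (second-moment) and covariance matrices are compared band by band against a Neyman–Pearson threshold. The asymptotic variance term is the standard HFC approximation; this shows the algorithmic idea, not the FPGA implementation.

```python
import numpy as np
from scipy.stats import norm

def hfc_vd(X, pfa=1e-3):
    """HFC virtual dimensionality: a signal source is declared wherever the
    correlation-matrix eigenvalue significantly exceeds the covariance-matrix
    eigenvalue at false-alarm probability pfa."""
    N, L = X.shape                       # N pixels, L bands
    R = (X.T @ X) / N                    # correlation (second-moment) matrix
    K = np.cov(X, rowvar=False)          # covariance matrix
    lr = np.sort(np.linalg.eigvalsh(R))[::-1]
    lk = np.sort(np.linalg.eigvalsh(K))[::-1]
    sigma = np.sqrt(2.0 * (lr**2 + lk**2) / N)   # HFC asymptotic approximation
    tau = norm.isf(pfa) * sigma                  # per-band detection threshold
    return int(np.sum(lr - lk > tau))

# Toy scene: a few hypothetical endmembers mixed with noise.
rng = np.random.default_rng(0)
N, L, P = 5000, 30, 3
E = rng.normal(size=(P, L))              # endmember spectra (illustrative)
A = rng.random(size=(N, P))              # abundances
X = A @ E + 0.1 * rng.normal(size=(N, L))
print(hfc_vd(X))                         # estimate of the number of signal sources
```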
12.
Multimedia Tools and Applications - With the advancement in technology, hyperspectral images have potential applications in the field of remote sensing due to their high spectral resolution....
13.
Multimedia Tools and Applications - A new supervised feature extraction method appropriate for small sample size situations is proposed in this work. The proposed method is based on the first-order...
14.
An embedded, wavelet-based three-dimensional set-partitioned block coding algorithm is proposed for hyperspectral image compression. A 3-D wavelet transform removes the spatial and inter-band spectral redundancy of the hyperspectral image simultaneously. For the wavelet coefficients in the transform domain, the set-partitioning embedded block coder (SPECK) is extended to three dimensions, yielding a 3-D SPECK algorithm that quantizes and encodes the coefficients. Experiments show that 3-D SPECK achieves good rate-distortion performance, compresses better than applying SPECK to each band image separately, and retains low computational complexity and the embedded property.
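The 3-D transform stage is easy to sketch with PyWavelets (assumed installed); the set-partitioning bit-plane coder itself is far longer and is omitted here. The wavelet choice and decomposition depth below are assumptions, and only the first significance pass of an embedded coder is hinted at.

```python
import numpy as np
import pywt  # PyWavelets

# 3-D wavelet decomposition of a hyperspectral cube (bands x rows x cols):
# the first stage of a 3-D SPECK coder, removing spatial and spectral
# redundancy in one transform.
cube = np.random.default_rng(0).normal(size=(32, 64, 64)).astype(np.float32)
coeffs = pywt.wavedecn(cube, wavelet='bior4.4', level=3)

# An embedded coder would now scan coefficient magnitudes bit plane by
# bit plane; here is just the top-bit-plane significance test.
arr, slices = pywt.coeffs_to_array(coeffs)
top_bitplane = int(np.floor(np.log2(np.abs(arr).max())))
significant = np.abs(arr) >= 2.0 ** top_bitplane
print(significant.sum(), "coefficients significant at the top bit plane")
```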
15.
Multimedia Tools and Applications - With the fast growing technologies in the field of remote sensing, hyperspectral image analysis has made a great breakthrough. It provides accurate and detailed...
16.
Hyperspectral cameras provide useful discriminants for human face recognition that cannot be obtained by other imaging methods. We examine the utility of using near-infrared hyperspectral images for the recognition of faces over a database of 200 subjects. The hyperspectral images were collected using a CCD camera equipped with a liquid crystal tunable filter to provide 31 bands over the near-infrared (0.7 μm–1.0 μm). Spectral measurements over the near-infrared allow the sensing of subsurface tissue structure which is significantly different from person to person, but relatively stable over time. The local spectral properties of human tissue are nearly invariant to face orientation and expression which allows hyperspectral discriminants to be used for recognition over a large range of poses and expressions. We describe a face recognition algorithm that exploits spectral measurements for multiple facial tissue types. We demonstrate experimentally that this algorithm can be used to recognize faces over time in the presence of changes in facial pose and expression.
17.
The relationships among the components of the MATLAB Parallel Computing Toolbox are analyzed, and a parallel computing cluster is built in a Windows environment. A sliding-neighborhood operation is used for contrast enhancement of sonar images. Data-parallel programming in MATLAB is introduced in detail, and a parallel image-enhancement algorithm for the cluster environment is designed using distributed arrays. Experimental results show that MATLAB's powerful built-in functions make parallel computing easy to implement and effectively improve the real-time performance of image processing.
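A sliding-neighborhood contrast enhancement of the kind described reduces, in serial form, to a local-mean stretch. The sketch below is a Python analogue (the original work uses MATLAB distributed arrays); window size and gain are illustrative, and splitting image rows across workers is how the data-parallel version would divide the work.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_enhance(img, size=7, gain=1.5):
    """Sliding-neighbourhood contrast enhancement: amplify each pixel's
    deviation from its local mean (an unsharp-masking style stretch)."""
    local_mean = uniform_filter(img.astype(np.float64), size=size)
    return np.clip(local_mean + gain * (img - local_mean), 0.0, 255.0)

# In a data-parallel setting, disjoint row blocks (plus a halo of size//2
# rows) would be processed on separate workers and reassembled.
img = np.random.default_rng(0).uniform(0, 255, size=(128, 128))
out = local_contrast_enhance(img)
print(out.shape, out.min(), out.max())
```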
18.
Hyperspectral imagery affords researchers all the discriminating details needed for fine delineation of many material classes. This delineation is essential for scientific research ranging from geologic to environmental impact studies. In a data mining scenario, one cannot blindly discard information because it can destroy discovery potential. In a supervised classification scenario, however, the preselection of classes presents an opportunity to extract a reduced set of meaningful features without degrading classification performance. Given the complex correlations found in hyperspectral data and the potentially large number of classes, meaningful feature extraction is a difficult task. We turn to the recent neural paradigm of generalized relevance learning vector quantization (GRLVQ) [B. Hammer and T. Villmann, Neural Networks, vol. 15, pp. 1059–1068, 2002], which is based on, and substantially extends, learning vector quantization (LVQ) [T. Kohonen, Self-Organizing Maps, Berlin, Germany: Springer-Verlag, 2001] by learning relevant input dimensions while incorporating classification accuracy in the cost function. By addressing deficiencies in GRLVQ, we produce an improved version, GRLVQI, which is an effective analysis tool for high-dimensional data such as remotely sensed hyperspectral data. With an independent classifier, we show that the spectral features deemed relevant by our improved GRLVQI result in better classification for a predefined set of surface materials than using all available spectral channels.
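The core GRLVQ step, which moves the closest correct and incorrect prototypes while adapting per-dimension relevance weights, can be sketched as follows. This uses the basic GRLVQ cost mu = (d+ - d-)/(d+ + d-) without the sigmoid scaling or any of the GRLVQI refinements, so the learning rates and relevance normalization are assumptions.

```python
import numpy as np

def grlvq_step(x, y, protos, labels, lam, lr_w=0.05, lr_l=0.005):
    """One GRLVQ update for sample (x, y): relevance-weighted distances,
    attract/repel the best matching prototypes, adapt relevances lam."""
    d = np.sum(lam * (protos - x) ** 2, axis=1)         # weighted squared distances
    j_pos = int(np.where(labels == y, d, np.inf).argmin())  # closest, same class
    j_neg = int(np.where(labels != y, d, np.inf).argmin())  # closest, other class
    dp, dn = d[j_pos], d[j_neg]
    s = (dp + dn) ** 2
    dpos, dneg = x - protos[j_pos], x - protos[j_neg]
    # gradient descent on mu = (dp - dn) / (dp + dn)
    protos[j_pos] += lr_w * (4 * dn / s) * lam * dpos   # attract correct prototype
    protos[j_neg] -= lr_w * (4 * dp / s) * lam * dneg   # repel incorrect prototype
    lam -= lr_l * (2 * dn / s * dpos ** 2 - 2 * dp / s * dneg ** 2)
    np.clip(lam, 0.0, None, out=lam)
    lam /= lam.sum()                                    # relevance profile sums to 1
    return protos, lam

rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 16)); labels = np.array([0, 0, 1, 1])
lam = np.full(16, 1.0 / 16)
protos, lam = grlvq_step(rng.normal(size=16), 1, protos, labels, lam)
```

After training, the learned `lam` vector is what ranks spectral bands by relevance, which is the feature-selection role the abstract describes.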
19.
The compressed sensing (CS) theory is a novel sampling approach that breaks through the conventional Nyquist sampling limit and has brought a revolution to the field of signal processing. This article investigates compression techniques for CS hyperspectral images so as to illustrate the superiority provided by this new theory. First, several comparative experiments reveal that the drawback of prior compression techniques, designed for data acquired by conventional hyperspectral imaging systems, is either a low compression ratio or a waste of sampling resources. After a condensed analysis, we state that the CS theory offers the possibility of avoiding such defects. Then a straightforward scheme, which takes advantage of spectral correlation, is proposed to compress the CS hyperspectral images and reduce the data size further. Moreover, a flexible recovery strategy is designed to speed up the reconstruction of the original bands from the corresponding CS images. Experimental results based on actual hyperspectral images demonstrate the efficiency of the proposed technique.
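The measurement-and-recovery loop with which CS replaces Nyquist sampling can be shown in a generic sketch: random projections followed by a greedy sparse solver (orthogonal matching pursuit here). This illustrates the CS mechanism only; the article's spectral-correlation compression scheme and flexible recovery strategy are not reproduced.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse signal x
    from compressive measurements y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                        # signal length, measurements, sparsity
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
x_hat = omp(Phi, Phi @ x, k)                # recover from m << n measurements
print(np.allclose(x, x_hat, atol=1e-8))
```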
20.
In this article we present new lossless compression methods, built by combining existing methods, and compare them on AVIRIS images. These methods include the Self-Organizing Map (SOM), Principal Component Analysis (PCA), and the three-dimensional wavelet transform, combined with traditional lossless encoding methods. The two-dimensional JPEG2000 and SPIHT compression methods were applied to the eigenimages produced by the PCA. The bit allocation for the compression of the eigenimages was based on the amount of information in each eigenimage. In the bit-rate calculation we used an exponential entropy formula, which gave better results than the original linear version. The information loss from the compression was measured by the Signal-to-Noise Ratio (SNR) and the Peak Signal-to-Noise Ratio (PSNR). To obtain more illustrative and practical error measures, classification of spectra was performed using unsupervised K-means clustering combined with spectral matching. The spectral matching methods include the Euclidean distance, the Spectral Similarity Value (SSV), and the Spectral Angle Mapper (SAM). We used two test images, both AVIRIS images with 224 bands, 512 lines, and 614 columns. PCA in the spectral dimension combined with JPEG2000 or SPIHT in the spatial dimension was the best method in terms of image quality and compression speed.
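Two of the error measures used here are short enough to show directly; the sketch below gives PSNR and the Spectral Angle Mapper following their standard definitions. The peak-value handling is an assumption (the data maximum is used when no peak is supplied).

```python
import numpy as np

def psnr(orig, recon, peak=None):
    """Peak signal-to-noise ratio in dB between original and reconstruction."""
    peak = float(orig.max()) if peak is None else peak
    mse = np.mean((orig.astype(np.float64) - recon.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def spectral_angle(a, b, eps=1e-12):
    """Spectral Angle Mapper: angle (radians) between two spectra,
    invariant to illumination scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

a = np.array([0.2, 0.4, 0.6])
print(spectral_angle(a, 3.0 * a))   # ~0: SAM ignores a global scale factor
```

The scale invariance of SAM is exactly why it complements PSNR for spectra: a compression error that uniformly brightens a pixel hurts PSNR but leaves its spectral angle, and hence its K-means class assignment, largely intact.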