Similar Documents
Found 20 similar documents.
1.
李旭  林伟  史彩云  温金环 《计算机应用》2010,30(5):1415-1417
To address problems in polarimetric SAR image classification, a two-dimensional spectral clustering method based on the polarimetric features of SAR targets is proposed. The method fully exploits the polarimetric similarity between targets, using two-dimensional spectral clustering to classify polarimetric SAR images. It takes the polarimetric similarity-parameter images of two target scatterers as input features and replaces the one-dimensional graph weight function with a two-dimensional one when computing edge weights, so that the classification of sample points agrees with that of the feature vectors, thereby achieving polarimetric SAR image classification. Experimental results show that the method produces better classification results and clearly outperforms K-means classification.
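The core machinery here, spectral clustering on a pairwise-similarity graph, can be sketched as follows. This is a generic two-way spectral cut on toy feature vectors; the Gaussian affinity is an assumed stand-in for the paper's polarimetric similarity parameter and 2-D weight function.

```python
import numpy as np

def spectral_bipartition(X, sigma=0.5):
    # Pairwise Gaussian affinity (stand-in for the polarimetric
    # similarity parameter used in the abstract).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    D = W.sum(axis=1)
    # Symmetric normalized Laplacian L = I - D^(-1/2) W D^(-1/2).
    Dinv = 1.0 / np.sqrt(D)
    L = np.eye(len(X)) - Dinv[:, None] * W * Dinv[None, :]
    vals, vecs = np.linalg.eigh(L)
    # The sign pattern of the second-smallest eigenvector (Fiedler
    # vector) gives a two-way cut of the similarity graph.
    return (vecs[:, 1] > 0).astype(int)
```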

2.
Polarization synthesis is an important technique in polarimetric SAR image processing. After imaging, it uses the measured Sinclair matrix to regenerate the radar received-power image for any polarization state; by choosing identical or orthogonal transmit/receive antenna polarizations, one obtains the co-polarized and cross-polarized signature plots that describe the target's scattering behavior. From polarization-synthesis theory and the concept of the polarization signature, the target's optimal polarization can be derived. Taking this as the classifier's input feature, a target classification algorithm based on polarization synthesis is proposed and evaluated on measured polarimetric SAR data. The results show that the algorithm is feasible and effective for extracting a target's optimal polarization from polarimetric SAR data and then classifying the target.
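The received power for an arbitrary transmit/receive polarization pair follows directly from the Sinclair matrix as P = |h_r^T S h_t|^2. A minimal sketch; the Jones-vector parameterization by orientation angle psi and ellipticity angle tau is a standard convention, not taken from this abstract:

```python
import math

def jones(psi, tau):
    # Unit polarization (Jones) vector from orientation angle psi and
    # ellipticity angle tau, in radians.
    return (complex(math.cos(psi) * math.cos(tau), -math.sin(psi) * math.sin(tau)),
            complex(math.sin(psi) * math.cos(tau),  math.cos(psi) * math.sin(tau)))

def received_power(S, h_t, h_r):
    # P = |h_r^T S h_t|^2, with S the 2x2 Sinclair (scattering) matrix
    # given as a nested tuple.
    v0 = S[0][0] * h_t[0] + S[0][1] * h_t[1]
    v1 = S[1][0] * h_t[0] + S[1][1] * h_t[1]
    return abs(h_r[0] * v0 + h_r[1] * v1) ** 2
```

Sweeping psi and tau with h_r = h_t traces the co-polarized signature; an orthogonal h_r traces the cross-polarized one.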

3.
On-line signature verification
We describe a method for on-line handwritten signature verification. The signatures are acquired using a digitizing tablet which captures both dynamic and spatial information of the writing. After preprocessing the signature, several features are extracted. The authenticity of a writer is determined by comparing an input signature to a stored reference set (template) consisting of three signatures. The similarity between an input signature and the reference set is computed using string matching and the similarity value is compared to a threshold. Several approaches for obtaining the optimal threshold value from the reference set are investigated. The best result yields a false reject rate of 2.8% and a false accept rate of 1.6%. Experiments on a database containing a total of 1232 signatures of 102 individuals show that writer-dependent thresholds yield better results than using a common threshold.
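The string matching in the abstract operates on encoded stroke sequences; a minimal stand-in uses Levenshtein distance against a three-signature reference set with a normalized-distance threshold. The feature encoding and the threshold value here are illustrative assumptions, not the paper's.

```python
def edit_distance(a, b):
    # Levenshtein distance via dynamic programming (one rolling row).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def verify(candidate, references, threshold=0.2):
    # Accept if the best normalized distance to the reference set falls
    # below the threshold; choosing that threshold is the paper's focus.
    d = min(edit_distance(candidate, r) for r in references)
    return d / max(len(candidate), 1) <= threshold
```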

4.
The insights gained from present land cover classification activities suggest integration of multiangle data into classification attempts for future progress. Land cover types that exhibit distinct signatures in the space of remote sensing data facilitate unambiguous identification of cover types. In this first part, we develop a theme for consistency between cover type definitions, uniqueness of their signatures, and physics of the remote sensing data. The idea of angular signatures in spectral space is proposed to provide a cogent synthesis of information from the spectral and angular domains. Three new metrics, the angular signature slope index (ASSI), length index (ASLI), and intercept index, are introduced to characterize biome signatures. The statistical analyses with these indices confirm the idea that incorporation of the directional variable should improve biome classification results. The consistency principle is tested with the Multiangle Imaging SpectroRadiometer (MISR) leaf area index (LAI) algorithm by examining retrievals when both unique and nonunique signatures are input together with a land cover map. It is shown that this requirement guarantees valid retrievals. Part II provides a theoretical basis for these concepts [Zhang et al., Remote Sens. Environ., in press].

5.
6.
The main goal of this paper is to propose an innovative technique for anomaly detection in hyperspectral imagery. This technique allows anomalies to be identified whose signatures are spectrally distinct from their surroundings, without any a priori knowledge of the target spectral signature. It is based on a one-dimensional projection pursuit with the Legendre index as the measure of interest. The index optimization is performed with simulated annealing over a simplex in order to bypass local optima, which could be sub-optimal in certain cases. It is argued that the proposed technique can be considered as seeking a projection that departs from the normal distribution, unfolding the outliers as a consequence. The algorithm is tested with AHS and HYDICE hyperspectral imagery, where the results show the benefits of the approach in detecting a great variety of objects whose spectral signatures deviate sufficiently from the background. The technique proves to be automatic in the sense that there is no need for parameter tuning, giving meaningful results in all cases. Even objects of sub-pixel size, which cannot be made out by the naked eye in the original image, can be detected as anomalies. Furthermore, a comparison between the proposed approach and the popular RX technique is given. The former outperforms the latter, demonstrating its ability to reduce the proportion of false alarms.

7.
Using the stochastic resonance (SR) mechanism, adding noise to a nonlinear system transmitting a correlated signal can enhance the signal at the output. This paper proposes an image denoising algorithm based on stochastic resonance in a dynamic saturating nonlinear system. The image is first resampled into a one-dimensional signal, and the parameters of the dynamic saturating nonlinear system are tuned to their optimum so that the system produces stochastic resonance. Compared with one-dimensional SR, the image restored by two-dimensional SR is closer to the original, and both the histogram and the peak signal-to-noise ratio (PSNR) of the output image are clearly better. Compared with conventional filtering methods, the saturating system denoises better and is more robust to changes in noise intensity.
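A common concrete model for SR, used here as a stand-in since the abstract does not specify the saturating nonlinearity, is the bistable system dx/dt = ax - bx^3 + s(t) + noise, integrated with an Euler-Maruyama step:

```python
import math
import random

def bistable_sr(signal, noise_std, a=1.0, b=1.0, dt=0.01, seed=0):
    # Euler-Maruyama integration of dx/dt = a*x - b*x**3 + s(t) + noise.
    # This classic bistable SR model is an assumed stand-in for the
    # paper's dynamic saturating nonlinear system.
    rng = random.Random(seed)
    x, out = 0.0, []
    for s in signal:
        noise = rng.gauss(0.0, noise_std)
        x += dt * (a * x - b * x ** 3 + s) + math.sqrt(dt) * noise
        out.append(x)
    return out
```

In an SR denoising setup, (a, b) and the noise level are tuned so the weak input signal is amplified at the output rather than drowned.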

8.
In this paper, we propose a new method of extracting affine invariant texture signatures for content-based affine invariant image retrieval (CBAIR). The algorithm discussed in this paper exploits the spectral signatures of texture images. Based on spectral representation of affine transform, anisotropic scale invariant signatures of orientation spectrum distributions are extracted. Peaks distribution vector (PDV) obtained from signature distributions captures texture properties invariant to affine transform. The PDV is used to measure the similarity between textures. Extensive experimental results are included to demonstrate the performance of the method in texture classification and CBAIR.

9.
Magnetic sensors can be applied in vehicle recognition. Most of the existing vehicle recognition algorithms use one sensor node to measure a vehicle's signature. However, vehicle speed variation and environmental disturbances usually cause errors during such a process. In this paper we propose a method using multiple sensor nodes to accomplish vehicle recognition. Based on the matching result of one vehicle's signature obtained by different nodes, this method determines vehicle status and corrects signature segmentation. The correlation between signatures is also obtained, and the time offset is corrected using this correlation. The corrected signatures are fused via maximum likelihood estimation, so as to obtain more accurate vehicle signatures. Examples show that the proposed algorithm can provide input parameters with higher accuracy. It improves the average accuracy of vehicle recognition from 94.0% to 96.1%, and in particular the bus recognition accuracy from 77.6% to 92.8%.
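Under an independent-Gaussian noise model, maximum-likelihood fusion of time-aligned signatures reduces to per-sample inverse-variance weighting. A sketch under that assumption; the paper also handles alignment and segmentation, and the per-node variances here are assumed known:

```python
def fuse_signatures(sigs, variances):
    # ML fusion of aligned signatures under independent Gaussian noise:
    # each sample is the inverse-variance weighted mean across nodes.
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    n = len(sigs[0])
    return [sum(w * s[i] for w, s in zip(weights, sigs)) / total
            for i in range(n)]
```

Nodes with noisier measurements (larger variance) thus contribute less to the fused signature.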

10.
This paper proposes a hybrid opto-electronic method for the fast automatic verification of handwritten signatures. This method combines several statistical classifiers and consists of three steps. The first step aims to transform the original signatures using the identity and four Gabor transforms. For each image transform, the second step is to intercorrelate the analysed signature with the similarly transformed signatures of the learning database. Finally, the third step performs the verification of the authenticity of signatures by fusing the decisions related to each transform. Image transforms and intercorrelations can be computed in real time using a high-speed optical correlator. The different decisions and their fusion are then digitally performed. The opto-electronic implementation of the proposed method has been simulated on a large database, taking into account the specific constraints of the optical implementation. Satisfactory results have been obtained. Indeed, the proposed system allows the rejection of 62.4% of the forgeries used for the experiments when 99% of genuine signatures are correctly recognized. Received: 19 November 2002, Accepted: 15 June 2004, Published online: 12 August 2004. Correspondence to: J.-B. Fasquel

11.
Most existing time-domain modal parameter identification methods suffer from difficult model-order determination and poor noise robustness. An unsupervised convolutional neural network (CNN) method for modal identification from vibration signals is therefore proposed. The algorithm modifies the standard CNN as follows. First, the network designed for two-dimensional image processing is converted to process one-dimensional signals: the input layer takes the set of vibration signals whose modal parameters are to be extracted, the middle layers are replaced by several one-dimensional convolution and pooling layers, and the output layer produces the signal's N-order modal parameter set. Then, for error evaluation, the vibration signal is reconstructed from the network's output (the N-order modal parameter set). Finally, the sum of squared differences between the reconstructed and input signals serves as the network's training error, making the network an unsupervised learner and avoiding the order-determination problem of modal parameter extraction. Experimental results show that, under noise, the constructed CNN identifies modal parameters more accurately than the stochastic subspace identification (SSI) algorithm and the locally linear embedding (LLE) algorithm, offering strong noise resistance while avoiding the order-determination problem.

12.
Domain decomposition by nested dissection for concurrent factorization and storage (CFS) of asymmetric matrices is coupled with finite element and spectral element discretizations and with Newton's method to yield an algorithm for parallel solution of nonlinear initial- and boundary-value problems. The efficiency of the CFS algorithm implemented on a MIMD computer is demonstrated by analysis of the solution of the two-dimensional Poisson equation discretized using both finite and spectral elements. Computation rates and speedups for the LU-decomposition algorithm, which is the most time-consuming portion of the solution algorithm, scale with the number of processors. The spectral element discretization with high-order interpolating polynomials yields especially high speedups because the ratio of communication to computation is lower than for low-order finite element discretizations. The robustness of the parallel implementation of the finite-element/Newton algorithm is demonstrated by solution of steady and transient natural convection in a two-dimensional cavity, a standard test problem for low Prandtl number convection. Time integration is performed using a fully implicit algorithm with a modified Newton's method for solution of the nonlinear equations at each time step. The efficiency of the CFS version of the finite-element/Newton algorithm compares well with a spectral element algorithm implemented on a MIMD computer using iterative matrix methods. Submitted to J. Scientific Computing, August 25, 1994.

13.
This paper investigates the current automatic methods used to generate efficient and accurate signatures as countermeasures against attacks by polymorphic worms. These strategies include Autograph, Polygraph, and Simplified Regular Expression (SRE). They rely on network-based signature detection and on filtering network traffic content, as the signatures generated by these methods can be read by intrusion prevention systems and firewalls. In this paper, we also present the architecture and evaluation of each method, and the implementation of the patterns used by the SRE mechanism to extract accurate signatures. That implementation was originally based on the Needleman–Wunsch algorithm, which proved inadequate for handling the invariant parts and distance restrictions of polymorphic worms. Consequently, an Enhanced Contiguous Substring Rewarded (ECSR) algorithm is developed to improve the extraction results of the Needleman–Wunsch algorithm and generate accurate signatures. Signature generation by SRE is found to be more accurate and efficient, as it preserves all the important features of polymorphic worms. The evaluation results show that signatures built as conjunctions of tokens or token subsequences can lose vital information, for example by ignoring a one-byte token or neglecting distance restrictions, and that SRE signatures are more accurate than those of the Autograph and Polygraph methods.
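The Needleman–Wunsch global alignment that ECSR builds on can be sketched as a standard dynamic program; the scoring values below are illustrative, not taken from the paper:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # Global alignment score of sequences a and b via the classic
    # Needleman-Wunsch dynamic program.
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # match/mismatch
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]
```

Applied to worm payload variants, shared invariant substrings align with high score while polymorphic padding aligns as gaps and mismatches.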

14.
Hyperspectral imaging, which records a detailed spectrum of light arriving in each pixel, has many potential uses in remote sensing as well as other application areas. Practical applications will typically require real-time processing of large data volumes recorded by a hyperspectral imager. This paper investigates the use of graphics processing units (GPU) for such real-time processing. In particular, the paper studies a hyperspectral anomaly detection algorithm based on normal mixture modelling of the background spectral distribution, a computationally demanding task relevant to military target detection and numerous other applications. The algorithm parts are analysed with respect to complexity and potential for parallelization. The computationally dominating parts are implemented on an Nvidia GeForce 8800 GPU using the Compute Unified Device Architecture programming interface. GPU computing performance is compared to a multi-core central processing unit implementation. Overall, the GPU implementation runs significantly faster, particularly for highly data-parallelizable and arithmetically intensive algorithm parts. For the parts related to covariance computation, the speed gain is less pronounced, probably due to a smaller ratio of arithmetic to memory access. Detection results on an actual data set demonstrate that the total speedup provided by the GPU is sufficient to enable real-time anomaly detection with normal mixture models even for an airborne hyperspectral imager with high spatial and spectral resolution.
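The arithmetic core of such background-model anomaly detectors is the Mahalanobis distance of each pixel spectrum to the background statistics. A single-Gaussian CPU sketch; the paper uses a normal mixture and a GPU implementation, and one component is an assumption made here to keep the sketch short:

```python
import numpy as np

def mahalanobis_anomaly(pixels, threshold):
    # Single-Gaussian background model: flag pixels whose squared
    # Mahalanobis distance to the background statistics exceeds the
    # threshold. pixels: (n_pixels, n_bands).
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    inv = np.linalg.inv(cov)
    diff = pixels - mu
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)  # per-pixel distance
    return d2 > threshold
```

The covariance inversion and the per-pixel quadratic form are exactly the pieces the paper maps onto the GPU.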

15.
Hyperspectral images contain many wavelength channels, and processing the corresponding imagery requires a high-performance computing platform. Target and anomaly detection in hyperspectral images has attracted attention because of its practicality in many real-time detection fields, but wider applicability is limited by computing conditions and low processing speed. Field programmable gate arrays (FPGAs) offer the possibility of on-board hyperspectral data processing with high speed, low power consumption, reconfigurability and radiation tolerance. In this paper, we develop a novel FPGA-based technique for efficient real-time target detection in hyperspectral images. Collaborative representation-based detection (CRD) is an efficient target detection algorithm for hyperspectral imagery, directly based on the concept that target pixels can be approximately represented by their spectral signatures, while other pixels cannot. To achieve high processing speed on the FPGA platform, the CRD algorithm first reduces the dimensionality of the hyperspectral image. The Sherman–Morrison formula is used to calculate the matrix inversion, reducing the complexity of the overall CRD algorithm. The achieved results demonstrate that the proposed system obtains a shorter processing time for the CRD algorithm than a 3.40 GHz CPU.
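The Sherman–Morrison identity lets a matrix inverse be updated after a rank-one change instead of being recomputed, which is what makes the per-pixel inversions in CRD cheap. The identity itself, as a small numpy sketch:

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    # (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u),
    # valid whenever the denominator is nonzero.
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au
    return A_inv - np.outer(Au, vA) / denom
```

An O(n^2) update replaces an O(n^3) re-inversion each time one pixel's contribution enters or leaves the local covariance-like matrix.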

16.
On-line signature verification using LPC cepstrum and neural networks
An on-line signature verification scheme based on linear prediction coding (LPC) cepstrum and neural networks is proposed. Cepstral coefficients derived from linear predictor coefficients of the writing trajectories are calculated as the features of the signatures. These coefficients are used as inputs to the neural networks. A number of single-output multilayer perceptrons (MLPs), one for each word in the signature, are provided for each registered person to verify the input signature. If the summation of the output values of all MLPs is larger than the verification threshold, the input signature is regarded as genuine; otherwise, it is a forgery. Simulations show that this scheme can detect the genuineness of input signatures from a test database with an error rate as low as 4%.
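The feature pipeline can be sketched in two steps: Levinson–Durbin solves the autocorrelation normal equations for the predictor coefficients, and a standard recursion converts them to cepstral coefficients. Sign conventions vary between texts; the one below assumes the predictor x[n] ≈ Σ a_k x[n-k]:

```python
def levinson_durbin(r, order):
    # Solve the Yule-Walker equations for LPC coefficients a_1..a_p
    # given autocorrelation values r[0..p]; returns (a, residual error).
    a = [0.0] * order
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err                       # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1.0 - k * k)
    return a, err

def lpc_cepstrum(a):
    # LPC-to-cepstrum recursion: c_n = a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k}.
    c = []
    for n in range(1, len(a) + 1):
        c.append(a[n - 1] + sum((k / n) * c[k - 1] * a[n - 1 - k]
                                for k in range(1, n)))
    return c
```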

17.
Improving the fitting accuracy of calibration data for infrared target simulators is important for measuring radiometric characteristics such as the irradiance of infrared targets. Because the calibration data are strongly nonlinear and traditional fitting algorithms lack accuracy, an extreme learning machine optimized by particle swarm optimization (PSO-ELM) is introduced. With the blackbody radiation temperature as the input factor and the irradiance actually measured by an MCT detector as the output factor, a PSO-ELM model is built; particle swarm optimization (PSO) optimizes the weights connecting the hidden neurons to the input layer and the hidden-neuron thresholds, fitting the nonlinear relationship between input and output parameters. Optimizing these two parameters improves the performance of the extreme learning machine (ELM). The main advantages of the method are strong fault tolerance, good performance on complex nonlinear data, and an optimization mechanism for ELM parameter settings. Comparison with GA-ELM and plain ELM models verifies that the PSO-ELM method fits far more accurately than traditional fitting methods, providing a new approach to fitting infrared target simulator calibration data.
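The ELM core that PSO wraps is simple: hidden weights are random, and only the output weights are solved in closed form by least squares. A sketch with illustrative data; the PSO search over the hidden parameters, the paper's contribution, is omitted here:

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    # Extreme learning machine: random hidden layer, closed-form
    # least-squares output weights via the pseudoinverse.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # input->hidden weights
    b = rng.normal(size=n_hidden)                # hidden biases
    H = np.tanh(X @ W + b)                       # hidden activations
    beta = np.linalg.pinv(H) @ y                 # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

PSO would repeatedly propose (W, b) candidates and keep the set whose fitted model has the lowest validation error.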

18.
The general method of analysing mixed-pixel spectral response is to decompose the actual spectrum into several pure spectral components representing the signatures of the endmembers. This work suggests a reverse approach: standardizing the mixed-pixel spectrum for a certain spatial distribution of endmembers by synthesizing spectral signatures from varying proportions of standard spectral library data and matching them with the experimentally obtained mixed-pixel signature. The idea is demonstrated with hyperspectral ultraviolet–visible–near-infrared (UV–vis–NIR) reflectance measurements on laboratory-generated model mixed pixels consisting of different endmember surfaces (concrete, soil, brick and vegetation) and with hyperspectral signatures derived from Hyperion satellite images consisting of concrete, soil and vegetation in different proportions. The experimental reflectance values were compared with the computationally generated spectral variations assuming linear mixing of pure spectral signatures. Good matching in the nature of spectral variation was obtained in most cases. It is hoped that, using the present concept, hyperspectral signatures of mixed pixels can be synthesized from available spectral libraries and matched with those obtained from satellite images, even with fewer bands. Shifting more of the computational work to the laboratory can thus relax the stringent requirements on remote-sensor accuracy and band resolution, thereby reducing data volume and transmission bandwidth.
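The linear mixing model and its least-squares inversion can be sketched as follows, with synthetic spectra standing in for library signatures; the paper additionally enforces fraction constraints and matches against measured signatures:

```python
import numpy as np

def synthesize(endmembers, fractions):
    # Linear mixing: the mixed spectrum is the fraction-weighted sum of
    # endmember signatures. endmembers: (n_endmembers, n_bands).
    return endmembers.T @ fractions

def unmix(endmembers, mixed):
    # Unconstrained least-squares estimate of the endmember fractions.
    f, *_ = np.linalg.lstsq(endmembers.T, mixed, rcond=None)
    return f
```

Sweeping the fraction vector and scoring the match against an observed spectrum is the paper's synthesis-and-match loop.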

19.
A decimation-in-time radix-2 fast Fourier transform (FFT) algorithm is considered here for implementation in multiprocessors with shared bus, multistage interconnection network (MIN), and in mesh connected computers. Results are derived for data allocation, interprocessor communication, approximate computation time, and speedup of an N point FFT on any P available processing elements (PEs). Further generalization is obtained for a radix-r FFT algorithm. An N × N point two-dimensional discrete Fourier transform (DFT) implementation is also considered when one or more rows of the input data matrix are allocated to each PE.
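The decimation-in-time radix-2 recursion that the allocation analysis assumes can be written compactly as a serial reference version; the paper's concern is how these butterflies and the data are distributed across PEs:

```python
import cmath

def fft(x):
    # Decimation-in-time radix-2 FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t              # butterfly: top output
        out[k + n // 2] = even[k] - t     # butterfly: bottom output
    return out
```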

20.
We define similar video content as video sequences with almost identical content but possibly compressed at different qualities, reformatted to different sizes and frame-rates, undergone minor editing in either spatial or temporal domain, or summarized into keyframe sequences. Building a search engine to identify such similar content in the World-Wide Web requires: 1) robust video similarity measurements; 2) fast similarity search techniques on large databases; and 3) intuitive organization of search results. In a previous paper, we proposed a randomized technique called the video signature (ViSig) method for video similarity measurement. In this paper, we focus on the remaining two issues by proposing a feature extraction scheme for fast similarity search, and a clustering algorithm for identification of similar clusters. Similar to many other content-based methods, the ViSig method uses high-dimensional feature vectors to represent video. To warrant a fast response time for similarity searches on high dimensional vectors, we propose a novel nonlinear feature extraction scheme on arbitrary metric spaces that combines the triangle inequality with the classical Principal Component Analysis (PCA). We show experimentally that the proposed technique outperforms PCA, Fastmap, Triangle-Inequality Pruning, and Haar wavelet on signature data. To further improve retrieval performance, and provide better organization of similarity search results, we introduce a new graph-theoretical clustering algorithm on large databases of signatures. This algorithm treats all signatures as an abstract threshold graph, where the distance threshold is determined based on local data statistics. Similar clusters are then identified as highly connected regions in the graph. By measuring the retrieval performance against a ground-truth set, we show that our proposed algorithm outperforms simple thresholding, single-link and complete-link hierarchical clustering techniques.
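The triangle-inequality idea behind the fast search: a precomputed distance to a pivot yields the lower bound |d(q,p) - d(x,p)| ≤ d(q,x), which lets many full distance computations be skipped. A one-pivot nearest-neighbor sketch; the paper combines such bounds with PCA in a more elaborate scheme:

```python
def nn_with_pruning(query, points, dist, pivot):
    # Nearest-neighbor search in a metric space, pruning candidates
    # whose triangle-inequality lower bound already exceeds the best
    # distance found so far.
    dqp = dist(query, pivot)
    dxp = [dist(x, pivot) for x in points]  # precomputable offline
    best, best_d, pruned = None, float('inf'), 0
    for x, dp in zip(points, dxp):
        if abs(dqp - dp) >= best_d:   # lower bound on dist(query, x)
            pruned += 1
            continue
        d = dist(query, x)
        if d < best_d:
            best, best_d = x, d
    return best, best_d, pruned
```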


Copyright©北京勤云科技发展有限公司  京ICP备09084417号