Similar Articles
20 similar articles found (search time: 46 ms)
1.
A prerequisite for target detection in synthetic aperture radar and moving-target imaging radars is the ability to classify background clutter in an optimal manner. Such radar clutter can frequently be modelled as a correlated non-Gaussian process with, for example, Weibull or K statistics. Maximum likelihood (ML) provides an optimum classification scheme but cannot always be formulated when correlations are present. In such circumstances, nonlinear adaptive filters are required which can learn to classify the clutter types: a role to which neural networks are particularly suited. The authors investigate how closely neural networks can approach optimum classification. To this end, a factorisation technique is presented which aids convergence to the best possible solution obtainable from the training data. The performance of factorised networks is compared with the ML performance and with various intuitive and approximate classification schemes applied to uncorrelated K-distributed images. Furthermore, preliminary results are presented for the classification of correlated processes. It is seen that factorised neural networks can produce an accurate numerical approximation to the ML solution and will thus be of great benefit in radar clutter classification.
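For the uncorrelated case, the ML rule reduces to a log-likelihood comparison between candidate clutter models. A minimal numpy sketch for two hypothetical Weibull clutter classes follows; the parameter values and function names are invented for illustration, not taken from the paper.

```python
import numpy as np

def weibull_loglik(x, shape, scale):
    """Log-likelihood of i.i.d. samples under a Weibull(shape, scale) model."""
    x = np.asarray(x, dtype=float)
    return np.sum(
        np.log(shape / scale)
        + (shape - 1) * np.log(x / scale)
        - (x / scale) ** shape
    )

def ml_classify(x, params):
    """Assign the sample vector x to the clutter class with the largest
    log-likelihood; `params` is a list of (shape, scale) pairs."""
    scores = [weibull_loglik(x, k, lam) for k, lam in params]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
classes = [(1.0, 1.0), (4.0, 1.0)]   # two hypothetical clutter types
x = rng.weibull(4.0, size=500)       # samples actually drawn from class 1
label = ml_classify(x, classes)
```

With correlated clutter this factorised likelihood no longer holds, which is exactly the regime where the abstract's neural-network approximation becomes useful.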

2.
Electroencephalography (EEG) signals arise as mixtures of various neural processes that occur at particular spatial, frequency, and temporal brain locations. In classification paradigms, algorithms are developed that can distinguish between these processes. In this work, we apply tensor factorisation to a set of EEG data from a group of epileptic patients and factorise the data into three modes: space, time, and frequency, with each mode containing a number of components or signatures. We train separate classifiers on feature sets corresponding to complementary combinations of those modes and components and test the classification accuracy for each set. The relative influence of the respective spatial, temporal, or frequency signatures on the classification accuracy can then be analysed and useful interpretations made. Additionally, we show that tensor factorisation enables dimensionality reduction, by evaluating the classification performance with respect to the number of components in each mode and by rejecting components with an insignificant contribution to the classification accuracy.
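A three-mode factorisation of this kind can be sketched with a small CP (PARAFAC) alternating-least-squares routine; the function names are mine, and the paper's factorisation model may differ in detail.

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', a, b).reshape(-1, a.shape[1])

def unfold(x, mode):
    """Mode-n unfolding (remaining axes kept in original order, C layout)."""
    return np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)

def cp_als(x, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal((s, rank)) for s in x.shape)
    for _ in range(n_iter):
        a = unfold(x, 0) @ khatri_rao(b, c) @ np.linalg.pinv((b.T @ b) * (c.T @ c))
        b = unfold(x, 1) @ khatri_rao(a, c) @ np.linalg.pinv((a.T @ a) * (c.T @ c))
        c = unfold(x, 2) @ khatri_rao(a, b) @ np.linalg.pinv((a.T @ a) * (b.T @ b))
    return a, b, c

def cp_reconstruct(a, b, c):
    return np.einsum('ir,jr,kr->ijk', a, b, c)

# demo: recover a synthetic rank-2 space x time x frequency tensor
rng = np.random.default_rng(1)
true = [rng.standard_normal((s, 2)) for s in (5, 4, 3)]
x = cp_reconstruct(*true)
a, b, c = cp_als(x, rank=2, n_iter=300)
rel_err = np.linalg.norm(x - cp_reconstruct(a, b, c)) / np.linalg.norm(x)
```

The rows of each factor matrix play the role of the spatial, temporal, and frequency signatures described above.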

3.
Human faces convey substantial information about a person, such as age, race, identity, gender, and emotions. Such facial information can be obtained through techniques such as facial tracking and detection, face recognition, gender classification, emotion recognition, and age estimation. Of these, gender classification is particularly important because of its diverse applications in fields such as video surveillance and commercial advertising. In this paper, we propose a gender classification method based on run-length histograms. The proposed method uses a run-length histogram to record the positional information of pixels, which efficiently improves the recognition rate and makes the technique suitable for big-data multimedia databases. Experimental results show that the proposed method achieves better accuracy than a multi-scale-based method.
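A basic horizontal run-length histogram can be computed as below; the binarization threshold and histogram length are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def run_lengths(row):
    """Lengths of maximal runs of equal values in a 1-D array."""
    change = np.flatnonzero(np.diff(row)) + 1
    bounds = np.concatenate(([0], change, [len(row)]))
    return np.diff(bounds)

def run_length_histogram(img, threshold=128, max_len=8):
    """Histogram of horizontal run lengths of a binarized image; runs longer
    than max_len are accumulated in the last bin."""
    binary = (np.asarray(img) >= threshold).astype(np.uint8)
    hist = np.zeros(max_len, dtype=int)
    for row in binary:
        for length in run_lengths(row):
            hist[min(length, max_len) - 1] += 1
    return hist

# demo on a tiny two-row "image"
img = np.array([[0, 0, 255, 255, 255],
                [255, 0, 0, 0, 0]])
hist = run_length_histogram(img)
```

The resulting fixed-length histogram is the kind of compact positional feature that scales well to large multimedia databases.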

4.
To address the large intra-class scatter and inter-class similarity that commonly limit video classification performance, this paper proposes a video classification method based on deep metric learning. The method designs a deep network with three parts: feature learning, a similarity measure based on deep metric learning, and classification. The similarity measure works as follows: first, the Euclidean distance between features is computed as the semantic distance between samples; second, a margin-assignment function is designed that dynamically assigns a semantic margin according to the semantic distance; finally, the error is computed from the samples' semantic margins and back-propagated, so that the network learns the differences in semantic distance between samples and automatically focuses on hard samples, learning their features thoroughly. The network is trained with multi-task learning, learning the similarity measure and the classification task jointly to reach a global optimum. Experimental results on UCF101 and HMDB51 show that, compared with existing methods, the proposed method effectively improves video classification accuracy.
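The margin-assignment idea can be illustrated with a toy contrastive-style loss whose margin is a function of the pairwise semantic distance, so that close (hard) negative pairs incur the largest penalty. The linear margin rule and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pairwise_dist(z):
    """Euclidean distances between all embedding rows."""
    sq = np.sum(z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * z @ z.T
    return np.sqrt(np.maximum(d2, 0.0))

def dynamic_margin_loss(z, labels, base_margin=1.0, alpha=0.5):
    """Contrastive-style loss: the margin is assigned dynamically from the
    semantic distance (here a simple linear rule), so hard negatives dominate."""
    d = pairwise_dist(z)
    same = labels[:, None] == labels[None, :]
    margin = base_margin + alpha * d            # toy margin-assignment function
    pos = np.where(same, d ** 2, 0.0)           # pull same-class pairs together
    neg = np.where(~same, np.maximum(margin - d, 0.0) ** 2, 0.0)
    n = len(z)
    return (pos.sum() + neg.sum()) / (n * (n - 1))

# demo: a close negative yields a larger loss than a far-away one
labels = np.array([0, 0, 1])
easy = np.array([[0.0, 0.0], [0.0, 0.1], [10.0, 0.0]])  # far negative
hard = np.array([[0.0, 0.0], [0.0, 0.1], [0.2, 0.0]])   # close (hard) negative
```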

5.
To address the problems of classical convolutional neural network (CNN) based hyperspectral image classification methods, such as insufficient representation of key detail features and the need for large numbers of training samples, a hyperspectral image classification method based on multi-scale features and a dual attention mechanism is proposed. First, 3D convolutions extract the spatial-spectral features of the image, and transposed convolutions recover detail information of the features; then, convolution kernels of different sizes extract multi-scale features, which are fused across different receptive fields; finally, a dual attention mechanism is designed to suppress confusing regional features while highlighting discriminative ones. Experiments on two hyperspectral images show that, with 10% and 0.5% of the samples of each land-cover class randomly selected for training, the overall classification accuracy of the proposed model reaches 99.44% and 98.86%, respectively; compared with several mainstream deep learning classification models, the proposed model attends to the key features that contribute most to the classification task and achieves higher classification accuracy.
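As a rough illustration only, a dual (channel plus spatial) attention gate can be sketched in numpy as below; the paper's actual attention design is certainly more elaborate, and every name and weighting rule here is an assumption.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def dual_attention(features):
    """Toy dual attention on an (H, W, C) feature map: channel weights from
    global average pooling, spatial weights from the channel-mean response."""
    # channel attention: emphasize globally strong channels
    channel_w = softmax(features.mean(axis=(0, 1)))      # shape (C,)
    out = features * channel_w[None, None, :]
    # spatial attention: emphasize locations with strong mean activation
    spatial = out.mean(axis=2)
    spatial_w = 1.0 / (1.0 + np.exp(-spatial))           # sigmoid gate
    return out * spatial_w[:, :, None]

# demo: the stronger channel keeps a larger response after attention
feat = np.zeros((2, 2, 2))
feat[..., 0] = 2.0   # strong channel
feat[..., 1] = 1.0   # weaker channel
out = dual_attention(feat)
```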

6.
Coordinate descent (CD) is an effective method for solving large-scale data classification problems, with a simple procedure and a fast convergence rate. To improve the generalization performance of the logistic regression classifier (LRC), and inspired by the v-soft-margin support vector machine, this paper proposes a v-soft-margin logistic regression classifier (v-Soft Margin Logistic Regression Classifier, v-SMLRC). It is proved that the dual of v-SMLRC is an equality-constrained dual coordinate descent problem (CDdual), from which v-SMLRC-CDdual, suitable for large-scale data, is derived. The proposed v-SMLRC-CDdual both maximizes the inter-class margin and effectively improves the generalization performance of LRC. Experiments on large-scale text datasets show that the classification performance of v-SMLRC-CDdual is better than or comparable to that of related methods.
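The coordinate-descent idea itself can be shown with a minimal primal fit of a plain L2-regularized logistic regression, cycling one Newton step per coordinate. This is not the v-soft-margin dual formulation the paper derives; names and the update rule are illustrative.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def cd_logistic(X, y, lam=1e-2, n_sweeps=50):
    """L2-regularized logistic regression fitted by cyclic coordinate descent:
    one Newton step per coordinate per sweep."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_sweeps):
        for j in range(d):
            p = sigmoid(X @ w)
            g = X[:, j] @ (p - y) + lam * w[j]       # partial derivative
            h = X[:, j] ** 2 @ (p * (1 - p)) + lam   # partial second derivative
            w[j] -= g / h
    return w

# demo on a tiny separable problem (second column is a bias feature)
X = np.array([[-2.0, 1.0], [-1.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = cd_logistic(X, y)
```

Each coordinate update touches only one feature column, which is what makes CD attractive for large sparse text data.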

7.
李秋生  谢维信  黄敬雄 《信号处理》2013,29(9):1091-1097
The extended fractal properties of aircraft echoes from low-resolution radar provide a fine-grained description of the roughness of target echoes at different scales, offering a new route to aircraft target classification and recognition for air-surveillance radar. Building on an introduction to extended fractal theory, this paper first uses extended fractal analysis to examine the extended fractal properties of aircraft echo data recorded by a VHF-band air-surveillance radar; it then proposes, from a pattern recognition perspective, a classification method for air-surveillance radar aircraft targets based on extended fractal features; finally, classification experiments on measured echo data from different aircraft types are used to compare and analyze the method's performance. The experimental results show that extended fractal parameters such as the generalized Hurst exponent can serve as effective features for aircraft target classification and recognition, and that the proposed method achieves good classification performance.
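One common way to estimate a generalized Hurst exponent is from the scaling of q-th order structure functions; the sketch below is this generic estimator, not necessarily the paper's extended-fractal computation, and the lag range is an arbitrary choice.

```python
import numpy as np

def generalized_hurst(x, q=2.0, lags=range(2, 20)):
    """Estimate H(q) from the q-th order structure function
    S_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^{q * H(q)}."""
    x = np.asarray(x, dtype=float)
    lags = np.asarray(list(lags))
    s = np.array([np.mean(np.abs(x[l:] - x[:-l]) ** q) for l in lags])
    slope = np.polyfit(np.log(lags), np.log(s), 1)[0]
    return slope / q

# demo: a random walk has Hurst exponent 0.5
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(20000))
h = generalized_hurst(walk)
```

Varying q probes the echo's roughness at different amplitude scales, which is the multiscale description the abstract exploits.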

8.
To address the limited frequency range and high signal-to-noise ratio requirements of existing depth classification methods for underwater acoustic targets in shallow water, and assuming that valid ranging results are already available, this paper proposes a depth classification algorithm for shallow-water acoustic targets based on a new matched quantity. By analyzing the depth distribution of the modal cross-correlation terms, a target depth classification model that uses the vertical complex acoustic intensity as the matched quantity is established. Although the algorithm uses the vertical complex acoustic intensity as the matched quantity at all receiver depths, the modal cross-correlation terms that actually determine the classification performance differ with receiver depth. According to the depth classification requirements, the matched quantity is optimally selected by specifying the receiver depths of the two vector sensors, thereby improving the performance of the depth classification algorithm. Simulation results show that the algorithm applies when the target frequency excites three orders of normal modes, extending the applicable frequency range, and that it obtains useful depth classification results in a complex ocean waveguide even at low signal-to-noise ratio (SNR = 0 dB).

9.
To overcome the limitations of existing data classification methods, a data classification algorithm based on an improved deep data manifold is proposed and applied to face recognition. First, depth information of face images is collected and denoised using sparse representation; combined with the color information of the images, a three-dimensional face database is regenerated, and manifold analysis of the face data yields an optimal dimensionality reduction. The training and test sets are selected by ten-times ten-fold cross-validation, and the training set is fed to a support vector machine to build the classifier; finally, the test set is fed to the trained classifier to classify the face data. Cross-comparison experiments against traditional face recognition algorithms on the ORL and Yale face databases verify the superiority and feasibility of the algorithm. Experimental results show that the proposed algorithm attains high classification accuracy and can perform face recognition effectively.

10.
A new classification-based method for audio stream segmentation
Many traditional audio stream segmentation methods are based on small-scale (short-segment) audio classification and generally suffer from too many false segmentation points, which seriously degrades their practical performance. Our study shows that the classification accuracy on large-scale (long) audio segments is markedly higher than on small-scale segments. Based on this observation, and with the goal of reducing false segmentation points, we propose a new classification-based audio stream segmentation method: first, a segmentation step based on large-scale classification coarsely segments the audio stream; then a fine segmentation step based on small-scale classification locates the segmentation points precisely within the boundary regions. Both theoretical analysis and experimental results show that, for audio streams whose class changes are infrequent, this method substantially reduces the false segmentation rate while preserving the detection rate of true segmentation points.
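The coarse-to-fine idea can be sketched on a stream of per-frame class labels: long-window majority voting finds the coarse boundaries, and a short-window scan refines each one. Window sizes and function names are illustrative assumptions.

```python
import numpy as np

def majority(window):
    vals, counts = np.unique(window, return_counts=True)
    return vals[np.argmax(counts)]

def coarse_to_fine_boundaries(frame_labels, coarse=10, fine=2):
    """Locate class-change points: majority vote over long windows gives
    coarse boundaries; a short-window scan inside each boundary region
    refines the change point."""
    labels = np.asarray(frame_labels)
    n = len(labels)
    coarse_lab = [majority(labels[s:s + coarse]) for s in range(0, n, coarse)]
    boundaries = []
    for w in range(1, len(coarse_lab)):
        if coarse_lab[w] != coarse_lab[w - 1]:
            lo, hi = (w - 1) * coarse, min(w * coarse + coarse, n)
            region = labels[lo:hi]
            # refine: first short window whose majority matches the new class
            for s in range(0, len(region) - fine + 1):
                if majority(region[s:s + fine]) == coarse_lab[w]:
                    boundaries.append(lo + s)
                    break
    return boundaries

# demo: one true class change at frame 25
stream = np.array([0] * 25 + [1] * 25)
cps = coarse_to_fine_boundaries(stream)
```

Because only boundary regions are re-examined at the small scale, spurious change points inside homogeneous stretches are avoided.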

11.
Feature extraction and classification of texture images
This paper presents an effective algorithm for texture image feature extraction. The algorithm exploits the frequency-domain distribution and scale properties of texture information, and performs texture classification on that basis. A support vector machine, a classifier with good classification performance, is used. Experimental results show that the extracted feature vectors are stable and that high classification accuracy is obtained even when the number of classes is large.
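One simple family of frequency-domain texture features is the spectral energy in concentric frequency rings, which captures the scale distribution the abstract mentions. This is a generic sketch, not the paper's exact feature set; the number of rings is an arbitrary choice.

```python
import numpy as np

def ring_energy_features(img, n_rings=4):
    """Texture features: fraction of 2-D Fourier spectral energy falling in
    each of n_rings concentric frequency bands."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r_max = r.max() * (1 + 1e-9)        # epsilon keeps the farthest bin inside
    feats = np.array([
        f[(r >= i * r_max / n_rings) & (r < (i + 1) * r_max / n_rings)].sum()
        for i in range(n_rings)
    ])
    return feats / feats.sum()          # normalize to energy fractions

# demo: a constant image has all its energy at DC (innermost ring)
flat = np.ones((16, 16))
feats = ring_energy_features(flat)
```

The resulting fixed-length vector can be fed directly to an SVM classifier, as in the paper.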

12.
In many classification tasks, multiple images forming an image set may be available for an object, rather than a single image. For image set classification, the crucial issues include how to represent the image sets simply and efficiently and how to deal with outliers. In this paper, we develop a novel method, called image set-based classification using collaborative exemplar representation, which achieves data compression by finding exemplars with a clear physical meaning and removes the outliers that would significantly degrade classification performance. Specifically, for each gallery set we explicitly select exemplars that appropriately describe the image set. The probe set is then represented collaboratively over all the gallery sets formed by exemplars. After solving for the representation coefficients, the distance between the query set and each gallery set can be evaluated for classification. Experimental results show that our method outperforms state-of-the-art methods on three public face datasets, while for object classification our result is very close to the best reported result.
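The collaborative-representation step can be sketched as a ridge regression over the concatenated exemplars, followed by per-gallery residual comparison. Exemplar selection itself is omitted, and all names are illustrative.

```python
import numpy as np

def crc_classify(query, galleries, lam=1e-2):
    """Represent the query collaboratively over all gallery exemplars, then
    assign it to the gallery with the smallest reconstruction residual."""
    D = np.hstack(galleries)            # columns are exemplars of all classes
    a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ query)
    residuals, start = [], 0
    for G in galleries:
        k = G.shape[1]
        residuals.append(np.linalg.norm(query - G @ a[start:start + k]))
        start += k
    return int(np.argmin(residuals))

# demo: a query lying in the span of class-0 exemplars
G0 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # exemplars of class 0
G1 = np.array([[0.0], [0.0], [1.0]])                 # exemplar of class 1
query = np.array([0.5, 0.5, 0.0])
pred = crc_classify(query, [G0, G1])
```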

13.
This paper proposes a new probabilistic neural network (NN) that can estimate the a posteriori probability in a pattern classification problem. The structure of the proposed network is based on a statistical model composed of a mixture of log-linearized Gaussian components. However, the forward calculation and the backward learning rule can be defined in the same manner as for the error backpropagation NN. In this paper, the proposed network is applied to the electroencephalogram (EEG) pattern classification problem. In the experiments described, two types of photic stimulation, caused by eye opening/closing and by artificial light, are used to collect the data to be classified. It is shown that the EEG signals can be classified successfully and that the classification rates change depending on the amount of training data and the dimension of the feature vectors.

14.
Count Models for Software Quality Estimation
Identifying which software modules are likely to be faulty during the software development process is an effective technique for improving software quality, as it allows a more focused software quality and reliability enhancement effort. The development team may also want to know the number of faults likely to exist in a given program module, i.e., a quantitative quality prediction. However, classification techniques such as the logistic regression model (lrm) cannot be used to predict the number of faults. In contrast, count models such as the Poisson regression model (prm) and the zero-inflated Poisson (zip) regression model can provide both a qualitative classification and a quantitative prediction of software quality. For the classification models, a classification rule based on our previously developed generalized classification rule is used; in the context of count models, this study is the first to propose a generalized classification rule. Case studies of two industrial software systems are examined, and for each we developed two count models (prm and zip) and a classification model (lrm). Evaluating the predictive capabilities of the models, we conclude that the prm and zip models have classification accuracies similar to the lrm. The count models are also used to predict the number of faults for the two case studies, with the zip model yielding better fault prediction accuracy than the prm. Compared with other quantitative prediction models for software quality, such as multiple linear regression (mlr), the prm and zip models have the unique property of yielding the probability that a given number of faults will occur in any module.
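As a sketch of the count-model idea, a Poisson regression with a log link can be fitted by iteratively reweighted least squares; the zip model additionally mixes in a point mass at zero, which is omitted here. Function names and the synthetic data are illustrative.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression (log link) fitted by iteratively reweighted least
    squares; returns the coefficient vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # predicted fault counts
        W = mu                           # Poisson IRLS weights (variance = mean)
        z = X @ beta + (y - mu) / mu     # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# demo: recover known coefficients from synthetic count data
rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 1))
X = np.hstack([np.ones((2000, 1)), x])   # intercept + one module metric
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true)).astype(float)
beta_hat = poisson_irls(X, y)
```

The fitted model yields a full predictive distribution over fault counts, which is what separates count models from a pure classifier such as the lrm.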

15.
For the semiconductor production scheduling problem, a comprehensive scheduling algorithm based on data classification with a multiway-tree random forest is proposed. First, taking the product types in the weekly production plan as input, the multiway-tree random forest classifies the data, using product name, production volume, due date, and category as the scheduling features. Next, based on the classification results and with the goal of reducing machine changeover time, the daily product types and quantities are determined. Finally, an application study verifies the feasibility of the algorithm. Experimental results show that the proposed algorithm effectively reduces changeover time, demonstrating its effectiveness and superiority.

16.
Neuro-fuzzy classification systems offer a means of obtaining fuzzy classification rules by a learning algorithm. Although it is usually not a problem to find a suitable fuzzy classifier by learning from data, it can be hard to obtain a classifier that can be interpreted conveniently; there is usually a trade-off between accuracy and readability. This paper discusses NEFCLASS, a neuro-fuzzy approach for classification problems, and its implementation NEFCLASS-X. It is shown how a readable fuzzy classifier can be obtained by a learning process and how interactive strategies for pruning rules and variables from a trained classifier can enhance its interpretability.

17.
Chromosomes are essential genomic information carriers. Chromosome classification constitutes an important part of routine clinical and cancer cytogenetics analysis. Cytogeneticists perform visual interpretation of banded chromosome images according to the diagrammatic models of various chromosome types known as the ideograms, which mimic artists' depiction of the chromosomes. In this paper, we present a subspace-based approach for automated prototyping and classification of chromosome images. We show that 1) prototype chromosome images can be quantitatively synthesized from a subspace to objectively represent the chromosome images of a given type or population, and 2) the transformation coefficients (or projected coordinate values of sample chromosomes) in the subspace can be utilized as the extracted feature measurements for classification purposes. We examine in particular the formation of three well-known subspaces, namely the ones derived from principal component analysis (PCA), Fisher's linear discriminant analysis, and the discrete cosine transform (DCT). These subspaces are implemented and evaluated for prototyping two-dimensional (2-D) images and for classification of both 2-D images and one-dimensional profiles of chromosomes. Experimental results show that previously unseen prototype chromosome images of high visual quality can be synthesized using the proposed subspace-based method, and that PCA and the DCT significantly outperform the well-known benchmark technique of weighted density distribution functions in classifying 2-D chromosome images.
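The PCA variant of the subspace idea can be sketched in a few lines: the subspace coordinates serve as classification features, and prototypes are synthesized back from coordinates. Function names are mine; the paper's pipeline includes additional steps.

```python
import numpy as np

def pca_subspace(samples, n_components):
    """Mean and leading principal directions of vectorized chromosome images
    (rows of `samples`)."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, basis):
    """Subspace coordinates, usable as classification features."""
    return basis @ (x - mean)

def synthesize(coeffs, mean, basis):
    """Prototype image synthesized from subspace coordinates."""
    return mean + coeffs @ basis

# demo: with a full basis, project/synthesize round-trips a sample exactly
rng = np.random.default_rng(0)
samples = rng.standard_normal((20, 6))       # 20 toy vectorized "images"
mean, basis = pca_subspace(samples, n_components=6)
coeffs = project(samples[0], mean, basis)
proto = synthesize(coeffs, mean, basis)
```

Averaging the coordinates of all samples of one chromosome type before synthesis gives a prototype for that type, in the spirit of the abstract.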

18.
Multisensor approach to automated classification of sea ice image data
A multisensor data fusion algorithm based on a multilayer neural network is presented for sea ice classification in the winter period. The algorithm uses European Remote Sensing (ERS) and RADARSAT synthetic aperture radar (SAR) images, low-resolution television camera images, and image texture features. Based on a set of in situ observations made at the Kara Sea, a neural network is trained, and its structure is optimized using a pruning method. The performance of the algorithm with different combinations of input features (sensors) is assessed and compared with that of a linear discriminant analysis (LDA) based algorithm. We show that for both algorithms a substantial improvement can be gained by fusing the three different types of data (91.2% for the neural network) as compared with single-source ERS (66.0%) and RADARSAT (70.7%) SAR image classification. Incorporating texture increases classification accuracy, but this positive effect weakens as the number of sensors increases (from 8.4 to 6.4 percentage points for two and three sensors, respectively). In view of the short training time and smaller number of adjustable parameters, this result suggests that semiparametric classification methods can be considered a good alternative to neural networks and traditional parametric statistical classifiers for sea ice classification.

19.
A wrapper-based approach to image segmentation and classification.
The traditional processing flow of segmentation followed by classification in computer vision assumes that the segmentation is able to successfully extract the object of interest from the background image. It is extremely difficult to obtain a reliable segmentation without any prior knowledge about the object that is being extracted from the scene. This is further complicated by the lack of any clearly defined metrics for evaluating the quality of segmentation or for comparing segmentation algorithms. We propose a method of segmentation that addresses both of these issues, by using the object classification subsystem as an integral part of the segmentation. This will provide contextual information regarding the objects to be segmented, as well as allow us to use the probability of correct classification as a metric to determine the quality of the segmentation. We view traditional segmentation as a filter operating on the image that is independent of the classifier, much like the filter methods for feature selection. We propose a new paradigm for segmentation and classification that follows the wrapper methods of feature selection. Our method wraps the segmentation and classification together, and uses the classification accuracy as the metric to determine the best segmentation. By using shape as the classification feature, we are able to develop a segmentation algorithm that relaxes the requirement that the object of interest to be segmented must be homogeneous in some low-level image parameter, such as texture, color, or grayscale. This represents an improvement over other segmentation methods that have used classification information only to modify the segmenter parameters, since these algorithms still require an underlying homogeneity in some parameter space. 
Rather than considering our method as yet another segmentation algorithm, we propose that our wrapper method be considered an image segmentation framework within which existing image segmentation algorithms may be executed. We show the performance of the proposed wrapper-based segmenter on real-world, complex images of automotive vehicle occupants, for the purpose of recognizing infants on the passenger seat and disabling the vehicle airbag. This is an interesting application for testing the robustness of our approach because of the complexity of the images, and consequently we believe the algorithm will be suitable for many other real-world applications.

20.
AdaBoost, a machine learning technique, is employed for supervised classification of land-cover categories in geostatistical data. We introduce contextual classifiers based on neighboring pixels: first, posterior probabilities are calculated at all pixels; then, averages of the log posteriors are calculated in different neighborhoods and used as contextual classification functions. Weights for the classification functions are determined by minimizing the multiclass empirical risk, and a convex combination of classification functions is obtained. The classification is performed by a noniterative maximization procedure. The proposed method is applied to artificial multispectral images and benchmark datasets. Its performance is excellent and similar to that of the Markov-random-field-based classifier, which requires an iterative maximization procedure.
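The contextual step, averaging per-pixel log posteriors over a neighborhood before taking the argmax, can be sketched as below; the neighborhood shape and function names are illustrative, and the AdaBoost weighting of multiple neighborhood sizes is omitted.

```python
import numpy as np

def neighborhood_log_posterior(log_post, radius=1):
    """Average the per-pixel class log-posteriors over a square neighborhood;
    log_post has shape (H, W, n_classes)."""
    H, W, C = log_post.shape
    padded = np.pad(log_post, ((radius, radius), (radius, radius), (0, 0)),
                    mode='edge')
    out = np.zeros_like(log_post)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy:radius + dy + H,
                          radius + dx:radius + dx + W]
    return out / (2 * radius + 1) ** 2

def contextual_classify(log_post, radius=1):
    return np.argmax(neighborhood_log_posterior(log_post, radius), axis=-1)

# demo: one noisy pixel is corrected by its neighborhood
log_post = np.full((5, 5, 2), -2.0)
log_post[..., 0] = -0.1            # class 0 favored everywhere...
log_post[2, 2] = [-3.0, -0.05]     # ...except one noisy pixel
pointwise = np.argmax(log_post, axis=-1)
contextual = contextual_classify(log_post)
```

This smoothing is what lets the noniterative procedure approach the accuracy of an iterative Markov-random-field classifier.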
