Similar Articles

20 similar articles found.
1.
A new σPNN model for class-conditional density estimation is presented. It combines a PNN with a shared pattern layer and a PNN with class-specific pattern layers: each class not only owns a group of pattern-layer units of its own, but also shares several pattern-layer units with all the other classes, where "shared" means that each shared kernel function contributes to the conditional density estimate of every class. The new model is trained under the maximum likelihood criterion, with a modified EM algorithm used to adjust the model parameters. Closed-set, text-independent speaker identification experiments confirm the validity of the proposed model and its algorithm.
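As background, the plain PNN class-conditional density estimate that the σPNN generalises is a Parzen sum of Gaussian kernels centred on a class's pattern-layer units. A minimal sketch (the bandwidth, data, and function names are illustrative, not the paper's):

```python
import numpy as np

def pnn_class_density(x, centers, sigma=0.5):
    """Parzen-window estimate of p(x | class) from that class's
    pattern-layer centers (one Gaussian kernel per center)."""
    d = centers.shape[1]
    sq = np.sum((centers - x) ** 2, axis=1)          # squared distances to centers
    norm = (2 * np.pi * sigma ** 2) ** (d / 2)       # Gaussian normalising constant
    return np.mean(np.exp(-sq / (2 * sigma ** 2))) / norm

# Toy example: two classes in 2-D, classify by the larger density.
rng = np.random.default_rng(0)
c0 = rng.normal([0, 0], 0.3, size=(50, 2))
c1 = rng.normal([3, 3], 0.3, size=(50, 2))
x = np.array([2.9, 3.1])
p0, p1 = pnn_class_density(x, c0), pnn_class_density(x, c1)
print(p1 > p0)  # the point near class 1 gets the higher density
```

In the σPNN of the abstract, a second, shared set of centers would additionally contribute to every class's density sum.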

2.
L. Dugard, I.D. Landau, Automatica, 1980, 16(5): 443-462
Several recursive algorithms for parametric identification of discrete-time systems derived from Model Reference Adaptive System (M.R.A.S.) techniques are analysed. All these algorithms belong to the class of output error methods, which have received little previous attention in the identification literature. The algorithms are analysed in the deterministic and stochastic environments using the Equivalent Feedback Representation (E.F.R.) and Ordinary Differential Equation (O.D.E.) methods, respectively. A comparative evaluation of these algorithms is presented. The comparison also covers various widely used recursive algorithms belonging to the ‘equation error’ class (extended least squares, approximate maximum likelihood).

3.
In a rational model, some terms of the information vector are correlated with the noise, which makes traditional least squares based iterative algorithms biased. To overcome this shortcoming, this paper develops two recursive algorithms for estimating the rational model parameters. The two algorithms, based on the maximum likelihood principle, have three integrated key features: (1) they establish two unbiased maximum likelihood recursive algorithms; (2) they develop a maximum likelihood recursive least squares (ML-RLS) algorithm to decrease the computational effort; (3) they update the parameter estimates with the ML-RLS based particle swarm optimisation (ML-RLS-PSO) algorithm when the noise-to-output ratio is large. Comparative studies demonstrate that (1) the ML-RLS algorithm is valid for rational models only when the noise-to-output ratio is small, and (2) the ML-RLS-PSO algorithm is effective for rational models with arbitrary noise-to-output ratios, but at the cost of heavy computation. Furthermore, the simulations provide cases for potential extensions and applications of the proposed algorithms.
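The ML-RLS algorithm itself is specific to the rational-model setting, but the plain recursive least squares recursion it extends can be sketched as follows; the toy second-order system, the noise level, and the forgetting factor are illustrative assumptions:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step for y = phi @ theta + noise:
    gain, prediction-error correction, and covariance update."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam        # covariance (information) update
    return theta, P

# Identify a toy model y_t = a*y_{t-1} + b*u_{t-1} + e_t recursively.
rng = np.random.default_rng(1)
a_true, b_true = 0.7, 1.5
theta, P = np.zeros(2), 1e3 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for _ in range(2000):
    u = rng.normal()
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
    phi = np.array([y_prev, u_prev])
    theta, P = rls_update(theta, P, phi, y)
    y_prev, u_prev = y, u
print(np.round(theta, 2))
```

The bias problem the abstract describes arises precisely when entries of `phi` are correlated with the noise, which is what the maximum likelihood modifications are designed to remove.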

4.
Fast Speaker Adaptation Based on Maximum Likelihood Model Interpolation   (Cited by: 6; self-citations: 2; others: 6)
This paper proposes a new speaker adaptation algorithm: maximum likelihood model interpolation. The basic idea is to exploit the correlation between speech units and, under the maximum likelihood criterion, obtain the test speaker's adapted model as a linear combination of a set of speaker-dependent models. Two concrete adaptation algorithms within this interpolation framework are then described: mean linear interpolation and matrix linear interpolation. Experiments show that these algorithms converge well and yield a substantial improvement in recognition performance with only three adaptation utterances.
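The paper's two interpolation algorithms are its own; under the simplifying assumption of a shared identity covariance, the ML interpolation weights for an adapted Gaussian mean reduce to a least squares fit on the adaptation data, which can be sketched as follows (reference means and data are illustrative):

```python
import numpy as np

def interpolation_weights(ref_means, adapt_data):
    """ML interpolation weights for an adapted mean mu = sum_k lam_k * mu_k,
    assuming a shared identity covariance: a least squares fit of the
    adaptation-data mean onto the reference speakers' means."""
    M = np.column_stack(ref_means)           # columns = reference speaker means
    xbar = adapt_data.mean(axis=0)
    lam, *_ = np.linalg.lstsq(M, xbar, rcond=None)
    return lam, M @ lam                      # weights and adapted mean

rng = np.random.default_rng(8)
mu1, mu2 = np.array([0.0, 2.0]), np.array([4.0, 0.0])
# Test speaker's true mean lies halfway between the two reference speakers.
data = rng.normal(0.5 * mu1 + 0.5 * mu2, 0.1, size=(30, 2))
lam, mu_adapted = interpolation_weights([mu1, mu2], data)
print(np.round(lam, 1))
```

With full covariances per Gaussian, the weights no longer have this closed form and are found iteratively under the same maximum likelihood criterion.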

5.
The block or simultaneous clustering problem on a set of objects and a set of variables is embedded in the mixture model. Two algorithms have been developed: block EM as part of the maximum likelihood and fuzzy approaches, and block CEM as part of the classification maximum likelihood approach. A unified framework for obtaining different variants of block EM is proposed. These variants are studied and their performances evaluated in comparison with block CEM, two-way EM and two-way CEM, i.e. EM and CEM applied separately to the two sets.

6.
The problem of image segmentation is considered in the context of a mixture of probability distributions. The segments fall into classes. A probability distribution is associated with each class of segment. Parametric families of distributions are considered, a set of parameter values being associated with each class. With each observation is associated an unobservable label, indicating from which class the observation arose. Segmentation algorithms are obtained by applying a method of iterated maximum likelihood to the resulting likelihood function. A numerical example is given. Choice of the number of classes, using Akaike's information criterion (AIC) for model identification, is illustrated.
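A minimal sketch of this iterated maximum likelihood scheme for a one-dimensional two-class Gaussian mixture, with AIC used to choose the number of classes; the initialisation and data are illustrative, not the paper's:

```python
import numpy as np

def mixture_density(x, w, mu, sd):
    """Per-point, per-component densities of a 1-D Gaussian mixture."""
    return (w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2)
            / (sd * np.sqrt(2 * np.pi)))

def em_gmm_loglik(x, k, iters=200):
    """Iterated ML (EM) fit of a k-class 1-D Gaussian mixture;
    returns the final log-likelihood."""
    w = np.full(k, 1 / k)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # spread-out initial means
    sd = np.full(k, x.std())
    for _ in range(iters):
        dens = mixture_density(x, w, mu, sd)
        r = dens / dens.sum(axis=1, keepdims=True)   # E-step: label posteriors
        n = r.sum(axis=0)                            # M-step: weighted updates
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return np.log(mixture_density(x, w, mu, sd).sum(axis=1)).sum()

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 300)])
aic = {k: 2 * (3 * k - 1) - 2 * em_gmm_loglik(x, k) for k in (1, 2)}
print(min(aic, key=aic.get))   # AIC should prefer the two-class model
```

Each mixture with k classes has 3k - 1 free parameters (weights, means, standard deviations), which is the count entering the AIC penalty.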

7.
This paper examines in detail the estimation errors of two algorithms proposed by Koopmans [1] and Levin [2] for identifying linear systems described by an nth-order scalar difference equation. Necessary and sufficient conditions are established for the strong consistency of the estimates that these algorithms generate. A priori bounds on the estimation error are obtained to provide a quantitative basis for comparing these algorithms with the maximum likelihood estimates. Computational results are also presented to supplement the theoretical discussion.

8.
Mixture of experts classification using a hierarchical mixture model   (Cited by: 1; self-citations: 0; others: 1)
A three-level hierarchical mixture model for classification is presented that models the following data generation process: (1) the data are generated by a finite number of sources (clusters), and (2) the generation mechanism of each source assumes the existence of individual internal class-labeled sources (subclusters of the external cluster). The model estimates the posterior probability of class membership similar to a mixture of experts classifier. In order to learn the parameters of the model, we have developed a general training approach based on maximum likelihood that results in two efficient training algorithms. Compared to other classification mixture models, the proposed hierarchical model exhibits several advantages and provides improved classification performance as indicated by the experimental results.

9.
Fusion of Redundant Data with Different Accuracies   (Cited by: 3; self-citations: 0; others: 3)
Taking the maximum value and the mathematical expectation of the fusion error as criteria, a standard for judging the merits of data fusion methods is proposed. A new data fusion method, the extended weighted average, is then presented. When two data items are to be fused, formulas for computing the fusion parameters are derived analytically; when more data items are involved, the fusion parameters are obtained by numerical simulation. The method can solve the fusion problem for uniformly distributed data, which is difficult for maximum likelihood estimation, and achieves higher accuracy than three other representative data fusion methods, including maximum likelihood estimation.
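The extended weighted average parameters come from the paper's own analysis and simulations; the maximum likelihood baseline it is compared against reduces, for two independent Gaussian measurements of the same quantity, to the familiar inverse-variance weighted average:

```python
def ml_fuse(x1, var1, x2, var2):
    """ML fusion of two independent Gaussian measurements of one quantity:
    the inverse-variance weighted average, with the fused variance."""
    w1 = var2 / (var1 + var2)                # the less noisy source gets more weight
    w2 = var1 / (var1 + var2)
    fused = w1 * x1 + w2 * x2
    fused_var = var1 * var2 / (var1 + var2)  # always below the smaller input variance
    return fused, fused_var

x, v = ml_fuse(10.0, 1.0, 12.0, 4.0)
print(round(x, 6), round(v, 6))  # 10.4 0.8
```

For uniformly distributed errors this weighting is no longer the ML solution, which is the gap the extended weighted average is designed to fill.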

10.
Maximum Likelihood Parameter Estimation for MIMO Radar   (Cited by: 1; self-citations: 0; others: 1)
Multiple-input multiple-output (MIMO) radar transmits multiple independent probing signals simultaneously from multiple antennas and receives the target echoes with multiple antennas. This paper considers a MIMO radar model with transmit spatial diversity and coherent reception, together with its maximum likelihood (ML) parameter estimation. Based on the ML criterion, two asymptotic ML algorithms are derived. Simulation results show that, under a uniform noise model, one of the asymptotic algorithms performs close to the delay-and-sum beamforming based ML algorithm, while the other performs slightly worse but has lower computational complexity. Under a non-uniform noise model, both proposed asymptotic ML algorithms outperform the delay-and-sum beamforming based ML algorithm.

11.
This paper presents an attempt at using the syntactic structure of natural language to improve language models for speech recognition. The structured language model merges techniques in automatic parsing and language modeling using an original probabilistic parameterization of a shift-reduce parser. A maximum likelihood re-estimation procedure belonging to the class of expectation-maximization algorithms is employed for training the model. Experiments on the Wall Street Journal and Switchboard corpora show improvement in both perplexity and word error rate (through word-lattice rescoring) over the standard 3-gram language model.

12.
An encompassing, self-contained introduction to the foundations of the broad field of fuzzy clustering is presented. The fuzzy cluster partitions are introduced with special emphasis on the interpretation of the two most encountered types of gradual cluster assignments: the fuzzy and the possibilistic membership degrees. A systematic overview of present fuzzy clustering methods is provided, highlighting the underlying ideas of the different approaches. The class of objective function-based methods, the family of alternating cluster estimation algorithms, and the fuzzy maximum likelihood estimation scheme are discussed. The latter is a fuzzy relative of the well-known expectation maximization algorithm and it is compared to its counterpart in statistical clustering. Related issues are considered, concluding with references to selected developments in the area.
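The objective function-based class mentioned above is exemplified by fuzzy c-means, whose alternating membership/center updates can be sketched as follows; `m` is the usual fuzzifier and the data are illustrative:

```python
import numpy as np

def fuzzy_c_means(x, c, m=2.0, iters=100):
    """Objective function-based fuzzy clustering (fuzzy c-means):
    alternate the center update and the membership update."""
    rng = np.random.default_rng(0)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)        # fuzzy partition: rows sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers, axis=2) + 1e-12
        # u_ik proportional to d_ik^(-2/(m-1)), normalised over clusters
        inv = d ** (-2 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(3)
x = np.vstack([rng.normal(0, 0.2, (40, 2)), rng.normal(2, 0.2, (40, 2))])
centers, u = fuzzy_c_means(x, 2)
print(np.round(np.sort(centers[:, 0]), 1))
```

Possibilistic variants drop the row-sum-to-1 constraint on `u`, which is exactly the fuzzy-versus-possibilistic distinction the abstract emphasises.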

13.
Witkin proposed a maximum likelihood model for the recovery of surface orientation from image texture. We develop two efficient algorithms for solving this shape-from-texture problem and compare the results of these algorithms with the algorithm described in [1].

14.
Two classification approaches were investigated for the mapping of tropical forests from Landsat-TM data of a region north of Manaus in the Brazilian state of Amazonas. These incorporated textural information and made use of fuzzy approaches to classification. In eleven-class classifications the texture-based classifiers (based on a Markov random field model) consistently provided higher classification accuracies than conventional per-pixel maximum likelihood and minimum distance classifications, indicating that they can more accurately characterize several regenerating forest classes. Measures of the strength of class memberships derived from three classification algorithms (based on the probability density function, the a posteriori probability and the Mahalanobis distance) could be used to derive fuzzy image classifications and in post-classification processing. The latter, involving either the summation of class memberships over a local neighbourhood or the application of homogeneity measures, was found to increase classification accuracy by some 10 per cent in comparison with a conventional maximum likelihood classification, an accuracy comparable to that derived from the texture-based classifications.

15.
We obtain maximum likelihood and optimal (Bayesian) algorithms for detection and measurement of moments of appearance and disappearance of a signal having arbitrary shape and observed in additive white Gaussian noise. Asymptotic expressions for characteristics of the maximum likelihood algorithms are obtained. By means of computer modeling, characteristics of the Bayesian algorithms are found.
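For a known signal shape whose energy does not depend on the shift, the ML estimate of the appearance moment in white Gaussian noise reduces to maximizing the correlation over candidate onsets. A sketch under that assumption (signal shape, noise level, and onset are illustrative):

```python
import numpy as np

def ml_appearance(y, s):
    """ML estimate of the appearance moment of a known-shape signal s
    in white Gaussian noise: the shift maximizing the correlation
    (the signal-energy term is shift-invariant and drops out)."""
    L = len(s)
    scores = [y[k:k + L] @ s for k in range(len(y) - L + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(4)
s = np.ones(20)                              # an arbitrary known shape
y = 0.2 * rng.normal(size=200)               # noise-only background
true_onset = 120
y[true_onset:true_onset + 20] += s           # signal appears at sample 120
print(ml_appearance(y, s))
```

The Bayesian algorithms of the abstract additionally average over a prior on the appearance moment rather than taking the single maximizing shift.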

16.
The maximum likelihood and the nearest neighbour classification algorithms are reviewed, particularly from the point of view of user/analyst requirements. The two algorithms were applied to the classification of Landsat TM data of agricultural scenes, and accuracy with respect to ‘ground truth’ was evaluated using different parameter settings. Results show that within the maximum likelihood classification, accuracies and errors can vary considerably depending on how the statistical classes are formed from the training data. More interestingly, the nearest neighbour algorithm produced higher accuracies and was judged to be more robust, but its computer implementation has problems with high data dimensionality.
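The two reviewed classifiers can be sketched in a few lines. The Gaussian ML rule and nearest neighbour rule below are generic illustrations, not the paper's exact parameter settings:

```python
import numpy as np

def gaussian_ml_classify(x, stats):
    """Maximum likelihood classifier: pick the class whose fitted
    Gaussian gives x the highest log-density."""
    best, best_ll = None, -np.inf
    for label, (mu, cov) in stats.items():
        diff = x - mu
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + diff @ np.linalg.solve(cov, diff))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

def nn_classify(x, X, y):
    """Nearest neighbour classifier: label of the closest training pixel."""
    return y[np.argmin(np.linalg.norm(X - x, axis=1))]

rng = np.random.default_rng(9)
Xa = rng.normal([0, 0], 0.5, (100, 2))       # training pixels, class 0
Xb = rng.normal([3, 1], 0.5, (100, 2))       # training pixels, class 1
X = np.vstack([Xa, Xb]); y = np.array([0] * 100 + [1] * 100)
stats = {0: (Xa.mean(0), np.cov(Xa.T)), 1: (Xb.mean(0), np.cov(Xb.T))}
x = np.array([2.8, 1.1])
print(gaussian_ml_classify(x, stats), nn_classify(x, X, y))
```

The contrast the abstract draws is visible in the structure: the ML rule depends entirely on how the per-class statistics are formed from training data, while nearest neighbour stores the training pixels themselves, which is what becomes costly in high dimensions.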

17.
A new likelihood based AR approximation is given for ARMA models. The usual algorithms for the computation of the likelihood of an ARMA model require O(n) flops per function evaluation. Using our new approximation, an algorithm is developed which requires only O(1) flops in repeated likelihood evaluations. In most cases, the new algorithm gives results identical to or very close to the exact maximum likelihood estimate (MLE). This algorithm is easily implemented in high level quantitative programming environments (QPEs) such as Mathematica, MatLab and R. In order to obtain reasonable speed, previous ARMA maximum likelihood algorithms are usually implemented in C or some other machine efficient language. With our algorithm it is easy to do maximum likelihood estimation for long time series directly in the QPE of your choice. The new algorithm is extended to obtain the MLE for the mean parameter. Simulation experiments which illustrate the effectiveness of the new algorithm are discussed. Mathematica and R packages which implement the algorithm discussed in this paper are available [McLeod, A.I., Zhang, Y., 2007. Online supplements to “Faster ARMA Maximum Likelihood Estimation”, 〈http://www.stats.uwo.ca/faculty/aim/2007/faster/〉]. Based on these package implementations, it is expected that the interested researcher would be able to implement this algorithm in other QPEs.
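The paper's O(1)-per-evaluation scheme is more elaborate, but the core idea, that a Gaussian AR conditional likelihood depends on the data only through cross-product sums which can be precomputed once, making repeated evaluations cost O(p^2) instead of O(n), can be sketched as follows (function names and the AR(1) demo are illustrative):

```python
import numpy as np

def precompute(x, p):
    """Cross-product sums C[j, k] = sum_t x[t-j]*x[t-k] over t = p..n-1;
    one O(n p^2) pass over the data, done once."""
    n = len(x)
    lagged = np.column_stack([x[p - j:n - j] for j in range(p + 1)])
    return lagged.T @ lagged, n - p

def ar_loglik(phi, C, m, sigma2):
    """Gaussian conditional log-likelihood of an AR(p) model, computed
    from C alone -- the cost no longer depends on the series length."""
    a = np.concatenate([[1.0], -phi])        # residual coefficients (1, -phi_1, ...)
    S = a @ C @ a                            # sum of squared one-step residuals
    return -0.5 * m * np.log(2 * np.pi * sigma2) - S / (2 * sigma2)

# Verify against the direct O(n) computation on a simulated AR(1) series.
rng = np.random.default_rng(5)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.6 * x[t - 1] + rng.normal()
C, m = precompute(x, 1)
phi = np.array([0.6])
direct = -0.5 * m * np.log(2 * np.pi) - np.sum((x[1:] - 0.6 * x[:-1]) ** 2) / 2
print(np.isclose(ar_loglik(phi, C, m, 1.0), direct))  # True
```

An ARMA likelihood is then approximated by the likelihood of a long AR fit, which is what makes this trick applicable beyond pure AR models.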

18.
We propose iterative proportional scaling (IPS) via decomposable submodels for maximizing the likelihood function of a hierarchical model for contingency tables. In ordinary IPS the proportional scaling is performed by cycling through the members of the generating class of a hierarchical model. We propose the adjustment of more marginals at each step. This is accomplished by expressing the generating class as a union of decomposable submodels and cycling through the decomposable models. We prove the convergence of our proposed procedure, if the amount of scaling is adjusted properly at each step. We also analyze the proposed algorithms around the maximum likelihood estimate (MLE) in detail. The faster convergence of our proposed procedure is illustrated by numerical examples.
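Ordinary IPS, the starting point of the proposed decomposable-submodel variant, can be sketched for a 2×2×2 table with generating class {AB, BC}, which happens to be a decomposable model; the table values are illustrative:

```python
import numpy as np

def ips(table, generating_class, cycles=50):
    """Ordinary iterative proportional scaling: cycle through the
    generating class, rescaling the fit to match each observed marginal."""
    fit = np.full(table.shape, table.sum() / table.size)  # uniform start
    for _ in range(cycles):
        for axes in generating_class:
            rest = tuple(i for i in range(table.ndim) if i not in axes)
            ratio = table.sum(axis=rest) / fit.sum(axis=rest)
            fit = fit * np.expand_dims(ratio, rest)       # rescale to this marginal
    return fit

rng = np.random.default_rng(6)
table = rng.integers(1, 20, size=(2, 2, 2)).astype(float)
# Generating class {AB, BC}: marginals over axes (0, 1) and (1, 2).
fit = ips(table, [(0, 1), (1, 2)])
print(np.allclose(fit.sum(axis=2), table.sum(axis=2)),
      np.allclose(fit.sum(axis=0), table.sum(axis=0)))
```

At convergence the fitted table matches both observed marginals while satisfying the conditional independence the model imposes; the paper's contribution is to adjust several such marginals per step by grouping them into decomposable submodels.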

19.
This paper discusses learning algorithms for layered neural networks from the standpoint of maximum likelihood estimation. First, learning algorithms for the simplest network, with only one neuron, are discussed. It is shown that the Fisher information of the network, namely the negative expected value of the Hessian matrix, is given by a weighted covariance matrix of the input vectors. A learning algorithm is presented on the basis of Fisher's scoring method, which uses the Fisher information instead of the Hessian matrix in Newton's method. The algorithm can be interpreted as an iterated weighted least squares method. These results are then extended to the layered network with one hidden layer, for which the Fisher information is given by a weighted covariance matrix of the network inputs and the hidden-unit outputs. Since Newton's method for maximization runs into difficulty when the negative Hessian matrix is not positive definite, we propose a learning algorithm which uses the Fisher information matrix, which is non-negative definite, instead of the Hessian matrix. Moreover, to reduce the computation of the full Fisher information matrix, we propose another algorithm which uses only its block diagonal elements. That algorithm reduces to an iterative weighted least squares algorithm in which each unit estimates its own weights by a weighted least squares method. It is experimentally shown that the proposed algorithms converge in fewer iterations than the error back-propagation (BP) algorithm.
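For the single-neuron case with a logistic output and maximum likelihood fitting, Fisher's scoring method is the classical iteratively reweighted least squares recursion: the Fisher information is X^T W X, a weighted covariance of the inputs, exactly as the abstract states. A sketch with illustrative data:

```python
import numpy as np

def fisher_scoring_logistic(X, y, iters=15):
    """Fit a single logistic neuron by Fisher scoring: Newton's method
    with the Hessian replaced by the Fisher information X.T @ W @ X."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        W = p * (1 - p)                      # per-sample weights
        info = X.T @ (W[:, None] * X)        # Fisher information matrix
        grad = X.T @ (y - p)                 # score (log-likelihood gradient)
        w = w + np.linalg.solve(info, grad)  # scoring step
    return w

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(500), rng.normal(size=500)])  # bias + one input
true_w = np.array([-0.5, 2.0])
y = (rng.random(500) < 1 / (1 + np.exp(-X @ true_w))).astype(float)
w = fisher_scoring_logistic(X, y)
print(np.round(w, 1))
```

Because `W` is non-negative, `info` is positive semi-definite even where the true Hessian of a deeper network would not be, which is the motivation the abstract gives for preferring Fisher information in the hidden-layer case.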

20.
Gibbsian fields or Markov random fields are widely used in Bayesian image analysis, but learning Gibbs models is computationally expensive. The computational complexity is pronounced in the recent minimax entropy (FRAME) models, which use large neighborhoods and hundreds of parameters. In this paper, we present a common framework for learning Gibbs models. We identify two key factors that determine the accuracy and speed of learning Gibbs models: the efficiency of the likelihood functions and the variance in approximating partition functions by Monte Carlo integration. We propose three new algorithms. In particular, we are interested in a maximum satellite likelihood estimator, which makes use of a set of precomputed Gibbs models called "satellites" to approximate likelihood functions. This algorithm can approximately estimate the minimax entropy model for textures in seconds on an HP workstation. The performances of the various learning algorithms are compared in our experiments.
