Similar documents: 20 results found.
1.
A method is provided for classifying finite-duration signals with narrow instantaneous bandwidth and dynamic instantaneous frequency (IF). In this method, events are partitioned into nonoverlapping segments, and each segment is modeled as a linear chirp, forming a piecewise-linear IF model. The start frequency, chirp rate, signal energy, and noise energy are estimated in each segment. The resulting sequences of frequency and rate features for each event are classified by evaluating their likelihood under the probability density function (PDF) corresponding to each narrowband class hypothesis. The class-conditional PDFs are approximated using continuous-state hidden Gauss-Markov models (HGMMs), whose parameters are estimated from labeled training data. Previous HGMM algorithms are extended by dynamically weighting the output covariance matrix by the ratio of the estimated signal and noise energies from each segment. This covariance weighting discounts spurious features from segments with low signal-to-noise ratio (SNR), making the algorithm more robust in the presence of dynamic noise levels and fading signals. The classification algorithm is applied in a simulated three-class cross-validation experiment, in which it achieves greater than 97% correct classification at SNRs as low as -7 dB.
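The segment-wise feature extraction described above can be sketched as follows: synthesize a linear chirp, estimate the IF from phase increments, and fit a line (start frequency, chirp rate) per segment. The sampling rate, segment length, and phase-difference IF estimator are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f0, rate = 50.0, 80.0                         # true start frequency (Hz) and chirp rate (Hz/s)
z = np.exp(2j * np.pi * (f0 * t + 0.5 * rate * t ** 2))   # analytic linear chirp

# instantaneous frequency from the phase increment between consecutive samples
inst_f = np.angle(z[1:] * np.conj(z[:-1])) * fs / (2 * np.pi)

# piecewise-linear IF model: least-squares line fit per nonoverlapping segment
seg_len = 200
features = []
for s in range(0, len(inst_f) - seg_len + 1, seg_len):
    ts = t[s:s + seg_len]
    slope, intercept = np.polyfit(ts, inst_f[s:s + seg_len], 1)
    features.append((intercept + slope * ts[0], slope))   # (segment start freq, chirp rate)
```

On this noiseless chirp every segment recovers the 80 Hz/s rate, and the first segment's start frequency comes out near 50 Hz.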

2.
Gauss mixtures have gained popularity in statistics and statistical signal processing applications for a variety of reasons, including their ability to well approximate a large class of interesting densities and the availability of algorithms such as the Baum–Welch or expectation-maximization (EM) algorithm for constructing the models based on observed data. We here consider a quantization approach to Gauss mixture design based on the information theoretic view of Gaussian sources as a “worst case” for robust signal compression. Results in high-rate quantization theory suggest distortion measures suitable for Lloyd clustering of Gaussian components based on a training set of data. The approach provides a Gauss mixture model and an associated Gauss mixture vector quantizer which is locally robust. We describe the quantizer mismatch distortion and its relation to other distortion measures including the traditional squared error, the Kullback–Leibler (relative entropy) and minimum discrimination information, and the log-likelihood distortions. The resulting Lloyd clustering algorithm is demonstrated by applications to image vector quantization, texture classification, and North Atlantic pipeline image classification.
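Lloyd clustering of Gaussian components can be sketched in one dimension: a divergence between Gaussians serves as the distortion for the nearest-codeword step, and moment matching gives the centroid step. This is an illustrative simplification (scalar Gaussians, KL divergence in place of the paper's quantizer mismatch distortion):

```python
from math import log

def kl_gauss(m0, v0, m1, v1):
    """KL divergence KL(N(m0,v0) || N(m1,v1)) between 1-D Gaussians."""
    return 0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + log(v1 / v0))

def lloyd_gauss(components, k, iters=20):
    """Lloyd clustering of 1-D Gaussian components under KL distortion.

    components: list of (mean, var, weight) training Gaussians
    Returns k (mean, var) codebook Gaussians (moment-matched centroids).
    """
    code = [components[i][:2] for i in range(k)]   # initialize from first k components
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for m, v, w in components:                 # nearest-codeword step
            j = min(range(k), key=lambda j: kl_gauss(m, v, *code[j]))
            buckets[j].append((m, v, w))
        for j, b in enumerate(buckets):            # centroid step: moment matching
            if not b:
                continue
            W = sum(w for _, _, w in b)
            mean = sum(w * m for m, _, w in b) / W
            var = sum(w * (v + (m - mean) ** 2) for m, v, w in b) / W
            code[j] = (mean, var)
    return code
```

Given two well-separated groups of components, the codebook means settle on the group centroids.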

3.
A method of integrating the Gibbs distributions (GDs) into hidden Markov models (HMMs) is presented. The probabilities of the hidden state sequences of HMMs are modeled by GDs in place of the transition probabilities. The GDs offer a general way of modeling neighbor interactions of Markov random fields, of which the Markov chains in HMMs are special cases. An algorithm for estimating the model parameters is developed based on Baum reestimation, and an algorithm for computing the probability terms is developed using a lattice structure. The GD models were used for experiments in speech recognition on the TI speaker-independent, isolated digit database. The observation sequences of the speech signals were modeled by mixture Gaussian autoregressive densities. The energy functions of the GDs were developed using very few parameters and proved adequate in hidden layer modeling. The results of the experiments showed that the GD models performed at least as well as the HMM models.
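The lattice computation of observation-sequence probabilities mentioned above is, for a plain discrete HMM, the scaled forward recursion. A minimal sketch with toy parameters (not the Gibbs-distribution variant of the paper):

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the scaled forward (lattice) recursion."""
    alpha = pi * B[:, obs[0]]          # joint prob. of first symbol and each state
    c = alpha.sum()
    loglik = np.log(c)
    alpha /= c                         # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one lattice stage
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik
```

On a two-state model the result matches the brute-force sum over all state paths.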

4.
Modeling error sources in digital channels

5.
The authors demonstrate the effectiveness of phonemic hidden Markov models with Gaussian mixture output densities (mixture HMMs) for speaker-dependent large-vocabulary word recognition. Speech recognition experiments show that for almost any reasonable amount of training data, recognizers using mixture HMMs consistently outperform those employing unimodal Gaussian HMMs. With a sufficiently large training set (e.g. more than 2500 words), use of HMMs with 25-component mixture distributions typically reduces recognition errors by about 40%. It is also found that the mixture HMMs outperform a set of unimodal generalized triphone models having the same number of parameters. Previous attempts to employ mixture HMMs for speech recognition proved discouraging because of the high complexity and computational cost in implementing the Baum-Welch training algorithm. It is shown how mixture HMMs can be implemented very simply in unimodal transition-based frameworks by allowing multiple transitions from one state to another.

6.
Ariki, Y.; Jack, M.A. Electronics Letters, 1989, 25(13): 824-825
The use of enhanced time duration constraints for subword (phoneme) recognition in continuous speech is reported. Here the time duration constraints are modelled by a Gaussian probability distribution in the conventional Baum-Welch learning algorithm and are statistically enhanced to obtain the most probable path in the Viterbi decoding process. Experimental results to validate this approach are included.

7.
An iterative approach for minimum-discrimination-information (MDI) hidden Markov modeling of information sources is proposed. The approach is developed for sources characterized by a given set of partial covariance matrices and for hidden Markov models (HMMs) with Gaussian autoregressive output probability distributions (PDs). The approach aims at estimating the HMM which yields the MDI with respect to all sources that could have produced the given set of partial covariance matrices. Each iteration of the MDI algorithm generates a new HMM as follows. First, a PD for the source is estimated by minimizing the discrimination information measure with respect to the old model over all PDs which satisfy the given set of partial covariance matrices. Then a new model that decreases the discrimination information measure between the estimated PD of the source and the PD of the old model is developed. The problem of estimating the PD of the source is formulated as a standard constrained minimization problem in the Euclidean space. The estimation of a new model given the PD of the source is done by a procedure that generalizes the Baum algorithm. The MDI approach is shown to be a descent algorithm for the discrimination information measure, and its local convergence is proved.

8.
A tone recognition method based on the wavelet transform and hidden Markov models (HMMs) is presented. Exploiting the wavelet transform's ability to detect abrupt signal changes, and making full use of multiresolution analysis, pitch detection is realized accurately and reliably. HMMs with partitioned Gaussian mixture (PGM) probability density functions are used to recognize Mandarin tones, and a simplified recursion of the Viterbi algorithm with PGM functions is derived. With the matching computation greatly reduced, the recognition rate for the four tones reaches 97.22% in the speaker-dependent case and 94.47% in the speaker-independent case.

9.
The computation of the free Euclidean distance of TCM signal sequences is studied, and a new matrix algorithm is proposed; the number of state transitions required to compute this distance is also resolved theoretically. The matrix algorithm is derived from the Viterbi algorithm and is, in effect, a matrix implementation of it. Compared with existing algorithms, it has two advantages: (1) it gives an explicit solution, which reduces the computational complexity; (2) it adapts better to changes in the distance. As examples, free Euclidean distances are computed for several TCM signal sequences over Gaussian and fading channels.

10.
Wavelet-based statistical signal processing techniques such as denoising and detection typically model the wavelet coefficients as independent or jointly Gaussian. These models are unrealistic for many real-world signals. We develop a new framework for statistical signal processing based on wavelet-domain hidden Markov models (HMMs) that concisely models the statistical dependencies and non-Gaussian statistics encountered in real-world signals. Wavelet-domain HMMs are designed with the intrinsic properties of the wavelet transform in mind and provide powerful, yet tractable, probabilistic signal models. Efficient expectation maximization algorithms are developed for fitting the HMMs to observational signal data. The new framework is suitable for a wide range of applications, including signal estimation, detection, classification, prediction, and even synthesis. To demonstrate the utility of wavelet-domain HMMs, we develop novel algorithms for signal denoising, classification, and detection.

11.
Sequential or online hidden Markov model (HMM) signal processing schemes are derived, and their performance is illustrated by simulation. The online algorithms are sequential expectation maximization (EM) schemes and are derived by using stochastic approximations to maximize the Kullback-Leibler information measure. The schemes can be implemented either as filters or as fixed-lag or sawtooth-lag smoothers. They yield estimates of the HMM parameters including transition probabilities, Markov state levels, and noise variance. In contrast to the offline EM algorithm (Baum-Welch scheme), which uses the fixed-interval forward-backward scheme, the online schemes have significantly reduced memory requirements and improved convergence, and they can estimate HMM parameters that vary slowly with time or undergo infrequent jump changes. Similar techniques are used to derive online schemes for extracting finite-state Markov chains embedded in a mixture of white Gaussian noise (WGN) and deterministic signals of known functional form with unknown parameters.

12.
In this paper we propose algorithms for parameter estimation of fast-sampled homogeneous Markov chains observed in white Gaussian noise. Our algorithms are obtained by the robust discretization of stochastic differential equations involved in the estimation of continuous-time hidden Markov models (HMM's) via the EM algorithm. We present two algorithms: the first is based on the robust discretization of continuous-time filters that were recently obtained by Elliott to estimate quantities used in the EM algorithm; the second is based on the discretization of continuous-time smoothers, yielding essentially the well-known Baum-Welch re-estimation equations. The smoothing formulas for continuous-time HMM's are new, and their derivation involves two-sided stochastic integrals. The choice of discretization results in equations which are identical to those obtained by deriving the results directly in discrete time. The filter-based EM algorithm has negligible memory requirements, indeed independent of the number of observations. In comparison, the smoother-based discrete-time EM algorithm requires the use of the forward-backward algorithm, which is a fixed-interval smoothing algorithm and has memory requirements proportional to the number of observations. On the other hand, the computational complexity of the filter-based EM algorithm is greater than that of the smoother-based scheme. However, the filters may be suitable for parallel implementation. Using computer simulations we compare the smoother-based and filter-based EM algorithms for HMM estimation. We also provide estimates of the discretization error.

13.
We present a discriminative training algorithm that uses support vector machines (SVMs) to improve the classification of discrete and continuous output probability hidden Markov models (HMMs). The algorithm uses a set of maximum-likelihood (ML) trained HMM models as a baseline system, and an SVM training scheme to rescore the results of the baseline HMMs. It turns out that the rescoring model can be represented as an unnormalized HMM. We describe two algorithms for training the unnormalized HMM models for both the discrete and continuous cases. One of the algorithms results in a single set of unnormalized HMMs that can be used in the standard recognition procedure (the Viterbi recognizer), as if they were plain HMMs. We use a toy problem and an isolated noisy digit recognition task to compare our new method to standard ML training. Our experiments show that SVM rescoring of hidden Markov models typically reduces the error rate significantly compared to standard ML training.

14.
Signal processing based on hidden Markov models (HMM's) has been applied recently to the characterization of single ion channel currents as recorded with the patch clamp technique from living cells. The estimation of HMM parameters using the traditional forward-backward and Baum-Welch algorithms can be performed at signal-to-noise ratios (SNR's) that are too low for conventional analysis; however, the application of these algorithms relies on the assumption that the background noise is white. In this paper, the observed single channel current is modeled as a vector hidden Markov process. An extension of the forward-backward and Baum-Welch algorithms is described to model ion channel kinetics under conditions of colored noise like that seen in patch clamp recordings. Using simulated data, we demonstrate that the traditional algorithms result in biased estimates and that the vector HMM approach provides unbiased estimates of the parameters of the underlying hidden Markov scheme.

15.
Implementing the Viterbi algorithm
The Viterbi algorithm, an application of dynamic programming, is widely used for estimation and detection problems in digital communications and signal processing. It is used to detect signals in communication channels with memory, and to decode sequential error-control codes that are used to enhance the performance of digital communication systems. The Viterbi algorithm is also used in speech and character recognition tasks where the speech signals or characters are modeled by hidden Markov models. The article explains the basics of the Viterbi algorithm as applied to digital communication systems and to speech and character recognition. It also covers the operations and the practical memory requirements involved in implementing the Viterbi algorithm in real time.
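The dynamic-programming recursion described above can be sketched as a log-domain Viterbi decoder for a discrete HMM; the toy model below is illustrative and not tied to any particular channel or recognizer:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden-state path for a discrete HMM.

    log_pi: (S,) log initial-state probabilities
    log_A:  (S, S) log transition probabilities (row = from-state)
    log_B:  (S, V) log emission probabilities
    obs:    sequence of observation symbol indices
    """
    S, T = len(log_pi), len(obs)
    delta = np.empty((T, S))            # best path log-prob ending in each state
    psi = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A        # (from, to) path scores
        psi[t] = np.argmax(scores, axis=0)            # best predecessor per state
        delta[t] = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]                # backtrack from best end state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

With sticky transitions and well-matched emissions, the decoded path follows the observed symbols, switching state only once.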

16.
To address the problem that, in network function virtualization (NFV) environments, existing service function chain deployment methods cannot optimize the mapping cost while guaranteeing the service path delay, a service function chain deployment optimization method based on an IQGA-Viterbi learning algorithm is proposed. In training the hidden Markov model parameters, to overcome the tendency of the traditional Baum-Welch algorithm to fall into local optima, an improved quantum genetic algorithm is used to optimize the model parameters: in each iteration, the fittest population is replicated in equal proportion, which preserves the diversity of feasible solutions, widens the search space, and further improves the accuracy of the model parameters. In solving the hidden Markov chain, since the hidden sequence cannot be observed directly, the Viterbi algorithm's ability to solve for the hidden sequence exactly is exploited to select the optimal service path in the directed-graph network. Simulation results show that, compared with other deployment algorithms, the proposed IQGA-Viterbi learning algorithm effectively reduces the network delay and mapping cost while improving the request acceptance rate of network services.

17.
Adaptive algorithms for designing two-category linear pattern classifiers have been developed and studied in recent years. When the pattern sets are nonseparable, the adaptive algorithms do not directly minimize the number of classification errors, which is the usual goal in pattern classifier design; furthermore, they also are not minimum-error optimal, i.e., they do not generally minimize the probability of error for the classifier. However, the least-mean-square (LMS) adaptive algorithm has been shown to yield classifiers that are asymptotically minimum-error optimal for patterns from Gaussian equal-covariance distributions. A technique is also known for designing asymptotically minimum-error optimal linear classifiers for patterns from Gaussian distributions with unequal covariance matrices. This paper shows that classifiers designed with the "error-correction" algorithms have these same asymptotic properties: the error-correction algorithms are asymptotically minimum-error optimal for patterns drawn from Gaussian equal-covariance distributions and they can be used to design asymptotically minimum-error optimal linear classifiers for patterns from Gaussian distributions with unequal covariance matrices. In addition, because the error-correction algorithms use only part of the patterns in determining the classifier weights, they are asymptotically minimum-error optimal for patterns from distributions that have only Gaussian tails in the regions where their patterns are misclassified or close to misclassified, and that are almost arbitrary elsewhere.
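The equal-covariance Gaussian case discussed above can be sketched numerically: an LMS-trained linear classifier on two Gaussian classes with equal covariance approaches the minimum-error linear discriminant. Class means, step size, and sample counts here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# two equal-covariance Gaussian classes, labeled -1 and +1
x0 = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))
x1 = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))
X = np.vstack([x0, x1])
y = np.hstack([-np.ones(n), np.ones(n)])
perm = rng.permutation(2 * n)
X, y = X[perm], y[perm]

w = np.zeros(3)                          # weight vector including bias term
mu = 0.01                                # LMS step size
for xi, yi in zip(X, y):
    xa = np.append(xi, 1.0)              # augment pattern with bias input
    e = yi - w @ xa                      # error against the class label
    w += mu * e * xa                     # LMS weight update

pred = np.sign(np.hstack([X, np.ones((2 * n, 1))]) @ w)
acc = np.mean(pred == y)
```

With this class separation the Bayes-optimal accuracy is about 92%, and the LMS classifier lands close to it after a single pass.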

18.
Hidden Markov processes
An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie (1966) on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed.

19.
石雪, 李玉, 赵泉华. 《电子学报》, 2020, 48(1): 131-136
To achieve high-accuracy remote sensing image segmentation with automatic determination of the number of classes, a hierarchical Gaussian mixture model (HGMM) segmentation algorithm with an adaptive class number is proposed. The HGMM is defined as a weighted sum of Gaussian mixture models, which can model image statistics with complex characteristics such as asymmetry, heavy tails, and multimodality. The model parameters are estimated with the expectation-maximization (EM) algorithm. To determine the number of classes automatically, the optimal class number is obtained with the Bayesian information criterion (BIC), whose penalty term is defined by a weighted pixel count. To verify the feasibility and effectiveness of the proposed algorithm, segmentation experiments are performed on simulated and panchromatic remote sensing images, and the results are analyzed qualitatively and quantitatively. The results show that the HGMM can accurately model complex statistical distributions, and that the proposed algorithm achieves high accuracy and efficiency while automatically determining the optimal number of classes.

20.
Blind parameter estimation in Gaussian noise
王惠刚, 李志舜. 《电子学报》, 2003, 31(7): 974-976
Blind signal processing methods usually ignore the effect of noise, yet in practice noise is present. This paper addresses blind estimation of the mixing coefficients in additive Gaussian noise with an unknown covariance matrix. Based on maximum-likelihood estimation, an optimization algorithm for solving the parameters is proposed, with expressions given for the mixing matrix and the covariance matrix. A Gaussian mixture model (GMM) is used to approximate the probability density functions of the source signals, which simplifies the integrals in the algorithm, and an EM-based iteration is derived. Simulations show that the algorithm not only converges stably but also performs well at low signal-to-noise ratios.
