Similar Documents
20 similar documents found (search time: 140 ms).
1.
The Fractional Step Difference Method for Solving the SH Wave Equation with Discontinuous Coefficients. He Bairong (Nankai University); Feng Deyi, Nie Yong'an (Tianjin Seismological Bureau).

2.
This paper discusses a speech recognition system implemented with vector quantization and hidden Markov models (VQ/HMM), focusing on how the HMM re-estimation formulas for multiple training sequences give the system a self-learning capability. Measurements show that the system essentially reaches its expected performance.
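The abstract mentions the HMM re-estimation formulas for multiple training sequences. As a rough illustration only (not the paper's implementation), the numpy sketch below pools per-sequence expected counts, assumed to be already produced by a forward-backward pass, to re-estimate the transition matrix A and the discrete emission matrix B of a VQ-based discrete HMM; all function and variable names are illustrative.

```python
import numpy as np

def reestimate_from_sequences(gammas, xis, obs_seqs, n_states, n_symbols):
    """Pool per-sequence expected counts to re-estimate a discrete HMM.

    gammas[r][t, i]  : P(state=i at time t | r-th sequence), shape (T_r, N)
    xis[r][t, i, j]  : P(state i->j between t and t+1 | seq r), shape (T_r-1, N, N)
    obs_seqs[r][t]   : VQ codebook index observed at time t in sequence r
    """
    A_num = np.zeros((n_states, n_states))
    A_den = np.zeros(n_states)
    B_num = np.zeros((n_states, n_symbols))
    B_den = np.zeros(n_states)

    for gamma, xi, obs in zip(gammas, xis, obs_seqs):
        A_num += xi.sum(axis=0)            # expected transitions i -> j
        A_den += gamma[:-1].sum(axis=0)    # expected occupancies of state i (excl. last frame)
        for t, o in enumerate(obs):
            B_num[:, o] += gamma[t]        # expected emissions of symbol o from each state
        B_den += gamma.sum(axis=0)

    A = A_num / A_den[:, None]
    B = B_num / B_den[:, None]
    return A, B
```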

3.
A Parallel Algorithm for Solving Large Sparse Linear Systems of Equations via Cholesky Decomposition. Wang Siqun, Wei Ziluan (Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences).

4.
This paper describes a neural network and expert system model for recognizing unconstrained handwritten digits subject to interference. The basic recognizer is a neural network that handles most cases but fails under certain kinds of interference; the expert system acts as a second recognizer that analyzes the interference identified by the neural network. The neural classifier combines a modified self-organizing map (MSOM) with learning vector quantization (LVQ). Experiments were carried out on self-organizing maps using the MSOM, SOM & LVQ, and MSOM & LVQ techniques. The experiments, on samples from an unconstrained handwriting database, show that with these two-layer...
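As a generic illustration of the LVQ component mentioned above (not the paper's MSOM & LVQ combination), the sketch below implements basic LVQ1: labelled prototype vectors are pulled toward correctly classified samples and pushed away from misclassified ones. All names are illustrative.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """Basic LVQ1: move the winning prototype toward (same class) or
    away from (different class) each training sample."""
    W = prototypes.copy()
    for epoch in range(epochs):
        for x, label in zip(X, y):
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            sign = 1.0 if proto_labels[winner] == label else -1.0
            W[winner] += sign * lr * (x - W[winner])
        lr *= 0.95  # simple learning-rate decay
    return W

def lvq1_predict(X, W, proto_labels):
    proto_labels = np.asarray(proto_labels)
    d = np.linalg.norm(W[None, :, :] - X[:, None, :], axis=2)  # (n_samples, n_prototypes)
    return proto_labels[d.argmin(axis=1)]
```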

5.
Fast and Parallel Algorithms for Solving Band (Block) Toeplitz Systems of Equations (cited 3 times: 0 self-citations, 3 by others). Cheng Lizhi, Jiang Zengrong (National University of Defense Technology).
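The paper's parallel algorithm is not reproduced here. As a hedged point of reference only, the short SciPy sketch below solves a small banded Toeplitz system with the library's serial Levinson-type solver, simply to illustrate the kind of system being targeted; all values are illustrative.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

n = 8
# Banded Toeplitz system: first column c and first row r (sharing c[0] == r[0]).
c = np.array([4.0, 1.0, 0.5] + [0.0] * (n - 3))   # first column
r = np.array([4.0, 1.0, 0.5] + [0.0] * (n - 3))   # first row
b = np.ones(n)

x = solve_toeplitz((c, r), b)                     # Levinson-type recursion, O(n^2)
assert np.allclose(toeplitz(c, r) @ x, b)         # check against the dense system
```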

6.
A Parallel Continuous Minimization Algorithm for Solving Nonlinear Least-Squares Problems and Numerical Experiments (cited 2 times: 0 self-citations, 2 by others). Li Qingyang, Zhu Peng (Tsinghua University).

7.
A Software Decoding Scheme for POCSAG Codes (cited 1 time: 0 self-citations, 1 by others)
By combining the input-capture (TCAP) and output-compare (TCMP) techniques, this work solves the data-rate detection and the bit and codeword synchronization of POCSAG codes. It also gives software implementations of fast modulo-2 division by the generator polynomial g(x) and of the BCH(31,21) check-bit computation, providing readers with a software decoding method for POCSAG codes, a value-added service of radio paging systems.
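A small sketch of the modulo-2 long division mentioned above, used to compute the ten BCH(31,21) check bits in software. The generator polynomial value shown is the one commonly quoted for POCSAG and is stated here as an assumption, not taken from the paper; the function name is illustrative.

```python
def bch_31_21_encode(data21, gen=0b11101101001):
    """Append 10 BCH(31,21) check bits to a 21-bit data word.

    data21 : int holding the 21 information bits (MSB first)
    gen    : generator polynomial; 0b11101101001 corresponds to
             x^10+x^9+x^8+x^6+x^5+x^3+1, the value commonly quoted
             for POCSAG's BCH(31,21) code (assumption, not from the paper).
    """
    reg = data21 << 10                  # make room for the 10 check bits
    for i in range(20, -1, -1):         # modulo-2 (XOR) long division
        if reg & (1 << (i + 10)):
            reg ^= gen << i
    check = reg & 0x3FF                 # remainder = check bits
    return (data21 << 10) | check

codeword = bch_31_21_encode(0b100000000000000000001)
print(f"{codeword:031b}")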

8.
CQ: A Network Pager
Anyone who has been online for a while has probably made a few online friends. When one day you suddenly feel like talking with a friend (IPHONE), chatting (CHAT), leaving a message (MESSAGE), sharing a good address (URL), or getting four or five online friends together for a small online PARTY...

9.
The MC68HC708MP16 Microcontroller and Its Application in Variable-Frequency Control of Home Appliances. Zhang Hua, Xue Liping (College of Information Engineering, Shenzhen University, 518060); Li Yuanhui (Qingcheng Enterprise Co., Ltd., 518014). The MC68HC708MP16, recently promoted by MOTOROLA, is a microcontroller very well suited to variable-frequency motor control. Its most notable feature...

10.
A Zero-Variance Monte Carlo Iteration Scheme of the Collision Estimator for Solving Linear Algebraic Equations. Feng Tinggui (Beijing Institute of Applied Physics and Computational Mathematics).
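The zero-variance collision-estimator scheme of the paper is not reconstructed here. As a loose, hedged illustration of the general idea of Monte Carlo iteration for linear algebraic systems, the sketch below implements a plain random-walk (Neumann-series) estimator for x = Hx + f, valid when the spectral radius of H is below 1; every name and parameter is illustrative.

```python
import numpy as np

def mc_solve_component(H, f, i0, n_walks=20000, rng=None):
    """Estimate component i0 of the solution of x = H x + f with random walks.

    Plain collision-type estimator of the Neumann series x = sum_k H^k f;
    transition probabilities are proportional to |H[i, j]| with a fixed
    termination probability, and importance weights correct for the sampling.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(f)
    p_stop = 0.2
    est = 0.0
    for _ in range(n_walks):
        i, w, total = i0, 1.0, f[i0]
        while rng.random() > p_stop:
            row = np.abs(H[i])
            s = row.sum()
            if s == 0.0:
                break
            p = row / s
            j = rng.choice(n, p=p)
            # weight = H[i, j] / (transition prob * survival prob)
            w *= H[i, j] / (p[j] * (1.0 - p_stop))
            total += w * f[j]
            i = j
        est += total
    return est / n_walks

# toy check against a direct solve of (I - H) x = f
H = np.array([[0.1, 0.2], [0.3, 0.1]])
f = np.array([1.0, 2.0])
x_direct = np.linalg.solve(np.eye(2) - H, f)
print(mc_solve_component(H, f, 0), x_direct[0])
```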

11.
12.
The objective of this paper is to present experiments and discussions of how some neural network algorithms can help to improve phoneme recognition using mixture density hidden Markov models (MDHMMs). In MDHMMs, the modelling of the stochastic observation processes associated with the states is based on the estimation of the probability density function of the short-time observations in each state as a mixture of Gaussian densities. The Learning Vector Quantization (LVQ) is used to increase the discrimination between different phoneme models both during the initialization of the Gaussian codebooks and during the actual MDHMM training. The Self-Organizing Map (SOM) is applied to provide a suitably smoothed mapping of the training vectors to accelerate the convergence of the actual training. The codebook topology which is obtained can also be exploited in the recognition phase to speed up the calculations to approximate the observation probabilities. The experiments with LVQ and SOMs show reductions both in the average phoneme recognition error rate and in the computational load compared to the maximum likelihood training and the Generalized Probabilistic Descent (GPD). The lowest final error rate, however, is obtained by using several training algorithms successively. Additional reductions from the online system of about 40% in the error rate are obtained by using the same training methods, but with advanced and higher dimensional feature vectors.
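As a generic illustration of the SOM smoothing step mentioned above (not the authors' exact configuration), the sketch below trains a small two-dimensional SOM: each input pulls the best-matching unit and its grid neighbours toward itself, which is what yields the smoothed codebook mapping. All parameters and names are illustrative.

```python
import numpy as np

def train_som(X, grid_h=8, grid_w=8, epochs=10, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D SOM: each step moves the best-matching unit and its
    grid neighbours toward the input vector, giving a smooth codebook."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    W = rng.normal(size=(grid_h, grid_w, dim))
    # grid coordinates of every unit, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(X)
    step = 0
    for epoch in range(epochs):
        for x in rng.permutation(X):
            lr = lr0 * (1.0 - step / n_steps)
            sigma = sigma0 * (1.0 - step / n_steps) + 1e-3
            d = np.linalg.norm(W - x, axis=2)                  # distances to all units
            bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            grid_dist2 = ((coords - np.array([bi, bj])) ** 2).sum(axis=2)
            h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))       # Gaussian neighbourhood
            W += lr * h[..., None] * (x - W)
            step += 1
    return W
```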

13.
An unsupervised competitive neural network for efficient clustering of Gaussian probability density function (GPDF) data of continuous density hidden Markov models (CDHMMs) is proposed in this paper. The proposed unsupervised competitive neural network, called the divergence-based centroid neural network (DCNN), employs the divergence measure as its distance measure and utilizes the statistical characteristics of observation densities in the HMM for speech recognition problems. While the conventional clustering algorithms used for the vector quantization (VQ) codebook design utilize only the mean values of the observation densities in the HMM, the proposed DCNN utilizes both the mean and the covariance values. When compared with other conventional unsupervised neural networks, the DCNN successfully allocates more code vectors to the regions where GPDF data are densely distributed while it allocates fewer code vectors to the regions where GPDF data are sparsely distributed. When applied to Korean monophone recognition problems as a tool to reduce the size of the codebook, the DCNN reduced the number of GPDFs used for code vectors by 65.3% while preserving recognition accuracy. Experimental results with a divergence-based k-means algorithm and a divergence-based self-organizing map algorithm are also presented in this paper for a performance comparison.
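A hedged sketch of the kind of divergence distance described above, for diagonal-covariance Gaussians: the symmetric (two-way) Kullback-Leibler divergence uses both means and variances, unlike a plain Euclidean distance between mean vectors. The DCNN's own centroid-update rule is not reproduced; only the distance and a nearest-centroid assignment are shown, with illustrative names.

```python
import numpy as np

def sym_divergence(mu1, var1, mu2, var2):
    """Symmetric KL divergence between two diagonal-covariance Gaussians.

    D = KL(p||q) + KL(q||p); the log-determinant terms cancel, leaving
    only variance ratios and a variance-weighted mean difference.
    """
    dmu2 = (mu1 - mu2) ** 2
    return 0.5 * np.sum(var1 / var2 + var2 / var1 - 2.0
                        + dmu2 * (1.0 / var1 + 1.0 / var2))

def assign_to_centroids(gpdfs, centroids):
    """Nearest-centroid assignment of GPDFs, each given as a (mean, var) pair."""
    labels = []
    for mu, var in gpdfs:
        d = [sym_divergence(mu, var, cm, cv) for cm, cv in centroids]
        labels.append(int(np.argmin(d)))
    return labels
```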

14.
Unsupervised learning vector quantization (LVQ) is a class of clustering methods based on minimizing a risk function. By studying the risk function of unsupervised LVQ, this paper proposes a generalized form of the unsupervised LVQ algorithm and, on that basis, expresses today's typical LVQ algorithms as LVQ algorithms built on different scale functions, which greatly facilitates the extension and application of LVQ neural networks. By modifying the unsupervised LVQ neural network, a supervised LVQ neural network based on the unsupervised clustering algorithm is obtained and applied to speaker identification, with satisfactory results; the strengths and weaknesses of several typical clustering algorithms are also compared.

15.
This paper studies a simplified neural network structure for a small-vocabulary hybrid speech recognition system combining hidden Markov models (HMM) with multilayer perceptrons (MLP). Exploiting the regular two-dimensional array formed by the HMM states in such a system, the state observation probabilities are factorized. Based on this use of the HMM's two-dimensional structure, the state observation probabilities are estimated with a simplified network composed of several simple MLPs. Both theoretical analysis and speech recognition experiments show that this simplified structure outperforms the simplified neural network structure proposed by Franco et al.

16.
An Improved LBG Algorithm Based on a Variance-Normalized Distortion Measure (cited 3 times: 1 self-citation, 2 by others)
Vector quantization (VQ) is widely used in speaker recognition systems. VQ codebooks are usually generated with the LBG algorithm, using a Euclidean distance that weights all components of a feature vector equally. In speaker recognition, however, the individual components of the feature vectors have different distributions, and the degree of difference varies from speaker to speaker. Since components with different distributions differ in how effective they are for speaker recognition, this paper proposes a distortion measure that reflects these differences in effectiveness: the variance-normalized distortion measure. Based on this measure, combined with a temporally correlated initial-codebook design method and an effective empty-cell handling technique, an improved LBG algorithm is proposed; improved VQ speaker models are trained with it and speaker recognition experiments are carried out.
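A minimal sketch of the variance-normalized distortion described above (each squared component difference divided by that component's variance), plugged into a generic LBG/k-means style codebook update. The paper's temporally correlated initialization and empty-cell handling are not reproduced, and all names are illustrative.

```python
import numpy as np

def train_codebook(X, n_code=16, iters=20, seed=0):
    """LBG/k-means style codebook training with a variance-normalized
    (weighted Euclidean) distortion d(x, y) = sum_i (x_i - y_i)^2 / var_i."""
    rng = np.random.default_rng(seed)
    inv_var = 1.0 / (X.var(axis=0) + 1e-8)          # per-component weights
    code = X[rng.choice(len(X), n_code, replace=False)].copy()

    for _ in range(iters):
        # distortion of every feature vector to every codeword
        d = ((X[:, None, :] - code[None, :, :]) ** 2 * inv_var).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_code):
            members = X[labels == k]
            if len(members):                        # (empty cells handled separately in the paper)
                code[k] = members.mean(axis=0)
    return code, inv_var
```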

17.
A New Hybrid Speech Recognition Method Based on HMM and RBF (cited 5 times: 0 self-citations, 5 by others)
A new speech recognition method combining hidden Markov models (HMM) and radial basis function (RBF) neural networks is proposed. The HMM first generates the optimal state sequence of the speech; function-approximation techniques are then used to time-normalize this optimal state sequence; finally, an RBF neural network performs the classification. Theoretical analysis and experimental results show that the system recognizes better than an HMM alone, with especially notable gains for easily confused words.
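As a rough illustration of the final classification stage only (assuming the HMM alignment and time normalization have already produced fixed-length vectors), the sketch below implements a generic RBF classifier: a Gaussian hidden layer followed by a linear output layer fitted by least squares. Names and parameters are illustrative, not the paper's.

```python
import numpy as np

class RBFClassifier:
    """Gaussian RBF hidden layer + linear output layer fitted by least squares."""

    def __init__(self, centers, sigma):
        self.centers = centers          # (n_hidden, dim), e.g. chosen by k-means
        self.sigma = sigma
        self.W = None

    def _hidden(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y, n_classes):
        H = self._hidden(X)
        T = np.eye(n_classes)[y]        # one-hot targets
        self.W, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.W).argmax(axis=1)
```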

18.
To lower the phoneme recognition error rate of the acoustic features in a speech recognition system and improve system performance, a feature extraction method combining a subspace Gaussian mixture model (SGMM) with a deep neural network (DNN) is proposed. The parameter size of the SGMM is analysed and, after its computational complexity is reduced, the SGMM is cascaded with the DNN to further improve phoneme recognition. Nonlinearly transformed speech data are fed into the model, the best configuration of the DNN structure is found, and a more reliably learned and trained network model is built for feature extraction; system performance is judged by comparing phoneme recognition error rates. Simulation results show that the features extracted with this system clearly outperform those of traditional acoustic models.

19.
Speech Recognition and Phoneme Segmentation Based on Dynamic Bayesian Networks (cited 1 time: 1 self-citation, 0 by others)
A speech recognition modelling method based on dynamic Bayesian networks (DBN) is studied. Phoneme-level audio-stream DBN training and recognition models are built with the GMTK (graphical model toolkit), the results are compared with those of traditional HMM-based recognition, and word and phoneme segmentation results are given. Experiments show that, under various signal-to-noise-ratio test conditions, DBN-based recognition performs comparably to HMM-based recognition and shows a degree of noise robustness, and the phoneme segmentation results are fairly accurate.

20.
A modified counter-propagation (CP) algorithm with supervised learning vector quantizer (LVQ) and dynamic node allocation has been developed for rapid classification of molecular sequences. The molecular sequences were encoded into neural input vectors using an n–gram hashing method for word extraction and a singular value decomposition (SVD) method for vector compression. The neural networks used were three-layered, forward-only CP networks that performed nearest neighbor classification. Several factors affecting the CP performance were evaluated, including weight initialization, Kohonen layer dimensioning, winner selection and weight update mechanisms. The performance of the modified CP network was compared with the back-propagation (BP) neural network and the k–nearest neighbor method. The major advantages of the CP network are its training and classification speed and its capability to extract statistical properties of the input data. The combined BP and CP networks can classify nucleic acid or protein sequences with a close to 100% accuracy at a rate of about one order of magnitude faster than other currently available methods.
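A hedged sketch of the encoding front end described above: n-gram counts hashed into a fixed-length vector and then compressed with an SVD basis fitted on the training matrix. The hash size, n-gram length, and example sequences are illustrative choices, not taken from the paper.

```python
import numpy as np

def ngram_hash_vector(seq, n=3, n_bins=1024):
    """Count the n-grams of a sequence into a fixed-length hashed vector."""
    v = np.zeros(n_bins)
    for i in range(len(seq) - n + 1):
        v[hash(seq[i:i + n]) % n_bins] += 1.0
    return v

def svd_compress(train_vectors, k=50):
    """Fit a rank-k SVD basis on the training matrix and return a projector."""
    M = np.vstack(train_vectors)                    # (n_sequences, n_bins)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    basis = Vt[:k]                                  # top-k right singular vectors
    return lambda v: basis @ v                      # compress a hashed vector to k dims

# usage: encode protein/nucleotide strings into compact neural input vectors
seqs = ["MKTAYIAKQR", "MKVLWAALLV", "GATTACAGGT"]
vecs = [ngram_hash_vector(s) for s in seqs]
project = svd_compress(vecs, k=2)
inputs = [project(v) for v in vecs]
```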
