Similar Documents
20 similar documents retrieved (search time: 18 ms).
1.
LVCSR systems are usually based on continuous-density HMMs, typically implemented with Gaussian mixture distributions. Such systems tend to run slower than real time, largely because of the heavy computational overhead of likelihood evaluation. The objective of our research is to investigate approximate methods that can substantially reduce the cost of likelihood evaluation without noticeably degrading recognition accuracy. In this paper, the most common techniques for speeding up likelihood computation are classified into three categories, namely machine optimization, model optimization, and algorithm optimization. Each category is surveyed and summarized by describing and analyzing the basic ideas of its techniques. The distribution of the numerical values of the Gaussians within a GMM is evaluated and analyzed to show that the computation of some Gaussians is unnecessary and can be eliminated. Two commonly used techniques for likelihood approximation, VQ-based Gaussian selection and partial distance elimination, are analyzed in detail. Based on these analyses, a fast likelihood computation approach called dynamic Gaussian selection (DGS) is proposed. DGS is a one-pass search technique that generates a dynamic shortlist of Gaussians for each state during likelihood computation. In principle, it extends both partial distance elimination and best mixture prediction, and it requires no additional memory for storing Gaussian shortlists. The DGS algorithm was implemented by modifying the likelihood computation procedure of the HTK 3.4 system. Experimental results on the TIMIT and WSJ0 corpora indicate that the approach speeds up likelihood computation significantly without apparent additional recognition error.
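The partial distance elimination technique that DGS extends can be sketched compactly: while accumulating a diagonal-covariance Gaussian's Mahalanobis distance dimension by dimension, the component is abandoned as soon as its partial distance already guarantees it cannot beat the best score found so far. The Python sketch below is illustrative only; the names and data layout are assumptions, not the paper's HTK 3.4 implementation.

```python
import numpy as np

def best_gaussian_pde(x, means, inv_vars, log_consts):
    """Partial distance elimination (PDE): find the best-scoring diagonal
    Gaussian for frame x. score = log_const - 0.5 * Mahalanobis distance,
    so a component can be abandoned once its partial distance exceeds
    2 * (log_const - best_score). Illustrative sketch, not HTK code."""
    best_score, best_m = -np.inf, -1
    for m in range(len(means)):
        dist = 0.0
        bound = 2.0 * (log_consts[m] - best_score)  # abandon beyond this
        for d in range(len(x)):
            diff = x[d] - means[m, d]
            dist += inv_vars[m, d] * diff * diff
            if dist > bound:        # partial distance already too large
                dist = np.inf
                break
        score = log_consts[m] - 0.5 * dist
        if score > best_score:
            best_score, best_m = score, m
    return best_m, best_score
```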

2.
Fast orthogonal forward selection algorithm for feature subset selection
Feature selection is an important issue in pattern classification. In this study, we develop a fast orthogonal forward selection (FOFS) algorithm for feature subset selection. The FOFS algorithm employs an orthogonal transform to decompose correlations among candidate features, but performs the orthogonal decomposition implicitly. Consequently, the fast algorithm demands less computational effort than conventional orthogonal forward selection (OFS).
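For orientation, the conventional OFS that FOFS accelerates can be written as explicit Gram-Schmidt deflation; the explicit orthogonalisation below is exactly the work the fast algorithm avoids doing explicitly. A minimal sketch, with the squared-correlation criterion and all names being assumptions rather than the paper's code.

```python
import numpy as np

def ofs(X, y, n_select):
    """Orthogonal forward selection: at each step, pick the candidate
    column that explains the most residual variance of y, then deflate
    both the target and the remaining columns (Gram-Schmidt)."""
    X = X.astype(float)
    r = y.astype(float).copy()
    selected = []
    for _ in range(n_select):
        norms = np.einsum('nd,nd->d', X, X)     # squared column norms
        norms[norms == 0] = np.inf              # skip exhausted columns
        scores = (X.T @ r) ** 2 / norms         # error-reduction ratio
        scores[selected] = -np.inf
        j = int(np.argmax(scores))
        selected.append(j)
        q = X[:, j] / np.sqrt(norms[j])         # unit vector of chosen column
        r = r - q * (q @ r)                     # deflate the target
        X = X - np.outer(q, q @ X)              # deflate the candidates
    return selected
```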

3.

In dynamic ensemble selection (DES) techniques, only the classifiers most competent for a specific test sample are selected to predict its class label. The key issue in DES is estimating the competence of the base classifiers for each test sample. Competence is usually estimated according to a given criterion computed over the neighborhood of the test sample in the validation data, called the region of competence. A problem arises when the validation data are highly noisy, so that the samples in the region of competence do not represent the query sample. In such cases, the dynamic selection technique may select a base classifier that overfits the local region rather than the one with the best generalization performance. In this paper, we propose two modifications to improve the generalization performance of any DES technique. First, a prototype selection technique is applied to the validation data to reduce the overlap between classes, producing smoother decision borders. Second, during generalization, a locally adaptive k-nearest-neighbor algorithm is used to minimize the influence of noisy samples in the region of competence, so that DES techniques can better estimate the classifiers' competence. Experiments are conducted with 10 state-of-the-art DES techniques on 30 classification problems. The results demonstrate that the proposed scheme significantly improves the classification accuracy of dynamic selection techniques.
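To make the terminology concrete, here is a minimal local-accuracy sketch of the generic DES pipeline: the region of competence is the query's k nearest validation neighbours, and competence is accuracy inside that region. It is an assumption-level illustration, not one of the 10 evaluated techniques nor the proposed prototype-selection scheme.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dynamic_select(classifiers, X_val, y_val, x_query, k=7):
    """Select the base classifier with the best local accuracy in the
    region of competence (k-NN of the query on validation data)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    region_X, region_y = X_val[idx[0]], y_val[idx[0]]
    competences = [np.mean(clf.predict(region_X) == region_y)
                   for clf in classifiers]
    best = int(np.argmax(competences))          # most competent classifier
    return classifiers[best].predict(x_query.reshape(1, -1))[0]
```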

4.
In the open and dynamic Internet environment, Internetware faces major trustworthiness challenges. Applying fuzzy theory, this paper proposes a component selection method that best matches the user's trustworthiness expectations. The method defines six trust attributes for components in the Internetware setting and introduces a multi-factor fuzzy comprehensive evaluation of component trustworthiness. On this basis, an Internetware model satisfying the user's trustworthiness expectations is established; to map candidate components to abstract components, trusted components are selected via dynamic clustering based on fuzzy equivalence relations. A case study illustrates the effectiveness of the method. Keywords: Internetware; trusted computing; component selection; fuzzy comprehensive evaluation; dynamic clustering.
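A hypothetical sketch of the fuzzy comprehensive evaluation step: the membership matrix, attribute weights, grade scale, and the distance to the user's expectation vector are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_trust_score(R, w, expectation):
    """Multi-factor fuzzy comprehensive evaluation: R[i, j] is the
    membership of trust attribute i (one of six) in trust grade j,
    w weights the attributes. The candidate component whose grade
    vector lies closest to the user's expectation vector wins."""
    B = w @ R                                 # fuzzy grade vector
    B = B / B.sum()                           # normalise memberships
    return -np.linalg.norm(B - expectation)   # higher = closer match
```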

5.
In handwritten pattern recognition, multiple classifier systems have been shown to improve recognition rates. One of the most important tasks in optimizing a multiple classifier system is selecting a group of adequate classifiers, known as an ensemble of classifiers (EoC), from a pool of classifiers. Static selection schemes select one EoC for all test patterns, while dynamic selection schemes select different classifiers for different test patterns. Nevertheless, traditional dynamic selection has been shown to perform no better than static selection. We propose four new dynamic selection schemes that exploit properties of the oracle concept. Our results suggest that the proposed schemes, using the majority voting rule to combine classifiers, perform better than the static selection method.

6.
We describe a scheme for implementing dynamic casts suitable for systems where performance, and its predictability, are essential. A dynamic cast from a base class to a derived class in an object-oriented language can be performed quickly by having the linker assign an integer type ID to each class; a simple integer arithmetic operation then verifies at run time whether the cast is legal. The type ID scheme presented uses the modulo function to check that one class derives from another. A 64-bit type ID is sufficient to handle large class hierarchies at least nine levels of derivation deep. We also discuss the pointer adjustments required for a C++ dynamic_cast. All examples are drawn from the C++ language.
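A well-known way to realize such a modulo scheme is to make every class's type ID the product of fresh primes along its derivation chain, so that derived_id % base_id == 0 exactly when the target class is a base. The sketch below (Python, for brevity) is a hedged reconstruction of that idea, not the paper's linker-assigned 64-bit layout.

```python
from itertools import count

def primes():
    """Tiny trial-division prime generator for the illustration."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

def assign_type_ids(hierarchy):
    """Give each class a fresh prime; its type ID is that prime times
    its base class's ID. `hierarchy` maps class -> base (or None)."""
    gen = primes()
    ids = {}
    def ident(cls):
        if cls not in ids:
            base = hierarchy[cls]
            ids[cls] = next(gen) * (ident(base) if base else 1)
        return ids[cls]
    for cls in hierarchy:
        ident(cls)
    return ids

# usage: a three-level single-inheritance chain
ids = assign_type_ids({'Base': None, 'Mid': 'Base', 'Leaf': 'Mid'})
assert ids['Leaf'] % ids['Base'] == 0   # Leaf derives from Base: cast OK
assert ids['Base'] % ids['Leaf'] != 0   # Base is not a Leaf: cast fails
```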

7.
As the next-generation video coding standard, High Efficiency Video Coding (HEVC) aims to deliver high-quality video over limited network bandwidth. Compared with existing video coding standards, HEVC offers greater flexibility and a higher compression ratio. The coding unit (CU) is the basic unit of video coding; the original algorithm finds the best CU depth by recursive quadtree search, which improves compression performance but introduces high computational complexity. To address this problem, a fast coding-depth selection algorithm is proposed: it computes a depth-prediction feature from the depths of neighboring CUs and uses this feature to select the depth, avoiding unnecessary computation and reducing complexity. Experimental results show that the algorithm effectively reduces computational complexity while preserving compression performance.
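The depth-prediction idea admits a very small sketch: fuse the depths of already-coded neighbouring CUs into a prediction feature and evaluate only the depths near it instead of the full quadtree recursion. The weights and the one-depth window below are assumptions, not the paper's parameters.

```python
def predicted_depth_range(left, above, above_left, colocated):
    """Fuse neighbouring CU depths into a prediction feature and
    return the reduced set of candidate depths (HEVC depths 0..3)."""
    feature = 0.3 * left + 0.3 * above + 0.1 * above_left + 0.3 * colocated
    center = round(feature)
    lo, hi = max(0, center - 1), min(3, center + 1)
    return range(lo, hi + 1)

# usage: neighbours mostly at depth 2 -> only depths 1..3 are tried
print(list(predicted_depth_range(2, 2, 1, 2)))   # [1, 2, 3]
```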

8.
The support vector machine is a kernel-based learning method, and the choice of kernel function and kernel parameters directly affects the SVM's generalization ability. Traditional parameter-selection methods such as grid search are computationally expensive and make training very time-consuming. This paper proposes a fast method for selecting optimal kernel parameters: the optimal parameters are determined by computing a separability measure of the classes in the feature space, without training an SVM classification model for each candidate. This greatly reduces training time and increases training speed, while the resulting classification accuracy is competitive with the traditional approach. Experiments show that the algorithm is feasible and effective.
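One plausible instance of the separability idea, assuming an RBF kernel: the measure below is computed from kernel means alone, with no SVM training per candidate parameter. The specific measure and grid are assumptions; the paper's exact definition may differ.

```python
import numpy as np

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def separability(X_pos, X_neg, gamma):
    """Distance between the two class centres in RBF feature space,
    divided by the within-class scatter; all terms reduce to kernel
    means, so no classifier has to be trained."""
    Kpp, Knn = rbf(X_pos, X_pos, gamma), rbf(X_neg, X_neg, gamma)
    Kpn = rbf(X_pos, X_neg, gamma)
    between = Kpp.mean() + Knn.mean() - 2 * Kpn.mean()   # ||m+ - m-||^2
    within = (1 - Kpp.mean()) + (1 - Knn.mean())         # k(x, x) = 1 for RBF
    return between / (within + 1e-12)

def select_gamma(X_pos, X_neg, grid=(0.01, 0.1, 1.0, 10.0)):
    """Replace grid search over SVM trainings by a coarse scan of the
    separability measure."""
    return max(grid, key=lambda g: separability(X_pos, X_neg, g))
```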

9.
We propose a simple and computationally efficient construction algorithm for two-class linear-in-the-parameters classifiers. To optimize model generalization, a forward orthogonal selection (OFS) procedure is used to minimize the leave-one-out (LOO) misclassification rate directly. An analytic formula and a set of forward recursive updating formulas for the LOO misclassification rate are developed and applied in the proposed algorithm. Numerical examples demonstrate that the proposed algorithm is an excellent alternative for constructing sparse two-class classifiers in terms of performance and computational efficiency.

10.
To address the tendency of the standard artificial bee colony (ABC) algorithm to become trapped in local optima, the roulette-wheel selection mechanism of the standard algorithm is modified, and an improved ABC algorithm based on a dynamic evaluation selection strategy (DSABC) is proposed. First, each food-source position is dynamically evaluated according to how many times it has been consecutively updated or has stagnated within a recent window of iterations; the resulting evaluation value is then used to recruit onlooker bees for the food source. Experiments on six classical benchmark functions show that, compared with standard ABC, the dynamic evaluation selection strategy substantially improves solution accuracy: for the Rosenbrock function in two different dimensions, the absolute error of the best value obtained drops from 0.0017 and 0.0013 to 0.000049 and 0.000057, respectively. Moreover, DSABC overcomes the premature convergence caused by the rapid loss of positional diversity in the later stages of evolution, improving the convergence accuracy and solution stability of the whole population, and thus provides an efficient and reliable method for function optimization.
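A speculative sketch of what such a dynamic evaluation function can look like: a food source's appeal grows with recent successful updates and shrinks with stagnation, and onlookers are recruited by this evaluation value rather than by raw-fitness roulette. The functional form and alpha are assumptions, not the paper's definition.

```python
import numpy as np

def selection_probs(fitness, updates, stalls, alpha=0.5):
    """Recruit onlooker bees by a dynamic evaluation of each food
    source: reward recent consecutive updates, penalise stagnation."""
    evaluation = fitness * (1 + alpha * updates) / (1 + alpha * stalls)
    return evaluation / evaluation.sum()

# usage: equal fitness, different recent histories
p = selection_probs(np.array([1.0, 1.0, 1.0]),
                    updates=np.array([3, 0, 1]),
                    stalls=np.array([0, 4, 1]))
print(p)   # the frequently improving source gets the largest share
```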

11.
Probing network devices with the Traceroute tool is one of the most popular network monitoring techniques for troubleshooting and network mapping. Distributed scanning systems run Traceroute periodically to detect routing anomalies. In this paper we propose a six-step methodology for creating a more efficient profile-based probing strategy that reduces both the number of probes and the time needed to map the network topology. The methodology takes the existence of load balancers into account when building the profile-based strategies and thus overcomes the inconveniences these load balancers may cause. The basic idea is to examine how often routing changes occur along the routing path at different times of day. This insight is used to assign a high probing weight to parts of the routing path that change frequently and a lower weight to parts that change rarely during each time period. Since routing changes may occur more frequently at certain periods of the day, we propose an approach to determine the duration of the periods during which routing changes occur with similar frequency. A profile-based probing strategy is then assigned to each of these periods separately to further reduce the required number of probes. The experimental results show that our approach exploits temporal regularities in routing changes and achieves great savings in the number of probes: a 66% reduction compared to classical Traceroute launched periodically, an important enhancement for systems that scan the Internet repeatedly. Furthermore, we show another enhancement that cuts scanning time by 90%, achieved by scanning multiple hop levels at the same time and marking the probes so they can be matched with the received responses.
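The weighting step reduces to a few lines once per-hop change counts for a time period are known; the budget rule below is a hedged reconstruction, not the paper's exact formula.

```python
def probe_budget(change_counts, total_probes):
    """Distribute the probe budget over hop levels in proportion to
    how often each hop's next-router changed in this time period, so
    volatile segments are probed often and stable ones rarely."""
    total = sum(change_counts) or 1
    weights = [c / total for c in change_counts]
    return [max(1, round(w * total_probes)) for w in weights]  # >= 1 probe/hop

# usage: hops 2 and 3 flap (e.g. load balancing), the rest are stable
print(probe_budget([0, 1, 9, 8, 0, 2], total_probes=40))
```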

12.
M. Kearns and D. Ron. Neural Computation, 1999, 11(6): 1427-1453.
In this article we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name "sanity check" refers to the fact that, although we often expect the leave-one-out estimate to perform considerably better than the training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for limited cases in the prior literature on cross-validation. Any nontrivial bound on the error of leave-one-out must rely on some notion of algorithmic stability. Previous bounds relied on the rather strong notion of hypothesis stability, whose application was primarily limited to nearest-neighbor and other local algorithms. Here we introduce the new and weaker notion of error stability and apply it to obtain sanity-check bounds for leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. We also provide lower bounds demonstrating the necessity of some form of error stability for proving bounds on the error of the leave-one-out estimate, and the fact that for training error minimization algorithms, in the worst case such bounds must still depend on the Vapnik-Chervonenkis dimension of the hypothesis class.

13.
Once the decision to outsource has been made, the outsourcing vendors supplying the product or service must be selected. This paper focuses on dynamic strategic vendor selection. Existing approaches to vendor selection neglect the interdependencies over time that arise from the investment cost of selecting a new vendor and the cost of switching from an existing vendor to a new one. These shortcomings of current approaches motivate the research presented in this paper: a dynamic decision-making approach for strategic vendor selection based on the principles of hierarchical planning is proposed.

14.
To address the limited antibody diversity and premature convergence of the clonal selection algorithm, a fast-converging clonal selection algorithm is proposed. A new cloning operator is introduced to maintain the balance between promotion and suppression among antibodies. To escape local optima, a cloud-model-based adaptive mutation operator is designed, drawing on the characteristics of the cloud model; working together with an antibody recombination operator, it effectively increases antibody diversity and thereby strengthens the algorithm's global and local search ability. Simulation experiments on standard benchmark functions and comparisons with other algorithms show that the proposed algorithm achieves high search accuracy, good robustness, and fast convergence, with modest time complexity.

15.
Building on an analysis of traditional satellite-signal acquisition algorithms, an inertial navigation system (INS)-aided acquisition algorithm combining partial matched filters with the fast Fourier transform (FFT) is proposed. In this algorithm, information provided by the inertial navigation equipment is used to compute the Doppler frequency, and the carrier frequency and code phase are searched in parallel through the combined partial-matched-filter/FFT structure. The algorithm not only narrows the Doppler search range but also searches the code phase quickly. Simulation results show that it successfully acquires COMPASS satellite signals in highly dynamic environments while markedly reducing acquisition time.
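The core of the combined structure fits in a few lines: after wiping the PRN code off at one code-phase hypothesis, the samples are coherently summed in segments (the partial matched filters), and an FFT across those sums tests all Doppler bins of that code phase at once. A sketch under assumed parameters; a full acquisition also slides the code phase, and the INS-derived Doppler would centre and narrow the searched bins.

```python
import numpy as np

def pmf_fft_acquire(rx, local_code, fs, n_partial=16, fft_len=64):
    """Partial matched filter + FFT acquisition for one code-phase
    hypothesis. Assumes len(rx) is divisible by n_partial."""
    wiped = rx * np.conj(local_code)                  # strip the spreading code
    segs = wiped.reshape(n_partial, -1).sum(axis=1)   # partial coherent sums
    spectrum = np.abs(np.fft.fft(segs, fft_len))      # parallel Doppler search
    k = int(np.argmax(spectrum))
    t_seg = (len(rx) / n_partial) / fs                # time span of one sum
    doppler = np.fft.fftfreq(fft_len, d=t_seg)[k]
    return doppler, spectrum[k]
```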

16.
A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities may depend on system variables, and the topology of the system may be complex, including feedback. Several examples from neuroscience are given: phototransduction, photopigment bleaching, and spike generation according to the Hodgkin-Huxley equations. The scheme uses two slightly different forms of autoregressive filters, with an implicit delay of zero for feedforward control and an implicit delay of half a sample interval for feedback control. On a fairly complex model of the macaque retinal horizontal cell, it computes, for a given level of accuracy, one to two orders of magnitude faster than fourth-order Runge-Kutta. The scheme has minimal memory requirements and is also suited to computation on a stream processor, such as a graphics processing unit.
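The scheme's basic building block, the first-order low-pass filter as an autoregressive update, is standard and easy to state; the sketch below shows the zero-delay (feedforward) form, with the half-sample-delay feedback variant differing only in how the input is sampled.

```python
import numpy as np

def lowpass_ar(x, tau, dt):
    """First-order low-pass filter as an AR(1) update:
    y[n] = a * y[n-1] + (1 - a) * x[n], with a = exp(-dt / tau)."""
    a = np.exp(-dt / tau)
    y = np.empty(len(x))
    acc = 0.0
    for n, xn in enumerate(x):
        acc = a * acc + (1.0 - a) * xn   # one AR step per sample
        y[n] = acc
    return y

# usage: smooth a noisy step; time constant 10 ms, sample step 1 ms
sig = np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.randn(100)
smooth = lowpass_ar(sig, tau=0.010, dt=0.001)
```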

17.
A novel search principle for optimal feature subset selection using the Branch & Bound method is introduced. Thanks to a simple mechanism for predicting criterion values, a considerable amount of time can be saved by avoiding many slow criterion evaluations. We propose two implementations of the prediction mechanism, suitable for nonrecursive and recursive criterion forms, respectively. Both algorithms usually find the optimum several times faster than any other known Branch & Bound algorithm. As computational efficiency is crucial, given the exponential nature of the search problem, we also investigate other factors that affect the search performance of all Branch & Bound algorithms. Using a set of synthetic criteria, we show that the speed of Branch & Bound algorithms depends strongly on the diversity among features, feature stability with respect to different subsets, and the dependence of the criterion function on feature set size. We identify the scenarios where the search is accelerated most dramatically (finishing in linear time), as well as the worst conditions. We verify our conclusions experimentally on three real data sets using traditional probabilistic distance criteria.

18.
Fast intra-prediction mode selection algorithm for AVS2
The intra-prediction mode decision process in the current Audio Video coding Standard (AVS2) is computationally complex, and the spread of ultra-high-definition video places great pressure on codec systems. To address this problem, a fast intra-prediction mode selection algorithm is proposed. The algorithm first prunes the candidate prediction modes of the smallest coding units (SCUs) at the bottom layer, reducing their computation; it then derives the prediction modes of upper-layer coding units (CUs) from those of the lower layer, reducing the computation of the upper-layer CUs as well. Experiments show that the algorithm has little impact on compression efficiency while cutting encoding time by more than 15% on average, effectively reducing the complexity of intra coding.

19.
The authors describe experiments using a genetic algorithm for feature selection in the context of neural network classifiers, specifically counterpropagation networks, and present the novel techniques used in applying the genetic algorithm. First, the genetic algorithm is configured to use an approximate evaluation in order to significantly reduce the computation required: although the desired classifiers are counterpropagation networks, a nearest-neighbor classifier is used to evaluate feature sets, and the features selected by this method are shown to be effective in the context of counterpropagation networks. Second, a method called training set sampling, in which only a portion of the training set is used in any given evaluation, is proposed. This yields further computational savings: evaluations can be made over an order of magnitude faster, and the method selects feature sets that are as good as, and occasionally better for counterpropagation than, those chosen by an evaluation using the entire training set.
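Both speed-ups combine naturally in the fitness function of the genetic algorithm; the sketch below scores a feature mask with a cheap 1-nearest-neighbour proxy on a random fraction of the training set. The fraction, split, and proxy settings are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def fitness(mask, X, y, sample_frac=0.1, seed=0):
    """Approximate GA fitness: 1-NN accuracy (instead of training a
    counterpropagation network) on a random subset of the training
    set (training set sampling)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), int(sample_frac * len(X)), replace=False)
    Xs, ys = X[idx][:, mask.astype(bool)], y[idx]
    if Xs.shape[1] == 0:                 # empty feature set scores zero
        return 0.0
    Xtr, Xte, ytr, yte = train_test_split(Xs, ys, test_size=0.3,
                                          random_state=seed)
    return KNeighborsClassifier(n_neighbors=1).fit(Xtr, ytr).score(Xte, yte)
```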

20.
Intra prediction is an important part of the H.264 video coding standard. To reduce the time spent on intra-prediction mode selection while keeping image quality and encoding bit rate roughly unchanged, a fast algorithm is proposed that exploits the texture directionality of 4x4 blocks and the correlation between 4x4 and 16x16 blocks, selecting prediction modes with a pair of thresholds. Experimental results show that, compared with the full-search intra-prediction algorithm, the proposed algorithm saves 35%-40% of encoding time at a PSNR loss of 0.03-0.1 dB.
