Similar Literature
 20 similar articles found (search time: 15 ms)
1.
Chunyu Jie 《Pattern Recognition》2008, 41(8): 2656-2664
Online classification is important for real-time data-sequence classification. Its most challenging problem is that the class priors may vary for non-stationary data sequences. Most current online data-sequence classification algorithms assume that the class labels of some newly arrived data samples are known and retrain the classifier accordingly. Unfortunately, this assumption is often violated in real applications. However, if we could estimate the class priors on the test data sequence accurately, we could adjust the classifier without retraining it while preserving reasonable accuracy. There has been some work on class-prior estimation for classifying static data sets using the offline iterative EM algorithm, which has proved quite effective for adjusting the classifier. Inspired by the offline iterative EM algorithm for static data sets, in this paper we propose an online incremental EM algorithm that estimates the class priors along the data sequence. The classifier is adjusted accordingly to keep pace with the varying distribution. The proposed online algorithm is more computationally efficient because it scans the sequence only once. Experimental results show that the proposed algorithm indeed performs better than the conventional offline iterative EM algorithm when the class priors are non-stationary.
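The core update this abstract describes — tracking the class priors in a single pass and re-weighting a fixed classifier's outputs by Bayes' rule — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function names and the fixed step size are assumptions.

```python
def adjust_posterior(likelihoods, priors):
    """Bayes' rule: the classifier's fixed class-conditional likelihoods
    are re-weighted by the current prior estimates and normalized."""
    scores = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(scores)
    return [s / z for s in scores]

def online_prior_update(stream, priors, step=0.05):
    """One pass over the stream: the E-step computes responsibilities under
    the current priors; the incremental M-step nudges the priors toward
    them (the step size is an illustrative choice, not from the paper)."""
    priors = list(priors)
    for likelihoods in stream:
        resp = adjust_posterior(likelihoods, priors)  # E-step
        priors = [(1 - step) * p + step * r           # incremental M-step
                  for p, r in zip(priors, resp)]
    return priors
```

On a stream whose samples consistently favor one class, the estimated prior for that class grows toward its true (larger) share, which is the drift-tracking behavior the abstract claims.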

2.
The problem of designing a classifier when prior probabilities are not known, or are not representative of the underlying data distribution, is discussed in this paper. Traditional learning approaches based on the assumption that class priors are stationary lead to sub-optimal solutions if there is a mismatch between the training and future (real) priors. To protect against this uncertainty, a minimax approach may be desirable. We address the problem of designing a neural-based minimax classifier and propose two different algorithms: a learning-rate scaling algorithm and a gradient-based algorithm. Experimental results show that both succeed in finding the minimax solution; the differences between the minimax classifier and common approaches to coping with this uncertainty in the priors are also pointed out.

3.
Objective priors from maximum entropy in data classification
Lack of knowledge of the prior distribution in classification problems that operate on small data sets may make the application of Bayes' rule questionable. Uniform or arbitrary priors may provide classification answers that, even in simple examples, end up contradicting our common sense about the problem. Entropic priors (EPs), via application of the maximum entropy (ME) principle, seem to provide good objective answers in practical cases, leading to more conservative Bayesian inferences. EPs are derived and applied to classification tasks when only the likelihood functions are available. In this paper, for inference based on a single sample, we also review the use of EPs in comparison to priors obtained by maximizing the mutual information between observations and classes. This latter criterion coincides with maximizing the KL divergence between posteriors and priors, which for large sample sets leads to the well-known reference (or Bernardo's) priors. Our comparison on single samples considers both approaches in perspective and clarifies their differences and potential. A combinatorial justification for EPs, inspired by Wallis' combinatorial argument for the definition of entropy, is also included. The application of EPs to sequences (multiple samples), which may be affected by excessive domination of the class with the maximum entropy, is also considered, with a solution that guarantees posterior consistency. An explicit iterative algorithm is proposed for EP determination solely from knowledge of the likelihood functions. Simulations comparing EPs with uniform priors on short sequences are also included.

4.
BN Parameter Learning for Small Data Sets Based on Data Reutilization
杨宇  高晓光  郭志高 《自动化学报》2015,41(12):2058-2071
This paper focuses on parameter learning for discrete Bayesian networks (BNs) with convex constraints under small-data conditions, where the main task is to use prior knowledge to compensate for insufficient data and improve parameter-learning accuracy. Existing work treats the data and the prior knowledge as independent and merely combines them mechanically in the learning algorithm. Our theoretical analysis shows that the data and the prior knowledge are in fact not independent, so existing algorithms waste this useful information. Starting from a classification of the information in the data, this paper mines the constraint information between the data and the prior knowledge to improve parameter-learning accuracy, and proposes a new BN parameter-learning algorithm: data-reutilization-based Bayesian estimation under convex constraints. Simulation experiments demonstrate the advantages of the proposed algorithm in accuracy and other respects, further confirming the soundness of the idea that the data and the prior knowledge are not independent.

5.
In this paper, an improved method of model-complexity selection for nonnative speech recognition is proposed, using maximum a posteriori (MAP) estimation of bias distributions. An algorithm is described for estimating hyper-parameters of the priors of the bias distributions, and an automatic accent-classification algorithm is also proposed for integration with dynamic model selection and adaptation. Experiments were performed on the WSJ1 task with American English speech, British-accented speech, and Mandarin Chinese-accented speech. The results show that the use of prior knowledge of accents enabled more reliable estimation of bias distributions with very small amounts of adaptation speech, or with none at all. Recognition results show that the new approach is superior to the previous maximum expected likelihood (MEL) method, especially when adaptation data are very limited.

6.
王林  郭娜娜 《计算机应用》2017,37(4):1032-1037
To address the inability of traditional classification techniques to identify churned customers in imbalanced telecom customer data sets, an improved dissimilarity-based imbalanced data classification (IDBC) algorithm is proposed. Building on the dissimilarity-based classification (DBC) algorithm, it improves the prototype-selection strategy. In the prototype-selection stage, an improved sample-subset optimization method selects the most informative prototype set from the whole data set, avoiding the uncertainty introduced by random selection. In the classification stage, feature spaces are constructed from the dissimilarities between the training set and the prototype set and between the test set and the prototype set, and a conventional classification algorithm then learns from the dissimilarity data sets mapped into these feature spaces. The algorithm was validated on the telecom customer data set from the UCI repository and six other ordinary imbalanced data sets. Compared with traditional feature-based imbalanced data classification algorithms, the DBC algorithm improved the recognition rate of the rare class by 8.3% on average, and the IDBC algorithm by 11.3% on average. The experimental results show that the proposed IDBC algorithm is insensitive to the class distribution and identifies the rare class in imbalanced data sets better than existing state-of-the-art classification techniques.

7.
Epipolar geometry estimation is fundamental to many computer vision algorithms. It has therefore attracted considerable interest in recent years, yielding high-quality estimation algorithms for wide-baseline image pairs. Many cameras, such as smartphones, now produce geo-tagged images containing pose and internal calibration data: a GPS receiver estimates the position; a compass, accelerometers, and gyros estimate the orientation; and the focal length is recorded. Exploiting this information in an epipolar geometry estimation algorithm may be useful but is not trivial, since the pose measurement may be quite noisy. We introduce SOREPP (Soft Optimization method for Robust Estimation based on Pose Priors), a novel estimation algorithm designed to exploit pose priors naturally. It sparsely samples the pose space around the measured pose and, for a few promising candidates, applies a robust optimization procedure. It uses all the putative correspondences simultaneously, even though many of them are outliers, yielding a very efficient algorithm whose runtime is independent of the inlier fraction. SOREPP was extensively tested on synthetic data and on hundreds of real image pairs taken by smartphones. Its ability to handle challenging scenarios with extremely low inlier fractions of less than 10% was demonstrated. It outperforms current state-of-the-art algorithms that do not use pose priors, as well as others that do.

8.
The naive Bayes (NB) algorithm typically assumes that the continuous numeric attributes of the training samples follow a normal distribution, and its classification accuracy also suffers when the training data are incomplete; real sampled data rarely satisfy either requirement. To handle missing data, the expectation-maximization (EM) algorithm is used to learn the naive Bayes classifier's parameters from the available incomplete data. For continuous numeric attributes that are not normally distributed, kernel density estimation is used, with the estimated distribution density and a new computational procedure for finding the maximum a posteriori distribution. Classification experiments on standard data sets verify the effectiveness of the improvements. Applying the improved algorithm, EM-DNB, to purification-process prediction for bioengineered proteins, the experimental results show improved prediction accuracy.
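The kernel-density replacement for the normality assumption can be sketched as follows. This is a minimal one-dimensional illustration, not the EM-DNB algorithm itself: the bandwidth and all function names are assumptions, and the EM handling of missing data is omitted.

```python
import math

def gaussian_kde(samples, h=0.5):
    """Kernel density estimate: an average of Gaussian bumps centered at
    the training samples, replacing the usual per-attribute normality
    assumption of naive Bayes (bandwidth h is an illustrative choice)."""
    def density(x):
        return sum(math.exp(-((x - s) / h) ** 2 / 2) for s in samples) / (
            len(samples) * h * math.sqrt(2 * math.pi))
    return density

def nb_classify(x, class_samples, priors):
    """Score each class by prior times KDE likelihood and pick the best."""
    scores = {c: priors[c] * gaussian_kde(samples)(x)
              for c, samples in class_samples.items()}
    return max(scores, key=scores.get)
```

With multi-attribute data, naive Bayes would multiply one such KDE likelihood per attribute; the single-attribute case above shows the essential substitution.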

9.
This paper presents time-of-arrival (TOA) source localization algorithms based on an a priori probability density function (pdf). Range measurements are used to estimate the location parameter for TOA source localization. Prior information on the position of the calibrated source is employed to improve the existing likelihood-based localization method. The cost function, in which the prior distribution is combined with the likelihood function, is minimized by the adaptive expectation-maximization (EM) and space-alternating generalized expectation-maximization (SAGE) algorithms. The variance of the prior distribution does not need to be known a priori because it can be estimated by Bayesian inference in the proposed adaptive EM algorithm; note that this variance must be known in the existing three-step WLS method [1]. The positioning accuracy of the proposed methods was much better than that of existing algorithms in regimes of large noise variance. Furthermore, the proposed algorithms can also perform localization effectively in mixed line-of-sight (LOS)/non-line-of-sight (NLOS) situations.
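The cost function the abstract describes — range residuals plus a Gaussian position prior — can be minimized with plain gradient descent as a stand-in for the paper's EM/SAGE optimizers. This is a sketch under stated assumptions: the anchor layout, step size, iteration count, and function name are all illustrative, and the prior variance is taken as known rather than estimated.

```python
import math

def map_locate(anchors, ranges, prior_mean, prior_var, iters=2000, lr=0.01):
    """Minimize sum_i (||x - a_i|| - r_i)^2 + ||x - mu||^2 / prior_var
    over the 2-D position x by gradient descent, starting from the prior
    mean (a simple stand-in for the adaptive EM/SAGE minimizers)."""
    x, y = prior_mean
    for _ in range(iters):
        # gradient of the Gaussian-prior penalty
        gx = 2 * (x - prior_mean[0]) / prior_var
        gy = 2 * (y - prior_mean[1]) / prior_var
        # gradient of each squared range residual
        for (ax, ay), r in zip(anchors, ranges):
            d = math.hypot(x - ax, y - ay) or 1e-12
            gx += 2 * (d - r) * (x - ax) / d
            gy += 2 * (d - r) * (y - ay) / d
        x, y = x - lr * gx, y - lr * gy
    return x, y
```

With noiseless ranges and a weak prior (large `prior_var`), the minimizer lands close to the true source position; a tighter prior pulls the estimate toward the prior mean, which is the trade-off the MAP formulation encodes.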

10.
Most current domain adaptation algorithms address the scenario in which the source and target domains share the same classes, using label-rich source-domain information for transfer learning on label-scarce target-domain data with a similar distribution, and much progress has been made. However, because real-world scenarios are complex and open, the source and target domains do not necessarily share the same label space, and each may contain samples of unknown classes beyond the predefined ones. For such a challenging open-set scenario, traditional domain adaptation algorithms will be…

11.
杨帅  王浩  俞奎  曹付元 《软件学报》2023,34(7):3206-3225
Stable learning aims to build a robust prediction model from a single training data set so that it can accurately classify any test data whose distribution is similar to that of the training data. To achieve accurate prediction on test data with unknown distributions, existing stable learning algorithms try to remove spurious correlations between features and class labels. However, these algorithms can only weaken some of these spurious correlations rather than eliminate them completely, and they may overfit when building the prediction model. To address this, a stable learning algorithm based on instance weighting and dual classifiers is proposed, which learns a robust prediction model by jointly optimizing instance weights and dual classifiers. Specifically, the algorithm weights instances so as to balance confounders from a global perspective, removing spurious correlations between features and class labels and thereby better assessing each feature's contribution to classification. To completely eliminate the spurious correlations between some irrelevant features and class labels, and to reduce the interference of irrelevant features with the instance-weighting process, the algorithm first performs feature selection to filter out some irrelevant features before weighting. To further improve the model's generalization ability, the algorithm trains two classifiers and learns a better classification boundary by minimizing the difference between their parameters. Experimental results on synthetic and real data sets demonstrate the effectiveness of the proposed method.

12.
Associative classification and its many improved variants rarely achieve both high overall accuracy and good performance on minority classes. To address this problem, an improved associative classification algorithm based on mining with independent class-support thresholds, ACCS, is proposed. Its main features are: (1) a method for setting a per-class support threshold according to each class's size in the training set, with the associative classification rules of each class mined independently under its own threshold, so that minority classes generate as many high-confidence rules as possible; (2) rules with equal confidence are ordered by class support, raising the priority of minority-class rules; (3) unknown instances are predicted with a new rule measure that combines confidence and lift. Experiments on several data sets show that, compared with various improved associative classification algorithms, ACCS achieves higher overall classification accuracy and also performs well on minority classes in imbalanced data.
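Feature (1) — a per-class support threshold scaled by class size — might look like the sketch below. The proportional scaling rule is an illustrative reading of the abstract, not its exact formula, and the function name is an assumption.

```python
from collections import Counter

def class_support_thresholds(labels, base=0.1):
    """Scale a base minimum-support threshold by each class's share of the
    training data, so that rare classes face a proportionally lower bar
    and can still emit high-confidence rules during independent mining."""
    counts = Counter(labels)
    n = len(labels)
    return {c: base * counts[c] / n for c in counts}
```

Rule mining would then run once per class, keeping a rule for class `c` only if its support within the data reaches `thresholds[c]`; tied-confidence rules would be ordered by class support as in feature (2).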

13.
One-class learning algorithms are used when training data are available for only one class, called the target class. Data for the other class(es), called outliers, are not available. One-class learning algorithms are used for detecting outliers, or novelty, in the data. The common approach in one-class learning is to use density-estimation techniques, or to adapt standard classification algorithms, to define a decision boundary that encompasses only the target data. In this paper, we introduce the OneClass-DS learning algorithm, which combines rule-based classification with a greedy search algorithm based on the density of features. Its performance is tested on 25 data sets and compared with eight other one-class algorithms; the results show that it performs on par with them.
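A toy version of the boundary idea the paragraph mentions — not the OneClass-DS algorithm itself — accepts a point only if it lies within a chosen quantile of the target class's training distances to the centroid. The quantile and all names are illustrative assumptions.

```python
import math

def fit_one_class(targets, quantile=0.95):
    """Baseline one-class rule: compute the target centroid, take the
    q-th quantile of training distances to it as a radius, and flag any
    point outside that radius as an outlier (novelty)."""
    c = [sum(col) / len(targets) for col in zip(*targets)]
    d = sorted(math.dist(t, c) for t in targets)
    r = d[min(len(d) - 1, int(quantile * len(d)))]
    return lambda x: math.dist(x, c) <= r
```

Real one-class learners replace this single centroid ball with a density estimate or a learned rule set, but the acceptance-region structure is the same.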

14.
Blind extraction of periodic signals is of significant research value in medicine and other fields. Extracting such signals requires a reasonably accurate estimate of the target signal's period from prior knowledge, and most algorithms are very sensitive to errors in this period estimate. To address this problem, an improved blind extraction algorithm for periodic signals is proposed: first, the interval containing the optimal time delay is estimated from prior knowledge, and an improved time-domain correlation function gives an initial estimate of the extraction matrix; then negentropy is used as the objective function, an autocorrelation term is introduced to fix the extraction order, and Newton's method is used for optimization to reach the optimal result. Tests on simulated and real ECG data show that, compared with several current algorithms, the proposed algorithm extracts signals better and is more robust to period-estimation errors.

15.
To address two shortcomings of the KNN algorithm in Chinese text classification, namely unevenly distributed training samples and the high computational cost of classification, a multi-level KNN classification algorithm is proposed, building on existing improvements. The algorithm first adjusts the training samples using a density-based approach, pruning samples so that their distribution approaches the ideal uniform state, while computing the center vector of each class. Provided the class-center vectors remain accurate, the expensive computation of the classification stage is moved forward into classifier training. In the final level, an appropriate value of m (the number of preselected classes) is chosen, and the class of a text to be classified is determined by the nearest-neighbor rule. Experimental results show that, without losing classification accuracy, the algorithm not only reduces computational complexity but also significantly improves classification speed.
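The two-stage idea — preselect the m classes whose centroids are nearest, then run plain KNN restricted to their samples — can be sketched as follows. The defaults for m and k and the function names are illustrative, and the density-based sample pruning is omitted.

```python
import math
from collections import Counter

def centroid(vectors):
    """Mean vector of a class, precomputed during training."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def two_stage_knn(x, train, m=2, k=3):
    """Stage 1: keep the m classes whose centroids are closest to x.
    Stage 2: ordinary KNN over only the samples of those classes."""
    cents = {c: centroid(v) for c, v in train.items()}
    near = sorted(cents, key=lambda c: math.dist(x, cents[c]))[:m]
    pool = [(math.dist(x, v), c) for c in near for v in train[c]]
    votes = Counter(c for _, c in sorted(pool)[:k])
    return votes.most_common(1)[0][0]
```

Because stage 1 discards most classes with a centroid comparison, stage 2 touches far fewer samples than flat KNN, which is the speed-up the abstract reports; in deployment the centroids would be cached at training time rather than recomputed per query.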

16.
李志恒 《计算机应用研究》2021,38(2):591-594,599
To address the mismatch between the probability distributions of training and test samples in machine learning, a semi-supervised domain adaptation method based on dropout regularization is proposed to transfer a neural network's feature representation from a label-rich source domain to an unlabeled target domain. From a semi-supervised learning perspective, a small amount of labeled target-domain data is added to the source-domain data, so that the network learns the feature distribution of the target domain while learning that of the source domain. Guided by this prior knowledge, the network can fit the target-domain data well even without rich label information. Experimental results show that the algorithm outperforms existing methods on domain adaptation tasks over the typical digit data sets SVHN, MNIST, and USPS, and is robust on domain adaptation tasks over the real-world data sets CIFAR-10 and STL-10, which cover a wide range of natural categories.

17.
In this paper, we propose a novel supervised dimension-reduction algorithm based on the K-nearest-neighbor (KNN) classifier. The proposed algorithm reduces the dimension of the data in order to improve the accuracy of KNN classification. This heuristic algorithm finds independent dimensions that decrease the Euclidean distance between a sample and its K nearest within-class neighbors and increase the Euclidean distance between that sample and its M nearest between-class neighbors. It is a linear dimension-reduction algorithm that produces a mapping matrix for projecting data into a low-dimensional space. The dimension-reduction step is followed by a KNN classifier, making the method applicable to high-dimensional multiclass classification. Experiments with artificial data such as Helix and Twin-peaks show the algorithm's ability to visualize data. The algorithm is compared with state-of-the-art algorithms on the classification of eight multiclass data sets from the UCI collection; simulation results show that it outperforms the existing algorithms. Visual place classification is an important problem for intelligent mobile robots that not only involves high-dimensional data but also requires solving a multiclass classification problem. A proper dimension-reduction method is usually needed to reduce the computation and memory complexity of algorithms in large environments, so our method is well suited to this problem. We extract color histograms of omnidirectional camera images as primary features, reduce the features to a low-dimensional space, and apply a KNN classifier. Results of experiments on five real data sets showed the superiority of the proposed algorithm over the others.

18.

In class-imbalanced data, both between-class and within-class imbalance are important causes of degraded classification performance. To improve the performance of classification algorithms on imbalanced data sets, a hybrid sampling algorithm based on probability-distribution estimation is proposed. The algorithm samples each sub-class according to the data's probability distribution to ensure within-class balance, enlarges the potential decision region of the minority class, and reduces redundant information in the majority class, thereby improving data balance from both global and local perspectives. Experimental results show that the algorithm improves the classification performance of traditional classification algorithms on imbalanced data.
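One illustrative reading of the hybrid strategy — undersample the majority class while oversampling each minority sub-cluster to equal representation — is sketched below. The equal-size target, the halving rule, and the function names are assumptions, not the paper's exact procedure.

```python
import random

def hybrid_sample(majority, minority, subclusters, seed=0):
    """Rebalance by undersampling the majority class to a common target
    size and oversampling each minority sub-cluster (class-internal mode)
    to an equal share of that target, addressing between-class and
    within-class imbalance at the same time."""
    rng = random.Random(seed)
    target = (len(majority) + len(minority)) // 2
    kept_major = rng.sample(majority, min(target, len(majority)))
    per = max(1, target // len(subclusters))
    new_minor = [rng.choice(cl) for cl in subclusters for _ in range(per)]
    return kept_major, new_minor
```

A density-aware version would draw from each sub-cluster in proportion to its estimated probability instead of with uniform `choice`, which is closer to what the abstract describes.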


19.
陆宇  赵凌云  白斌雯  姜震 《计算机应用》2022,42(12):3750-3755
Imbalanced classification is an active research topic in machine learning; oversampling rebalances a data set by duplicating or synthesizing minority-class samples. However, most current oversampling methods work from the existing sample distribution and reveal little additional structure of the data set. To address this, first, an improved semi-supervised clustering algorithm is proposed to mine the distributional features of the data; second, based on the semi-supervised clustering results, high-confidence unlabeled data (pseudo-labeled samples) in clusters belonging to the minority class are added to the original training set, which not only rebalances the data set but also uses the distribution features obtained by semi-supervised clustering to assist imbalanced classification; finally, the semi-supervised clustering and classification results are fused to predict the final class labels, further improving imbalanced classification performance. With G-mean and area under the curve (AUC) as evaluation metrics, the proposed algorithm was compared with seven oversampling- or undersampling-based imbalanced classification algorithms, including TU and CDSMOTE, on ten public data sets. Compared with TU and CDSMOTE, the proposed algorithm improved AUC by 6.7% and 3.9% on average, and G-mean by 7.6% and 2.1% on average, achieving the highest average results on both metrics among all compared algorithms. This shows that the proposed algorithm effectively improves imbalanced classification performance.

20.
We describe approaches for positive-data modeling and classification using both finite inverted Dirichlet mixture models and support vector machines (SVMs). Inverted Dirichlet mixture models are used to tackle an outstanding challenge in SVMs, namely the generation of accurate kernels. The kernel-generation approaches we consider, grounded in ideas from information theory, allow the incorporation of the data's structure and its structural constraints. Inverted Dirichlet mixture models are learned within a principled Bayesian framework, using both a Gibbs sampler and Metropolis-Hastings for parameter estimation and the Bayes factor for model selection (i.e., determining the number of mixture components). Our Bayesian learning approach uses priors over the model parameters, which we derive by showing that the inverted Dirichlet distribution belongs to the family of exponential distributions, and then combines these priors with information from the data to build posterior distributions. We illustrate the merits and effectiveness of the proposed method with two challenging real-world applications, namely object detection and visual scene analysis and classification.

