Similar Documents
10 similar documents found.
1.
In this paper we propose a Gaussian-kernel-based online kernel density estimator that can be used for online probability density estimation and online learning. Our approach generates a Gaussian mixture model of the observed data and allows online adaptation from positive as well as negative examples. Adaptation from negative examples is realized by a novel concept of unlearning in mixture models. Low complexity of the mixtures is maintained through a novel compression algorithm. In contrast to existing approaches, our approach does not require fine-tuning parameters for a specific application, does not assume specific forms of the target distributions, and imposes no temporal constraints on the observed data. The strength of the proposed approach is demonstrated with examples of online estimation of complex distributions, an example of unlearning, and interactive learning of basic visual concepts.
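A minimal sketch of the core idea, in which every positive example adds one Gaussian kernel to the mixture. The paper's compression step and unlearning from negative examples are omitted here; class and parameter names are illustrative, not the authors' implementation.

```python
import math

class OnlineKDE:
    """Toy online Gaussian kernel density estimator: each observed positive
    example adds one Gaussian component. (The paper additionally compresses
    the mixture and supports unlearning; this sketch keeps every component.)"""

    def __init__(self, bandwidth=0.5):
        self.h = bandwidth
        self.samples = []

    def update(self, x):
        # Online adaptation from a positive example: add one kernel.
        self.samples.append(x)

    def pdf(self, x):
        # Evaluate the mixture of Gaussians centred on the observed samples.
        norm = len(self.samples) * self.h * math.sqrt(2.0 * math.pi)
        return sum(math.exp(-0.5 * ((x - s) / self.h) ** 2)
                   for s in self.samples) / norm

est = OnlineKDE(bandwidth=0.5)
for v in [0.0, 0.1, -0.1, 0.05]:
    est.update(v)
```

After the four updates the estimated density is concentrated near zero, so `est.pdf(0.0)` is much larger than `est.pdf(5.0)`.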

2.
Visual context provides cues about an object's presence, position and size within the observed scene, and these cues should be used to improve object detection. In computer vision, however, object detectors typically ignore this information. We therefore present a framework for visual-context-aware object detection. Methods for extracting visual contextual information from still images are proposed and used to compute a prior for object detection. The concept is based on a sparse coding of contextual features grounded in geometry and texture. In addition, bottom-up saliency and object co-occurrences are exploited to define auxiliary visual context. A fully probabilistic framework is established to integrate the individual contextual cues with a local appearance-based object detector. In contrast to other methods, our integration models the underlying conditional probabilities between the different cues via kernel density estimation. This integration is a crucial part of the framework, as demonstrated in the detailed evaluation. Our method is evaluated on a novel, demanding image data set and compared to a state-of-the-art method for context-aware object detection. An in-depth analysis discusses the contributions of the individual contextual cues and the limitations of visual context for object detection.
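The key ingredient, modeling conditional probabilities with kernel density estimation, can be illustrated in one dimension: estimate the class-conditional densities of a context score with KDE and combine them via Bayes' rule. The scores below are hypothetical; the paper uses richer multi-cue densities.

```python
import math

def gauss_kde(x, samples, h):
    """1-D Gaussian kernel density estimate, the same basic tool the
    framework uses to model conditional probabilities between cues."""
    norm = len(samples) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm

def context_posterior(score, pos_scores, neg_scores, prior=0.5, h=0.3):
    """P(object | context score) via Bayes' rule on two class-conditional KDEs."""
    p_pos = gauss_kde(score, pos_scores, h) * prior
    p_neg = gauss_kde(score, neg_scores, h) * (1.0 - prior)
    return p_pos / (p_pos + p_neg)

# Hypothetical training scores: contexts containing the object score high.
pos = [0.8, 0.9, 1.0, 0.85]
neg = [0.1, 0.2, 0.0, 0.15]
```

A high context score then yields a posterior near 1, and a low score a posterior near 0, which is exactly the kind of prior a local appearance detector can be weighted by.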

3.
Understanding tools and their affordance parts is an important research direction for improving the intelligence of coexisting-cooperative robots. This paper studies the modeling and detection of affordance parts of everyday household tools and proposes a detection algorithm based on joint learning of a conditional random field (CRF) and sparse coding. First, geometric features characterizing a tool's affordance parts are extracted from the tool's depth image. Then, the coupling between the CRF and sparse coding is analyzed and formulated: the sparsified features serve as latent variables to construct an initial CRF model, and the sparse dictionary and the CRF are optimized jointly. On the one hand, the sparse representation of the features acts as the condition on the CRF's random variables and as a selector for its weight parameters; on the other hand, the sparse dictionary is updated under the guidance of the CRF. The adaptive moment estimation (Adam) method is then used to decouple and solve the model. Finally, an offline algorithm for building the affordance-part model via joint learning is given, together with an online detection method based on that model. Experimental results show that, compared with conventional feature extraction and model construction methods, the proposed method improves both the accuracy and the efficiency of affordance-part detection, and that it meets the tool-affordance cognition needs of robots with ordinary hardware.
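The solver named above, adaptive moment estimation (Adam), is a standard first-order method; a minimal scalar version is sketched below. The objective and all parameter values are illustrative, not the paper's CRF / sparse-coding objective.

```python
import math

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, n_steps=300):
    """Minimal Adam optimiser on a scalar parameter: exponential moving
    averages of the gradient (m) and squared gradient (v), with bias
    correction, give an adaptively scaled update."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, n_steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimise the toy objective f(x) = (x - 4)^2 via its gradient 2*(x - 4).
x_min = adam_minimize(lambda x: 2.0 * (x - 4.0), x0=0.0)
```

In the paper this kind of update is applied to the decoupled CRF weights and dictionary atoms rather than to a scalar.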

4.
We describe approaches for positive data modeling and classification using both finite inverted Dirichlet mixture models and support vector machines (SVMs). Inverted Dirichlet mixture models are used to tackle an outstanding challenge in SVMs, namely the generation of accurate kernels. The kernel generation approaches we consider, grounded in information theory, allow the incorporation of the data structure and its structural constraints. Inverted Dirichlet mixture models are learned within a principled Bayesian framework using both the Gibbs sampler and Metropolis-Hastings for parameter estimation, and the Bayes factor for model selection (i.e., determining the number of mixture components). Our Bayesian learning approach derives priors over the model parameters by showing that the inverted Dirichlet distribution belongs to the exponential family, and then combines these priors with information from the data to build posterior distributions. We illustrate the merits and effectiveness of the proposed method on two challenging real-world applications, namely object detection and visual scene analysis and classification.
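One of the two samplers mentioned, Metropolis-Hastings, is generic enough to sketch in a few lines: a random-walk proposal accepted with probability proportional to the target-density ratio. The 1-D Gaussian target below is a stand-in; the paper applies such sampling to inverted Dirichlet mixture parameters.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x + N(0, step^2), accept
    with probability min(1, target(cand)/target(x)) computed in log space."""
    rng = random.Random(seed)
    x = x0
    lp = log_target(x)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_target(cand)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Sample from N(2, 1) using only its unnormalised log-density.
draws = metropolis_hastings(lambda x: -0.5 * (x - 2.0) ** 2, x0=0.0, n_steps=5000)
post_burn = draws[1000:]
mean = sum(post_burn) / len(post_burn)
```

After discarding burn-in, the sample mean approximates the target mean of 2.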

5.
Human action recognition is a difficult and active research topic in computer vision. The mainstream research framework covers three aspects: action feature extraction, human action representation, and recognition algorithms. Recognition of simple human actions in simple scenes has largely been solved, while action recognition in complex scenes still faces many difficulties. This paper studies recent developments in human action recognition in detail and surveys current methods for complex scenes from the perspectives of research scope, feature extraction, and action models. Unlike existing surveys, it incorporates new research topics and results of the past three years, both domestic and international, such as the extraction and representation of pose features, and human action representations based on sparse coding and convolutional neural networks. Finally, the remaining difficulties in the field and possible future directions are discussed.

6.
Neural tree density estimation for novelty detection
In this paper, a neural competitive learning tree is introduced as a computationally attractive scheme for adaptive density estimation and novelty detection. The learning rule yields equiprobable quantization of the input space and provides an adaptive focusing mechanism capable of tracking time-varying distributions. It is shown by simulation that the neural tree performs reasonably well while being much faster than any of the other competitive learning algorithms.
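The node update behind such a scheme is plain winner-take-all competitive learning; a flat 1-D sketch is below. The paper organises these units in a tree for speed, which this illustration omits.

```python
def competitive_learning(data, prototypes, lr=0.1, epochs=50):
    """Winner-take-all competitive learning in 1-D: for each sample, the
    nearest prototype is pulled toward it by a fraction lr of the gap.
    (The paper arranges such units in a tree; this flat version shows
    only the update rule.)"""
    protos = list(prototypes)
    for _ in range(epochs):
        for x in data:
            w = min(range(len(protos)), key=lambda i: abs(protos[i] - x))
            protos[w] += lr * (x - protos[w])
    return protos

# Two well-separated clusters; the prototypes should settle near the centres.
data = [0.0, 0.2, -0.2, 10.0, 10.2, 9.8]
protos = competitive_learning(data, prototypes=[2.0, 8.0])
```

Because each prototype only ever wins samples from its own cluster here, the two prototypes converge to roughly 0 and 10, i.e. an equiprobable quantization of this input.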

7.
This paper proposes a technique for jointly quantizing continuous features and the posterior distributions of their class labels based on minimizing empirical information loss, such that the quantizer index of a given feature vector approximates a sufficient statistic for its class label. Informally, the quantized representation retains as much information as possible for classifying the feature vector correctly. We derive an alternating minimization procedure for simultaneously learning codebooks in the Euclidean feature space and in the simplex of posterior class distributions. The resulting quantizer can be used to encode unlabeled points outside the training set and to predict their posterior class distributions, and it has an elegant interpretation in terms of lossless source coding. The proposed method is validated on synthetic and real data sets and is applied to two diverse problems: learning discriminative visual vocabularies for bag-of-features image classification, and image segmentation.
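The simplest instance of such an alternating minimization is plain 1-D k-means: alternate between assigning points to their nearest codeword and moving each codeword to the mean of its cell. The paper alternates analogously over a codebook in feature space and one in the simplex of posteriors; this sketch shows only the mechanic.

```python
def lloyd_1d(data, codebook, n_iters=10):
    """Alternating minimisation (Lloyd's algorithm) in 1-D:
    step 1: assign each point to the nearest codeword;
    step 2: move each codeword to the mean of its assigned cell."""
    cb = list(codebook)
    for _ in range(n_iters):
        cells = [[] for _ in cb]
        for x in data:
            cells[min(range(len(cb)), key=lambda i: (x - cb[i]) ** 2)].append(x)
        cb = [sum(c) / len(c) if c else cb[i] for i, c in enumerate(cells)]
    return cb

codebook = lloyd_1d([0.0, 1.0, 9.0, 10.0], codebook=[2.0, 8.0])
```

For this toy data the codebook converges in one iteration to the two cell means, 0.5 and 9.5.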

8.
Exponential principal component analysis (e-PCA) has been proposed to reduce the dimension of the parameters of probability distributions using Kullback information as a distance between two distributions. It also provides a framework for dealing with data types such as binary and integer, for which the Gaussian assumption on the data distribution is inappropriate. In this paper, we introduce a latent variable model for e-PCA. Assuming a discrete distribution on the latent variable leads to mixture models with constraints on their parameters, which provides a framework for clustering on a lower dimensional subspace of exponential family distributions. We derive a learning algorithm for these mixture models based on the variational Bayes (VB) method. Although implementing the algorithm for a general subspace requires an intractable integration, an approximation based on Laplace's method allows us to carry out clustering on an arbitrary subspace. Combined with estimation of the subspace itself, the resulting algorithm performs simultaneous dimensionality reduction and clustering. Numerical experiments on synthetic and real data demonstrate its effectiveness for extracting the structure of data as a visualization technique, and its high generalization ability as a density estimation model.
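Laplace's method, the approximation that makes the VB updates tractable above, replaces an intractable integral with a Gaussian integral around the mode: ∫ exp(f(x)) dx ≈ exp(f(x*)) · sqrt(2π / −f''(x*)). A minimal numerical sketch:

```python
import math

def laplace_integral(log_f, d2_log_f, x_star):
    """Laplace approximation to the integral of exp(log_f(x)) over the real
    line, expanding log_f to second order around its mode x_star."""
    return math.exp(log_f(x_star)) * math.sqrt(2.0 * math.pi / -d2_log_f(x_star))

# For a Gaussian log-density, log_f(x) = -x^2/2, the approximation is exact:
# the integral of exp(-x^2/2) equals sqrt(2*pi).
approx = laplace_integral(lambda x: -0.5 * x * x, lambda x: -1.0, x_star=0.0)
```

For non-Gaussian integrands the result is only an approximation, but one that is cheap enough to run inside each VB update.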

9.
Sparse coding is a popular technique in image denoising. However, owing to the ill-posedness of denoising problems, it is difficult to obtain an accurate estimate of the true code. To improve denoising performance, we collect the sparse coding errors of a dataset on a principal component analysis dictionary, make an assumption on the probability of the errors, and derive an energy optimization model for image denoising, called adaptive sparse coding on a principal component analysis dictionary (ASC-PCA). The new method addresses two aspects. First, from a PCA-dictionary-related observation of the probability distributions of sparse coding errors along different dimensions, the regularization parameter balancing the fidelity term and the nonlocal constraint can be adaptively determined, which is critical for obtaining satisfying results; an intuitive interpretation of the constructed model is also discussed. Second, to solve the new model effectively, a filter-based iterative shrinkage algorithm consisting of filter-based back-projection and shrinkage stages is proposed, in which the filter used in the back-projection stage plays an important role. As demonstrated by extensive experiments, the proposed method performs competitively in terms of both quantitative metrics and visual quality.
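The shrinkage stage of an iterative shrinkage algorithm is the soft-thresholding operator, the proximal map of the l1 penalty; the paper's filter-based back-projection stage is omitted in this sketch.

```python
import math

def soft_threshold(coeffs, t):
    """Soft-thresholding (shrinkage): shrink each coefficient toward zero
    by t and zero out anything with magnitude below t. This is the
    shrinkage stage of iterative shrinkage algorithms for l1-regularised
    sparse coding."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

shrunk = soft_threshold([3.0, -0.5, 1.2, -2.0], t=1.0)
```

Large coefficients survive (shrunk by t) while small ones become exactly zero, which is what makes the recovered code sparse.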

10.
Most current approaches to mixture modeling consider mixture components from only a few families of probability distributions, in particular the Gaussian family. The reason for this preference can be traced to the training algorithms, typically versions of the Expectation-Maximization (EM) method: the re-estimation equations this method needs become very complex as the mixture components depart from the simplest cases. Here we propose a stochastic approximation method for probabilistic mixture learning. Under this method it is straightforward to train mixtures composed of a wide range of components from different families, making it a flexible alternative for mixture learning. Experimental results demonstrate the probability density and missing value estimation capabilities of our proposal.
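A flavor of stochastic-approximation mixture learning can be given with a fixed-weight, fixed-variance Gaussian mixture whose means are updated one sample at a time with a decaying (Robbins-Monro) step size. This is a generic illustration of the principle, not the authors' algorithm.

```python
import math

def online_gmm_means(stream, means, sigma=1.0, lr0=0.5):
    """Stochastic-approximation update of Gaussian mixture means: each
    incoming sample nudges every mean toward it in proportion to that
    component's responsibility, with step size lr0 / sqrt(t)."""
    mus = list(means)
    for t, x in enumerate(stream, start=1):
        resp = [math.exp(-0.5 * ((x - m) / sigma) ** 2) for m in mus]
        z = sum(resp)
        lr = lr0 / math.sqrt(t)
        for k in range(len(mus)):
            mus[k] += lr * (resp[k] / z) * (x - mus[k])
    return mus

# Interleaved samples from two clusters near -3 and +3.
stream = [-3.0, 3.0, -3.2, 2.8, -2.8, 3.2] * 30
mus = online_gmm_means(stream, means=[-1.0, 1.0])
```

Because each update only needs the component densities evaluated at the current sample, the same loop works unchanged if the Gaussians are swapped for components from other families, which is the flexibility the abstract emphasizes.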
