Similar Literature
20 similar documents retrieved.
1.
Video moving-object segmentation is a fundamental problem in computer vision and video processing. In dynamic scenes where the camera undergoes global motion, accurately segmenting moving objects remains a difficult and actively studied problem. This paper proposes a video moving-object segmentation algorithm for dynamic scenes based on global motion compensation and kernel density detection. First, a match-weighted global motion estimation and compensation algorithm is proposed to eliminate the influence of background motion on object segmentation in dynamic scenes. Second, nonparametric kernel density estimation is used to estimate, for each pixel, the probability densities of belonging to the foreground and to the background; the segmentation result is obtained by comparing these probabilities and applying morphological post-processing. Experimental results show that the method is simple to implement and effectively improves the accuracy of moving-object segmentation in dynamic scenes.
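As a rough illustration of the foreground/background density comparison described above, the following Python sketch classifies each pixel by comparing nonparametric kernel density estimates built from past observations. The function names, the grey-level inputs and the fixed bandwidth `h` are illustrative assumptions; the paper's motion compensation and morphological post-processing are omitted.

```python
import numpy as np

def kde_prob(frame, samples, h):
    """Per-pixel Gaussian KDE: density of the current pixel value
    given a stack of past observations of that pixel."""
    # frame: (H, W); samples: (T, H, W) past grey-level observations
    u = (samples - frame) / h
    return np.exp(-0.5 * u ** 2).mean(axis=0) / (h * np.sqrt(2 * np.pi))

def segment(frame, bg_samples, fg_samples, h=10.0):
    """Mark a pixel as foreground when its foreground density
    exceeds its background density."""
    return kde_prob(frame, fg_samples, h) > kde_prob(frame, bg_samples, h)
```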

2.
Many vision algorithms depend on the estimation of a probability density function from observations. Kernel density estimation techniques are quite general and powerful methods for this problem, but have a significant disadvantage in that they are computationally intensive. In this paper, we explore the use of kernel density estimation with the fast Gauss transform (FGT) for problems in vision. The FGT allows the summation of a mixture of M Gaussians at N evaluation points in O(M+N) time, as opposed to O(MN) time for a naive evaluation, and can be used to considerably speed up kernel density estimation. We present applications of the technique to problems from image segmentation and tracking and show that the algorithm allows application of advanced statistical techniques to solve practical vision problems in real-time with today's computers.
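For reference, this is the direct O(MN) Gaussian summation that the FGT reduces to O(M+N); a minimal numpy sketch of the naive evaluation, not of the transform itself (array shapes and names are assumptions):

```python
import numpy as np

def naive_gaussian_sum(sources, targets, weights, h):
    """Direct O(M*N) evaluation of a weighted Gaussian mixture sum --
    the computation the fast Gauss transform accelerates."""
    # sources: (M, d), targets: (N, d), weights: (M,)
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
    return (weights[None, :] * np.exp(-d2 / h ** 2)).sum(axis=1)
```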

3.
Most methods that address computer vision problems require powerful visual features. Many successful approaches apply techniques motivated from nonparametric statistics. The channel representation provides a framework for nonparametric distribution representation. Although early work has focused on a signal processing view of the representation, the channel representation can be interpreted in probabilistic terms, e.g., representing the distribution of local image orientation. In this paper, a variety of approximative channel-based algorithms for probabilistic problems are presented: a novel efficient algorithm for density reconstruction, a novel and efficient scheme for nonlinear gridding of densities, and finally a novel method for estimating Copula densities. The experimental results provide evidence that by relaxing the requirements for exact solutions, efficient algorithms are obtained.
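As a concrete picture of what a channel representation stores, the sketch below encodes scalar values (e.g. local orientations) into overlapping cos² channel responses. The cos² basis is one common choice in the channel-representation literature; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def channel_encode(x, centers, width):
    """Encode scalar values as overlapping cos^2 channel responses,
    a soft-histogram (nonparametric) distribution representation."""
    d = np.abs(x[:, None] - centers[None, :]) / width  # channel distance
    return np.where(d < 1.5, np.cos(np.pi * d / 3) ** 2, 0.0)

# e.g. 8 orientation channels over [0, pi):
# responses = channel_encode(theta, np.linspace(0, np.pi, 8), np.pi / 8)
```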

4.
H. Yserentant, Computing, 2006, 78(3):195–209
Sparse grid methods represent a powerful and efficient technique for the representation and approximation of functions and particularly the solutions of partial differential equations in moderately high space dimensions. To extend the approach to truly high-dimensional problems as they arise in quantum chemistry, an additional property has to be brought into play, the symmetry or antisymmetry of the functions sought there. In the present article, an adaptive sparse grid refinement scheme is developed that takes full advantage of such symmetry properties and for which the amount of work and storage remains strictly proportional to the number of degrees of freedom. To overcome the problems with the approximation of the inherently complex antisymmetric functions, augmented sparse grid spaces are proposed.

5.
While most previous work on Bayesian fault diagnosis and control loop diagnosis uses discretized evidence for performing diagnosis (an example of evidence being a monitor reading), discretizing continuous evidence can result in information loss. This paper proposes the use of kernel density estimation, a non-parametric technique for estimating the density functions of continuous random variables. Kernel density estimation requires the selection of a bandwidth parameter, which specifies the degree of smoothing, and a number of bandwidth selection techniques (optimal Gaussian, sample-point adaptive, and smoothed cross-validation) are discussed and compared. Because kernel density estimation is known to have reduced performance in high dimensions, this paper also discusses a number of existing preprocessing methods that can be used to reduce the dimensionality (grouping according to dependence, and independent component analysis). Bandwidth selection and dimensionality reduction techniques are tested on a simulation and an industrial process.
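A minimal sketch of data-driven bandwidth selection, using likelihood cross-validation in scikit-learn as a stand-in for the smoothed cross-validation discussed above; the data and the search grid are placeholders, and Silverman's "optimal Gaussian" rule of thumb is shown for comparison.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

x = np.random.randn(500, 1)   # stand-in for continuous monitor readings

# Silverman's optimal-Gaussian rule of thumb: h = 1.06 * std * n^(-1/5)
h_silverman = 1.06 * x.std() * len(x) ** (-1 / 5)

# Likelihood cross-validation over a bandwidth grid.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.logspace(-1, 0.5, 20)}, cv=5)
grid.fit(x)
print(h_silverman, grid.best_params_["bandwidth"])
```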

6.
Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets, and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval.
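A sketch of the kernelized hash construction in the spirit of the Kulis–Grauman formulation: each hash bit thresholds a weighted sum of kernel values against a set of sampled anchor points, with weights w = K^(-1/2) e_S for a random anchor subset S. All names, parameters and the anchor-based setup are illustrative assumptions.

```python
import numpy as np

def klsh_weights(K, n_bits=32, t=30, seed=0):
    """Weights for kernelized LSH bits: w = K^(-1/2) e_S for a random
    subset S of the sampled anchor points (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    vals, vecs = np.linalg.eigh(K)        # K: p x p anchor kernel matrix
    K_inv_sqrt = vecs @ np.diag(np.maximum(vals, 1e-10) ** -0.5) @ vecs.T
    W = np.zeros((n_bits, K.shape[0]))
    for b in range(n_bits):
        e = np.zeros(K.shape[0])
        e[rng.choice(K.shape[0], size=t, replace=False)] = 1.0
        W[b] = K_inv_sqrt @ e
    return W

# A query's binary code is np.sign(W @ k_q), where k_q[i] holds the
# kernel value between the query and the i-th anchor point.
```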

7.
In many data stream mining applications, traditional density estimation methods such as kernel density estimation and reduced set density estimation cannot be applied to data streams because of their high computational burden, processing time and intensive memory requirements. To reduce the time and space complexity, we propose a novel density estimation method, Dm-KDE, over data streams based on the proposed algorithm m-KDE, which designs a KDE estimator with a fixed number of kernel components for a dataset. In this method, Dm-KDE sequence entries are created by algorithm m-KDE instead of using all kernels obtained from other density estimation methods. To further reduce the storage space, Dm-KDE sequence entries can be merged by calculating their KL divergences. Finally, the probability density functions over an arbitrary time window, or over all time, can be estimated through the obtained estimation model. In contrast to the state-of-the-art algorithm SOMKE, the distinctive advantage of the proposed algorithm Dm-KDE is that it achieves the same accuracy with a much smaller fixed number of kernel components, making it suitable for scenarios that demand more online computation of the kernel density estimate over data streams. We compare Dm-KDE with SOMKE and M-kernel in terms of density estimation accuracy and running time for various stationary datasets. We also apply Dm-KDE to evolving data streams. Experimental results illustrate the effectiveness of the proposed method.
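The KL-divergence-based merging of kernel components can be sketched as follows for univariate Gaussian kernels: `kl_gauss` gives the closed-form divergence used to pick the closest pair, and `merge` is a standard moment-preserving combination. This is an illustrative reconstruction of the merge step, not the paper's exact procedure.

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL divergence between two univariate Gaussians."""
    return np.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

def merge(w1, m1, s1, w2, m2, s2):
    """Moment-preserving merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    var = (w1 * (s1 ** 2 + (m1 - m) ** 2) +
           w2 * (s2 ** 2 + (m2 - m) ** 2)) / w
    return w, m, np.sqrt(var)
```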

8.
Mixture modeling is one of the most useful tools in machine learning and data mining applications. An important challenge when applying finite mixture models is the selection of the number of clusters that best describes the data. Recent developments have shown that this problem can be handled by applying non-parametric Bayesian techniques to mixture modeling. Another crucial preprocessing step for mixture learning is the selection of the most relevant features. The main approach of this paper to tackling these problems is to store the knowledge in a generalized Dirichlet mixture model using non-parametric Bayesian estimation and inference techniques. Specifically, we extend finite generalized Dirichlet mixture models to the infinite case, in which the number of components and relevant features do not need to be known a priori. This extension provides a natural representation of uncertainty regarding the challenging problem of model selection. We propose a Markov chain Monte Carlo algorithm to learn the resulting infinite mixture. Through applications involving text and image categorization, we show that infinite mixture models offer a more powerful and robust performance than classic finite mixtures for both clustering and feature selection.

9.
We propose a locally adaptive technique to address the problem of setting the bandwidth parameters for kernel density estimation. Our technique is efficient and can be performed in only two dataset passes. We also show how to apply our technique to efficiently solve range query approximation, classification and clustering problems for very large datasets. We validate the efficiency and accuracy of our technique by presenting experimental results on a variety of both synthetic and real datasets.
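The flavor of a two-pass locally adaptive bandwidth scheme can be conveyed with the classical Abramson sample-point rule: a first pass builds a pilot density with a fixed bandwidth, and a second pass widens or narrows each sample's bandwidth against it. This is an illustrative stand-in for the authors' technique, with all names and the fixed pilot bandwidth `h0` assumed.

```python
import numpy as np

def adaptive_kde(data, h0):
    """Two-pass sample-point adaptive KDE (Abramson-style sketch).
    data: 1-D sample array; h0: fixed pilot bandwidth."""
    # Pass 1: pilot density at each sample with a fixed bandwidth h0.
    d = (data[:, None] - data[None, :]) / h0
    pilot = np.exp(-0.5 * d ** 2).mean(1) / (h0 * np.sqrt(2 * np.pi))
    # Pass 2: local bandwidths inversely proportional to sqrt(pilot).
    g = np.exp(np.log(pilot).mean())          # geometric mean of pilot
    h = h0 * np.sqrt(g / pilot)
    def density(x):                           # x: 1-D evaluation points
        u = (x[:, None] - data[None, :]) / h[None, :]
        return (np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))).mean(1)
    return density
```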

10.
Vehicle Type Recognition Based on a Sparse Bayesian Classifier
Sparse Bayesian methods generalize well on classification problems while using relatively few kernel functions. This paper presents a real-time vehicle type recognition system. The system estimates the background of the video images with a per-pixel Gaussian mixture model of color information, thereby detecting vehicles; a sparse Bayesian classifier then assigns each detected vehicle to a type. Experimental results show that the sparse Bayesian classifier not only matches the performance of support vector machines but also uses fewer kernel functions than the SVM, and good classification results were obtained.
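The detection stage described above, a per-pixel color Gaussian mixture background model, corresponds closely to standard background subtraction. A minimal sketch using OpenCV's MOG2 implementation as a stand-in; the video path is hypothetical.

```python
import cv2

# Per-pixel Gaussian mixture background model for vehicle detection.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
cap = cv2.VideoCapture("traffic.mp4")   # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)   # nonzero pixels = candidate vehicles
cap.release()
```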

11.
This paper addresses the problem of proportional data modeling and clustering using mixture models, a problem of great interest and importance for many practical pattern recognition, image processing, data mining and computer vision applications. Finite mixture models are broadly applicable to clustering problems, but they involve the challenging selection of the number of clusters, which requires a certain trade-off. The number of clusters must be sufficient to provide the discriminating capability between clusters required for a given application. Indeed, if too many clusters are employed, overfitting may occur, while too few lead to underfitting. Here we approach the problem of modeling and clustering proportional data using infinite mixtures, which have been shown to be an efficient alternative to finite mixtures by overcoming the concern regarding the selection of the optimal number of mixture components. In particular, we propose and discuss an infinite Liouville mixture model whose parameter values are fitted to the data through a principled Bayesian algorithm that we have developed and which allows uncertainty in the number of mixture components. Our experimental evaluation involves two challenging applications, namely text classification and texture discrimination, and suggests that the proposed approach can be an excellent choice for proportional data modeling.

12.
This paper gives two methods for the L1 analysis of sampled-data systems, by which we mean computing the L∞-induced norm of sampled-data systems. This is achieved by developing what we call the kernel approximation approach in the setting of sampled-data systems. We first consider the lifting treatment of sampled-data systems and give an operator-theoretic representation of their input/output relation. We further apply the fast-lifting technique, by which the sampling interval [0, h) is divided into M subintervals of equal width, and provide methods for computing the L∞-induced norm. In contrast to a similar approach developed earlier, called the input approximation approach, we use the idea of kernel approximation, in which the kernel function of an input operator and the hold function of an output operator are approximated by piecewise constant or piecewise linear functions. Furthermore, it is shown that the approximation errors in the piecewise constant and piecewise linear approximation schemes converge to 0 at the rate of 1/M and 1/M², respectively. In comparison with the existing input approximation approach, in which the input function (rather than the kernel function) of the input operator is approximated by piecewise constant or piecewise linear functions, we show that the kernel approximation approach gives improved computation results. More precisely, even though the convergence rates of the kernel approximation approach remain qualitatively the same as those of the input approximation approach, the new approach can lead to quantitatively smaller approximation errors, particularly when the piecewise linear approximation scheme is used. Finally, a numerical example is given to demonstrate the effectiveness of the kernel approximation approach with this scheme.

13.
The heat kernel is a fundamental geometric object associated with every Riemannian manifold, used across applications in computer vision, graphics, and machine learning. In this article, we propose a novel computational approach to estimating the heat kernel of a statistically sampled manifold (e.g. meshes or point clouds), using its representation as the transition density function of Brownian motion on the manifold. Our approach first constructs a set of local approximations to the manifold via moving least squares. We then simulate Brownian motion on the manifold by stochastic numerical integration of the associated Itô diffusion system. By accumulating a number of these trajectories, a kernel density estimation method can then be used to approximate the transition density function of the diffusion process, which is equivalent to the heat kernel. We analyse our algorithm on the 2-sphere, as well as on shapes in 3D. Our approach is readily parallelizable and can handle manifold samples of large size as well as surfaces of high co-dimension, since all the computations are local. We relate our method to the standard approaches in diffusion geometry and discuss directions for future work.
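A drastically simplified sketch of the simulate-then-estimate pipeline on the 2-sphere: Brownian paths are generated by tangent-space Gaussian steps with reprojection, and the heat kernel at time t is approximated by a KDE over the endpoints. The step scheme, bandwidth and normalization are illustrative assumptions, not the article's integrator.

```python
import numpy as np

def sphere_brownian_endpoints(x0, t, n_paths=10000, n_steps=200):
    """Approximate Brownian motion on the 2-sphere: Gaussian steps in
    the tangent plane followed by reprojection onto the sphere."""
    dt = t / n_steps
    x = np.tile(x0 / np.linalg.norm(x0), (n_paths, 1))
    for _ in range(n_steps):
        step = np.sqrt(dt) * np.random.randn(n_paths, 3)
        step -= (step * x).sum(1, keepdims=True) * x    # tangent projection
        x = x + step
        x /= np.linalg.norm(x, axis=1, keepdims=True)   # back onto the sphere
    return x

def heat_kernel_at(y, endpoints, h=0.1):
    """KDE over simulated endpoints approximates the transition density
    of the diffusion, i.e. the heat kernel (Euclidean kernel used as a
    local approximation, normalized for a 2-D tangent plane)."""
    d2 = ((endpoints - y) ** 2).sum(1)
    return np.exp(-d2 / (2 * h ** 2)).mean() / (2 * np.pi * h ** 2)
```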

14.
A Tensor Approximation Approach to Dimensionality Reduction
Dimensionality reduction has recently been extensively studied for computer vision applications. We present a novel multilinear-algebra-based approach to reduced dimensionality representation of multidimensional data, such as image ensembles, video sequences and volume data. Unlike traditional dimensionality reduction techniques such as PCA, we do not convert the data into a vector before reducing its dimensionality. Our approach works directly on the multidimensional form of the data (matrix in 2D and tensor in higher dimensions) to yield what we call a Datum-as-Is representation. This helps exploit spatio-temporal redundancies with less information loss than image-as-vector methods. An efficient rank-R tensor approximation algorithm is presented to approximate higher-order tensors. We show that rank-R tensor approximation using the Datum-as-Is representation generalizes many existing approaches that use an image-as-matrix representation, such as generalized low rank approximation of matrices (GLRAM) (Ye, Y. in Mach. Learn. 61:167–191, 2005), rank-one decomposition of matrices (RODM) (Shashua, A., Levin, A. in CVPR'01: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, p. 42, 2001) and rank-one decomposition of tensors (RODT) (Wang, H., Ahuja, N. in ICPR'04: Proceedings of the 17th International Conference on Pattern Recognition, vol. 1, pp. 44–47, 2004). Our approach yields the most compact data representation among all known image-as-matrix methods. In addition, we propose another rank-R tensor approximation algorithm based on slice projection of third-order tensors, which needs fewer iterations to converge for the important special case of 2D image ensembles, e.g., video. We evaluated the performance of our approach against other approaches on a number of datasets, with the following two main results. First, for a fixed compression ratio, the proposed algorithm yields the best representation of image ensembles both visually and in the least-squares sense. Second, the proposed representation gives the best performance for object classification. A shorter version of this paper was published at IEEE CVPR 2005 (Wang and Ahuja 2005).
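The following numpy sketch shows a truncated higher-order SVD, a standard multilinear low-rank approximation that operates on the tensor ("Datum-as-Is") directly; it is offered as a stand-in for the paper's rank-R algorithm, and the function names are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: keep the leading `ranks[m]` left
    singular vectors of each mode-m unfolding, then project."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors   # reconstruct by applying each U back to the core
```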

15.
In this paper, we propose a novel formulation extending convolutional neural networks (CNN) to arbitrary two-dimensional manifolds using orthogonal basis functions called Zernike polynomials. In many areas, geometric features play a key role in understanding scientific trends and phenomena, where accurate numerical quantification of geometric features is critical. Recently, CNNs have demonstrated a substantial improvement in extracting and codifying geometric features. However, the progress is mostly centred around computer vision and its applications where an inherent grid-like data representation is naturally present. In contrast, many geometry processing problems deal with curved surfaces and the application of CNNs is not trivial due to the lack of canonical grid-like representation, the absence of globally consistent orientation and the incompatible local discretizations. In this paper, we show that the Zernike polynomials allow rigorous yet practical mathematical generalization of CNNs to arbitrary surfaces. We prove that the convolution of two functions can be represented as a simple dot product between Zernike coefficients and the rotation of a convolution kernel is essentially a set of 2 × 2 rotation matrices applied to the coefficients. The key contribution of this work is in such a computationally efficient but rigorous generalization of the major CNN building blocks.

16.
The application of NP-SSO (non-parametric stochastic subset optimization) to general design-under-uncertainty problems, and its enhancement through various soft computing techniques, is discussed. NP-SSO relies on iterative simulation of samples of the design variables from an auxiliary probability density and approximates the objective function through kernel density estimation (KDE) using these samples. To deal with boundary correction in complex domains, a multivariate boundary KDE based on local linear estimation is adopted in this work. A non-parametric characterization of the search space at each iteration, using a framework based on support vector machines, is also formulated. To further improve computational efficiency, an adaptive kernel sampling density formulation is integrated and an adaptive, iterative selection of the number of samples needed for the KDE implementation is established.

17.
Anomaly detection in images is an important research topic in computer vision and can be formulated as a one-class classification problem. To handle the large scale and high dimensionality of image datasets, a new anomaly detection model, CAE-OCSVM, is proposed, combining a deep convolutional autoencoder (CAE) with a kernel-approximated one-class support vector machine (OCSVM). The deep convolutional autoencoder learns an intrinsic feature representation of the images; random Fourier features then provide a kernel approximation of these learned features, which are fed into a linear one-class SVM for image anomaly detection. The kernel approximation overcomes the high time complexity of kernel learning, and the deep convolutional autoencoder and the kernel-approximated one-class SVM are trained end-to-end by gradient descent. The AUC performance of the model is evaluated on four public image benchmark datasets, and the model is compared with other commonly used anomaly detection models at different anomaly rates. The experimental results confirm that CAE-OCSVM outperforms the other anomaly detection models on all four public image datasets, indicating that it is better suited to anomaly detection on large-scale, high-dimensional datasets.
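The kernel-approximation stage can be sketched with scikit-learn: random Fourier features approximate the RBF kernel so that a linear one-class SVM can replace the costly kernelized one. This reproduces only that stage; the convolutional autoencoder and the end-to-end training are assumed to run upstream, and the feature array is a placeholder.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDOneClassSVM
from sklearn.pipeline import make_pipeline

features = np.random.randn(1000, 128)   # stand-in for CAE feature codes

# Random Fourier features + linear one-class SVM.
model = make_pipeline(RBFSampler(gamma=0.5, n_components=512),
                      SGDOneClassSVM(nu=0.1))
model.fit(features)
scores = model.decision_function(features)  # low scores flag anomalies
```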

18.
In this paper, a kernel-based learning algorithm, kernel rank, is presented for improving the performance of semantic concept detection. By designing a classifier that optimizes the receiver operating characteristic (ROC) curve using kernel rank, we provide a generic framework to optimize any differentiable ranking function using effective smoothing functions. Kernel rank directly maximizes a one-dimensional quality measure of the ROC, namely the AUC (area under the ROC curve). It exploits kernel density estimation to model the ranking score distributions and approximate the correct ranking count. The ranking metric is then derived and the learnable parameters are naturally embedded. To address the issues of computation and memory in learning, an efficient implementation is developed based on the gradient descent algorithm. We apply kernel rank with two types of kernel density functions to train linear discriminant function and Gaussian mixture model classifiers. From our experiments carried out on the development set for TREC Video Retrieval 2005, we conclude that (1) kernel rank is capable of training any differentiable classifier with various kernels; and (2) the learned ranking function performs better than traditional maximum likelihood or classification error minimization based algorithms in terms of AUC and average precision (AP).
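To illustrate the idea of smoothing the ranking count, the sketch below maximizes a sigmoid-smoothed AUC surrogate for a linear scorer by gradient ascent. The sigmoid is one smoothing choice among many (the paper uses kernel density functions), and all names and parameters are assumptions.

```python
import numpy as np

def smoothed_auc_grad(w, X_pos, X_neg, beta=5.0):
    """Gradient of a sigmoid-smoothed AUC surrogate for a linear
    scorer f(x) = w.x:  AUC ~ mean sigmoid(beta * (f(x+) - f(x-)))."""
    diff = X_pos[:, None, :] - X_neg[None, :, :]   # all (pos, neg) pairs
    s = 1.0 / (1.0 + np.exp(beta * (diff @ w)))    # 1 - sigmoid(beta*margin)
    return (beta * s * (1 - s))[..., None] * diff  # per-pair gradients

def train_ranker(X_pos, X_neg, lr=0.1, iters=200):
    """Gradient ascent on the smoothed AUC surrogate."""
    w = np.zeros(X_pos.shape[1])
    for _ in range(iters):
        w += lr * smoothed_auc_grad(w, X_pos, X_neg).mean(axis=(0, 1))
    return w
```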

19.
The popular Expectation-Maximization technique suffers a major drawback when used to approximate a density function by a mixture of Gaussian components: the number of components has to be specified a priori. Also, Expectation-Maximization by itself cannot estimate time-varying density functions. In this paper, a novel stochastic technique is introduced to overcome these two limitations. Kernel density estimation is used to obtain a discrete estimate of the true density of the given data. A stochastic learning automaton is then used to select the number of mixture components that minimizes the distance between the density function estimated by Expectation-Maximization and the discrete estimate of the density. The validity of the proposed approach is verified using synthetic and real univariate and bivariate observation data.
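For comparison, a common baseline for choosing the number of components is an information criterion. A minimal scikit-learn sketch selecting the BIC-minimizing Gaussian mixture; the data are placeholders, and BIC is a stand-in for the paper's automaton-based selection.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

data = np.random.randn(1000, 1)   # stand-in observations

# Fit EM for a range of component counts and keep the BIC minimizer.
best = min((GaussianMixture(k).fit(data) for k in range(1, 8)),
           key=lambda gm: gm.bic(data))
print(best.n_components)
```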

20.
Constrained clustering methods (which usually use must-link and/or cannot-link constraints) have received much attention in the last decade. Recently, kernel adaptation or kernel learning has been considered a powerful approach for constrained clustering. However, these methods usually either allow only special forms of kernels or learn non-parametric kernel matrices and scale very poorly. Therefore, they either learn a metric with low flexibility or are applicable only to small data sets due to their high computational complexity. In this paper, we propose a more efficient non-linear metric learning method that learns a low-rank kernel matrix from must-link and cannot-link constraints and the topological structure of the data. We formulate the proposed method as a trace-ratio optimization problem and learn appropriate distance metrics by finding optimal low-rank kernel matrices. We solve the proposed optimization problem much more efficiently than SDP solvers. Additionally, we show that spectral clustering methods can be considered a special form of low-rank kernel learning. Extensive experiments have demonstrated the superiority of the proposed method compared to recently introduced kernel learning methods.
