Similar Literature
20 similar documents found.
1.
We propose a novel system for designing and manufacturing surfaces that produce desired caustic images when illuminated by a light source. Our system is based on a nonnegative image decomposition using a set of possibly overlapping anisotropic Gaussian kernels. We utilize this decomposition to construct an array of continuous surface patches, each of which focuses light onto one of the Gaussian kernels, either through refraction or reflection. We show how to derive the shape of each continuous patch and arrange them by performing a discrete assignment of patches to kernels in the desired caustic. Our decomposition provides for high-fidelity reconstruction of natural images using a small collection of patches. We demonstrate our approach on a wide variety of caustic images by manufacturing physical surfaces with a small number of patches.
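A minimal sketch of the nonnegative decomposition step, assuming a grayscale target image and a dictionary of isotropic Gaussians on a coarse grid (the paper uses possibly overlapping anisotropic kernels; the grid step and widths below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_atom(h, w, cy, cx, sigma):
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
    return g.ravel() / g.sum()

def decompose(image, step=4, sigmas=(2.0, 4.0)):
    h, w = image.shape
    atoms = [gaussian_atom(h, w, cy, cx, s)
             for cy in range(0, h, step)
             for cx in range(0, w, step)
             for s in sigmas]
    A = np.stack(atoms, axis=1)                 # (pixels, n_atoms) dictionary
    weights, residual = nnls(A, image.ravel())  # nonnegative least squares
    return weights, A, residual

image = np.random.rand(32, 32)                  # stand-in for a target caustic image
weights, A, residual = decompose(image)
reconstruction = (A @ weights).reshape(image.shape)
```

Each nonzero weight would then correspond to one surface patch focusing light onto that kernel.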

2.
In recent years, Bayesian methods that use Gaussian models as patch priors have achieved excellent image-denoising performance, but they are less stable when applied to inverse problems other than denoising. We propose a hierarchical Bayesian Gaussian mixture model for image patches that places prior knowledge on the model parameters: a Gaussian-Wishart distribution models the probability distribution of the mean and covariance matrix, which makes the patch-estimation process more stable. Exploiting the coherence of neighboring patches, similar patches within a local window are clustered under the L2-norm metric and modeled with a multivariate Gaussian distribution with a specific mean and covariance; numerical optimizations based on summed-area tables and the fast Fourier transform accelerate the similarity computation. Aggregation weights based on a Mahalanobis-distance Gaussian similarity, combined with a spatial-domain Gaussian similarity over the image, better fit the statistics of natural images. Experiments verify the effectiveness of the proposed model for image restoration.
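A minimal sketch of prior-stabilized patch statistics in the spirit of this abstract, using the standard Normal-inverse-Wishart posterior-mean formulas for a cluster of similar patches; the paper's exact hierarchical model differs, and the hyperparameters kappa0, nu0, and Psi below are illustrative assumptions:

```python
import numpy as np

def posterior_gaussian(patches, mu0, kappa0=1.0, nu0=None, Psi=None):
    n, d = patches.shape
    nu0 = d + 2 if nu0 is None else nu0
    Psi = np.eye(d) if Psi is None else Psi
    xbar = patches.mean(axis=0)
    S = (patches - xbar).T @ (patches - xbar)        # scatter matrix
    mu = (kappa0 * mu0 + n * xbar) / (kappa0 + n)    # posterior mean of the mean
    dev = (xbar - mu0)[:, None]
    # Posterior-mean covariance: shrinks the sample scatter toward Psi,
    # keeping the estimate well conditioned for small patch clusters.
    Sigma = (Psi + S + (kappa0 * n) / (kappa0 + n) * (dev @ dev.T)) \
            / (nu0 + n - d - 1)
    return mu, Sigma

patches = np.random.randn(20, 64)        # 20 similar 8x8 patches, vectorized
mu, Sigma = posterior_gaussian(patches, mu0=np.zeros(64))
```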

3.
Crowd-density estimation is critical for public-safety management. For crowd-density estimation in video-surveillance systems, we propose a method based on an improved Gaussian mixture model and pixel counting. Gaussian-model features are extracted by computing the image mean and mean deviation, and the mixture-of-Gaussians background image is rebuilt under a constant model-update rate, yielding a binary crowd foreground map; finally, pixel counting gives a fast crowd-density estimate. Experimental results show that, compared with traditional methods, this approach estimates crowd density more accurately and effectively.
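A minimal sketch of the pipeline, substituting OpenCV's stock MOG2 model for the paper's improved mixture; the constant learning rate and the density thresholds are illustrative assumptions:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
cap = cv2.VideoCapture("surveillance.mp4")        # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame, learningRate=0.005)  # constant update rate
    fg = cv2.medianBlur(fg, 5)                        # suppress speckle noise
    ratio = cv2.countNonZero(fg) / float(fg.size)     # foreground pixel ratio
    level = "high" if ratio > 0.3 else "medium" if ratio > 0.1 else "low"
    print(f"foreground ratio {ratio:.3f} -> crowd density {level}")
cap.release()
```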

4.
We address the problem of probability density function estimation using a Gaussian mixture model updated with the expectation-maximization (EM) algorithm. To deal with the case of an unknown number of mixing kernels, we define a new measure for Gaussian mixtures, called total kurtosis, which is based on the weighted sample kurtoses of the kernels. This measure provides an indication of how well the Gaussian mixture fits the data. We then propose a new dynamic algorithm for Gaussian mixture density estimation which monitors the total kurtosis at each step of the EM algorithm in order to decide dynamically on the correct number of kernels and possibly escape from local maxima. We show the potential of our technique in approximating unknown densities through a series of examples with several density estimation problems.
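A minimal sketch of the total-kurtosis measure for a one-dimensional mixture, using responsibilities from a fitted scikit-learn model; the paper's dynamic kernel-adding EM loop is not reproduced here, and the 1-D setting is an illustrative assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def total_kurtosis(gmm, x):
    """Weighted sum of the kernels' weighted sample excess kurtoses (1-D sketch)."""
    x = x.reshape(-1, 1)
    R = gmm.predict_proba(x)                        # responsibilities (n, K)
    kurt = 0.0
    for k in range(gmm.n_components):
        r = R[:, k]
        mu = gmm.means_[k, 0]
        var = gmm.covariances_[k].ravel()[0]
        m4 = np.sum(r * (x[:, 0] - mu) ** 4) / r.sum()
        kurt += gmm.weights_[k] * (m4 / var ** 2 - 3.0)  # 0 for a true Gaussian
    return kurt

x = np.concatenate([np.random.randn(300) - 3, np.random.randn(300) + 3])
for K in (1, 2, 3):
    gmm = GaussianMixture(n_components=K).fit(x.reshape(-1, 1))
    print(K, total_kurtosis(gmm, x))   # closest to 0 when the fit matches the data
```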

5.
Point matching under large image deformations and illumination changes
To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first-order differential techniques is proposed. We integrate the traditional optical-flow method and matching of local color distributions in a single robust M-estimation framework. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and takes the illumination changes between the two images into account. Subpixel matching accuracy is achieved under large projective distortions, significantly exceeding the performance of either of the two components alone. As an application, the correspondence algorithm is employed in oriented tracking of objects.

6.
Detection and delineation of lines is important for many applications. However, most existing algorithms have high computational cost and cannot meet on-board real-time processing requirements. This paper presents a novel method for curvilinear structure extraction and delineation using kernel-based density estimation. The method is based on efficient calculation of a pixel-wise density estimate for an input feature image, termed local weighted features (LWF). For gray and binary images, the LWF can be efficiently calculated by an integral image and an accumulated image, respectively. Detectors for small objects and centerlines based on LWF are developed, and the selection of density-estimation kernels is also illustrated. The algorithm is very fast, achieving 50 fps on a 2.4 GHz Pentium IV processor. Evaluation results on a number of images and videos demonstrate the satisfactory performance, high stability, and adaptability of the proposed method.
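A minimal sketch of the fast density-estimation core: with an integral image, the sum of a feature image over any box window is obtained in O(1) per pixel. The box kernel and window radius are illustrative assumptions:

```python
import numpy as np

def integral_image(f):
    return np.pad(f, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_density(f, radius):
    h, w = f.shape
    ii = integral_image(f.astype(np.float64))
    y0 = np.clip(np.arange(h) - radius, 0, h)
    y1 = np.clip(np.arange(h) + radius + 1, 0, h)
    x0 = np.clip(np.arange(w) - radius, 0, w)
    x1 = np.clip(np.arange(w) + radius + 1, 0, w)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    # Four-corner lookup gives the windowed sum at every pixel at once.
    return ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]

feature = (np.random.rand(64, 64) > 0.95).astype(float)  # sparse feature-map stand-in
density = box_density(feature, radius=3)                  # local feature density
```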

7.
Gaussian mean-shift is an EM algorithm
The mean-shift algorithm, based on ideas proposed by Fukunaga and Hostetler, is a hill-climbing algorithm on the density defined by a finite mixture or a kernel density estimate. Mean-shift can be used as a nonparametric clustering method and has attracted recent attention in computer vision applications such as image segmentation or tracking. We show that, when the kernel is Gaussian, mean-shift is an expectation-maximization (EM) algorithm and, when the kernel is non-Gaussian, mean-shift is a generalized EM algorithm. This implies that mean-shift converges from almost any starting point and that, in general, its convergence is of linear order. For Gaussian mean-shift, we show: 1) the rate of linear convergence approaches 0 (superlinear convergence) for very narrow or very wide kernels, but is often close to 1 (thus, extremely slow) for intermediate widths and exactly 1 (sublinear convergence) for widths at which modes merge, 2) the iterates approach the mode along the local principal component of the data points from the inside of the convex hull of the data points, and 3) the convergence domains are nonconvex and can be disconnected and show fractal behavior. We suggest ways of accelerating mean-shift based on the EM interpretation.
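A minimal sketch of Gaussian mean-shift mode seeking; in the EM view above, computing the kernel weights plays the role of the E-step and the weighted mean the M-step (the bandwidth and stopping tolerance are illustrative assumptions):

```python
import numpy as np

def gaussian_mean_shift(x0, data, bandwidth=1.0, tol=1e-6, max_iter=500):
    x = x0.astype(np.float64).copy()
    for _ in range(max_iter):
        d2 = np.sum((data - x) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)        # E-step: posterior weights
        x_new = (w[:, None] * data).sum(0) / w.sum()  # M-step: weighted mean
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

data = np.random.randn(500, 2) + np.array([3.0, 3.0])
mode = gaussian_mean_shift(np.array([0.0, 0.0]), data, bandwidth=1.0)
```

Each iterate moves the query point toward a weighted mean of the data, which is exactly the monotone density-ascent step the paper analyzes.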

8.
The common paradigm employed for object detection is the sliding-window (SW) search. This approach generates grid-distributed patches, at all possible positions and sizes, which are evaluated by a binary classifier. The tradeoff between computational burden and detection accuracy is the central limitation of sliding windows; several methods have been proposed to speed up the search, such as adding complementary features. We propose a paradigm that differs from any previous approach in that it casts object detection as a statistics-based search, using Monte Carlo sampling to estimate the likelihood density function with Gaussian kernels. The estimation relies on a multistage strategy in which the proposal distribution is progressively refined by taking into account the feedback of the classifiers. The method can be easily plugged into a Bayesian-recursive framework to exploit the temporal coherency of the target objects in videos. Several tests on pedestrian and face detection, both on images and videos, with different types of classifiers (cascades of boosted classifiers, soft cascades, and SVMs) and features (covariance matrices, Haar-like features, integral channel features, and histograms of oriented gradients) demonstrate that the proposed method provides higher detection rates and accuracy, as well as a lower computational burden, than sliding-window detection.
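A minimal sketch of the multistage statistical search: sample candidate windows from a Gaussian proposal, score them, and refit the proposal on the best-scoring samples. The smooth scoring function below is a hypothetical placeholder, not the paper's cascades or SVMs:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([120.0, 80.0, 48.0])       # hidden (x, y, scale) of the object

def classifier_score(windows):
    # Hypothetical score peaking at the target window.
    return np.exp(-0.5 * np.sum(((windows - target) / 15.0) ** 2, axis=1))

mean = np.array([100.0, 100.0, 40.0])        # initial proposal over (x, y, scale)
cov = np.diag([50.0, 50.0, 15.0]) ** 2
for stage in range(5):
    windows = rng.multivariate_normal(mean, cov, size=300)
    scores = classifier_score(windows)
    elite = windows[np.argsort(scores)[-30:]]          # classifier feedback
    mean, cov = elite.mean(0), np.cov(elite.T) + 1e-3 * np.eye(3)
print("estimated window:", mean)
```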

9.
High-dimensional density estimation is a fundamental problem in pattern recognition and machine learning. In this letter, we show that, for complete high-dimensional Gaussian density estimation, two widely used methods, probabilistic principal component analysis and a typical subspace method using eigenspace decomposition, actually give the same results. Additionally, we present a unified view from the perspective of robust estimation of the covariance matrix.
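A minimal sketch checking the equivalence numerically: the PPCA covariance W W^T + sigma^2 I built from the top-q eigenpairs matches the eigenspace estimate that keeps q eigenvalues and replaces the discarded ones by their average (q and the synthetic data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))
S = np.cov(X.T)
vals, vecs = np.linalg.eigh(S)
vals, vecs = vals[::-1], vecs[:, ::-1]      # sort eigenpairs descending

q = 3
sigma2 = vals[q:].mean()                    # average discarded eigenvalue
W = vecs[:, :q] @ np.diag(np.sqrt(vals[:q] - sigma2))
C_ppca = W @ W.T + sigma2 * np.eye(10)      # PPCA covariance

C_eig = vecs @ np.diag(np.r_[vals[:q], np.full(10 - q, sigma2)]) @ vecs.T
print(np.allclose(C_ppca, C_eig))           # True: the two estimates coincide
```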

10.
石锐  陈中秋  刘晶淼 《计算机应用》2013,33(9):2588-2591
Vector-based denoising of color images has high algorithmic complexity and cannot run in real time. To address this, a high-fidelity color-image denoising method based on improved Gaussian weighting and adaptive manifolds is proposed. First, the color image is lifted to high-dimensional data as in the non-local means algorithm, and an improved Gaussian kernel is used to compute weights over the color image. A splatting ("snowball-throwing") procedure then processes the high-dimensional data, projecting each pixel's color onto adaptive manifolds with Gaussian distances as weights. Next, the manifolds are smoothed and reduced in dimension, and the image is smoothed iteratively. Finally, the smoothed values collected on the manifolds are interpolated back to all pixels to obtain the denoised image. Experiments show that the method preserves the detail of the original image well without mixing in the colors of surrounding pixels, runs fast enough for real-time processing, and, compared with the original algorithm, improves peak signal-to-noise ratio (PSNR) by nearly 2.0 dB and structural similarity by more than one percentage point.

11.
Using very high-dimensional feature spaces in a system raises standard problems that must be addressed, such as high calculation costs, storage demands, and training requirements. To partially circumvent these problems, we propose combining a very high-dimensional feature space with image patches. This union allows the image patches to be represented efficiently as sparse vectors while taking advantage of the high-dimensional properties. The key to making the system perform efficiently is the use of a sparse histogram representation for the color space, which makes the calculations largely independent of the feature-space dimension. The system can operate under multiple Lp norms or mixed metrics, which allows for optimized metrics for the feature vector. An optimal tree structure is also introduced for the approximate-nearest-neighbor tree to aid in patch classification. It is shown that the system can be applied effectively in a variety of applications.
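A minimal sketch of a sparse color-histogram patch representation with an Lp distance; the bin count and norm order are illustrative assumptions:

```python
import numpy as np

def sparse_histogram(patch, bins_per_channel=16):
    """Map an (N, 3) RGB patch to {bin_index: frequency}, storing only nonzero bins."""
    q = (patch * (bins_per_channel - 1) / 255.0).astype(int)
    idx = (q[:, 0] * bins_per_channel + q[:, 1]) * bins_per_channel + q[:, 2]
    keys, counts = np.unique(idx, return_counts=True)
    return dict(zip(keys.tolist(), (counts / len(patch)).tolist()))

def lp_distance(h1, h2, p=1.0):
    # Only bins present in either histogram contribute, so the cost scales
    # with the number of nonzero bins, not the full feature-space dimension.
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) ** p for k in keys) ** (1.0 / p)

a = np.random.randint(0, 256, size=(64, 3))   # two random 8x8 patches
b = np.random.randint(0, 256, size=(64, 3))
print(lp_distance(sparse_histogram(a), sparse_histogram(b), p=1.0))
```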

12.
Regularization is a well-known technique in statistics for model estimation, used to improve the generalization ability of the estimated model. Some regularization methods can also be used for variable selection, which is especially useful in high-dimensional problems. This paper studies the use of regularized model learning in estimation of distribution algorithms (EDAs) for continuous optimization based on Gaussian distributions. We introduce two approaches to regularized model estimation and analyze their effect on the accuracy and computational complexity of model learning in EDAs. We then apply the proposed algorithms to a number of continuous optimization functions and compare their results with other Gaussian-distribution-based EDAs. The results show that the optimization performance of the proposed RegEDAs is less affected by the increase in problem size than that of the other EDAs, and that they obtain significantly better optimization values for many of the functions in high-dimensional settings.
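A minimal sketch of a Gaussian EDA with a regularized model-estimation step, using sparse inverse-covariance (graphical lasso) as the regularizer; the objective, selection ratio, and alpha are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def sphere(x):                                   # toy continuous objective
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(0)
dim, pop_size, n_sel = 20, 200, 60
pop = rng.uniform(-5, 5, size=(pop_size, dim))
for gen in range(15):
    elite = pop[np.argsort(sphere(pop))[:n_sel]]     # truncation selection
    model = GraphicalLasso(alpha=0.1).fit(elite)     # regularized Gaussian model
    pop = rng.multivariate_normal(model.location_,
                                  model.covariance_ + 1e-6 * np.eye(dim),
                                  size=pop_size)     # sample next population
print("best value:", sphere(pop).min())
```

The regularized covariance keeps the sampling model well conditioned even when the selected set is small relative to the problem dimension, which is the effect the abstract highlights.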

13.
In this paper, a mean shift-based clustering algorithm is proposed. The mean shift is a kernel-type weighted-mean procedure. Herein, we first discuss three classes of kernels, Gaussian, Cauchy, and generalized Epanechnikov, together with their shadows. The robustness properties of the mean shift based on these three kernels are then investigated. Based on the mountain-function concept, we propose a graphical method of correlation comparisons as an estimate of the defined stabilization parameters. The proposed method thus addresses the bandwidth-selection problem from a different point of view. Numerical examples and comparisons demonstrate the advantages of the proposed method in computational complexity, cluster validity, and improvements of the mean shift on large continuous and discrete data sets. We finally apply the mean shift-based clustering algorithm to image segmentation.

14.
Probabilistic visual learning for object representation
We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a mixture-of-Gaussians model (for multimodal distributions). Those probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands.

15.
Discriminative human pose estimation is the problem of inferring the 3D articulated pose of a human directly from an image feature. This is a challenging problem due to the highly non-linear and multi-modal mapping from the image feature space to the pose space. To address this problem, we propose a model employing a mixture of Gaussian processes, where each Gaussian process models a local region of the pose space. Employing the models in this way overcomes the limitations of Gaussian processes applied to human pose estimation: their O(N^3) time complexity and their uni-modal predictive distribution. Our model gives a multi-modal predictive distribution in which each mode is represented by a different Gaussian process prediction. A logistic regression model provides a prior over each expert prediction, in a similar fashion to previous mixture-of-experts models. We show that this technique outperforms existing state-of-the-art regression techniques on human pose estimation data sets for ballet dancing, sign language, and the HumanEva benchmark.
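A minimal sketch of a mixture of Gaussian-process experts: partition the input space, fit one GP per region, and gate predictions with logistic regression. The clustering, kernel, and toy 1-D data are illustrative assumptions, not the paper's pose features:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), np.cos(3 * X[:, 0]))  # piecewise map

labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)    # local regions
gate = LogisticRegression().fit(X, labels)                  # prior over experts
experts = [GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(
               X[labels == k], y[labels == k]) for k in range(2)]

x_test = np.array([[0.5]])
weights = gate.predict_proba(x_test)[0]
modes = [gp.predict(x_test)[0] for gp in experts]           # one mode per expert
for k, (w, m) in enumerate(zip(weights, modes)):
    print(f"expert {k}: weight {w:.2f}, prediction {m:.2f}")
```

Because each expert only trains on its own region, the cubic GP cost applies per local subset rather than to the full data set.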

16.
In many Natural Language Processing problems the combination of machine learning and optimization techniques is essential. One of these problems is the estimation of the human effort needed to improve a text that has been translated using a machine translation method. Recent advances in this area have shown that Gaussian Processes can be effective in post-editing effort prediction. However, Gaussian Processes require a kernel function to be defined, the choice of which strongly influences the quality of the prediction. On the other hand, the extraction of features from the text can be very labor-intensive, although recent advances in sentence embedding have shown that this process can be automated. In this paper, we use a Genetic Programming algorithm to evolve kernels for Gaussian Processes to predict post-editing effort based on sentence embeddings. We show that the combination of evolutionary optimization and Gaussian Processes removes the need for a priori specification of the kernel choice, and, by using a multi-objective variant of the Genetic Programming approach, kernels that are suitable for predicting several metrics can be learned. We also investigate the effect that the choice of the sentence embedding method has on the kernel learning process.
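A minimal sketch of the underlying model-selection idea: score candidate GP kernel compositions by log marginal likelihood and keep the best. A full genetic-programming search (as in the paper) would mutate and recombine such kernel expressions; here only a few hand-built candidates are enumerated, and the synthetic embeddings and effort scores are stand-ins:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, DotProduct,
                                              RationalQuadratic, WhiteKernel)

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 5))                  # stand-in sentence embeddings
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(80)  # stand-in effort

candidates = {
    "RBF": RBF() + WhiteKernel(),
    "RBF*Linear": RBF() * DotProduct() + WhiteKernel(),
    "RQ+Linear": RationalQuadratic() + DotProduct() + WhiteKernel(),
}
scores = {}
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    scores[name] = gp.log_marginal_likelihood_value_   # model-selection score
print(max(scores, key=scores.get), scores)
```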

17.
This paper discusses a method to estimate the expected value of the Gaussian kernel in the presence of incomplete data. We show how, under the general assumption of a missing-at-random mechanism, the expected value of the Gaussian kernel function has a simple closed-form solution. Such a solution depends only on the parameters of the Gamma distribution which is assumed to represent squared distances. Furthermore, we show how the parameters governing the Gamma distribution depend only on the non-central moments of the kernel arguments, via the second-order moments of their squared distance, and can be estimated by making use of any parametric density estimation model of the data distribution. We approximate the data distribution with the maximum likelihood estimate of a Gaussian mixture distribution. The validity of the method is empirically assessed, under a range of conditions, on synthetic and real problems and the results compared to existing methods. For comparison, we consider methods that indirectly estimate a Gaussian kernel function by either estimating squared distances or by imputing missing values and then computing distances. Based on the experimental results, the proposed method consistently proves itself an accurate technique that further extends the use of Gaussian kernels with incomplete data.
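A minimal sketch of the closed form discussed above: if the squared distance d2 is modeled as Gamma(shape k, scale theta), then E[exp(-d2 / (2 s^2))] = (1 + theta / (2 s^2))^(-k), which is the Gamma moment-generating function evaluated at t = -1/(2 s^2). The paper derives the Gamma parameters from moments of the data distribution; here they come from moment matching on Monte Carlo samples for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 1.5                                              # Gaussian kernel width
d2 = rng.gamma(shape=2.0, scale=3.0, size=100_000)   # simulated squared distances

m, v = d2.mean(), d2.var()
k, theta = m * m / v, v / m                          # moment-matched Gamma parameters
expected = (1.0 + theta / (2.0 * s * s)) ** (-k)     # closed-form kernel expectation
empirical = np.exp(-d2 / (2.0 * s * s)).mean()
print(expected, empirical)                           # the two agree closely
```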

18.
19.
20.
Insufficiency of labeled training data is a major obstacle for automatic video annotation. Semi-supervised learning is an effective approach to this problem because it leverages a large amount of unlabeled data. However, existing semi-supervised learning algorithms have not demonstrated promising results in large-scale video annotation due to several difficulties, such as the large variation of video content and intractable computational cost. In this paper, we propose a novel semi-supervised learning algorithm named semi-supervised kernel density estimation (SSKDE), developed from the kernel density estimation (KDE) approach. While only labeled data are utilized in classical KDE, SSKDE leverages both labeled and unlabeled data to estimate class-conditional probability densities based on an extended form of KDE. It is a non-parametric method, and it thus naturally avoids the model-assumption problem that exists in many parametric semi-supervised methods. Meanwhile, it can be implemented with an efficient iterative solution process, which makes the method appropriate for video annotation. Furthermore, motivated by existing adaptive KDE approaches, we propose an improved algorithm named semi-supervised adaptive kernel density estimation (SSAKDE). It employs local adaptive kernels rather than a fixed kernel, such that broader kernels can be applied in regions with low density. In this way, more accurate density estimates can be obtained. Extensive experiments demonstrate the effectiveness of the proposed methods.
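A minimal sketch of the SSKDE idea: class-conditional densities are estimated from both labeled points and soft-labeled unlabeled points, and the soft labels are refined iteratively. The fixed bandwidth and the toy Gaussian blobs are illustrative assumptions (SSAKDE would instead adapt the bandwidth locally):

```python
import numpy as np

def sskde(X_lab, y_lab, X_unl, bandwidth=0.5, n_iter=20):
    def K(A, B):  # Gaussian kernel matrix between row sets A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / bandwidth ** 2)
    classes = np.unique(y_lab)
    F_lab = (y_lab[:, None] == classes[None, :]).astype(float)  # one-hot labels
    F_unl = np.full((len(X_unl), len(classes)), 1.0 / len(classes))
    K_ul, K_uu = K(X_unl, X_lab), K(X_unl, X_unl)
    for _ in range(n_iter):
        dens = K_ul @ F_lab + K_uu @ F_unl   # class-conditional KDE at X_unl
        F_unl = dens / dens.sum(1, keepdims=True)  # refreshed soft labels
    return classes[F_unl.argmax(1)]

rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
print(sskde(X_lab, y_lab, X_unl)[:10])
```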
