Similar Documents (20 results)
1.
This paper addresses the problem of finding matching points in stereo image pairs, i.e., the correspondence problem. Even though this topic is well known, a complete probabilistic formulation using psychovisual cues is still missing. We propose a novel Bayesian model based on Markov Random Fields (MRFs); the prior energy function is built in terms of the probability density function (pdf) of the disparity gradient, a pdf that has not been reported before. The likelihood energy function is defined in terms of the pdf of the squared normalized cross covariance between any two matching points. The stereo correspondence map is then obtained as the MAP estimator of the posterior field. Comparative results with previously reported methods show the adequacy of the proposed model, which attains a good compromise between deterministic and stochastic images.
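The likelihood term above is built from the squared normalized cross covariance between candidate matching points. A minimal sketch of that statistic (function name is my own, not from the paper):

```python
import numpy as np

def sq_norm_cross_cov(a, b):
    """Squared normalized cross covariance of two equal-size patches,
    the statistic whose pdf defines the likelihood energy."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    ca, cb = a - a.mean(), b - b.mean()          # center both patches
    denom = np.sqrt((ca ** 2).sum() * (cb ** 2).sum())
    return float(((ca * cb).sum() / denom) ** 2)
```

Because the statistic is squared, perfectly correlated and perfectly anti-correlated patches both score 1, while uncorrelated patches score near 0.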

2.

In this work we introduce a new approach for robust image segmentation. The idea is to combine two strategies within a Bayesian framework. The first is to use a Markov Random Field (MRF), which allows prior information to be introduced so as to preserve the edges in the image. The second stems from the fact that the probability density function (pdf) of the likelihood is non-Gaussian or unknown, so it must be approximated by an estimated version, for which the classical non-parametric (kernel) density estimate is used. Together, these two strategies lead to a new maximum a posteriori (MAP) approach based on simultaneously minimizing the entropy of the estimated likelihood pdf and the MRF energy, named the MAP entropy estimator (MAPEE). Experiments were conducted on different kinds of images degraded with impulsive noise and other non-Gaussian distributions; the segmentation results are very satisfactory compared with recent robust approaches based on fuzzy c-means (FCM) segmentation.
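The MAPEE criterion minimizes the entropy of a kernel-estimated likelihood pdf. A small sketch of the two ingredients under my own naming: a classical Parzen (Gaussian-kernel) density estimate and the Shannon entropy of its discretized values.

```python
import numpy as np

def parzen_pdf(samples, x, h):
    """Classical Parzen (Gaussian-kernel) density estimate at points x."""
    samples = np.asarray(samples, float)
    x = np.asarray(x, float)
    u = (x[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / h

def entropy(pdf_vals, eps=1e-12):
    """Shannon entropy of a discretized, normalized pdf."""
    p = pdf_vals / pdf_vals.sum()
    return -np.sum(p * np.log(p + eps))
```

The MRF prior and the simultaneous minimization over labels are omitted; this only illustrates the density-and-entropy building blocks.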

3.
A Fast Kernel Density Estimation Based Thresholding Algorithm for Small-Target Images
王骏  王士同  邓赵红  应文豪 《自动化学报》2012,38(10):1679-1689
To address the difficulties facing current research on threshold segmentation of small-target images, a new fast kernel density estimation based thresholding method is proposed. First, a probability model based on a weighted kernel density estimator is given, and by introducing the second-order Renyi entropy as the threshold selection criterion, a kernel density estimator based image thresholding algorithm (KDET) is proposed. Then, by introducing the fast reduced set density estimator (FRSDE), a sparse weight-coefficient representation of the kernel density estimate is obtained, yielding the fast algorithm fastKDET, whose relevant properties are examined in depth theoretically. Experiments show that the proposed algorithms adapt more broadly to small-target image thresholding problems and are insensitive to parameter changes.
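As a rough illustration of the KDET idea (not the paper's implementation), one can smooth a gray-level histogram with a weighted Gaussian kernel density estimate and pick the threshold maximizing the sum of the two classes' second-order Renyi entropies. All parameter choices below are illustrative assumptions:

```python
import numpy as np

def kde_threshold(gray_levels, counts, h=4.0):
    """Smooth the gray-level histogram with a weighted Gaussian KDE, then
    pick the threshold maximizing the summed second-order Renyi entropies
    of the two classes (a sketch of the KDET criterion)."""
    g = np.asarray(gray_levels, float)
    w = counts / counts.sum()                        # histogram weights
    u = (g[:, None] - g[None, :]) / h
    pdf = (np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi)) * w[None, :]).sum(axis=1)
    pdf = pdf / pdf.sum()
    best_t, best_score = g[0], -np.inf
    for i in range(1, len(g) - 1):
        p0, p1 = pdf[:i].sum(), pdf[i:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        # second-order Renyi entropy: -log(sum of squared class probabilities)
        r0 = -np.log(np.sum((pdf[:i] / p0) ** 2))
        r1 = -np.log(np.sum((pdf[i:] / p1) ** 2))
        if r0 + r1 > best_score:
            best_score, best_t = r0 + r1, g[i]
    return best_t
```

The FRSDE sparsification step that makes the full algorithm fast is omitted here.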

4.
Feature selection for logistic regression (LR) is still a challenging subject. In this paper, we present a new feature selection method for logistic regression based on a combination of zero-norm and l2-norm regularization. Because the discontinuity of the zero-norm makes the optimal solution difficult to find, we apply a suitable nonconvex approximation of the zero-norm to derive a robust difference of convex functions (DC) program, which the DC optimization algorithm (DCA) solves effectively with linear convergence. Compared with traditional methods, numerical experiments on benchmark datasets show that the proposed method reduces the number of input features while maintaining accuracy. Furthermore, as a practical application, the proposed method is used to directly classify licorice seeds using near-infrared spectroscopy data. Simulation results in different spectral regions illustrate that the proposed method achieves classification performance equivalent to traditional logistic regression while suppressing more features, showing its feasibility and effectiveness.
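A toy sketch of the zero-norm-plus-l2 idea: here the zero-norm is replaced by the smooth surrogate sum(1 - exp(-alpha*|w|)) and minimized by plain gradient descent rather than the DCA the paper uses; every function name and hyperparameter below is an illustrative assumption.

```python
import numpy as np

def fit_sparse_logreg(X, y, lam0=0.05, lam2=0.01, alpha=5.0, lr=0.2, iters=2000):
    """Logistic regression with a smooth zero-norm surrogate
    sum(1 - exp(-alpha*|w|)) plus an l2 term, via gradient descent
    (a stand-in for the DCA solver described in the abstract)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))            # predicted probabilities
        grad = X.T @ (p - y) / n                      # logistic loss gradient
        grad += lam0 * alpha * np.exp(-alpha * np.abs(w)) * np.sign(w)
        grad += 2.0 * lam2 * w                        # l2 term
        w -= lr * grad
    return w
```

On data where only one feature carries signal, the surrogate penalty drives the irrelevant weights toward zero while the informative weight stays large.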

5.
A conditional density function, which describes the relationship between response and explanatory variables, plays an important role in many analysis problems. In this paper, we propose a new kernel-based parametric method to estimate conditional density. An exponential function is employed to approximate the unknown density, and its parameters are computed from the given explanatory variable via a nonlinear mapping using kernel principal component analysis (KPCA). We develop a new kernel function, a variant of the polynomial kernel, to be used in KPCA. The proposed method is compared with the Nadaraya-Watson estimator through numerical simulation and real data. Experimental results show that the proposed method outperforms the Nadaraya-Watson estimator in terms of revised mean integrated squared error (RMISE), and is therefore an effective method for estimating conditional densities.

6.
To improve the accuracy and precision of moving-object detection, a motion detection method based on spatiotemporal confidence relations is proposed. The method uses fast kernel density estimation to model the spatiotemporal relations between each image pixel and its neighboring pixels, and assigns weights to the background model according to the dispersion of the sample values. Finally, based on the mean background-membership weight of the pixel values, it decides whether the current pixel belongs to the moving foreground or to the background. Experimental results show that the motion detection performance of this method is superior to mainstream representative algorithms.

7.
A Kernel-Based Two-Class Classifier for Imbalanced Data Sets
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and a model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new ROWLS parameter estimator, without actually splitting the estimation data set. The proposed algorithm achieves minimal computational expense via a set of forward recursive updating formulas when searching for model terms with maximal incremental LOO-AUC value. Numerical examples demonstrate the efficacy of the algorithm.
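For reference, the AUC used as the model selection criterion above equals the probability that a positive sample's score outranks a negative one's (the Mann-Whitney form). The paper's analytic LOO formula avoids this O(n^2) pairwise computation; the sketch below only fixes the definition.

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the probability that a
    positive-class score outranks a negative-class score (ties count half)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Unlike accuracy, this criterion is insensitive to class imbalance, which is why it suits the imbalanced two-class setting discussed above.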

8.
Using the classical Parzen window (PW) estimate as the target function, a sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of a jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement and its computational cost is very low. Numerical examples demonstrate the efficacy of the proposed approach.

9.
A modified probabilistic neural network (PNN) for brain tissue segmentation with magnetic resonance imaging (MRI) is proposed. In this approach, covariance matrices are used to replace the single smoothing factor in the PNN's kernel function, and weighting factors are added in the summation layer. This weighted probabilistic neural network (WPNN) classifier can account for partial volume effects, which are common in MRI, not only in the final result stage but also in the modeling process. It adopts the self-organizing map (SOM) neural network to over-segment the input MR image and yield the reference vectors necessary for probability density function (pdf) estimation. A supervised "soft" labeling mechanism based on Bayes' rule is developed, so that weighting factors can be generated along with the corresponding SOM reference vectors. Tissue classification results from various algorithms are compared, and the effectiveness and robustness of the proposed approach are demonstrated.

10.
Standard fixed symmetric kernel-type density estimators are known to encounter problems for positive random variables with a large probability mass close to zero. It is shown that, in such settings, asymmetric gamma kernel estimators are superior, but that their asymptotic and finite-sample performance depends on the shape of the density near zero and the exact form of the chosen kernel. Therefore, a refined version of the gamma kernel with an additional tuning parameter, adjusted according to the shape of the density close to the boundary, is suggested, together with a data-driven method for choosing the modified gamma kernel estimator appropriately. An extensive simulation study compares the performance of this refined estimator to standard gamma kernel estimates and to standard boundary-corrected and adjusted fixed kernels, and finds that the finite-sample performance of the proposed estimator is superior in all settings. Two empirical applications based on high-frequency stock trading volumes and realized volatility forecasts demonstrate the usefulness of the proposed methodology in practice.
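A sketch of the standard (unrefined) gamma kernel estimator that the refined version builds on: at each evaluation point x, average Gamma(shape = x/b + 1, scale = b) densities over the sample. The refined estimator's extra tuning parameter is omitted, and the bandwidth below is an illustrative assumption.

```python
import numpy as np
from math import lgamma

def gamma_kernel_pdf(samples, x, b):
    """Gamma kernel density estimate for nonnegative data: at each point
    x, average Gamma(shape=x/b+1, scale=b) densities over the sample."""
    samples = np.asarray(samples, float)
    out = np.empty(len(x))
    for j, xj in enumerate(x):
        k = xj / b + 1.0                              # shape parameter
        logpdf = ((k - 1.0) * np.log(samples + 1e-300)
                  - samples / b - k * np.log(b) - lgamma(k))
        out[j] = np.exp(logpdf).mean()
    return out
```

Because the kernel's support is [0, inf), no probability mass leaks to negative values, which is exactly the boundary problem that symmetric kernels suffer from.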

11.
We estimate interclass (mom-sib) correlation by maximizing the log-likelihood function of a Kotz-type distribution. The results are illustrated on a real-life data set due to Galton. Using extensive simulations and three criteria, namely bias, MSE and Pitman nearness probability, we compare the proposed estimator with the maximum likelihood estimator based on the normal distribution and with a non-iterative estimator due to Srivastava. We conclude that the proposed estimator performs well when the data are not from a multivariate normal distribution; if the data are from a multivariate normal distribution, then Srivastava's estimator and the normal-based maximum likelihood estimator perform well, as expected. Testing of hypotheses about this correlation is also discussed using likelihood-based tests. It is concluded that the score test derived from the Kotz-type density performs best.

12.
This paper is concerned with density estimation based on the stagewise minimization of the U-divergence. The U-divergence is a general divergence measure involving a convex function U, which includes the Kullback-Leibler divergence and the L2 norm as special cases. The algorithm that yields the density estimator is closely related to the boosting algorithm, and it is shown that the usual kernel density estimator can also be seen as a special case of the proposed estimator. Non-asymptotic error bounds of the proposed estimators are developed, and numerical experiments show that the proposed estimators often perform better than several existing methods for density estimation.

13.
周璨  李伯阳  黄斌  刘刘 《计算机工程》2008,34(8):184-186
By analyzing the shortcomings of existing intrusion detection techniques and the advantages of intrusion detection based on outlier mining, an intrusion detection method based on kernel density estimation is proposed. The method computes an approximate set of outliers via kernel density estimation, then filters this approximate set to obtain the final outlier set, thereby detecting intrusion records. A concrete implementation scheme is described, and simulation experiments verify the feasibility of the method.
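A minimal sketch of the first stage described above: computing an approximate outlier set as the lowest-density points under a leave-one-out Gaussian kernel density estimate. The function name, bandwidth, and density-rank cutoff are my own assumptions, not the paper's.

```python
import numpy as np

def kde_outliers(data, h=1.0, frac=0.05):
    """Score each point by its leave-one-out Gaussian KDE density and
    flag the lowest-density fraction as the approximate outlier set."""
    x = np.asarray(data, float)
    u = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u ** 2)
    np.fill_diagonal(k, 0.0)                     # leave-one-out: drop self
    dens = k.sum(axis=1)
    m = max(1, int(frac * len(x)))
    return np.argsort(dens)[:m]                  # indices of lowest density
```

The subsequent filtering step that refines the approximate set into the final outlier set is not shown.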

14.
We prove consistency results for two types of density estimators on a closed, connected Riemannian manifold under suitable regularity conditions. The convergence rates are consistent with those in Euclidean space as well as those obtained for a previously proposed class of kernel density estimators on closed Riemannian manifolds. The first estimator is the uniform mixture of heat kernels centered at each observation, a natural extension of the usual Gaussian estimator to Riemannian manifolds. The second is an approximate heat kernel (AHK) estimator that is motivated by more practical considerations, where observations occur on a manifold isometrically embedded in Euclidean space whose structure or heat kernel may not be completely known. We also provide some numerical evidence that the predicted convergence rate is attained for the AHK estimator.

15.
Moving horizon estimation (MHE) is a numerical optimization based approach to state estimation, in which the joint probability density function (pdf) of a finite state trajectory is sought, conditioned on a moving horizon of measurements. The joint conditional pdf depends on the a priori state pdf at the start of the horizon, which is a prediction pdf based on historical data outside the horizon. When the joint pdf is maximized, the arrival cost is a penalty term in the MHE objective function based on the a priori pdf. Traditionally, the a priori pdf is assumed to be a multivariate Gaussian pdf, and the extended Kalman filter (EKF) and smoother are used to recursively update its mean and covariance. However, the transformation of moments through a nonlinearity is poorly approximated by linearization, which can result in poor initialization of MHE. Sampling-based nonlinear filters completely avoid Taylor series approximations of nonlinearities and attempt to approximate the non-Gaussian state pdf using samples with associated weights or probability mass points. The performance gains of sampling-based filters over the EKF motivate their use in formulating the arrival cost in MHE: the a priori mean and covariance are propagated more effectively through nonlinearities, and the resulting arrival cost term can help keep the horizon small. It is also possible to find closed-form approximations to the non-Gaussian a priori pdf from the sampling-based filters, so more realistic nonparametric arrival cost terms can be included by avoiding the Gaussian assumption. In this paper, the deterministic sampling based unscented Kalman filter, the class of random sampling based particle filters, and the aggregate Markov chain based cell filter are discussed for initializing MHE. Two simulation examples demonstrate the benefits of these methods over the traditional EKF approach.
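The core advantage claimed for sampling-based filters is better propagation of mean and covariance through nonlinearities. A one-dimensional unscented transform illustrates the deterministic sigma-point idea (a sketch with an assumed scaling parameter, not the paper's filter):

```python
import numpy as np

def unscented_transform(mean, var, f, kappa=2.0):
    """1-D unscented transform: propagate a mean/variance through a
    nonlinearity f via deterministic sigma points and weights."""
    n = 1                                         # state dimension
    s = np.sqrt((n + kappa) * var)
    pts = np.array([mean, mean + s, mean - s])    # sigma points
    wts = np.array([kappa / (n + kappa),
                    0.5 / (n + kappa),
                    0.5 / (n + kappa)])
    y = np.array([f(p) for p in pts])
    m = np.dot(wts, y)                            # transformed mean
    v = np.dot(wts, (y - m) ** 2)                 # transformed variance
    return m, v
```

For a linear f the transform is exact, and for a quadratic it already captures the mean shift that a first-order (EKF-style) linearization misses entirely.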

16.
To improve the accuracy of network traffic prediction, a network traffic prediction model based on a relevance vector machine with a mixed kernel optimized by the cuckoo search algorithm (CS-RVM) is proposed. First, a mixed kernel composed of polynomial and Gaussian kernel functions replaces the single kernel of the relevance vector machine; then the cuckoo search algorithm is introduced to optimize the mixed-kernel parameters; finally, the network traffic prediction model is built. Simulation results show that CS-RVM models well and can improve the prediction accuracy of network traffic.
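A sketch of the mixed kernel described above, a convex combination of a Gaussian and a polynomial kernel. The mixing weight and kernel parameters here are illustrative assumptions, and the cuckoo-search tuning step is omitted:

```python
import numpy as np

def mixed_kernel(x, z, w=0.6, gamma=0.5, degree=2, c=1.0):
    """Convex combination of a Gaussian (RBF) kernel and a polynomial
    kernel; w trades local (Gaussian) against global (polynomial) behavior."""
    g = np.exp(-gamma * np.sum((x - z) ** 2))     # Gaussian part
    p = (np.dot(x, z) + c) ** degree              # polynomial part
    return w * g + (1.0 - w) * p
```

A convex combination of two valid kernels is itself a valid kernel, so it can be dropped into the RVM unchanged; the optimizer's job is then to choose w, gamma, degree, and c.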

17.
For regression problems on relatively large-scale, non-stationary, complex data arising in industry, information processing, and other fields, existing algorithms cannot simultaneously meet requirements on computational cost and fitting quality. This paper therefore proposes a distributed regularized regression learning algorithm based on multi-scale Gaussian kernels. The hypothesis space is the sum of reproducing kernel Hilbert spaces generated by Gaussian kernels at several different scales. Since the disjoint subsets into which the whole data set is partitioned fluctuate to different degrees, kernel approximation models with different combination coefficients are built. The least-squares regularization method solves each approximation model simultaneously and independently, and finally the local estimators obtained are combined by weighting into a global approximation model. Experiments on two simulated and four real data sets show that the algorithm both guarantees good fitting performance and reduces running time.

18.
A Fast Frequency Estimation Algorithm Based on High-Order Approximate Kernels and Fourier Coefficient Interpolation
A constraint relation between the approximate kernel and the number of quantization bits is established through theoretical analysis, and a high-order approximate kernel requiring no multiplications is proposed to increase the dynamic range of single-bit frequency measurement. The algorithm also interpolates the DFT coefficients around the maximum of the real or imaginary part of the approximate-kernel-based Fourier coefficients to construct a frequency correction term, avoiding the complex-valued operations of conventional frequency-correction algorithms and thus effectively reducing computation. The algorithm is simple and well suited to fast hardware implementation, and its effectiveness is verified by theoretical analysis and simulation results.

19.
The Gaussian kernel density estimator is known to have substantial problems for bounded random variables with high density at the boundaries. For independent and identically distributed data, several solutions have been put forward to solve this boundary problem. In this paper, we propose the gamma kernel estimator as a density estimator for positive time series data from a stationary α-mixing process. We derive the mean (integrated) squared error and asymptotic normality. In a Monte Carlo simulation, we generate data from an autoregressive conditional duration model and a stochastic volatility model. We study the local and global behavior of the estimator and we find that the gamma kernel estimator outperforms the local linear density estimator and the Gaussian kernel estimator based on log-transformed data. We also illustrate the good performance of the h-block cross-validation method as a bandwidth selection procedure. An application to data from financial transaction durations and realized volatility is provided.

20.
The Nadaraya–Watson estimator, also known as kernel regression, is a density-based regression technique. It weights output values with the relative densities in input space; the density is measured with kernel functions that depend on bandwidth parameters. In this work we present an evolutionary bandwidth optimizer for kernel regression. The approach is based on a robust loss function, leave-one-out cross-validation, and the CMSA-ES as optimization engine. A variant with local parameterized Nadaraya–Watson models enhances the approach and allows the model to adapt to local data-space characteristics. The unsupervised counterpart of kernel regression is an approach to learning principal manifolds. The learning problem of unsupervised kernel regression (UKR) is based on optimizing the latent variables, which is a multimodal problem with many local optima. We propose an evolutionary framework for optimizing UKR based on scaling of initial local linear embedding solutions and minimization of the cross-validation error. Both methods are analyzed experimentally.
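A compact sketch of the two pieces discussed above: the Nadaraya-Watson estimator itself and the leave-one-out cross-validation error that the bandwidth optimizer minimizes. The CMSA-ES machinery and the local-model variant are omitted.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Nadaraya-Watson kernel regression: each prediction is a
    kernel-density-weighted average of the training outputs."""
    u = (np.asarray(x_query)[:, None] - np.asarray(x_train)[None, :]) / h
    w = np.exp(-0.5 * u ** 2)                     # Gaussian kernel weights
    return (w * np.asarray(y_train)[None, :]).sum(axis=1) / w.sum(axis=1)

def loo_cv_error(x, y, h):
    """Leave-one-out squared error for bandwidth h, the criterion the
    evolutionary bandwidth optimizer in the abstract minimizes."""
    n = len(x)
    err = 0.0
    for i in range(n):
        mask = np.arange(n) != i                  # hold out point i
        pred = nadaraya_watson(x[mask], y[mask], x[i:i + 1], h)[0]
        err += (y[i] - pred) ** 2
    return err / n
```

On smooth data, a sensible bandwidth yields a much lower LOO error than a grossly oversmoothing one, which is the signal the evolutionary search exploits.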
