Similar Documents
20 similar documents found (search time: 31 ms)
1.
Revisiting Hartley's normalized eight-point algorithm   (Total citations: 1; self-citations: 0; citations by others: 1)
Hartley's eight-point algorithm has maintained an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalized data. It is first established that the normalized algorithm acts to minimize a specific cost function. It is then shown that this cost function is statistically better founded than the cost function associated with the nonnormalized algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework.
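The normalization step that drives the improvement discussed above is easy to state concretely. The sketch below is a minimal NumPy illustration of Hartley-style normalization (the function name and interface are ours, not from the paper): translate the points so their centroid is at the origin and scale them so the mean distance from the origin is √2.

```python
import numpy as np

def normalize_points(pts):
    """Hartley-style normalization of 2D points given as an (N, 2) array.

    Returns the normalized homogeneous points (N x 3) and the 3x3
    similarity transform T such that x_norm = T @ x_homogeneous.
    """
    centroid = pts.mean(axis=0)
    shifted = pts - centroid
    mean_dist = np.sqrt((shifted ** 2).sum(axis=1)).mean()
    scale = np.sqrt(2) / mean_dist
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return (T @ homog.T).T, T
```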

2.
For background modeling of synthetic aperture radar (SAR) images in non-homogeneous clutter, the nonparametric Parzen-window estimator depends heavily on the choice of window width and of the optimal kernel function. To address this, a probability density function estimation algorithm based on optimized K-nearest neighbors is proposed, which resolves the inaccurate or even infeasible estimates caused by using a fixed number of neighbors. The algorithm requires no prior knowledge of the image and sidesteps the window-width and kernel-selection problems entirely. Comparative experiments against the Parzen-window estimator, the K distribution, and the $G^0$ distribution show that the proposed optimized K-nearest-neighbor estimator accurately models unimodal, multimodal, and even irregular image data, outperforming the K and $G^0$ distributions; it also handles the head and tail of the image data better than the Parzen-window estimator. The experimental results verify the accuracy, robustness, and simplicity of the proposed method for SAR clutter modeling, as well as its effectiveness for global constant false alarm rate (CFAR) target detection.
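As a rough illustration of the underlying k-nearest-neighbor density estimation idea (the plain fixed-k estimator, not the paper's optimized variant), a one-dimensional version can be written as follows, using f(x) ≈ k / (2 n d_k(x)) with d_k(x) the distance to the k-th nearest sample:

```python
import numpy as np

def knn_density_1d(x_query, samples, k=25):
    """Basic 1D k-nearest-neighbor density estimate at points x_query.

    f(x) ~ k / (2 * n * d_k(x)), where d_k(x) is the distance from x to
    its k-th nearest sample; k is fixed here, unlike the optimized variant.
    """
    samples = np.asarray(samples, dtype=float)
    xq = np.atleast_1d(np.asarray(x_query, dtype=float))
    n = samples.size
    dens = np.empty_like(xq)
    for i, x in enumerate(xq):
        d_k = np.sort(np.abs(samples - x))[k - 1]   # distance to k-th neighbor
        dens[i] = k / (2.0 * n * max(d_k, 1e-12))   # guard against zero distance
    return dens
```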

3.
Many statistical queries such as maximum likelihood estimation involve finding the best candidate model given a set of candidate models and a quality estimation function. This problem is common in important applications like land-use classification at multiple spatial resolutions from remote sensing raster data. Such a problem is computationally challenging due to the significant computation cost of evaluating the quality estimation function for each candidate model. For example, a recently proposed method of multi-scale, multi-granular classification incurs high computational overhead because it evaluates the function for each candidate model independently before comparison. In contrast, we propose an upper-bound-based, context-inclusive approach that reduces computational overhead based on the context, i.e., the value of the quality estimation function for the best candidate model so far. We also prove that an upper bound exists for each candidate model and that the proposed algorithm is correct. Experimental results using land-use classification at multiple spatial resolutions from satellite imagery show that the proposed approach reduces the computational cost significantly.
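A generic sketch of the upper-bound, context-inclusive pruning described above might look like the following (the callables `upper_bound` and `evaluate` are placeholders, not the paper's actual bound or quality function):

```python
def best_candidate(candidates, upper_bound, evaluate):
    """Upper-bound-based search for the best candidate model.

    `upper_bound(c)` must be cheap and never smaller than `evaluate(c)`;
    candidates whose bound cannot beat the current best are skipped.
    """
    best, best_score = None, float("-inf")
    # Examining promising candidates first (largest bound) tightens the
    # context quickly and prunes more of the remaining models.
    for c in sorted(candidates, key=upper_bound, reverse=True):
        if upper_bound(c) <= best_score:
            continue                      # cannot improve on the best so far
        score = evaluate(c)               # expensive quality evaluation
        if score > best_score:
            best, best_score = c, score
    return best, best_score
```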

4.
A Gaussian mixture segmentation model incorporating neighborhood interaction and its simplified solution   (Total citations: 1; self-citations: 0; citations by others: 1)
Objective: Image segmentation based on the Gaussian mixture model (GMM) is easily affected by noise, so a Markov random field (MRF) is used to introduce pixel neighborhood relations into the GMM and improve noise robustness. Because a GMM segmentation model that incorporates neighborhood interaction has a complex structure, difficult parameter estimation, and no easy route to a globally optimal segmentation, this paper proposes such a model together with a simplified solution method. Method: First, a GMM incorporating neighborhood interaction is constructed; to improve noise robustness, an MRF models the prior distribution of the mixture weight coefficients. An image segmentation model, i.e., a quality function, is then built with Bayesian theory. Because the quality function has many parameters (weight coefficients, means, covariances) and a complex structure, parameter estimation is difficult; the means and covariances are therefore defined as functions of the weight coefficients, which simplifies the model structure and eases its solution. Although the simplified quality function contains only the weight coefficients, it is still too complex to yield closed-form parameter expressions. Finally, the parameters are solved with the nonlinear conjugate gradient method (CGM), which needs only the value of the quality function and the parameter gradients, reduces the complexity of parameter estimation, converges quickly, and can reach the global optimum. Results: The proposed and comparison algorithms were applied to synthetic images and high-resolution remote sensing images, and the results were evaluated qualitatively and quantitatively. The experiments show that the method is robust to noise and produces good segmentations; the parameter estimation results show that the algorithm effectively simplifies the model parameters and reaches the global optimum. Conclusion: A Gaussian mixture segmentation model incorporating neighborhood interaction and a simplified solution method are proposed. Experiments show improved noise robustness, effectively simplified model parameters, and globally optimal parameter estimates. The algorithm is widely applicable to noisy high-resolution remote sensing imagery.
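Because the simplified quality function is optimized with a nonlinear conjugate gradient method that needs only function values and gradients, the mechanics can be illustrated with SciPy's CG optimizer on a toy stand-in objective (the function `Q` below is purely illustrative, not the paper's actual quality function):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the quality function Q(w) over the weight coefficients w.
# The real quality function couples the weights with the image data; this
# quadratic-plus-quartic function is only for illustration.
def Q(w):
    return np.sum((w - 0.3) ** 2) + 0.1 * np.sum(w ** 4)

def Q_grad(w):
    return 2.0 * (w - 0.3) + 0.4 * w ** 3

w0 = np.zeros(5)                                   # initial weight coefficients
res = minimize(Q, w0, jac=Q_grad, method="CG")     # nonlinear conjugate gradient
print(res.x)                                       # minimizer of the toy Q
```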

5.
The Bradley-Terry model is a statistical representation of one's preference or ranking data using pairwise comparison results of items. For estimation of the model, several methods based on the sum of weighted Kullback-Leibler divergences have been proposed in various contexts. The purpose of this letter is to interpret an estimation mechanism of the Bradley-Terry model from the viewpoint of flatness, a fundamental notion used in information geometry. Based on this point of view, a new estimation method is proposed within the framework of the em algorithm. The proposed method differs from conventional methods in its objective function, especially in its treatment of unobserved comparisons, and it can be consistently interpreted in a probability simplex. An estimation method with weight adaptation is also proposed from the viewpoint of sensitivity. Experimental results show that the proposed method works appropriately and that weight adaptation improves the accuracy of the estimate.
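For comparison with the em-based estimator described above, the classical maximum-likelihood fit of the basic Bradley-Terry model can be computed with the standard minorization-maximization (MM) update sketched below, where `wins[i, j]` counts how often item i beat item j (this is the conventional method, not the one proposed in the letter):

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=200):
    """Minorization-maximization (MM) updates for Bradley-Terry strengths.

    wins[i, j] = number of times item i beat item j.
    Returns strengths p normalized to sum to 1.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n) / n
    for _ in range(n_iter):
        total_wins = wins.sum(axis=1)
        new_p = np.empty(n)
        for i in range(n):
            denom = 0.0
            for j in range(n):
                if i == j:
                    continue
                n_ij = wins[i, j] + wins[j, i]        # comparisons between i and j
                if n_ij > 0:
                    denom += n_ij / (p[i] + p[j])
            new_p[i] = total_wins[i] / denom if denom > 0 else p[i]
        new_p = np.maximum(new_p, 1e-12)              # keep strengths positive
        p = new_p / new_p.sum()
    return p
```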

6.
Modeling the dependence of credit ratings is an important issue for portfolio credit risk analysis. Multivariate Markov chain models are a feasible mathematical tool for modeling the dependence of credit ratings. Here we develop a flexible multivariate Markov chain model for modeling the dependence of credit ratings. The proposed model provides a parsimonious way to capture both the cross-sectional and temporal associations among ratings of individual entities. The number of model parameters is of the order O(sm² + s²m), where m is the number of ratings categories and s is the number of entities in a credit portfolio. The proposed model is also easy to implement. The estimation method is formulated as a set of s linear programming problems and the estimation algorithm can be implemented easily in a Microsoft EXCEL worksheet, see Ching et al. Int J Math Educ Sci Eng 35:921–932 (2004). We illustrate the practical implementation of the proposed model using real ratings data. We evaluate risk measures, such as Value at Risk and Expected Shortfall, for a credit portfolio using the proposed model and compare the risk measures with those arising from Ching et al. IMR Preprint Series (2007) and Siu et al. Quant Finance 5:543–556 (2005).
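As background for the temporal association the model captures, the transition matrix of a single entity's rating process can be estimated from its rating history by simple counting, as in the hedged sketch below (ordinary empirical estimation, not the linear-programming estimation of the multivariate model):

```python
import numpy as np

def empirical_transition_matrix(ratings, m):
    """Row-stochastic transition matrix from a rating sequence in {0, ..., m-1}."""
    counts = np.zeros((m, m))
    for prev, curr in zip(ratings[:-1], ratings[1:]):
        counts[prev, curr] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows for ratings never visited fall back to a uniform distribution.
    return np.divide(counts, row_sums, out=np.full_like(counts, 1.0 / m),
                     where=row_sums > 0)
```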

7.
Topology optimization is often used in the conceptual design stage as a preprocessing tool to obtain overall material distribution in the solution domain. The resulting topology is then used as an initial guess for shape optimization. It is always desirable to use fine computational grids to obtain high-resolution layouts that minimize the need for shape optimization and postprocessing (Bendsoe and Sigmund, Topology Optimization: Theory, Methods and Applications. Springer, Berlin Heidelberg New York 2003), but this approach results in high computation cost and is prohibitive for large structures. In the present work, parallel computing in combination with domain decomposition is proposed to reduce the computation time of such problems. The power law approach is used as the material distribution method, and an optimality criteria-based optimizer is used for locating the optimum solution [Sigmund, Struct Multidiscip Optim 21:120–127 (2001); Rozvany and Olhoff, Topology Optimization of Structures and Composites Continua. Kluwer, Norwell 2000]. The equilibrium equations are solved using a preconditioned conjugate gradient algorithm. These calculations have been done using a master–slave programming paradigm on a coarse-grain, multiple instruction multiple data, shared-memory architecture. In this study, by avoiding the assembly of the global stiffness matrix, the memory requirement and computation time have been reduced. The results of the current study show that the parallel computing technique is a valuable tool for solving computationally intensive topology optimization problems.
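The equilibrium solve mentioned above relies on a preconditioned conjugate gradient (PCG) iteration. A minimal serial sketch with a simple Jacobi (diagonal) preconditioner is shown below; the parallel, domain-decomposed, matrix-free solver of the paper is not reproduced here:

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive definite A,
    using a Jacobi (diagonal) preconditioner."""
    b = np.asarray(b, dtype=float)
    M_inv = 1.0 / np.diag(A)          # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```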

8.
Objective: For the fusion and restoration stages of fusion-restoration super-resolution reconstruction, a new improved framework is proposed: fusion is performed with an improved normalized convolution, and restoration with an improved maximum a posteriori (MAP) estimation, yielding better super-resolution reconstruction. Method: The improved normalized convolution introduces dual applicability functions and a new hybrid certainty function; the improved MAP estimation introduces a feature-driven prior model, obtained by mixing two invariant prior models, whose form depends entirely on the statistical characteristics of the image itself. Results: Images at different degradation levels were reconstructed with the proposed algorithm and compared with the results of several other algorithms; the proposed algorithm is superior both in visual quality and in the evaluation metrics. Conclusion: In the fusion stage, the proposed algorithm accounts for both the spatial distance and the photometric difference of neighboring pixels and exploits the complementary strengths of the two certainty functions, so it suppresses more noise and outliers; in the restoration stage, the prior model is derived from image features rather than experience and characterizes the image more accurately. The experimental results also confirm the effectiveness of the algorithm.
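For reference, the zeroth-order form of normalized convolution on which the fusion stage builds divides a certainty-weighted convolution by the convolved certainty. The sketch below shows this textbook form only, not the paper's improved version with dual applicability and hybrid certainty functions:

```python
import numpy as np
from scipy.signal import convolve2d

def normalized_convolution(image, certainty, applicability):
    """Zeroth-order normalized convolution.

    image:         2D array of (possibly irregularly sampled) intensities
    certainty:     2D array in [0, 1]; 0 marks missing or unreliable pixels
    applicability: small 2D kernel weighting the neighborhood
    """
    num = convolve2d(image * certainty, applicability, mode="same", boundary="symm")
    den = convolve2d(certainty, applicability, mode="same", boundary="symm")
    return num / np.maximum(den, 1e-12)   # avoid division by zero where certainty is 0
```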

9.
In this paper, we propose a novel and highly robust estimator, called MDPE (Maximum Density Power Estimator). This estimator applies nonparametric density estimation and density gradient estimation techniques in parametric estimation (model fitting). MDPE optimizes an objective function that measures more than just the size of the residuals. Both the density distribution of data points in residual space and the size of the residual corresponding to the local maximum of the density distribution are considered as important characteristics in our objective function. MDPE can tolerate more than 85% outliers. Compared with several other recently proposed similar estimators, MDPE has higher robustness to outliers and lower error variance. We also present a new range image segmentation algorithm, based on a modified version of the MDPE (Quick-MDPE), and its performance is compared to several other segmentation methods. Segmentation requires more than a simple-minded application of an estimator, no matter how good that estimator is: our segmentation algorithm overcomes several difficulties faced in applying a statistical estimator to this task.
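The core ingredients of MDPE, a kernel density estimate over residuals and the location of its peak, can be illustrated with the hedged sketch below; the combined score used here is only illustrative and differs from the exact MDPE objective:

```python
import numpy as np
from scipy.stats import gaussian_kde

def residual_mode_score(residuals, grid_points=512):
    """Score a candidate fit from the density of its residuals.

    Estimates the residual density with a Gaussian KDE, locates the mode,
    and returns a score that grows with the peak density and shrinks as the
    mode moves away from zero residual.  (Illustrative only; the exact MDPE
    objective in the paper differs.)
    """
    residuals = np.asarray(residuals, dtype=float)
    kde = gaussian_kde(residuals)
    grid = np.linspace(residuals.min(), residuals.max(), grid_points)
    density = kde(grid)
    mode = grid[np.argmax(density)]
    return density.max() * np.exp(-np.abs(mode))
```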

10.

To address the fact that the multidimensional assignment data association model for multiple passive sensors does not fully account for the error introduced by position estimation uncertainty when constructing association costs, a data association algorithm based on information divergence is proposed. The discrepancy between the probability density function of the pseudo-measurement and the maximum a posteriori probability density function of the true observations is used as the association cost, quantified with the Kullback-Leibler divergence and the symmetric Kullback-Leibler divergence, respectively. Simulation results show that the algorithm has good association performance and that its association cost reflects the likelihood of a correct data association more precisely.
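Since the association cost is a divergence between two densities, the closed-form Kullback-Leibler divergence between multivariate Gaussians (and its symmetrized version) is the natural building block. The sketch below assumes both densities are approximated as Gaussians, which is our simplification rather than the paper's exact formulation:

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) for multivariate Gaussians, in nats."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    cov0, cov1 = np.asarray(cov0, float), np.asarray(cov1, float)
    k = mu0.size
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    term_trace = np.trace(cov1_inv @ cov0)
    term_maha = diff @ cov1_inv @ diff
    term_logdet = np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    return 0.5 * (term_trace + term_maha - k + term_logdet)

def symmetric_kl(mu0, cov0, mu1, cov1):
    """Symmetrized KL divergence usable as an association cost."""
    return kl_gaussian(mu0, cov0, mu1, cov1) + kl_gaussian(mu1, cov1, mu0, cov0)
```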

11.
Camera pose estimation is a key technique that determines whether 3D registration succeeds. The eight-point algorithm is currently the most popular fundamental-matrix-based estimator, but it degenerates noticeably for planar scenes. To broaden the range of applicable scenes in registration applications for the interiors of ancient buildings, two approaches, one based on the five-point algorithm and one on the eight-point algorithm, are studied using computer vision techniques. Simulations that model the indoor structure of an ancient building as a box show that the five-point algorithm recovers a more accurate 3D model than the eight-point algorithm.

12.
Objective: A light field camera samples a scene from multiple viewpoints in a single exposure, which gives it a unique advantage for depth estimation. Eliminating the influence of occlusion is one of the difficulties of light field depth estimation. Existing methods detect the occlusion state of each view from a 2D scene model, but occlusion is determined by the 3D structure of the sampled scene and cannot be detected precisely from a 2D model alone; inaccurate occlusion detection then degrades the subsequent depth estimate. To address this, a light field depth estimation method guided by a 3D occlusion model is proposed. Method: Foreground-background relations and depth differences are added between the objects in the 2D model to obtain a 3D model of the scene; occlusion in every view is then inferred from the light transport paths in this model and recorded in an occlusion map. Guided by the occlusion map, different cost volumes are used for depth estimation in occluded and unoccluded regions. In occluded regions, the occluded views are masked out by the occlusion map and depth is computed from the photo-consistency of the remaining views; in unoccluded regions, a new defocus grid-matching cost volume is designed according to the depth continuity of such regions, which perceives color and texture over a wider range than conventional cost volumes and yields a smoother depth map. To further improve accuracy, a joint optimization framework based on the expectation maximization (EM) algorithm is designed around the mutual dependence of occlusion detection and depth estimation; within this framework, the occlusion map and the depth map alternately guide and refine each other. Results: Experiments show that the method achieves the best occlusion detection and depth estimation results in most test scenes with single occlusions, multiple occlusions, and low-contrast occlusions; the mean square error (MSE) is on average about 19.75% lower than the second-best result. Conclusion: For depth estimation in occluded scenes, theoretical analysis and experiments show that the 3D occlusion model has clear advantages over the traditional 2D occlusion model for occlusion detection, and the proposed method is better suited to depth estimation in scenes with complex occlusion.

13.
14.
Unsupervised techniques like clustering may be used for software cost estimation in situations where parametric models are difficult to develop. This paper presents a software cost estimation model based on a modified K-Modes clustering algorithm. The aims of this paper are twofold: first, the modified K-Modes clustering, which enhances the simple K-Modes algorithm with a proper dissimilarity measure for mixed data types, is presented; second, the proposed K-Modes algorithm is applied to software cost estimation. We have compared our modified K-Modes algorithm with existing algorithms on different software cost estimation datasets, and the results show the effectiveness of our proposed algorithm.
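A dissimilarity measure for mixed data of the kind the modified K-Modes algorithm needs can be sketched as below. This Gower-style combination of simple matching for categorical attributes and range-normalized differences for numeric ones is an assumption for illustration, not necessarily the paper's measure:

```python
def mixed_dissimilarity(x, mode, numeric_idx, categorical_idx, ranges):
    """Dissimilarity between a data point and a cluster mode for mixed data.

    Categorical attributes contribute a simple-matching term (0 if equal,
    1 otherwise); numeric attributes contribute a range-normalized absolute
    difference.  (Illustrative Gower-style measure.)
    """
    d_cat = sum(0 if x[i] == mode[i] else 1 for i in categorical_idx)
    d_num = sum(abs(float(x[i]) - float(mode[i])) / max(ranges[i], 1e-12)
                for i in numeric_idx)
    return d_cat + d_num
```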

15.
A localization algorithm for underground coal mines based on the kernel function method and particle filtering   (Total citations: 1; self-citations: 0; citations by others: 1)
In the confined spaces of underground coal mines, radio signal strength is affected by multipath fading, shadowing, and human factors, so localization methods based on path-loss models have large errors. A localization algorithm based on the kernel function method and particle filtering is therefore proposed. The algorithm combines fingerprint matching with Bayesian estimation: a kernel-based model searches the training data for positions whose fingerprints are close to that of the unknown node and weights them to obtain a preliminary observed coordinate; a particle filter then fuses the target's motion state with the observations, smoothing abrupt position changes to track the moving trajectory. Experiments show that for static targets the kernel function method outperforms deterministic matching algorithms and the Gaussian distribution model, and that for dynamic targets the proposed algorithm localizes more accurately than an algorithm based on Markov state transitions.
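The tracking stage can be illustrated with one step of a bootstrap particle filter that fuses a fingerprint-based coordinate observation with a random-walk motion model; the motion and observation models below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_std=0.5, obs_std=2.0):
    """One bootstrap particle filter step for 2D position tracking.

    particles:   (N, 2) array of candidate positions
    weights:     (N,) normalized weights
    observation: 2-vector, e.g. the fingerprint-based coordinate estimate
    """
    n = particles.shape[0]
    # Predict: random-walk motion model (an assumption for this sketch).
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # Update: Gaussian likelihood of the observed coordinate.
    d2 = ((particles - observation) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_std ** 2)
    weights /= weights.sum()
    # Resample (systematic) to avoid particle degeneracy.
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights, weights @ particles   # last item: position estimate
```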

16.
Based on a given data center network topology and risk-neutral management, this work proposes a simple but efficient probability-based model to calculate the probability of insecurity of each protected resource and the optimal investment in each security protection device when a data center is under security breach. We present two algorithms that calculate the probability of threat and the optimal investment for data center security, respectively. Based on the insecurity flow model (Moskowitz and Kang 1997) for analyzing security violations, we first model data center topology using two basic components, namely resources and filters, where resources represent the protected resources and filters represent the security protection devices. Four basic patterns are then identified as the building blocks for the first algorithm, called Accumulative Probability of Insecurity, which calculates the accumulative probability of realized threat (insecurity) on each resource. To calculate the optimal security investment, a risk-neutral based algorithm, called Optimal Security Investment, which maximizes the total expected net benefit, is then proposed. Numerical simulations show that the proposed approach coincides with Gordon's single-system analytical model (Gordon and Loeb, ACM Transactions on Information and Systems Security 5(4):438–457, 2002). In addition, numerical results on two common data center topologies are analyzed and compared to demonstrate the effectiveness of the proposed approach. The technique proposed here can be used to facilitate the analysis and design of more secure data centers.

17.
In this study, we propose a learning algorithm for ordinal regression problems. In most existing learning algorithms, the threshold or location model is assumed as the statistical model. To estimate the conditional probability of labels for a given covariate vector, we extend the location model for ordinal regression. We present this learning algorithm using the squared-loss function with location-scale models for estimating the conditional probability. We prove that the estimated conditional probability satisfies the monotonicity of the distribution function. Furthermore, we have conducted numerical experiments to compare the proposed methods with existing approaches. We found that, in its ability to predict labels, our method may not have an advantage over existing approaches. However, for estimating conditional probabilities, it does outperform the learning algorithm using location models.

18.
The triplet Markov fields (TMF) model proposed recently is suitable for nonstationary image segmentation. For synthetic aperture radar (SAR) image segmentation, the TMF model can adopt diverse statistical models for SAR data related to diverse radar backscattering sources. However, the TMF model does not take into account the inherent imprecision associated with SAR images. In this paper, we propose a statistical fuzzy TMF (FTMF) model, which is a fuzzy clustering type treatment of the TMF model, for unsupervised multi-class segmentation of SAR images. This paper contributes to SAR image segmentation in four aspects: (1) Nonstationarity of the statistical distribution of SAR intensity/amplitude data is taken into account to improve the spatial modeling capability of the fuzzy TMF model. (2) Mean field theory is generalized to deal with planar variables to derive the prior probability in the fuzzy TMF model, which resolves the computational cost problem of the Gibbs sampler. (3) A fuzzy objective function regularized by the Kullback–Leibler information of the fuzzy TMF model is constructed for SAR image segmentation. The introduction of fuzziness in the belongingness of SAR image pixels enables the fuzzy TMF model to retain more information from the SAR image. (4) A fuzzy iterative conditional estimation (ICE) method, as an extension of the general ICE method, is proposed to perform the model parameter estimation. The effectiveness of the proposed algorithm is demonstrated by application to simulated data and real SAR images.

19.
Companies usually have a limited amount of data for effort estimation. Machine learning methods have been preferred over parametric models due to their flexibility in calibrating the model to the available data. On the other hand, as machine learning methods become more complex, they need more data to learn from. Therefore the challenge is to increase the performance of the algorithm when there is limited data. In this paper, we use a relatively complex machine learning algorithm, neural networks, and show that stable and accurate estimations are achievable with an ensemble using associative memory. Our experimental results show that our proposed algorithm (ENNA) produces significantly better results than a neural network (NN) in terms of accuracy and robustness. We also analyze the effect of feature subset selection on ENNA's estimation performance in a wrapper framework. We show that the proposed ENNA algorithm that uses the features selected by the wrapper does not perform worse than the one that uses all available features. Therefore, measuring only company-specific key factors is sufficient to obtain accurate and robust software cost estimates using ENNA.

20.
Multi-target data association algorithms for multi-passive-sensor systems (MPSS) based on the multidimensional assignment model are reviewed and analyzed, and it is pointed out that this model not only ignores the random error introduced by maximum likelihood estimation but also fails to fully account for the correlation between measurements and pseudo-measurements. A decorrelated, corrected data association model is then established, and the unscented transform is proposed for computing the cross-covariance between the two. In addition, the discriminability of candidate solutions is defined to assess the reasonableness of the association cost construction. Finally, simulations show that the decorrelated association cost reflects the likelihood of a correct data association more precisely; the proposed association algorithm takes somewhat more computation time but achieves better association performance.
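The cross-covariance computation via the unscented transform can be sketched as follows (a standard unscented transform with conventional scaling parameters; the paper's exact configuration and measurement model may differ):

```python
import numpy as np

def unscented_cross_covariance(x_mean, P, g, alpha=1e-3, beta=2.0, kappa=0.0):
    """Cross-covariance between x ~ N(x_mean, P) and y = g(x) via the unscented transform."""
    n = x_mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    # 2n + 1 sigma points: the mean plus/minus the columns of the matrix square root.
    sigmas = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    ys = np.array([g(s) for s in sigmas])
    y_mean = wm @ ys
    Pxy = sum(wc[i] * np.outer(sigmas[i] - x_mean, ys[i] - y_mean)
              for i in range(2 * n + 1))
    return Pxy, y_mean
```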
