Found 20 similar documents. Search time: 103 ms
1.
Surface Reconstruction from Scattered Points by Kernel Regression    Total citations: 2 (self-citations: 0, other citations: 2)
Surface reconstruction from scattered points is a fundamental problem in computer graphics. This paper proposes a new kernel-regression-based reconstruction method for it, carrying out the reconstruction with mature tools from two-dimensional signal processing such as non-parametric filtering. The method can generate surfaces continuous to arbitrary order, with continuity guaranteed in theory; the mesh topology can be user-defined, and the density of mesh points adapts in regions of high curvature or of particular interest, making the results convenient for LOD modeling. Fitting accuracy is controlled through the filter parameters, and the algorithm adapts the filter orientation so that the resulting surface better preserves sharp features. The construction also avoids the costly steps of traditional approaches, such as the iteration of subdivision-surface methods, Delaunay triangulation, and resampling of point-cloud data, which improves efficiency. The algorithm is robust to unevenly sampled and noisy data. Experiments show that it reconstructs continuous surfaces of high accuracy from scattered points with a large gain in efficiency; when only the surface and its first derivative need to be estimated, a fast Nadaraya-Watson algorithm reduces the time complexity to O(N), far below other smooth surface-reconstruction methods. The algorithm can also effectively estimate local point-cloud density, mesh-vertex normals, and related quantities. For data resembling a digital elevation model (DEM), the reconstructed surface retains all of the above advantages; however, if the scattered data cannot be projected onto a 2-D plane, reconstruction must additionally include base-mesh generation and patch stitching, and continuity along stitched edges cannot be guaranteed in theory.
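The Nadaraya-Watson estimator at the heart of this approach is compact. Below is a minimal sketch for DEM-like height-field data; the function names and toy samples are mine, not the paper's implementation, and it is the naive per-query form rather than the fast O(N) variant the abstract mentions:

```python
import math

def nadaraya_watson(points, qx, qy, h=0.2):
    """Estimate the surface height z at (qx, qy) from scattered (x, y, z)
    samples: a Gaussian-kernel weighted average of nearby sample heights."""
    num = den = 0.0
    for x, y, z in points:
        w = math.exp(-((x - qx) ** 2 + (y - qy) ** 2) / (2 * h * h))
        num += w * z
        den += w
    return num / den

# Toy height field sampled from the plane z = x + y on a regular grid
pts = [(i / 10, j / 10, i / 10 + j / 10) for i in range(11) for j in range(11)]
print(nadaraya_watson(pts, 0.5, 0.5))  # ≈ 1.0
```

Evaluating on a mesh of query points yields the reconstructed surface; the bandwidth `h` plays the role of the filter parameter that trades fitting accuracy against smoothness.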
4.
An Improved Support Vector Regression Model for Fair Curve Fitting    Total citations: 1 (self-citations: 1, other citations: 1)
Reconstructing fair curves in geometric reverse engineering is essentially a regression problem, and support vector regression (SVR) is a new and highly effective method for solving regression problems. This paper studies the use of SVR for reconstructing fair curves. Since fairing imposes special requirements, existing support vector machines are not directly applicable. The machine is therefore modified through its penalty factors: according to the distribution of the measured data points, the penalty factor of each point is determined from its circularity, which realises fair reconstruction of free-form curves. Numerical experiments show that the new method removes the influence of unfair points in the input data and approximates the curve effectively under a given accuracy, achieving a good fit.
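The paper's modification acts on SVR penalty factors; as a simplified stand-in for the same idea (down-weighting unfair points so they cannot distort the fit), here is a weighted least-squares polynomial fit in which per-point weights play the role of per-point penalties. All names, degrees, and data are illustrative, not the paper's method:

```python
def weighted_polyfit(xs, ys, ws, degree):
    """Solve the weighted normal equations (X^T W X) c = X^T W y for the
    polynomial coefficients c, by Gaussian elimination with pivoting."""
    n = degree + 1
    A = [[sum(w * x ** (i + j) for x, w in zip(xs, ws)) for j in range(n)]
         for i in range(n)]
    b = [sum(w * x ** i * y for x, y, w in zip(xs, ys, ws)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

def polyval(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

# Measured points on y = x^2 with one "unfair" spike at x = 0.5
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]
ys[5] += 1.0
ws = [1.0] * 11
ws[5] = 1e-6          # tiny weight: the spike barely influences the fit
coef = weighted_polyfit(xs, ys, ws, degree=2)
print(round(polyval(coef, 0.5), 4))  # → 0.25
```

In the paper the weights come from each point's circularity rather than being set by hand, and the estimator is a full SVR rather than least squares; the mechanism of suppressing unfair points is the same.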
5.
Existing clustering techniques for neural fibers usually depend on the spatial position of the fiber tracts and take no account of fiber orientation. To address this, a brain-fiber clustering method based on B-spline fitting and regression models is proposed: each fiber is fitted with a B-spline, sampled, and normalized; a linear regression equation is used to describe the fiber; a Gaussian density function is constructed for each fiber together with a maximum-likelihood function for the fiber distribution; and the EM clustering algorithm is then applied. This algorithm and the QB clustering algorithm were applied to real medical data (brain fibers tracked from PPMI imaging data), and the two sets of clustering results were evaluated: qualitatively, by their similarity in anatomical space, and quantitatively, by the number of fibers per cluster. Experimental results show that the method segments anatomically meaningful fiber bundles more effectively at the level of functional regions.
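The pipeline (fit each fiber, describe it by regression coefficients, then cluster the descriptors) can be miniaturized. The sketch below replaces the B-spline/EM machinery with a straight-line fit per fiber and a two-centre k-means, which is the hard-assignment special case of EM; all data are synthetic and the names are mine:

```python
def fiber_descriptor(fiber):
    """Least-squares line y = a*x + b through a fiber's sample points;
    (a, b) is a crude stand-in for the regression description of its course."""
    n = len(fiber)
    sx = sum(x for x, _ in fiber); sy = sum(y for _, y in fiber)
    sxx = sum(x * x for x, _ in fiber); sxy = sum(x * y for x, y in fiber)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def cluster_two(feats, iters=20):
    """Two-centre k-means on the fiber descriptors."""
    centers = [feats[0], feats[-1]]
    for _ in range(iters):
        groups = [[], []]
        for f in feats:
            d = [sum((u - v) ** 2 for u, v in zip(f, c)) for c in centers]
            groups[d.index(min(d))].append(f)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return [min(range(2), key=lambda k: sum((u - v) ** 2
                for u, v in zip(f, centers[k]))) for f in feats]

# Two synthetic bundles: fibers running along y = 2t + c and y = -2t + c
bundle_a = [[(t, 2 * t + c) for t in range(5)] for c in range(3)]
bundle_b = [[(t, -2 * t + c) for t in range(5)] for c in range(3)]
labels = cluster_two([fiber_descriptor(f) for f in bundle_a + bundle_b])
print(labels)  # → [0, 0, 0, 1, 1, 1]
```

Because the descriptors encode direction (the slope), the two bundles separate even though they occupy the same spatial region, which is exactly the orientation information the abstract argues position-only clustering misses.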
6.
To remove salt-and-pepper noise while protecting image detail, a switching denoising algorithm based on kernel-regression fitting is proposed. First, an efficient impulse detector accurately locates the salt-and-pepper noise pixels in the image. Next, the detected noise pixels are treated as missing data, and kernel regression is applied to the non-noise pixels in a neighborhood centered at each noise pixel, yielding a fitted kernel-regression surface that matches the local structure of the image. Finally, the fitted surface is resampled at the spatial coordinates of the noise pixel to obtain its restored gray value, thereby filtering out the salt-and-pepper noise. Compared with the classic standard median filter (SMF), the adaptive median filter (AMF), the modified directional weighted median filter (MDWMF), the fast switching median-mean filter (FSMMF), and image inpainting (II) at various noise densities, the proposed algorithm produced the best subjective visual quality; in low-, medium-, and high-density noise scenarios its peak signal-to-noise ratio (PSNR) on the test images was higher by 6.02 dB, 6.33 dB, and 5.58 dB on average, respectively, and its mean absolute error (MAE) was lower by 0.90, 5.84, and 25.29 on average. The experimental results show that the algorithm effectively removes salt-and-pepper noise of various densities while preserving image detail well.
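A zeroth-order version of this pipeline fits in a few lines: the kernel-regression "surface" reduces to a Gaussian-weighted average of the clean neighbours, resampled at the impulse location. The extreme-value detector, window size, and test image below are illustrative simplifications of the paper's method:

```python
import math

def remove_salt_pepper(img, h=1.0, radius=2):
    """Switching filter: pixels at the extremes 0/255 are flagged as impulses
    and re-estimated by kernel regression over clean pixels in the window."""
    H, W = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(H):
        for j in range(W):
            if img[i][j] not in (0, 255):
                continue                  # switching: clean pixels pass through
            num = den = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and img[ni][nj] not in (0, 255):
                        w = math.exp(-(di * di + dj * dj) / (2 * h * h))
                        num += w * img[ni][nj]
                        den += w
            if den > 0:
                out[i][j] = num / den
    return out

# A flat gray patch with one "salt" impulse in the middle
img = [[100] * 5 for _ in range(5)]
img[2][2] = 255
print(remove_salt_pepper(img)[2][2])  # ≈ 100 (impulse replaced by neighbourhood)
```

The switching design is what protects detail: non-impulse pixels are never touched, unlike a plain median filter that smooths everything.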
11.
Evaluating the Predictive Power of Different Kernel Functions in Software Reliability Prediction    Total citations: 2 (self-citations: 0, other citations: 2)
Software reliability prediction models based on kernel regression estimation have attracted considerable research interest. In such work the choice of kernel function is particularly important, yet little work has addressed selecting or constructing kernels for the software failure data at hand. This paper studies the kernel-selection problem for kernel-based software reliability prediction models by applying paired t-tests on 14 widely used software failure data sets. The kernel regression estimators used are kernel principal component regression, kernel partial least squares regression, support vector regression, and relevance vector regression; the kernels are the Gaussian, linear, polynomial, Cauchy, Laplacian, symmetric triangular, hyperbolic secant, and squared sine kernels. The experimental results show that different kernels behave very differently across data sets, while the Gaussian kernel is comparatively stable on all data sets and yields the best predictions.
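The comparison can be reproduced in miniature with kernel ridge regression (one of several kernel regression estimators; the paper also uses KPCR, KPLS, SVR, and RVM) and a few of the listed kernels. The data here are a synthetic stand-in, not the paper's failure data sets:

```python
import math

def gauss(u, v, s=1.0):
    return math.exp(-(u - v) ** 2 / (2 * s * s))

def cauchy(u, v, s=1.0):
    return 1.0 / (1.0 + (u - v) ** 2 / (s * s))

def linear(u, v):
    return u * v

def solve(A, b):
    """Gaussian elimination with partial pivoting (A and b are modified)."""
    n = len(A)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def krr_predict(xs, ys, k, x_new, lam=1e-3):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, then predict."""
    n = len(xs)
    K = [[k(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, list(ys))
    return sum(a * k(x, x_new) for a, x in zip(alpha, xs))

# Synthetic stand-in for a failure data set; compare prediction error per kernel
xs = [i / 4 for i in range(9)]
ys = [math.sin(x) for x in xs]
for name, k in [("gaussian", gauss), ("cauchy", cauchy), ("linear", linear)]:
    print(name, round(abs(krr_predict(xs, ys, k, 1.0) - math.sin(1.0)), 4))
```

On data like this the flexible Gaussian and Cauchy kernels track the target closely while the linear kernel cannot, mirroring (in toy form) the kind of per-data-set differences the paper's t-tests quantify.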
12.
Most variable selection techniques focus on first-order linear regression models. Often, interaction and quadratic terms are also of interest, but the number of candidate predictors grows very fast with the number of original predictors, making variable selection more difficult. Forward selection algorithms are thus developed that enforce natural hierarchies in second-order models to control the entry rate of uninformative effects and to equalize the false selection rates from first-order and second-order terms. Method performance is compared through Monte Carlo simulation and illustrated with data from a Cox regression and from a response surface experiment.
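One way to enforce the natural hierarchy described here is an admissibility filter inside the forward-selection loop: a second-order term becomes a candidate only once all of its first-order parents are in the model (strong heredity). A sketch with an illustrative term encoding of my own:

```python
def admissible(candidates, selected):
    """Strong heredity: a second-order term (interaction or square) may enter
    the model only after every one of its first-order parents has entered."""
    chosen = set(selected)
    ok = []
    for term in candidates:
        if term in chosen:
            continue
        parents = {(v,) for v in term} if len(term) > 1 else set()
        if parents <= chosen:
            ok.append(term)
    return ok

# Terms are tuples of variable indices: (0,) is x0, (0, 1) is x0*x1, (1, 1) is x1^2
cands = [(0,), (1,), (0, 0), (0, 1), (1, 1)]
print(admissible(cands, []))            # → [(0,), (1,)]
print(admissible(cands, [(0,), (1,)]))  # → [(0, 0), (0, 1), (1, 1)]
```

At each forward step the selection criterion is evaluated only over the admissible set, which is what throttles the entry rate of uninformative second-order effects.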
13.
Arup Kumar Nandi, Frank Klawonn 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2007,11(5):467-478
Regression refers to the problem of approximating measured data that are assumed to be produced by an underlying, possibly noisy function. In real applications, however, the assumption that the data represent samples from a single function is sometimes wrong. For instance, in process control different strategies might be used to achieve the same goal. Any regression model trying to fit such data as well as possible must fail, since it can only find an intermediate compromise between the different strategies by which the data were produced. To tackle this problem, an approach is proposed here to detect ambiguities in regression problems by selecting subsets of data from the total data set using TSK models, which work in parallel and share the data with each other in every step. The proposed approach is verified on artificial data and finally applied to real data from grinding, a manufacturing process used to generate smooth surfaces on work pieces.
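Stripped of the TSK fuzzy machinery, the ambiguity-detection idea amounts to letting several simple models compete for the data points in every step. This sketch uses two plain linear regressors and synthetic data from two interleaved "strategies"; all names and data are mine, not the paper's models:

```python
def fit_line(pts):
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def split_strategies(data, iters=10):
    """Two linear models share the data: each point goes to the model that
    currently explains it better, then both models are refitted."""
    models = [fit_line(data[:2]), fit_line(data[-2:])]   # crude initialisation
    for _ in range(iters):
        part = [[], []]
        for x, y in data:
            errs = [abs(y - (a * x + b)) for a, b in models]
            part[errs.index(min(errs))].append((x, y))
        models = [fit_line(p) if len(p) > 1 else m
                  for p, m in zip(part, models)]
    return models

# Two control "strategies" reaching the same goal: y = 2x and y = 4 - 2x
data = [(x / 10, 2 * x / 10) for x in range(11)] + \
       [(x / 10, 4 - 2 * x / 10) for x in range(11)]
(a0, b0), (a1, b1) = split_strategies(data)
print(round(a0, 3), round(b0, 3), round(a1, 3), round(b1, 3))
```

A single regression model on this data would settle on a useless compromise between the two lines; the competing models recover both strategies, which is the ambiguity the paper sets out to detect.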
14.
Sami S. Brandt 《Journal of Mathematical Imaging and Vision》2006,25(1):25-48
In this paper, we consider a robust estimator, proposed earlier by the authors, in a general non-linear regression framework. The basic idea of the estimator is, instead of trying to classify the observations as good or false, to model the residual distribution of the contaminants, determine the probability that each observation is a good sample, and finally perform weighted fitting. The main contributions of this paper are: (1) we show that the estimator is consistent for the true parameter values, which means optimality regardless of the problematic outliers in the observations; (2) we propose how robust uncertainty computations and robust model selection can be performed in a similar, consistent manner; (3) we derive the expectation-maximisation algorithm for the estimator and (4) extend it to handle unknown outlier residual distributions; (5) we finally give some experiments with real data, where robustness in model fitting is needed.
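For a linear model with a Gaussian-inlier / uniform-outlier residual mixture, the weighted-fitting scheme reduces to a compact EM loop. This is my own sketch of that general scheme, not the authors' code, and the constants (sigma, mixing weight, outlier density) are illustrative:

```python
import math

def robust_line_fit(pts, iters=10, sigma=2.0, eps=0.1, outlier_density=0.05):
    """EM for a line y = a*x + b whose residuals follow a mixture:
    Gaussian(0, sigma^2) for good samples, a flat density for outliers."""
    ws = [1.0] * len(pts)              # first M-step = ordinary least squares
    a = b = 0.0
    for _ in range(iters):
        # M-step: weighted least-squares line under the current weights
        sw = sum(ws)
        sx = sum(w * x for w, (x, _) in zip(ws, pts))
        sy = sum(w * y for w, (_, y) in zip(ws, pts))
        sxx = sum(w * x * x for w, (x, _) in zip(ws, pts))
        sxy = sum(w * x * y for w, (x, y) in zip(ws, pts))
        a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        b = (sy - a * sx) / sw
        # E-step: posterior probability that each observation is a good sample
        ws = []
        for x, y in pts:
            r = y - (a * x + b)
            g = (1 - eps) * math.exp(-r * r / (2 * sigma ** 2)) \
                / (sigma * math.sqrt(2 * math.pi))
            ws.append(g / (g + eps * outlier_density))
    return a, b

# Collinear data y = 3x + 1 plus two gross outliers
pts = [(x / 10, 3 * x / 10 + 1) for x in range(20)] + [(0.5, 40.0), (1.2, -30.0)]
a, b = robust_line_fit(pts)
print(round(a, 3), round(b, 3))  # → 3.0 1.0 despite the outliers
```

No observation is ever hard-classified as an outlier; the gross outliers simply receive posterior weights near zero and stop influencing the fit, which is the estimator's central idea.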
Sami Brandt received the degree of Master of Science in Technology from the Department of Engineering Physics and Mathematics at Helsinki University of Technology, Finland, in September 1999, and the degree of Doctor of Science in Technology at the Laboratory of Computational Engineering (LCE), Helsinki University of Technology, in October 2002. After serving one year as a research scientist in the Instrumentarium Corporation Imaging Division and two years as a post-doc at LCE, he is currently jointly affiliated with LCE and the Information Processing Laboratory, University of Oulu, Finland; his research focuses on biomedical imaging and 3D vision. He is a member of the IEEE and the IEEE Computer Society, the Pattern Recognition Society of Finland, the International Association for Pattern Recognition (IAPR), and the Finnish Inverse Problems Society.
15.
Zhang Jin 《数字社区&智能家居》 (Digital Community & Smart Home) 2008,(10):242-243
This paper introduces three ways of building linear regression models in Excel, analyses the strengths and weaknesses of each, and discusses their respective ranges of application, so that users can choose the method that best fits their own characteristics and needs.
16.
Boosting Methods for Regression    Total citations: 6 (self-citations: 0, other citations: 6)
In this paper we examine ensemble methods for regression that leverage or boost base regressors by iteratively calling them on modified samples. The most successful leveraging algorithm for classification is AdaBoost, an algorithm that requires only modest assumptions on the base learning method for its strong theoretical guarantees. We present several gradient descent leveraging algorithms for regression and prove AdaBoost-style bounds on their sample errors using intuitive assumptions on the base learners. We bound the complexity of the regression functions produced in order to derive PAC-style bounds on their generalization errors. Experiments validate our theoretical results.
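A minimal gradient-descent leveraging scheme of the kind analysed here, for squared loss with regression stumps as base learners (data, step size, and round count are illustrative, not the paper's algorithms):

```python
def fit_stump(xs, rs):
    """Best single-threshold regression stump for residuals rs (squared loss)."""
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=300, lr=0.5):
    """Squared-loss gradient boosting: each round fits a stump to the
    current residuals and takes a step of size lr toward it."""
    f0 = sum(ys) / len(ys)
    preds = [f0] * len(xs)
    stumps = []
    for _ in range(rounds):
        rs = [y - p for y, p in zip(ys, preds)]
        s = fit_stump(xs, rs)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: f0 + lr * sum(s(x) for s in stumps)

xs = [i / 10 for i in range(21)]
ys = [x * x for x in xs]          # toy target
model = boost(xs, ys)
print(round(sum((model(x) - y) ** 2 for x, y in zip(xs, ys)), 4))
```

Each round is one step of gradient descent in function space (for squared loss, the negative gradient is exactly the residual vector), which is the "leveraging" structure whose sample-error bounds the paper proves.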
18.
Complex massive data often present themselves as mixtures of several structural patterns, and mixtures of regression models are one description of such mixtures. Based on the theory of finite mixture distributions in statistics and known identifiability results, this paper discusses the identifiability of general mixture-of-regression models for three settings of the regression variables: (1) fixed explanatory variables, (2) random explanatory variables, and (3) fixed explanatory variables with specified class parameters, and gives sufficient conditions for the identifiability of mixtures of regressions from the same family. A common feature of these conditions is that they all involve a special class of sets of explanatory variables, uniquely determined by the family of regression functions together with the regression parameters: the elements of such a set make different regression parameters yield the same value of the regression function. In particular, when the regression function is linear, these sets are hyperplanes in the explanatory-variable space.
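In the linear case the special sets described above can be written explicitly; a sketch in my own notation for a two-component mixture with residual density φ:

```latex
f(y \mid x) \;=\; \pi\,\phi\!\left(y - x^{\top}\beta_{1}\right)
             \;+\; (1-\pi)\,\phi\!\left(y - x^{\top}\beta_{2}\right),
\qquad
S(\beta_{1},\beta_{2}) \;=\; \left\{\, x \;:\; x^{\top}\beta_{1} = x^{\top}\beta_{2} \,\right\}
                       \;=\; \left\{\, x \;:\; x^{\top}(\beta_{1}-\beta_{2}) = 0 \,\right\}.
```

For β₁ ≠ β₂ the set S is a hyperplane through the origin in the explanatory-variable space, matching the abstract's remark: on S the two components coincide and carry no information for separating them, so identifiability conditions must prevent the covariates from concentrating on such sets.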
20.
Lin Jing 《数字社区&智能家居》 (Digital Community & Smart Home) 2010,(4):814-815
This paper compares several full-text retrieval models, implements corresponding prototype systems, and validates their retrieval performance experimentally, providing a reference for the selection of retrieval models and the optimization of retrieval performance.
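The abstract does not say which retrieval models were compared; the classic baseline in such comparisons is the vector space model with TF-IDF weighting and cosine similarity, sketched below with a toy corpus of my own:

```python
import math
from collections import Counter

def tfidf_search(docs, query):
    """Rank documents against a query with TF-IDF weights and cosine
    similarity (the classic vector-space retrieval model)."""
    N = len(docs)
    toks = [d.lower().split() for d in docs]
    df = Counter(t for d in toks for t in set(d))
    idf = {t: math.log(N / df[t]) for t in df}

    def vec(words):
        tf = Counter(w for w in words if w in idf)
        return {t: c * idf[t] for t, c in tf.items()}

    def cos(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(query.lower().split())
    scores = [(cos(q, vec(d)), i) for i, d in enumerate(toks)]
    return sorted(scores, reverse=True)

docs = ["kernel regression surface fitting",
        "support vector regression curve",
        "full text retrieval model evaluation"]
print(tfidf_search(docs, "retrieval model")[0][1])  # → 2
```

Probabilistic models such as BM25 and language-model retrieval refine exactly this term-weighting step, which is the axis along which such performance comparisons are usually run.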