Similar documents
 Found 20 similar documents (search time: 0 ms)
1.
In the past several decades, classifier design has attracted much attention. Inspired by the locality-preserving idea of manifold learning, we propose a local linear regression (LLR) classifier. The classifier consists of three steps: first, search for the k nearest neighbors of the test sample within each class; second, reconstruct the test sample from the k nearest neighbors of each class; and third, classify the test sample according to the minimum reconstruction error. Experimental results on the ETH80 database, the CENPAMI handwritten number database and the FERET face image database demonstrate that LLR works well, leading to promising image classification performance.
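The three steps above can be sketched as follows. This is a minimal illustrative implementation (function and variable names are my own, not the paper's), assuming each class is supplied as a NumPy array of training samples:

```python
import numpy as np

def llr_classify(x, class_samples, k=3):
    """Classify x by minimum local-linear-reconstruction error.

    class_samples: dict mapping label -> (n_i, d) array of that class's
    training samples.  For each class: find the k nearest neighbors of x,
    reconstruct x as a least-squares linear combination of them, and keep
    the residual norm; the class with the smallest residual wins.
    """
    best_label, best_err = None, np.inf
    for label, X in class_samples.items():
        # squared distances to every sample of this class
        d2 = np.sum((X - x) ** 2, axis=1)
        nn = X[np.argsort(d2)[:k]]                  # (k, d) nearest neighbors
        # least-squares weights w minimizing ||x - nn.T @ w||
        w, *_ = np.linalg.lstsq(nn.T, x, rcond=None)
        err = np.linalg.norm(x - w @ nn)            # reconstruction error
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

With samples of one class lying near the query's subspace and the other class's samples lying off it, the first class yields the smaller reconstruction error and is returned.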

2.
Local polynomial regression is an important tool in nonparametric statistics, yet most previously studied algorithms are non-recursive. This paper studies recursive local linear regression estimation and its applications. We first derive the recursive algorithm and obtain nonparametric estimates of the regression function and its derivative. Under certain conditions, the strong consistency of the algorithm is proved. Simulation examples illustrate its application to regression-function estimation in nonlinear conditionally heteroscedastic models and to the identification of nonlinear ARX (nonlinear autoregressive with exogenous inputs, NARX) systems.

3.
We present a novel method for segmenting images with texture and nontexture regions. Local spectral histograms are feature vectors consisting of histograms of chosen filter responses, which capture both texture and nontexture information. Based on the observation that the local spectral histogram of a pixel location can be approximated through a linear combination of the representative features weighted by the area coverage of each feature, we formulate the segmentation problem as a multivariate linear regression, where the solution is obtained by least squares estimation. Moreover, we propose an algorithm to automatically identify representative features corresponding to different homogeneous regions, and show that the number of representative features can be determined by examining the effective rank of a feature matrix. We present segmentation results on different types of images, and our comparison with other methods shows that the proposed method gives more accurate results.
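The linear-combination formulation can be illustrated with a toy sketch (my own illustrative code, assuming the representative features are stored as columns of a matrix F): the coverage weights at a pixel come from a least-squares solve, and the pixel is labeled by the dominant weight.

```python
import numpy as np

def coverage_weights(h, F):
    """Least-squares weights w with F @ w ~= h, where each column of F is a
    representative spectral histogram and h is the pixel's local histogram."""
    w, *_ = np.linalg.lstsq(F, h, rcond=None)
    return w

def assign_region(h, F):
    """Label the pixel by its largest estimated area-coverage weight."""
    return int(np.argmax(coverage_weights(h, F)))
```

The number of representative features would, as the abstract notes, be chosen from the effective rank of the feature matrix, e.g. by inspecting its singular values.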

4.
Fuzzy nonparametric regression based on local linear smoothing technique
In the literature on fuzzy regression analysis, most research has focused on predefined parametric forms of fuzzy regression relationships, especially fuzzy linear regression models. In many practical situations, it may be unrealistic to predetermine a fuzzy parametric regression relationship. In this paper, a fuzzy nonparametric model with crisp input and LR fuzzy output is considered and, based on the distance measure for fuzzy numbers suggested by Diamond [P. Diamond, Fuzzy least squares, Information Sciences 46 (1988) 141-157], the local linear smoothing technique in statistics, with cross-validation for selecting the optimal smoothing parameter, is fuzzified to fit this model. Simulation experiments examine the performance of the proposed method, and three real-world datasets illustrate its application. The results demonstrate that the method works well both in producing satisfactory estimates of the fuzzy regression function and in significantly reducing the boundary effect.

5.
The problem of automatic bandwidth selection in nonparametric regression is considered when a local linear estimator is used to derive the unknown regression function nonparametrically. A plug-in method for choosing the smoothing parameter based on neural networks is presented. The method applies to dependent data-generating processes with a nonlinear autoregressive time series representation. The consistency of the method is shown, and a simulation study assesses the empirical performance of the procedure.
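For reference, a local linear estimator at a point x0 with bandwidth h is just a kernel-weighted least-squares line fit. The sketch below (my own illustration with a Gaussian kernel, not the paper's plug-in selector) shows the role the smoothing parameter plays:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of the regression function at x0:
    weighted least squares of y on (1, x - x0), with Gaussian kernel
    weights of bandwidth h; the fitted intercept is the estimate m(x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w                               # weight each observation
    beta = np.linalg.solve(XtW @ X, XtW @ y)    # weighted normal equations
    return beta[0]
```

On exactly linear data the estimator reproduces the line for any bandwidth; the choice of h only matters once the regression function is curved, which is what bandwidth-selection procedures such as the paper's plug-in rule address.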

6.
The total least squares method is generalized in the context of the functional linear model. A smoothing splines estimator of the functional coefficient of the model is first proposed without noise in the covariates, and an asymptotic result for this estimator is obtained. The estimator is then adapted to the case of noisy covariates, and an upper bound on the rate of convergence is derived. The estimation procedure is evaluated by means of simulations.

7.
8.
高磊  罗关凤  刘荡  闵帆 《计算机应用》2022,42(2):655-662
First-arrival picking is a key step in seismic data processing and directly affects the accuracy of normal moveout correction, static correction, and velocity analysis. Existing algorithms lose picking accuracy under background noise and complex near-surface conditions. To address this, a first-arrival automatic picking algorithm based on clustering and local linear regression (FPCL) is proposed. The algorithm works in two stages, pre-picking and fine-tuning. The pre-picking stage first locates the first-arrival cluster with the k-means technique, then uses …

9.
A new matching procedure is introduced, based on imputing missing data by means of a local linear estimator of the underlying population regression function, which is not assumed to be linear. The procedure is compared with traditional approaches, namely hot-deck methods and methods based on kNN estimators. Performance is measured by the matching noise, i.e., the discrepancy between the distribution generating the genuine data and the distribution generating the imputed values.

10.

11.
To address robust face recognition with high-dimensional, small-sample data, a spectral regression algorithm optimized by locally linear embedding is proposed. Feature vectors of the training samples are computed; locally linear embedding then constructs the embedding needed for classification and learns the embedding of each class's submanifold; spectral regression computes the projection matrix, and a nearest-neighbor classifier completes the recognition. Experimental results on the FERET, AR, and extended YaleB face databases show that the algorithm achieves better recognition than several other spectral regression algorithms.

12.
This paper studies model diagnostics for linear regression models. We propose two tree-based procedures to check the adequacy of linear functional form and the appropriateness of homoscedasticity, respectively. The proposed tree methods not only facilitate a natural assessment of the linear model, but also automatically provide clues for amending deficiencies. We explore and illustrate their uses via both Monte Carlo studies and real data examples.

13.
Linear regression classification (LRC) uses least squares to solve the linear regression equation and shows good classification performance on face image data. However, when the regression axes of class-specific samples intersect, LRC cannot reliably classify samples distributed around the intersections, and it also performs poorly under severe lighting variations. This paper proposes a new classification method, kernel linear regression classification (KLRC), based on LRC and the kernel trick. KLRC is a nonlinear extension of LRC that offsets these drawbacks: it implicitly maps the data into a high-dimensional kernel space via the nonlinear mapping determined by a kernel function, making the data more linearly separable, and thus performs well for face recognition under varying lighting. For comparison, we conduct experiments on three standard databases under several evaluation protocols. The proposed method not only outperforms LRC but also performs better than typical kernel methods such as kernel linear discriminant analysis and kernel principal component analysis.
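A minimal sketch of the kernel-space reconstruction idea (my own illustrative code with an RBF kernel and a small ridge term for numerical stability; the paper's exact formulation may differ): the squared error of reconstructing the mapped sample from each class's mapped training samples can be computed entirely through kernel evaluations.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def klrc(x, class_samples, gamma=1.0, reg=1e-6):
    """Kernel linear regression classification sketch: for each class,
    reconstruct phi(x) from that class's samples in RBF feature space and
    pick the class with the smallest squared reconstruction error."""
    xq = x[None, :]
    best_label, best_err = None, np.inf
    for label, X in class_samples.items():
        K = rbf(X, X, gamma) + reg * np.eye(len(X))   # regularized Gram matrix
        kx = rbf(X, xq, gamma)[:, 0]                  # kernel vector k(X, x)
        w = np.linalg.solve(K, kx)                    # reconstruction weights
        # ||phi(x) - Phi @ w||^2 expanded via the kernel; k(x, x) = 1 for RBF
        err = 1.0 - 2.0 * w @ kx + w @ (K @ w)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```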

14.
BMDP program for piecewise linear regression
Piecewise linear regression has potentially broad applications in medical data analysis as well as other types of regression. Various algorithms have been proposed for finding optimal piecewise linear regressions. This paper presents a BMDP program for obtaining near-optimal piecewise linear regression equations. The idea intrinsic to the method is that restricting the parameter space to a discrete set turns a difficult problem into a standard one. Any software with a variable-selection feature for multiple linear regression can be used to apply the method.
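The key idea, restricting breakpoints to a discrete candidate set so that ordinary least squares (and standard variable selection) applies, can be sketched as follows. This is an illustrative Python analogue under that assumption, not the BMDP program itself:

```python
import numpy as np

def piecewise_linear_fit(x, y, candidate_knots):
    """Fit y ~ a + b*x + sum_j c_j * max(0, x - t_j) by least squares,
    with the breakpoints t_j restricted to a discrete candidate set.
    In the paper's setting, a variable-selection routine would then keep
    only the hinge terms that matter."""
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(0.0, x - t) for t in candidate_knots])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta
```

Once each candidate breakpoint becomes an ordinary regressor (a hinge column), the hard combinatorial search over breakpoint locations reduces to a standard multiple-regression problem, which is exactly the reduction the abstract describes.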

15.
《Knowledge》2002,15(3):169-175
Clustered linear regression (CLR) is a new machine learning algorithm that improves the accuracy of classical linear regression by partitioning the training space into subspaces. CLR makes some assumptions about the domain and the data set. First, the target value is assumed to be a function of the feature values. Second, this function is assumed to admit a linear approximation in each subspace. Finally, there must be enough training instances to determine the subspaces and their linear approximations successfully. Tests indicate that if these assumptions hold, CLR outperforms all other well-known machine-learning algorithms. Partitioning may continue until a linear approximation fits all instances in the training set, which generally occurs when the number of instances in a subspace is at most the number of features plus one. Otherwise, each new subspace will have a better-fitting linear approximation; however, this causes overfitting and gives less accurate results on test instances. The stopping point can be determined as no significant decrease, or an increase, in relative error. CLR uses a small portion of the training instances to determine the number of subspaces. The need for a large number of training instances makes this algorithm suitable for data mining applications.
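CLR's core loop, partition the training space and fit one linear model per subspace, can be sketched as below. This is an illustrative toy (my own naming) that takes a caller-supplied partition rather than performing CLR's own partitioning search:

```python
import numpy as np

def clr_fit(X, y, labels):
    """Fit one least-squares linear model per subspace (cluster label)."""
    models = {}
    for c in set(labels):
        mask = labels == c
        A = np.column_stack([np.ones(mask.sum()), X[mask]])  # intercept + x
        models[c], *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    return models

def clr_predict(models, X, labels):
    """Predict each query point with its own subspace's linear model."""
    A = np.column_stack([np.ones(len(X)), X])
    return np.array([A[i] @ models[c] for i, c in enumerate(labels)])
```

When the target really is piecewise linear over the subspaces, each local model fits its region exactly, which is the regime in which the abstract claims CLR excels.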

16.
A fast implementation of a previously published algorithm [5] is given.

17.
Zhang  Xiaomin  Zhu  Xiaojin  Loh  Po-Ling 《Machine Learning》2021,110(10):2763-2834
Machine Learning - We investigate problems in penalized M-estimation, inspired by applications in machine learning debugging. Data are collected from two pools, one containing data with possibly...

18.
The class of all linear predictors of minimal order for a stationary vector-valued process is specified in terms of linear transformations on the associated Hankel covariance matrix. Two particular transformations, yielding computationally efficient construction schemes, are proposed.

19.
A constructive procedure for designing a single linear functional observer for a time-invariant linear system is given. The procedure is simple and relies neither on solving a Sylvester equation nor on canonical state-space forms. Both the stable-observer and the fixed-poles-observer problems are considered, with attention to minimality.

20.
Patra  A.  Das  S.  Mishra  S. N.  Senapati  M. R. 《Neural computing & applications》2017,28(1):101-110

For financial time series, generating error bars on a prediction is important in order to estimate the corresponding risk. In recent years, optimization-driven artificial intelligence techniques have been used to make time series approaches more systematic and to improve forecasting performance. This paper presents a local linear radial basis function neural network (LLRBFNN) model for classifying financial data from Yahoo Inc. The LLRBFNN model is trained with a hybrid of backpropagation and the recursive least squares algorithm. In contrast to typical neural network models, which connect the hidden layer to the output layer through weights alone, the LLRBFNN uses a local linear model between the hidden and output layers. The prediction results are compared with a multilayer perceptron and a radial basis function neural network whose parameters are trained by gradient descent. The proposed technique yields a lower mean squared error and can thus be considered superior to the other models. It is also tested on linear data, i.e., diabetic data, to confirm the validity of the experimental results.



Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号