1.
Ensemble pruning deals with selecting base learners prior to combination in order to improve prediction accuracy and efficiency. The ensemble literature has pointed out that, for an ensemble classifier to achieve high prediction accuracy, it is critical that it consist of accurate classifiers which are, at the same time, as diverse as possible. In this paper, a novel ensemble pruning method, called PL-bagging, is proposed. To attain a balance between the diversity and accuracy of base learners, PL-bagging employs the positive Lasso to assign weights to base learners in the combination step. Simulation studies and a theoretical investigation show that PL-bagging filters out redundant base learners while assigning higher weights to more accurate ones. This improved weighting scheme results in higher classification accuracy, and the improvement becomes even more significant as the ensemble size increases. The performance of PL-bagging was compared with state-of-the-art ensemble pruning methods for the aggregation of bootstrapped base learners using 22 real and 4 synthetic datasets. The results indicate that PL-bagging significantly outperforms state-of-the-art ensemble pruning methods such as Boosting-based pruning and Trimmed bagging.
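The combination step described above can be sketched with scikit-learn, whose `Lasso(positive=True)` implements a positive Lasso. This is an illustrative reconstruction on synthetic data, not the authors' code; the dataset, tree depth, and `alpha` are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
Xtr, ytr, Xval, yval = X[:300], y[:300], X[300:], y[300:]

# Train bootstrapped base learners (bagging).
n_learners = 20
learners = []
for b in range(n_learners):
    idx = rng.integers(0, len(Xtr), len(Xtr))
    learners.append(
        DecisionTreeClassifier(max_depth=3, random_state=b).fit(Xtr[idx], ytr[idx])
    )

# Stack base-learner predictions on a held-out validation set.
P = np.column_stack([m.predict(Xval) for m in learners])

# Positive Lasso assigns nonnegative weights; zero weights prune learners.
w = Lasso(alpha=0.01, positive=True).fit(P, yval).coef_
kept = np.flatnonzero(w)
print(len(kept), "of", n_learners, "learners kept")
```

Learners whose weight is driven exactly to zero are dropped from the ensemble, which is the pruning effect the abstract describes.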
2.
A Bayesian approach to variable selection which is based on the expected Kullback-Leibler divergence between the full model and its projection onto a submodel has recently been suggested in the literature. For generalized linear models an extension of this idea is proposed by considering projections onto subspaces defined via some form of L1 constraint on the parameter in the full model. This leads to Bayesian model selection approaches related to the lasso. In the posterior distribution of the projection there is positive probability that some components are exactly zero and the posterior distribution on the model space induced by the projection allows exploration of model uncertainty. Use of the approach in structured variable selection problems such as ANOVA models is also considered, where it is desired to incorporate main effects in the presence of interactions. Projections related to the non-negative garotte are able to respect the hierarchical constraints. A consistency result is given concerning the posterior distribution on the model induced by the projection, showing that for some projections related to the adaptive lasso and non-negative garotte the posterior distribution concentrates on the true model asymptotically.
3.
In the context of a partially linear regression model, shrinkage semiparametric estimation based on the Stein rule is considered. In this framework, the coefficient vector is partitioned into two sub-vectors: the first sub-vector gives the coefficients of interest, i.e., main effects (for example, treatment effects), and the second sub-vector is for variables that may or may not need to be controlled. When estimating the first sub-vector, the best estimate may be obtained using either the full model that includes both sub-vectors, or the reduced model that leaves out the second sub-vector. It is demonstrated that shrinkage estimators which combine the two semiparametric estimators computed for the full model and the reduced model outperform the semiparametric estimator for the full model. Using the semiparametric estimate for the reduced model is best when the second sub-vector is the null vector, but this estimator suffers seriously from bias otherwise. The relative dominance picture of the suggested estimators is investigated; in particular, the suitability of estimating the nonparametric component with a B-spline basis is explored. Further, the performance of the proposed estimators is compared with an absolute penalty estimator through Monte Carlo simulation, with the lasso and adaptive lasso implemented for simultaneous model selection and parameter estimation. A real data example compares the proposed estimators with the lasso and adaptive lasso estimators.
4.
With the advent of DNA microarray technology, large numbers of gene expression profile datasets for different tumors have been published online, making tumor feature-gene selection and subtype classification a hot topic in bioinformatics. Building on the Lasso (least absolute shrinkage and selection operator), this paper proposes a K-split Lasso feature selection method. Its basic idea is to divide the dataset evenly into K parts, apply Lasso feature selection to each part separately, merge the selected feature subsets, and run feature selection again on the merged set to obtain the final feature genes. Experiments using a support vector machine as the classifier show that K-split Lasso removes redundant features, improves classification accuracy, and is stable. Because the dimensionality of each computation is reduced, K-split Lasso alleviates the excessive computational cost and, to some extent, the overfitting problem. K-split Lasso is therefore an effective method for selecting tumor feature genes.
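A minimal sketch of the two-stage K-split idea on synthetic data (the split count, `alpha`, and the simulated expression matrix are all assumptions; the SVM classification step from the paper is omitted):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, K = 100, 200, 4
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                                   # only 5 informative features
y = X @ beta + rng.normal(scale=0.1, size=n)

# Stage 1: run Lasso on each of K feature splits, keep nonzero features.
splits = np.array_split(np.arange(p), K)
cand = []
for s in splits:
    coef = Lasso(alpha=0.1).fit(X[:, s], y).coef_
    cand.extend(s[np.flatnonzero(coef)])
cand = np.array(sorted(cand))

# Stage 2: re-run Lasso on the merged candidate set for the final selection.
final = cand[np.flatnonzero(Lasso(alpha=0.1).fit(X[:, cand], y).coef_)]
print("selected features:", final)
```

Each stage-1 fit works in dimension p/K, which is where the computational saving the abstract mentions comes from.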
5.
As the transaction volume of P2P online lending grows, mining and analysis of P2P transaction data has attracted wide attention; one important research topic is identifying the factors that influence loan-funding success. Most existing studies use linear regression for this problem, but they do not account for multicollinearity among variables or build the regression on an optimal variable subset. This paper applies Lasso regression to build a regression model on an optimal variable subset and analyze the factors influencing loan-funding success, avoiding interference from multicollinearity while improving the model's goodness of fit. An empirical analysis of lending data from the Lending Club platform shows that the proposed method outperforms the comparison methods in both fit and collinearity avoidance.
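The modeling step can be illustrated with a toy example: a Lasso fit on hypothetical, partly collinear loan features (the feature names and data below are invented for illustration and are not Lending Club's schema):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n = 500
# Hypothetical standardized loan features.
amount = rng.normal(size=n)                         # loan amount
term = 0.95 * amount + 0.05 * rng.normal(size=n)    # nearly collinear with amount
rate = rng.normal(size=n)                           # interest rate
X = np.column_stack([amount, term, rate])
y = 1.0 * amount - 0.5 * rate + 0.1 * rng.normal(size=n)  # funding-success score

# The L1 penalty handles the collinear pair instead of inflating both coefficients.
model = Lasso(alpha=0.05).fit(X, y)
print("coefficients:", model.coef_.round(2), " R^2:", round(model.score(X, y), 3))
```

Ordinary least squares would split weight unstably between `amount` and `term`; the Lasso's penalty is what mitigates that multicollinearity problem.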
6.
Lacking prior information, the group Lasso model groups units uniformly, contiguously, and statically during training, based only on a group-count parameter; with no principled grouping criterion, this easily yields biased estimates of the group structure of the variables. To address this, a feature-clustering adaptive variable-group sparse autoencoder network is proposed: during training, feature clustering is used to regroup the hidden units at each iteration, so that the grouping adapts as the features converge and the group structure is estimated more accurately. Experiments show that the model captures the group-correlation information that emerges during training and improves image classification accuracy to a certain extent.
7.
Regularized Sparse Models   (cited: 17; self-citations: 0; citations by others: 17)
Regularized sparse models play an increasingly important role in machine learning, image processing, and other fields. They perform variable selection and can alleviate overfitting and related problems in modeling. The Lasso proposed by Tibshirani made regularized sparse models truly popular. This paper surveys the various regularized sparse models, explaining why each model was proposed, its advantages, the problems it is suited to, and its concrete formulation. Finally, future research directions for regularized sparse models are discussed.
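The variable-selection property the survey describes — an L1 penalty drives some coefficients exactly to zero, while an L2 (ridge) penalty only shrinks them — can be seen in a few lines (synthetic data; the `alpha` values are arbitrary):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 30))
beta = np.zeros(30)
beta[:3] = 1.5                                   # 3 informative, 27 noise features
y = X @ beta + rng.normal(scale=0.1, size=120)

lasso = Lasso(alpha=0.1).fit(X, y)               # L1 penalty: exact zeros
ridge = Ridge(alpha=1.0).fit(X, y)               # L2 penalty: shrinkage only
print("lasso zero coefficients:", int((lasso.coef_ == 0).sum()))
print("ridge zero coefficients:", int((ridge.coef_ == 0).sum()))
```

The exact zeros are what make the Lasso a variable-selection tool, the property around which the models surveyed here are built.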
8.
吴斌鑫, 刘美, 周正南, 莫常春, 吴猛, 张斐. 《机械与电子》, 2022, (9): 17-21
To address missing measurements in sensor networks caused by changing operating conditions, sensor faults, and similar factors, a double-regression imputation method based on Lasso regression and model correction is proposed. A sliding window over the raw data generates the dataset, from which entries are deleted at random. Taking the Lasso regression model as the baseline, ridge regression and Pearson correlation analysis are jointly applied to generate a dataset integrating the ridge predictions and correlations; these two columns are used as features of the Lasso model, which is corrected through double regression to impute the missing values. Using the Case Western Reserve University bearing data as an example, the proposed method is compared with two other imputation methods (KNN-based imputation and Lasso-regression imputation) at missing rates of 4%, 10%, and 20%, with root mean square error, model training time, and the coefficient of determination as evaluation metrics. The results show that the double-regression imputation method based on Lasso regression and model correction performs well and provides reliable base data for subsequent fault diagnosis.
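A simplified sketch of the double-regression idea — a ridge prediction fed back as an extra feature of a Lasso imputer — on synthetic sensor channels (the signals, missing-value handling, and hyperparameters are assumptions, not the paper's setup or the CWRU data):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 500)
s1 = np.sin(t) + 0.05 * rng.normal(size=t.size)        # target channel
s2 = np.sin(t + 0.1) + 0.05 * rng.normal(size=t.size)  # correlated channel
s3 = np.cos(t) + 0.05 * rng.normal(size=t.size)        # correlated channel

# Randomly delete 10% of the target channel.
miss = rng.choice(t.size, size=50, replace=False)
obs = np.setdiff1d(np.arange(t.size), miss)

# First regression: ridge prediction used as an extra feature (correction step).
F = np.column_stack([s2, s3])
ridge_pred = Ridge(alpha=1.0).fit(F[obs], s1[obs]).predict(F)

# Second regression: Lasso on the augmented feature set imputes the gaps.
F2 = np.column_stack([F, ridge_pred])
imputed = Lasso(alpha=1e-3).fit(F2[obs], s1[obs]).predict(F2)

rmse = np.sqrt(np.mean((imputed[miss] - s1[miss]) ** 2))
print("imputation RMSE:", round(rmse, 3))
```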
9.
Exploiting the sparsity of speech signals in the discrete cosine transform basis, a speech compression coding algorithm based on compressed sensing is proposed. At the encoder, a random Gaussian matrix directly observes the speech waveform, and uniform quantization is applied to the random measurements. At the decoder, the unsaturated measurements are used to reconstruct the speech signal via the Lasso algorithm. Simulation results show that the algorithm achieves good reconstruction performance.
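The decoder step can be sketched as Lasso recovery of DCT-sparse coefficients from Gaussian measurements (quantization is omitted; the frame length, sparsity pattern, and `alpha` are assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
N, M = 256, 100                       # frame length, number of measurements

# Orthonormal DCT-II basis (rows are basis vectors), built with NumPy.
k = np.arange(N)[:, None]
t = np.arange(N)[None, :]
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (t + 0.5) * k / N)
Psi[0] /= np.sqrt(2.0)

# A synthetic frame that is exactly sparse in the DCT basis.
s = np.zeros(N)
s[[3, 17, 40]] = [1.0, -0.7, 0.5]
x = Psi.T @ s

# Encoder: random Gaussian measurements of the waveform itself.
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ x

# Decoder: Lasso recovers the sparse DCT coefficients from y.
s_hat = Lasso(alpha=3e-4, fit_intercept=False, max_iter=20000).fit(Phi @ Psi.T, y).coef_
x_hat = Psi.T @ s_hat
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print("relative reconstruction error:", round(err, 3))
```

With M well below N, the waveform is still recovered accurately because the Lasso exploits the sparsity of `s`, which is the premise of the abstract's compressed-sensing scheme.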
10.
Variable selection is a topic of great importance in high-dimensional statistical modeling and has a wide range of real-world applications. Many variable selection techniques have been proposed in the context of linear regression, and the Lasso model is probably one of the most popular penalized regression techniques. In this paper, we propose a new, fully hierarchical, Bayesian version of the Lasso model by employing flexible sparsity promoting priors. To obtain the Bayesian Lasso estimate, a reversible-jump MCMC algorithm is developed for joint posterior inference over both discrete and continuous parameter spaces. Simulations demonstrate that the proposed RJ-MCMC-based Bayesian Lasso yields smaller estimation errors and more accurate sparsity pattern detection when compared with state-of-the-art optimization-based Lasso-type methods, a standard Gibbs sampler-based Bayesian Lasso and the Binomial-Gaussian prior model. To demonstrate the applicability and estimation stability of the proposed Bayesian Lasso, we examine a benchmark diabetes data set and real functional Magnetic Resonance Imaging data. As an extension of the proposed RJ-MCMC framework, we also develop an MCMC-based algorithm for the Binomial-Gaussian prior model and illustrate its improved performance over the non-Bayesian estimate via simulations.