4 results found (query time: 15 ms)
1.
A Limited-Memory Multi-LRU Web Cache Replacement Algorithm   Cited by: 3 (self-citations: 0, citations by others: 3)
The core of Web caching is the replacement algorithm for cached content. For dynamic, uncertain network environments, this paper proposes a limited-memory multi-LRU (LH-MLRU) Web cache replacement algorithm, a low-overhead, high-performance, adaptive scheme. LH-MLRU weighs multiple factors and manages Web objects by class across several LRU queues, introducing each object's recent access history as a key factor in replacement decisions in order to predict the probability that the object will be accessed again. Periodically retrained parameters allow the algorithm to adapt to dynamic, uncertain network conditions. Trace-driven simulations show that LH-MLRU outperforms the other algorithms on all performance metrics and significantly improves Web cache performance.
2.
Geometric iterative methods are widely used in computer-aided geometric design (CAGD). To improve the convergence rate and accuracy of traditional B-spline curve interpolation under geometric iteration, this paper proposes a geometric iterative method based on many-knot spline smoothing functions. A many-knot spline smoothing function is introduced, and the smoothing and geometric-iteration steps are combined during curve fitting; after smoothing and iterating, a curve-fitting method of high approximation quality is constructed at the optimum found by the L-BFGS algorithm. Experimental results show that, at the same accuracy, the method both reduces the number of iterations and speeds up the iteration. It can be applied to the exterior design of aircraft and automobiles, to the shape reconstruction of cultural relics and buildings, and to satellite image processing.
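The core geometric-iteration step (progressive-iterative approximation, PIA) simply adds the current fitting residual back onto the control points. The function name and the small basis matrix in the test are generic illustrations of that step, not the paper's many-knot smoothing construction:

```python
def pia_fit(B, Q, iters=100):
    """Progressive-iterative approximation: repeatedly add the fitting
    residual to the control points.  B[i][j] is basis function j
    evaluated at the parameter of data point i; the iteration converges
    when the spectral radius of (I - B) is below 1, which holds for the
    normalized, totally positive bases used in CAGD."""
    n = len(Q)
    P = list(Q)                              # start control points at the data
    for _ in range(iters):
        # evaluate the current curve at the data parameters
        curve = [sum(B[i][j] * P[j] for j in range(n)) for i in range(n)]
        # move each control point by the residual at its data point
        P = [P[j] + (Q[j] - curve[j]) for j in range(n)]
    return P
```

At the fixed point the residual vanishes, so the returned control points interpolate the data exactly; the paper's contribution lies in accelerating this convergence via smoothing and an L-BFGS optimum.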
3.
Machine learning (ML) problems are often posed as highly nonlinear and nonconvex unconstrained optimization problems. Methods based on stochastic gradient descent scale easily to very large problems but may involve fine-tuning many hyper-parameters. Quasi-Newton approaches based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) update typically do not require manual hyper-parameter tuning, but suffer from approximating a potentially indefinite Hessian with a positive-definite matrix. Hessian-free methods leverage the ability to perform Hessian-vector products without forming the entire Hessian matrix, but the complexity of each iteration is significantly greater than that of quasi-Newton methods. In this paper we propose an alternative approach for solving ML problems, based on a quasi-Newton trust-region framework for large-scale optimization that allows indefinite Hessian approximations. Numerical experiments on a standard testing data set show that, within a fixed computational time budget, the proposed methods achieve better results than traditional limited-memory BFGS and Hessian-free methods.
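One way a trust-region subproblem can tolerate an indefinite Hessian approximation is via the Cauchy point, which remains well defined under negative curvature. This is a generic textbook construction, not the specific method proposed in the paper:

```python
def cauchy_point(g, B, delta):
    """Cauchy point of the trust-region subproblem
        min_p  g.p + 0.5 p.B.p   subject to  ||p|| <= delta,
    i.e. the minimizer of the quadratic model along -g inside the
    trust region.  Valid even when B is indefinite: with negative
    curvature along g, the model decreases all the way to the boundary."""
    n = len(g)
    gnorm = sum(x * x for x in g) ** 0.5
    if gnorm == 0.0:
        return [0.0] * n                     # already stationary
    Bg = [sum(B[i][j] * g[j] for j in range(n)) for i in range(n)]
    gBg = sum(g[i] * Bg[i] for i in range(n))
    if gBg <= 0.0:
        tau = 1.0                            # negative curvature: step to boundary
    else:
        tau = min(1.0, gnorm ** 3 / (delta * gBg))
    return [-tau * (delta / gnorm) * x for x in g]
```

A full trust-region method would compare the actual and predicted reduction at this step and grow or shrink `delta` accordingly; the point here is only that the step is well defined with no positive-definiteness assumption on `B`.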
4.
This paper uses two simple variational data assimilation problems with the 1D viscous Burgers' equation on a periodic domain to investigate the impact of various diagonal-preconditioner update and scaling strategies, both on the limited-memory BFGS (Broyden, Fletcher, Goldfarb and Shanno) inverse Hessian approximation and on the minimization performance. These simple problems share some characteristics with the large-scale variational data assimilation problems commonly dealt with in meteorology and oceanography. The update formulae studied are those proposed by Gilbert and Lemaréchal (Math. Prog., vol. 45, pp. 407–435, 1989) and the quasi-Cauchy formula of Zhu et al. (SIAM J. Optim., vol. 9, pp. 1192–1204, 1999). The first question considered is which information should be used to update the diagonal preconditioner: the pair about to be forgotten, or the most recent one. Then, following the former authors, a scaling of the diagonal preconditioner is introduced for the corresponding formulae in order to improve the minimization performance. The large negative impact of such a scaling on the quality of the L-BFGS inverse Hessian approximation led us to propose an alternative updating and scaling strategy that provides a good inverse Hessian approximation and gives the best minimization performance on the problems considered. With this approach the quality of the inverse Hessian approximation improves steadily during the minimization process. Moreover, both this quality and the L-BFGS minimization performance improve as the amount of stored information is increased.
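The role of the diagonal preconditioner is easiest to see in the standard L-BFGS two-loop recursion, where it enters as the initial inverse-Hessian approximation H0 applied between the two loops. This is a minimal generic sketch, not the specific update or scaling formulae studied in the paper:

```python
def lbfgs_direction(g, s_list, y_list, h0_diag):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian
    approximation to the gradient g and return the search direction
    -H g.  s_list/y_list hold the stored step and gradient-difference
    pairs (oldest first); h0_diag is the diagonal preconditioner H0
    that update/scaling strategies modify between iterations."""
    q = list(g)
    rhos = [1.0 / sum(si * yi for si, yi in zip(s, y))
            for s, y in zip(s_list, y_list)]
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    # the diagonal preconditioner H0 is applied here
    r = [d * qi for d, qi in zip(h0_diag, q)]
    # second loop: oldest pair to newest
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]
```

Because every stored pair is filtered through H0, changing how the diagonal is updated or scaled changes both the quality of the implicit inverse-Hessian approximation and the resulting minimization behaviour, which is exactly the trade-off the paper examines.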

Copyright©北京勤云科技发展有限公司  京ICP备09084417号