1.
An Efficient Method To Estimate Bagging's Generalization Error   (Total citations: 3, self-citations: 0, citations by others: 3)
Bagging (Breiman, 1994a) is a technique that tries to improve a learning algorithm's performance by using bootstrap replicates of the training set (Efron & Tibshirani, 1993; Efron, 1979). The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of mn times, where m is the size of the training set and n is the number of replicates. This paper presents several techniques for estimating the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm (beyond that of the bagging itself), as is required by cross-validation-based estimation. These techniques all exploit the bias-variance decomposition (Geman, Bienenstock & Doursat, 1992; Wolpert, 1996). The best of our estimators also exploits stacking (Wolpert, 1992). In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging: more accurate estimation of the generalization error than is possible without bagging.
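The paper's own estimators rest on the bias-variance decomposition and on stacking; as a rough, generic illustration of the shared idea of reusing the bootstrap replicates themselves instead of running extra cross-validation folds, here is a minimal out-of-bag style error estimate. The toy 1-nearest-neighbour base learner, the synthetic data, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_1nn(X, y):
    """Trivial base learner (1-nearest-neighbour); kept only to make the sketch self-contained."""
    return X, y

def predict_1nn(model, X):
    Xtr, ytr = model
    d = ((X[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return ytr[d.argmin(axis=1)]

# Toy data: two Gaussian blobs with different means.
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, 2)), rng.normal(1.5, 1.0, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

B = 25                          # number of bootstrap replicates in the bag
votes = np.zeros((n, 2))        # out-of-bag class votes per training point
for _ in range(B):
    idx = rng.integers(0, n, n)              # bootstrap sample of the training set
    oob = np.setdiff1d(np.arange(n), idx)    # points this replicate never saw
    model = fit_1nn(X[idx], y[idx])
    votes[oob, predict_1nn(model, X[oob])] += 1

has_vote = votes.sum(axis=1) > 0
oob_pred = votes[has_vote].argmax(axis=1)
print("out-of-bag error estimate:", np.mean(oob_pred != y[has_vote]))
```

Each training point is scored only by the replicates whose bootstrap sample happened to omit it, so the estimate needs no training beyond the bagging itself.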
2.
Recent work on the foundational underpinnings of black-box optimization has begun to uncover a rich mathematical structure. In particular, it is now known that an inner product between the optimization algorithm and the distribution of optimization problems likely to be encountered fixes the distribution over likely performances in running that algorithm. One ramification of this is the "No Free Lunch" (NFL) theorems, which state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need to exploit problem-specific knowledge to achieve better than random performance. In this paper, we present a general framework covering most optimization scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multiarmed bandit problems and the evolution of multiple coevolving players. As a particular instance of the latter, it covers "self-play" problems, in which the set of players works together to produce a champion, who then engages one or more antagonists in a subsequent multiplayer game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms achieve better performance than others, averaged across all possible problems. However, in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.
3.
Ramsey theory is an active research area in combinatorics whose central theme is the emergence of order in large disordered structures, with Ramsey numbers marking the threshold at which this order first appears. For generalized Ramsey numbers r(G, H), the emergent order is characterized by graphs G and H. In this paper we: (i) present a quantum algorithm for computing generalized Ramsey numbers by reformulating the computation as a combinatorial optimization problem which is solved using adiabatic quantum optimization; and (ii) determine the Ramsey numbers \(r({\mathscr {T}}_{m},{\mathscr {T}}_{n})\) for trees of order \(m,n = 6,7,8\), most of which were previously unknown.
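To make the reformulation in (i) concrete: the classical Ramsey number r(s, t) is the smallest N for which every 2-colouring of the edges of K_N has positive cost, where the cost counts monochromatic cliques. The brute-force classical sketch below is my own toy illustration of that cost-function view (not the paper's adiabatic quantum algorithm, whose exact objective may differ); it verifies the well-known value r(3, 3) = 6.

```python
from itertools import combinations, product

def min_cost(N, s=3, t=3):
    """Minimum, over all 2-colourings of K_N's edges, of the number of colour-0 K_s
    plus colour-1 K_t. r(s, t) is the smallest N for which this minimum is positive."""
    edges = list(combinations(range(N), 2))
    best = None
    for colours in product((0, 1), repeat=len(edges)):
        c = dict(zip(edges, colours))
        cost = sum(all(c[e] == 0 for e in combinations(q, 2)) for q in combinations(range(N), s))
        cost += sum(all(c[e] == 1 for e in combinations(q, 2)) for q in combinations(range(N), t))
        best = cost if best is None else min(best, cost)
        if best == 0:
            break                        # a zero-cost colouring already shows N < r(s, t)
    return best

for N in (5, 6):
    print(N, min_cost(N))                # expect 0 for N = 5 and > 0 for N = 6, i.e. r(3, 3) = 6
```

An adiabatic formulation would encode such a cost function as a Hamiltonian whose ground-state energy is zero exactly when a monochromatic-clique-free colouring exists.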
4.
This note discusses the recent paper "Some technical remarks on the proof of the no free lunch theorem" by Koppen (2000). That paper raises some technical issues concerning the formal proof of the no free lunch (NFL) theorem for search given by Wolpert and Macready (1995, 1997). The present authors explore the issues raised there, including the presentation of a simpler version of the NFL proof, in accord with a suggestion made explicitly by Koppen (2000) and implicitly by Wolpert and Macready (1997). They also correct an incorrect claim made by Koppen (2000) about a limitation of the NFL theorem. Finally, some thoughts on future directions for research into algorithm performance are given.
5.
Explored the nature of the prerequisite relations among 5 types of text-based inference question tasks (reader-generated macrostructure questions, experimenter-generated macrostructure questions (unconstrained and constrained), text-connecting inference questions, and within-sentence inference questions) for 124 good and poor 6th-grade readers. Latent class modeling procedures were used in exploratory and confirmatory analyses of the prerequisite relations among tasks. All task types were found to provide separate manifestations of the same underlying attribute for both good and poor readers. Particular inference tasks appeared to be better than others in assessing this underlying trait (in terms of the rate of individual misclassification with respect to trait acquisition). (53 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
6.
Adiabatic quantum optimization offers a new method for solving hard optimization problems. In this paper we calculate median adiabatic times (in seconds), determined by the minimum gap during the adiabatic quantum optimization, for an NP-hard Ising spin glass instance class with up to 128 binary variables. Using parameters obtained from a realistic superconducting adiabatic quantum processor, we extract the minimum gap and matrix elements using high-performance quantum Monte Carlo simulations on a large-scale Internet-based computing platform. We compare the median adiabatic times with the median running times of two classical solvers and find that, for the considered problem sizes, the adiabatic times for the simulated processor architecture are about 4 and 6 orders of magnitude shorter than the two classical solvers' times. This shows that if the adiabatic time scale were to determine the computation time, adiabatic quantum optimization would be significantly superior to those classical solvers for median spin glass problems of at least up to 128 qubits. We also discuss important additional constraints that affect the performance of a realistic system.
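For context on why the minimum gap and the matrix elements are the quantities extracted, the textbook adiabatic-theorem estimate (a standard heuristic, not a formula quoted from this paper) bounds the required run time T roughly as

```latex
\[
T \;\gtrsim\; \frac{\max_{s\in[0,1]} \bigl|\langle E_1(s)\,|\,\partial_s H(s)\,|\,E_0(s)\rangle\bigr|}{g_{\min}^{2}},
\qquad
g_{\min} \;=\; \min_{s\in[0,1]} \bigl(E_1(s) - E_0(s)\bigr),
\]
```

where H(s) is the interpolating Hamiltonian, |E_0(s)⟩ and |E_1(s)⟩ are its instantaneous ground and first excited states, and g_min is the minimum spectral gap along the evolution.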
7.
Bandit problems and the exploration/exploitation tradeoff   (Total citations: 1, self-citations: 0, citations by others: 1)
We explore the two-armed bandit with Gaussian payoffs as a theoretical model for optimization. The problem is formulated from a Bayesian perspective, and the optimal strategy for both one and two pulls is provided. We present regions of parameter space where a greedy strategy is provably optimal. We also compare the greedy and optimal strategies to one based on a genetic algorithm. In doing so, we correct a previous error in the literature concerning the Gaussian bandit problem and the supposed optimality of genetic algorithms for this problem. Finally, we provide an analytically simple bandit model that is more directly applicable to optimization theory than the traditional bandit problem, and determine a near-optimal strategy for that model.
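As a minimal illustration of the greedy strategy discussed above, the sketch below always pulls the arm with the higher current posterior mean. It is my own simplified version (known unit observation variance, independent conjugate normal priors on the arm means, arbitrary parameter values), not the paper's exact Bayesian-optimal policy.

```python
import random

class GaussianArmPosterior:
    """Posterior over an arm's mean payoff, assuming unit observation variance
    and a N(mu0, 1/tau0) prior (conjugate normal-normal update)."""
    def __init__(self, mu0=0.0, tau0=1.0):
        self.mu, self.tau = mu0, tau0          # posterior mean and precision
    def update(self, reward):
        self.mu = (self.tau * self.mu + reward) / (self.tau + 1.0)
        self.tau += 1.0

def greedy_two_armed(true_means, pulls, seed=0):
    rng = random.Random(seed)
    post = [GaussianArmPosterior(), GaussianArmPosterior()]
    total = 0.0
    for _ in range(pulls):
        arm = 0 if post[0].mu >= post[1].mu else 1   # greedy: highest posterior mean
        reward = rng.gauss(true_means[arm], 1.0)
        post[arm].update(reward)
        total += reward
    return total

print(greedy_two_armed(true_means=(0.0, 0.5), pulls=100))
```

The greedy rule never explores deliberately, which is exactly why it is only optimal in certain regions of parameter space.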
8.
No free lunch theorems for optimization   (Total citations: 46, self-citations: 0, citations by others: 46)
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
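As a toy numerical illustration of the NFL statement (my own sketch, not code from the paper): enumerate every function from a four-point search space to a small value set and compare two deterministic, non-revisiting search strategies, one a fixed sweep and one that adapts to the values it has seen. Summed over all functions, their best-value-found-after-k-evaluations performance is identical for every k.

```python
from itertools import product

X = range(4)       # a four-point search space
Y = (0, 1, 2)      # possible objective values; lower is better

def sweep_order(f):
    """Non-adaptive algorithm: visit the points left to right."""
    return [0, 1, 2, 3]

def adaptive_order(f):
    """Adaptive algorithm: the next point depends on the values seen so far."""
    order, remaining = [0], [1, 2, 3]
    while remaining:
        # after a low value, try the nearest unvisited point; otherwise jump to the farthest
        nxt = remaining[0] if f[order[-1]] == 0 else remaining[-1]
        remaining.remove(nxt)
        order.append(nxt)
    return order

for k in range(1, 5):
    totals = {"sweep": 0, "adaptive": 0}
    for f in product(Y, repeat=len(X)):           # every possible objective function
        totals["sweep"] += min(f[x] for x in sweep_order(f)[:k])
        totals["adaptive"] += min(f[x] for x in adaptive_order(f)[:k])
    print(k, totals)                              # the two totals match for every k
```

Any performance measure that depends only on the sequence of sampled values gives the same agreement when averaged over all functions.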
9.
10.
Respondents' perceived protection when using randomized response.   (Total citations: 1, self-citations: 0, citations by others: 1)
Assessed respondents' self-perceived protection via randomness and their willingness to cooperate in the unrelated-question randomized response technique; 480 undergraduates participated. The 2 × 4 factorial design included 2 strategies for constructing the innocuous questions and 4 levels for the probability (p) of selecting the sensitive item. Willingness to apply randomized response was not significantly associated with either of the factors under study. The perceived protection, however, was associated with the probability of selecting the sensitive item. One recommendation proceeding from post hoc tests is that p be restricted to .70 ≤ p ≤ .85. (13 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
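For readers unfamiliar with the technique being evaluated: in the unrelated-question design, each respondent answers the sensitive question with probability p and an innocuous question otherwise, so the interviewer never knows which question was answered. The sketch below simulates the standard moment estimator under the common assumption that the innocuous "yes" rate is known; it is my own illustration with arbitrary parameter values, not data or code from the study.

```python
import random

def simulate_rrt(n, p, pi_sensitive, pi_innocuous, seed=0):
    """Simulate the unrelated-question randomized response technique and
    recover an estimate of the sensitive-trait proportion pi_sensitive."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        if rng.random() < p:
            yes += rng.random() < pi_sensitive    # truthful answer to the sensitive item
        else:
            yes += rng.random() < pi_innocuous    # answer to the unrelated, innocuous item
    lam_hat = yes / n
    # Moment estimator from lambda = p * pi_S + (1 - p) * pi_U, with pi_U known.
    return (lam_hat - (1 - p) * pi_innocuous) / p

print(simulate_rrt(n=480, p=0.70, pi_sensitive=0.20, pi_innocuous=0.50))
```

Larger p makes the estimator more precise but, as the study's protection findings suggest, makes respondents feel less shielded by the randomization.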