Search results: 24 articles in total (24 subscription full-text, 0 free).
By subject: Mechanical Engineering and Instrumentation (1), Light Industry (4), Radio Electronics (6), Automation Technology (13).
By year: 2013 (4), 2011 (1), 2007 (2), 2005 (1), 2004 (1), 2003 (1), 2002 (1), 2000 (2), 1999 (1), 1997 (1), 1996 (1), 1995 (1), 1994 (3), 1992 (1), 1991 (1), 1989 (1), 1988 (1).
1.
A neural network approach to job-shop scheduling   (Total citations: 6; self-citations: 0; citations by others: 6)
A novel analog computational network is presented for solving NP-complete constraint-satisfaction problems such as job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on quadratic energy cost functions, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of Y.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.
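As a toy illustration of the linear-cost idea (not the authors' analog network, which also encodes machine-conflict constraints), the sketch below assigns one continuous variable per operation, uses a linear objective (sum of job completion times) plus linear hinge penalties for precedence violations, and relaxes the system by projected subgradient descent. The instance and all parameters are hypothetical.

```python
import numpy as np

# Toy sketch, not the authors' analog network: one "neuron" per operation holds a
# start time; the cost is linear (sum of job completion times) and precedence
# violations are penalized by linear hinge terms, so each update stays simple.
# Machine-conflict constraints, which the paper's network also encodes, are omitted.

jobs = [            # hypothetical instance: each job is a chain of operation durations
    [3.0, 2.0, 4.0],
    [2.0, 5.0, 1.0],
]

durations, precedence, job_last = [], [], []
for ops in jobs:
    idx = [len(durations) + k for k in range(len(ops))]
    durations.extend(ops)
    precedence.extend(zip(idx[:-1], idx[1:]))   # operation i must finish before operation i+1
    job_last.append(idx[-1])
durations = np.array(durations)

t = np.zeros(len(durations))    # start times, one per operation ("neuron")
lam, lr = 10.0, 0.01            # penalty weight and step size

for _ in range(20000):
    grad = np.zeros_like(t)
    grad[job_last] += 1.0                         # linear cost: sum of completion times
    for i, j in precedence:                       # hinge penalty lam * max(0, t_i + d_i - t_j)
        if t[i] + durations[i] > t[j]:
            grad[i] += lam
            grad[j] -= lam
    t = np.maximum(t - lr * grad, 0.0)            # projected (sub)gradient step

print("start times         :", np.round(t, 2))
print("job completion times:", np.round(t[job_last] + durations[job_last], 2))
```

Because both the cost and the penalties are linear, the work per iteration grows only linearly with the number of operations, which mirrors the abstract's point about network complexity.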
2.
Part 2 of the paper presents a neural network meta-model for one-dimensional fibrous materials, elaborated on the basis of the discrete-event simulation model presented in Part 1. The architectures of the full-size and part-size networks were developed and tested. The training set for the neural networks was obtained from the simulation model. The results of the sensitivity analysis of the neural networks showed that the standard deviations of the fiber length and the fineness have no effect on the irregularity of the fibrous material. Testing has shown a high level of coincidence between the simulation results and the predictions of the neural network meta-model.
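A minimal sketch of the meta-modeling idea follows: a small feed-forward network is trained on input/output pairs produced by a simulation and then serves as a fast surrogate. The feature names and the stand-in "simulator" below are hypothetical (in the paper the training set comes from the discrete-event model of Part 1), and scikit-learn's MLPRegressor is used purely for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(params):
    """Hypothetical stand-in for the discrete-event simulation of Part 1: maps
    (mean fiber length, its std, mean fineness, its std) to an irregularity value.
    In the paper the training set comes from the actual simulation model."""
    mean_len, std_len, mean_fin, std_fin = params.T
    return 1.0 / np.sqrt(mean_len * mean_fin) + 0.01 * rng.normal(size=len(params))

# Generate a training set by running the "simulation" over a range of parameters.
X = rng.uniform([20.0, 1.0, 0.1, 0.01], [60.0, 10.0, 0.5, 0.10], size=(2000, 4))
y = simulate(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The neural-network meta-model: a small feed-forward net fitted to the
# simulation's input/output pairs, then used as a fast surrogate.
meta = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
meta.fit(X_train, y_train)
print("surrogate R^2 on held-out simulations:", round(meta.score(X_test, y_test), 3))
```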
3.
This paper presents a model of the roll-drafting process based on the concept of discrete-event simulation (DES). The model is free of the limitations and simplifications inherent in known models of the roll-drafting process and is able to trace each individual fiber within the roll-drafting zone. Owing to this feature, the DES model enables the investigation of a wide range of roll-drafting cases and representations. The influence of the parameters of the basic roll-drafting model was studied. Models of the first and second limit schemes, models with a randomly distributed velocity change point, and a model with a shifting velocity change point were elaborated and studied on the basis of the basic model of the roll-drafting process. The effect of the feedback depth was simulated and analyzed within the framework of the model with the shifting velocity change point. The elaborated model and the obtained results will be useful for creating a neural network meta-model of the roll-drafting process.
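The sketch below is a toy kinematic illustration of the velocity-change-point idea rather than the authors' DES model: each fiber's leading end moves at the back-roller speed until it crosses a change point and then at the front-roller speed, so a fixed change point reproduces the input spacing scaled by the draft ratio, while a randomly distributed change point adds irregularity. All speeds, lengths, and distributions are hypothetical.

```python
import numpy as np

# Toy kinematic sketch of the velocity-change-point idea (not the authors' DES model):
# a fiber's leading end moves at the back-roller speed v_b until it crosses the change
# point, then at the front-roller speed v_f, so the nominal draft is v_f / v_b.
rng = np.random.default_rng(1)

v_b, v_f, L = 1.0, 8.0, 40.0           # roller speeds (mm/ms) and drafting-zone length (mm)
n = 20000
entry_times = np.arange(n) * 1.0        # idealized, perfectly even feed of leading ends

def output_positions(change_points):
    # time at which each fiber's leading end reaches the front roller (position L)
    t_exit = entry_times + change_points / v_b + (L - change_points) / v_f
    # leading-end position in the delivered strand at a common reference time
    return v_f * (t_exit.max() - t_exit)

def spacing_cv(positions):
    gaps = np.diff(np.sort(positions))
    return gaps.std() / gaps.mean()

fixed = output_positions(np.full(n, 0.75 * L))                  # fixed change point
random_cp = output_positions(rng.uniform(0.5 * L, L, size=n))   # random change point per fiber

print("fixed change point  -> output spacing CV:", round(spacing_cv(fixed), 3))
print("random change point -> output spacing CV:", round(spacing_cv(random_cp), 3))
```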
4.
The gradual appearance of high-power terahertz sources requires the development of adequate imaging techniques. This paper describes four imaging techniques (a thermal recorder, temperature-sensitive phosphor plates, a thermosensitive visible-light Fizeau interferometer, and an uncooled microbolometer array) applied with the Novosibirsk terahertz free electron laser as the radiation source. The spatial and temporal resolutions of the devices were examined thoroughly. Examples of the application of these techniques, including in-line holography and real-time detection of moving objects, are given.
5.
A simple yet effective method for improving the reliability of multicomputer (multiprocessor) systems via redundant allocation of tasks to computers (processors) is described. Given any known (nonredundant) scheduling strategy, tasks are allocated to processors statically and redundantly using a k-circular-shifting (KCS) algorithm, so that if some processors fail during execution, all tasks can still be completed on the remaining processors (though in a longer time). Redundant allocation of independent tasks to identical processors (computers), subject to real-time constraints on total execution time, is discussed in detail, and analytic reliability estimates are derived. Longest-processing-time scheduling is given as an example of nonredundant deterministic scheduling of independent tasks. Processor utilization under redundant task allocation is discussed and compared with standby redundancy: the authors' KCS algorithm achieves much higher processor utilization than standby redundancy.
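The sketch below gives my reading of the scheme from the abstract: longest-processing-time (LPT) scheduling builds the nonredundant baseline, the KCS step additionally stores on each processor the primary task lists of its k circular predecessors, and after at most k failures each failed processor's tasks are executed by its nearest surviving circular successor. Function names and the recovery rule are assumptions, not the paper's exact formulation.

```python
from typing import Dict, List, Sequence

def lpt_schedule(task_times: Sequence[float], m: int) -> List[List[int]]:
    """Nonredundant baseline (the abstract's example): longest-processing-time-first.
    Each task, longest first, goes to the currently least-loaded of m processors."""
    loads = [0.0] * m
    assignment: List[List[int]] = [[] for _ in range(m)]
    for task in sorted(range(len(task_times)), key=lambda i: -task_times[i]):
        p = min(range(m), key=lambda q: loads[q])
        assignment[p].append(task)
        loads[p] += task_times[task]
    return assignment

def kcs_store(primary: List[List[int]], k: int) -> List[List[int]]:
    """k-circular-shifting sketch (my reading of the abstract): besides its own primary
    tasks, processor p statically stores copies of the primary task lists of its k
    circular predecessors, so up to k processor failures can be tolerated."""
    m = len(primary)
    return [sum((primary[(p - s) % m] for s in range(k + 1)), []) for p in range(m)]

def after_failures(primary: List[List[int]], k: int, failed: set) -> Dict[int, List[int]]:
    """Recovery rule (assumed): a failed processor's primary tasks are run by its
    nearest surviving circular successor within distance k, which holds a copy."""
    m = len(primary)
    work = {p: list(primary[p]) for p in range(m) if p not in failed}
    for q in failed:
        for s in range(1, k + 1):
            succ = (q + s) % m
            if succ not in failed:
                work[succ].extend(primary[q])
                break
        else:
            raise RuntimeError(f"no surviving copy of processor {q}'s tasks within k={k}")
    return work

# Hypothetical example: 10 independent tasks, 4 processors, tolerate k = 2 failures.
times = [7, 5, 5, 4, 4, 3, 3, 2, 2, 1]
primary = lpt_schedule(times, m=4)
print("primary assignment:", primary)
print("stored (with KCS copies):", kcs_store(primary, k=2))
print("workloads after processors {1, 2} fail:", after_failures(primary, k=2, failed={1, 2}))
```

With at most k failed processors, each failed processor has at least one surviving successor within distance k, so every task remains executable; the surviving processors simply carry a larger load, which matches the abstract's "longer time" remark.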
6.
Comparison of adaptive methods for function estimation from samples   (Total citations: 2; self-citations: 0; citations by others: 2)
The problem of estimating an unknown function from a finite number of noisy data points is of fundamental importance for many applications. It has been studied in statistics, applied mathematics, engineering, artificial intelligence, and, more recently, in the fields of artificial neural networks, fuzzy systems, and genetic optimization. In spite of many papers describing individual methods, very little is known about the comparative predictive (generalization) performance of the various methods. We discuss subjective and objective factors contributing to the difficult problem of meaningful comparisons. We also describe a pragmatic framework for comparisons between various methods and present a detailed comparison study comprising several thousand individual experiments. Our approach to comparisons is biased toward general (nonexpert) users. Our study uses six representative methods described using a common taxonomy. Comparisons performed on artificial data sets provide some insight into the applicability of various methods. No single method proved to be the best, since a method's performance depends significantly on the type of the target function and on the properties of the training data.
7.
Multiple model regression estimation   (Total citations: 2; self-citations: 0; citations by others: 2)
This paper presents a new learning formulation for multiple model estimation (MME). Under this formulation, training data samples are generated by several (unknown) statistical models, so most existing learning methods (for classification or regression) based on a single-model formulation are no longer applicable. We describe a general framework for MME and then introduce a constructive support vector machine (SVM)-based methodology for multiple regression estimation. Several empirical comparisons using synthetic and real-life data sets are presented to illustrate the proposed approach to the multiple model regression formulation.
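The abstract does not spell out the constructive procedure, so the following is only one plausible SVM-based sketch of the idea: fit a robust epsilon-SVR to all data (with a dominant model, the epsilon-insensitive loss tends toward it), assign to the first model the samples it explains within a residual threshold, and refit on the remainder to recover a second model. The thresholds, kernel choice, and two-line data generator are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Synthetic data from two statistical models unknown to the learner, as in the MME setting.
n = 300
x = rng.uniform(0, 1, size=(n, 1))
from_first = rng.random(n) < 0.6
y = np.where(from_first, 2.0 * x[:, 0] + 0.2, -1.5 * x[:, 0] + 1.5) + 0.05 * rng.normal(size=n)

def peel_off_model(x, y, eps=0.1):
    """Fit a robust eps-SVR to the remaining data, then keep the samples it explains
    (residual within 2*eps) as the current model's support; return the fit and a mask."""
    svr = SVR(kernel="linear", C=10.0, epsilon=eps).fit(x, y)
    mask = np.abs(y - svr.predict(x)) <= 2 * eps
    return svr, mask

remaining_x, remaining_y, models = x, y, []
for _ in range(2):                      # constructively recover two models
    svr, mask = peel_off_model(remaining_x, remaining_y)
    models.append(svr)
    remaining_x, remaining_y = remaining_x[~mask], remaining_y[~mask]

for i, m in enumerate(models, 1):
    coef, intercept = m.coef_.ravel()[0], m.intercept_[0]
    print(f"model {i}: y ~ {coef:.2f} * x + {intercept:.2f}")
```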
8.
Model complexity control for regression using VC generalization bounds   (Total citations: 8; self-citations: 0; citations by others: 8)
It is well known that for a given sample size there exists a model of optimal complexity corresponding to the smallest prediction (generalization) error. Hence, any method for learning from finite samples needs some provision for complexity control. Existing implementations of complexity control include penalization (or regularization), weight decay (in neural networks), and various greedy procedures (also known as constructive, growing, or pruning methods). There are numerous proposals for determining optimal model complexity (model selection) based on various (asymptotic) analytic estimates of the prediction risk and on resampling approaches. Nonasymptotic bounds on the prediction risk based on Vapnik-Chervonenkis (VC) theory have been proposed by Vapnik. This paper describes the application of VC bounds to regression problems with the usual squared loss. An empirical study is performed for settings where the VC bounds can be rigorously applied, i.e., linear models and penalized linear models, for which the VC-dimension can be accurately estimated and the empirical risk can be reliably minimized. Empirical comparisons between model selection using VC bounds and classical methods are performed for various noise levels, sample sizes, target functions, and types of approximating functions. Our results demonstrate the advantages of VC-based complexity control with finite samples.
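For concreteness, the sketch below selects a polynomial degree by minimizing the empirical squared loss inflated by a penalization factor of the form 1 / (1 - sqrt(p - p*ln(p) + ln(n)/(2n))) with p = h/n, treated as infinite when the denominator is nonpositive. This is the practical form I associate with Cherkassky and co-workers' treatment of VC bounds for regression; verify the exact expression against the paper. The data set and candidate models are hypothetical.

```python
import numpy as np

def vc_penalized_risk(emp_risk, h, n):
    """Estimated prediction risk for regression with squared loss: the empirical risk
    divided by (1 - sqrt(p - p*ln(p) + ln(n)/(2n)))_+ with p = h/n. This is the
    practical penalization factor I associate with Cherkassky and co-workers;
    verify the exact expression against the paper before relying on it."""
    p = h / n
    denom = 1.0 - np.sqrt(p - p * np.log(p) + np.log(n) / (2 * n))
    return np.inf if denom <= 0 else emp_risk / denom

# Hypothetical example: choose a polynomial degree, treating the VC-dimension of a
# linear-in-parameters model as its number of free parameters.
rng = np.random.default_rng(3)
n = 50
x = rng.uniform(-1, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)

for degree in range(1, 9):
    coeffs = np.polyfit(x, y, degree)
    emp = float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    est = vc_penalized_risk(emp, h=degree + 1, n=n)
    print(f"degree {degree}: empirical risk {emp:.3f}, VC-penalized risk {est:.3f}")
```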
9.
Comparison of model selection for regression   (Total citations: 10; self-citations: 0; citations by others: 10)
Cherkassky V, Ma Y. Neural Computation, 2003, 15(7): 1691-1714.
We discuss empirical comparison of analytical methods for model selection. Currently, there is no consensus on the best method for finite-sample estimation problems, even for the simple case of linear estimators. This article presents empirical comparisons between classical statistical methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), and the structural risk minimization (SRM) method based on Vapnik-Chervonenkis (VC) theory, for regression problems. Our study is motivated by empirical comparisons in Hastie, Tibshirani, and Friedman (2001), which claim that the SRM method performs poorly for model selection and suggest that AIC yields superior predictive performance. Hence, we present empirical comparisons for various data sets and different types of estimators (linear, subset selection, and k-nearest-neighbor regression). Our results demonstrate the practical advantages of VC-based model selection: it consistently outperforms AIC for all data sets. In our study, the SRM and BIC methods show similar predictive performance. This discrepancy (between empirical results obtained using the same data) is caused by methodological drawbacks in Hastie et al. (2001), especially their loose interpretation and application of the SRM method. Hence, we discuss methodological issues important for meaningful comparisons and for practical application of the SRM method. We also point out the importance of accurate estimation of model complexity (VC-dimension) for empirical comparisons and propose a new practical estimate of model complexity for k-nearest-neighbor regression.
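As an illustration of the kind of comparison described, the sketch below selects a polynomial degree with AIC, BIC, and an SRM/VC criterion on the same data. The AIC and BIC expressions are standard textbook forms for Gaussian noise, and the SRM criterion reuses the VC penalization factor from the sketch under entry 8; they may differ in detail from the exact variants used in the article, and the data set is hypothetical.

```python
import numpy as np

def aic(rss, d, n):
    """Standard AIC for Gaussian errors (up to additive constants): n*ln(RSS/n) + 2d."""
    return n * np.log(rss / n) + 2 * d

def bic(rss, d, n):
    """Standard BIC: n*ln(RSS/n) + d*ln(n)."""
    return n * np.log(rss / n) + d * np.log(n)

def srm(rss, d, n):
    """SRM/VC criterion: empirical risk inflated by the VC penalization factor,
    taking the VC-dimension h = d for a linear estimator with d parameters."""
    p = d / n
    denom = 1.0 - np.sqrt(p - p * np.log(p) + np.log(n) / (2 * n))
    return np.inf if denom <= 0 else (rss / n) / denom

# Hypothetical model-selection run: pick a polynomial degree with each criterion.
rng = np.random.default_rng(4)
n = 40
x = rng.uniform(-1, 1, n)
y = x ** 3 - 0.5 * x + 0.2 * rng.normal(size=n)

scores = {"AIC": {}, "BIC": {}, "SRM": {}}
for degree in range(1, 8):
    rss = float(np.sum((np.polyval(np.polyfit(x, y, degree), x) - y) ** 2))
    d = degree + 1
    scores["AIC"][degree] = aic(rss, d, n)
    scores["BIC"][degree] = bic(rss, d, n)
    scores["SRM"][degree] = srm(rss, d, n)

for name, table in scores.items():
    print(f"{name} selects degree {min(table, key=table.get)}")
```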
10.
Measuring the VC-dimension using optimized experimental design   (Total citations: 1; self-citations: 0; citations by others: 1)
Shao X, Cherkassky V, Li W. Neural Computation, 2000, 12(8): 1969-1986.
The VC-dimension is the measure of model complexity (capacity) used in VC theory. Knowledge of the VC-dimension of an estimator is necessary for rigorous complexity control using analytic VC generalization bounds. Unfortunately, analytic estimates of the VC-dimension cannot be obtained in most cases. Hence, a recent proposal is to measure the VC-dimension of an estimator experimentally, by fitting a theoretical formula to a set of experimental measurements of the frequency of errors on artificially generated data sets of varying sizes (Vapnik, Levin, & Le Cun, 1994). However, it may be difficult to obtain an accurate estimate of the VC-dimension because of the variability of random samples in the experimental procedure proposed by Vapnik et al. (1994). We address this problem by proposing an improved design procedure for specifying the measurement points (i.e., the sample sizes and the number of repeated experiments at each sample size). Our approach leads to a nonuniform design structure, as opposed to the uniform design structure used in the original article. Our simulation results show that the proposed optimized design leads to a more accurate estimation of the VC-dimension by the experimental procedure. The results also show that a more accurate estimate of the VC-dimension leads to improved complexity control using analytic VC generalization bounds and, hence, better prediction accuracy.
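The sketch below illustrates my reading of the underlying experimental procedure of Vapnik, Levin, and Le Cun (1994), here with a uniform (i.e., unoptimized) measurement design: at each sample size, random data with random labels are split into two halves, the labels of the second half are flipped before training, and the deviation between the two halves' error rates is recorded; the effective VC-dimension is then obtained by least-squares fitting a theoretical bound to these measurements. The bound's constants are values I recall from the literature and should be verified against the original article; the classifier and design points are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
D = 10                      # input dimension; a linear classifier has h around D + 1

def max_deviation(n, trials=10):
    """One measurement point of the experimental procedure (as I read it): draw 2n
    random points with random labels, flip the labels of the second half, train on
    the combined set, and record the deviation between the two halves' error rates
    (with their original labels), averaged over repeated trials."""
    devs = []
    for _ in range(trials):
        X = rng.normal(size=(2 * n, D))
        y = rng.integers(0, 2, size=2 * n)
        y_train = y.copy()
        y_train[n:] = 1 - y_train[n:]                  # flip labels of the second half
        clf = LogisticRegression(C=100.0, max_iter=2000).fit(X, y_train)
        pred = clf.predict(X)
        devs.append(np.mean(pred[n:] != y[n:]) - np.mean(pred[:n] != y[:n]))
    return float(np.mean(devs))

def phi(tau, a=0.16, b=1.2, k=0.14928):
    """Theoretical bound on the expected maximum deviation as a function of
    tau = n / h. The constants a, b, k are the values I recall from the literature;
    verify them against Vapnik, Levin, and Le Cun (1994) before relying on this."""
    tau = np.asarray(tau, dtype=float)
    t = np.maximum(tau, 0.5)                           # the bound equals 1 below tau = 0.5
    num = np.log(2 * t) + 1
    val = a * num / (t - k) * (np.sqrt(1 + b * (t - k) / num) + 1)
    return np.where(tau < 0.5, 1.0, np.minimum(val, 1.0))

# Uniform measurement design (the paper replaces this with an optimized design).
ns = np.array([20, 40, 60, 100, 150, 200])
xi = np.array([max_deviation(int(n)) for n in ns])

# Least-squares fit of the effective VC-dimension h to the measured deviations.
fit = minimize_scalar(lambda h: np.sum((xi - phi(ns / h)) ** 2), bounds=(1, 200), method="bounded")
print("estimated effective VC-dimension:", round(fit.x, 1), "(nominal D + 1 =", D + 1, ")")
```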