51.
52.
This paper reviews the problem of translating signals into symbols while maximally preserving the information contained in the signal's time structure. In this context, we motivate the use of nonconvergent dynamics for the signal-to-symbol translator. We then describe a biologically realistic model of the olfactory system proposed by W. Freeman (1975) that has locally stable dynamics but is globally chaotic. We show how Freeman's model can be discretized using digital signal processing techniques, providing an alternative to the more conventional Runge-Kutta integration. This analysis leads to a direct mixed-signal (analog amplitude/discrete time) implementation of the dynamical building block that simplifies the implementation of the interconnect. We present results of simulations and measurements obtained from a fabricated analog VLSI chip.
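As a rough illustration of the discretization idea (a sketch, not the chip design or the paper's exact filter structure), the snippet below replaces ODE integration of one second-order Freeman node with a digital IIR filter obtained by the bilinear transform, followed by Freeman's static asymmetric sigmoid. The rate constants a and b, the sampling rate fs, and the sigmoid parameter Qm are illustrative assumptions.

```python
# A minimal sketch, assuming illustrative parameter values: one second-order
# Freeman node H(s) = a*b / ((s + a)(s + b)) is discretized with the bilinear
# transform instead of Runge-Kutta integration, then passed through a static
# asymmetric sigmoid. The constants a, b, fs, and Qm are assumptions.
import numpy as np
from scipy.signal import bilinear, lfilter

a, b = 220.0, 720.0      # rate constants in 1/s (assumed typical values)
fs = 10_000.0            # discretization rate in Hz (assumed)
Qm = 5.0                 # sigmoid saturation parameter (assumed)

# Map the continuous-time prototype to the z-domain.
num_z, den_z = bilinear([a * b], [1.0, a + b, a * b], fs)

def freeman_sigmoid(v):
    """Freeman-style asymmetric sigmoid Q(v) (assumed form)."""
    return Qm * (1.0 - np.exp(-(np.exp(v) - 1.0) / Qm))

# One building block: linear second-order filter followed by the static
# nonlinearity, here driven by a unit impulse.
x = np.zeros(200)
x[0] = 1.0
state = lfilter(num_z, den_z, x)   # digital replacement for the ODE solver
out = freeman_sigmoid(state)
```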
53.
The combination of the famed kernel trick and the least-mean-square (LMS) algorithm provides an interesting sample-by-sample update for an adaptive filter in reproducing kernel Hilbert spaces (RKHS), which is named in this paper the KLMS. Unlike the accepted view in kernel methods, this paper shows that in the finite training data case, the KLMS algorithm is well posed in RKHS without the addition of an extra regularization term to penalize solution norms, as was suggested by Kivinen [Kivinen, Smola, and Williamson, "Online Learning With Kernels," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2165-2176, Aug. 2004] and Smale [Smale and Yao, "Online Learning Algorithms," Foundations of Computational Mathematics, vol. 6, no. 2, pp. 145-176, 2006]. This result is the main contribution of the paper and enhances the present understanding of the LMS algorithm from a machine learning perspective. The effect of the KLMS step size is also studied from the viewpoint of regularization. Two experiments are presented to support our conclusion that, with finite data, the KLMS algorithm can be readily used in high-dimensional spaces, and particularly in RKHS, to derive nonlinear, stable algorithms with performance comparable to batch, regularized solutions.
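The update itself is simple enough to sketch: each incoming sample becomes a new kernel center whose coefficient is the step size times its a-priori error. Below is a minimal Gaussian-kernel KLMS in Python; the step size eta and kernel width sigma are illustrative choices, not values from the paper.

```python
# A minimal KLMS sketch with a Gaussian kernel. Step size and kernel
# width are illustrative assumptions.
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * sigma ** 2))

def klms(X, d, eta=0.2, sigma=1.0):
    """Kernel LMS: returns centers, coefficients, and a-priori errors."""
    centers, coeffs, errors = [], [], []
    for x, dn in zip(X, d):
        # a-priori output of the current kernel expansion at the new input
        y = sum(c_a * gauss_kernel(c, x, sigma)
                for c, c_a in zip(centers, coeffs))
        e = dn - y                 # a-priori prediction error
        centers.append(x)          # new sample becomes a kernel center
        coeffs.append(eta * e)     # with coefficient eta * error
        errors.append(e)
    return centers, coeffs, np.array(errors)

# Toy usage: learn a static nonlinearity from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 1))
d = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(500)
_, _, err = klms(X, d)
print("final MSE:", np.mean(err[-100:] ** 2))
```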
54.
In this paper we present stabilized finite element methods to discretize in space the monochromatic radiation transport equation. These methods are based on the decomposition of the unknowns into resolvable and subgrid scales, with an approximation for the latter that yields a problem to be solved for the former. This approach allows us to design the algorithmic parameters on which the method depends, which we do here for the case in which the discrete ordinates method is used for the directional approximation. We concentrate on two stabilized methods, namely the classical SUPG technique and the orthogonal subscale stabilization. A numerical analysis of the spatial approximation is performed for both formulations, showing that they behave similarly: both are stable and optimally convergent in the same mesh-dependent norm. A comparison with the behavior of the Galerkin method, for which a non-standard numerical analysis is done, is also presented.
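As a hedged illustration of the classical SUPG formulation, the sketch below applies it to the 1D advection-diffusion model problem rather than the radiation transport equation of the paper, assembling the stabilized linear-element system with the standard "optimal" stabilization parameter tau. The mesh size, advection speed, and diffusivity are illustrative assumptions.

```python
# A minimal SUPG sketch on -eps*u'' + a*u' = 1, u(0) = u(1) = 0, with
# linear elements; an illustration of the stabilized weak form, not the
# radiation-transport discretization of the paper.
import numpy as np

n, a, eps = 50, 1.0, 1e-3           # mesh intervals, advection, diffusion
h = 1.0 / n
Pe = a * h / (2 * eps)               # element Peclet number
tau = (h / (2 * a)) * (1.0 / np.tanh(Pe) - 1.0 / Pe)  # optimal SUPG parameter

N = n - 1                            # interior nodes
A = np.zeros((N, N))
rhs = np.full(N, h)                  # consistent load for f = 1
diff = (eps + tau * a ** 2) / h      # Galerkin + streamline diffusion
for i in range(N):
    A[i, i] = 2 * diff
    if i > 0:
        A[i, i - 1] = -diff - a / 2  # central-difference advection part
    if i < N - 1:
        A[i, i + 1] = -diff + a / 2
u = np.linalg.solve(A, rhs)          # nodal values
print("solution is monotone:", bool(np.all(np.diff(u) >= -1e-12)))
```

With Pe = 10 the plain Galerkin system would oscillate around the outflow boundary layer; the streamline diffusion term tau*a**2 removes those oscillations while keeping the interior nodally accurate.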
55.
The article provides a review of the fundamentals of neural networks and reports recent progress. Topics covered include dynamic modeling, model-based neural networks, statistical learning, eigenstructure-based processing, active learning, and generalization capability. Current and potential applications of neural networks are also described in detail, including optical character recognition, speech recognition and synthesis, automobile and aircraft control, image analysis and neural vision, and several medical applications. Essentially, neural networks have become a very effective tool in signal processing, particularly in various recognition tasks.
56.
Feature selection in MLPs and SVMs based on maximum output information (cited 5 times: 0 self-citations, 5 citations by others)
This paper presents feature selection algorithms for multilayer perceptrons (MLPs) and multiclass support vector machines (SVMs) that use the mutual information between class labels and classifier outputs as an objective function. This objective function involves inexpensive computation of information measures only on discrete variables, provides immunity to prior class probabilities, and brackets the probability of error of the classifier. The maximum output information (MOI) algorithms employ this function for feature subset selection by greedy elimination and directed search. The output of the MOI algorithms is a feature subset of user-defined size together with an associated trained classifier (MLP/SVM). These algorithms compare favorably with a number of other methods in terms of performance on various artificial and real-world data sets.
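A minimal sketch of the greedy-elimination variant follows, with a nearest-centroid classifier standing in for the MLP/SVM of the paper and the mutual information estimated from the joint histogram of true and predicted labels. The surrogate classifier, data, and sizes are all assumptions for illustration.

```python
# Greedy backward elimination driven by the mutual information between
# class labels and discrete classifier outputs (a sketch; the paper
# uses MLPs/SVMs, here replaced by a nearest-centroid surrogate).
import numpy as np

def mutual_information(y_true, y_pred, n_classes):
    """MI (nats) between two discrete variables via their joint histogram."""
    joint = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        joint[t, p] += 1
    joint /= joint.sum()
    px = joint.sum(1, keepdims=True)
    py = joint.sum(0, keepdims=True)
    mask = joint > 0
    return np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask]))

def nearest_centroid(Xtr, ytr, Xte, n_classes):
    cents = np.stack([Xtr[ytr == c].mean(0) for c in range(n_classes)])
    return ((Xte[:, None, :] - cents[None]) ** 2).sum(-1).argmin(1)

def moi_backward_elimination(X, y, n_classes, target_size):
    feats = list(range(X.shape[1]))
    while len(feats) > target_size:
        best_f, best_mi = None, -np.inf
        for f in feats:                       # try removing each feature
            keep = [g for g in feats if g != f]
            pred = nearest_centroid(X[:, keep], y, X[:, keep], n_classes)
            mi = mutual_information(y, pred, n_classes)
            if mi > best_mi:                  # keep the removal preserving
                best_f, best_mi = f, mi       # the most output information
        feats.remove(best_f)
    return feats

# Toy usage: 3 informative + 7 noise features, keep the best 3.
rng = np.random.default_rng(1)
y = rng.integers(0, 3, 300)
X = rng.standard_normal((300, 10))
X[:, :3] += y[:, None]                        # classes separated in dims 0-2
print(moi_backward_elimination(X, y, n_classes=3, target_size=3))
```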
57.
The technique of local linear models is appealing for modeling complex time series because of the weak assumptions it requires and its intrinsic simplicity. Here, instead of deriving the local models from the data, we propose to estimate them directly from the weights of a self-organizing map (SOM), which functions as a dynamics-preserving model of the system. We introduce one modification to the Kohonen learning rule to ensure good representation of the dynamics, and we use weighted least squares to ensure continuity among the local models. The proposed scheme is tested using synthetic chaotic time series and real-world data. The practicality of the method is illustrated in the identification and control of the NASA Langley wind tunnel during aerodynamic tests of model aircraft. Modeling the dynamics with an SOM lends itself to a predictive multiple-model control strategy. Comparison of the new controller against the existing controller in test runs shows the superiority of our method.
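The following sketch illustrates the general recipe under stated assumptions (a 1-D map, delay embedding, Gaussian neighborhoods, standard Kohonen updates rather than the paper's modified rule): quantize the embedded series with a SOM, then fit one affine predictor per node by weighted least squares so that neighboring models stay close.

```python
# SOM-based local linear modeling of a time series (a sketch; map size,
# embedding length, and schedules are illustrative assumptions).
import numpy as np

def embed(x, m):
    """Delay-embed x into rows [x_n, ..., x_{n+m-1}] with target x_{n+m}."""
    X = np.stack([x[i:len(x) - m + i] for i in range(m)], axis=1)
    return X, x[m:]

def train_som(X, n_nodes=20, epochs=10, lr0=0.5, sig0=5.0, seed=0):
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_nodes)].astype(float)
    idx = np.arange(n_nodes)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sig = sig0 * (1 - t / epochs) + 0.5
        for x in X[rng.permutation(len(X))]:
            win = np.argmin(((W - x) ** 2).sum(1))
            hood = np.exp(-((idx - win) ** 2) / (2 * sig ** 2))
            W += lr * hood[:, None] * (x - W)
    return W

def fit_local_models(X, y, W, sig=2.0):
    """One affine model per node via neighborhood-weighted least squares."""
    wins = np.argmin(((X[:, None] - W[None]) ** 2).sum(-1), axis=1)
    Xa = np.hstack([X, np.ones((len(X), 1))])        # affine term
    C = []
    for i in range(len(W)):
        w = np.exp(-((wins - i) ** 2) / (2 * sig ** 2))  # soft assignment
        Aw = Xa * w[:, None]
        C.append(np.linalg.lstsq(Aw.T @ Xa, Aw.T @ y, rcond=None)[0])
    return np.array(C)

# Toy usage: model the logistic map with a 4-tap embedding.
x = np.empty(2000); x[0] = 0.3
for n in range(1999):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])
X, y = embed(x, 4)
W = train_som(X)
C = fit_local_models(X, y, W)
win = np.argmin(((W - X[-1]) ** 2).sum(1))
pred = np.append(X[-1], 1.0) @ C[win]                # one-step prediction
print("prediction error:", abs(pred - (4 * x[-1] * (1 - x[-1]))))
```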
58.
Training multilayer neural networks is typically carried out using descent techniques such as the gradient-based backpropagation (BP) of error or quasi-Newton approaches, including the Levenberg-Marquardt algorithm. This is basically because there are no analytical methods to find the optimal weights, so iterative local or global optimization techniques are necessary. The success of iterative optimization procedures depends strictly on the initial conditions; therefore, in this paper, we devise a principled novel method of backpropagating the desired response through the layers of a multilayer perceptron (MLP), which enables us to accurately initialize these neural networks in the minimum mean-square-error sense, using the analytic linear least squares solution. The generated solution can be used as an initial condition for standard iterative optimization algorithms. However, simulations demonstrate that in most cases the performance achieved through the proposed initialization scheme leaves little room for further improvement in the mean-square error (MSE) over the training set. In addition, the performance of the network optimized with the proposed approach generalizes well to testing data. A rigorous derivation of the initialization algorithm is presented, and its high performance is verified on a number of benchmark training problems, including chaotic time-series prediction, classification, and nonlinear system identification with MLPs.
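A much-simplified sketch of the least-squares flavor of the idea is given below for a single-hidden-layer MLP: the desired response is pulled back through the invertible output nonlinearity, and the output weights are then set by the analytic least-squares solution on the hidden activations. Pulling the targets further back through the hidden layer, as the paper does, is omitted; hidden sizes and scales are assumptions.

```python
# Least-squares initialization of the output layer of a tanh MLP by
# backpropagating the desired response through the output nonlinearity
# (a simplified sketch of the idea, not the paper's full algorithm).
import numpy as np

def ls_init_output_layer(X, d, n_hidden=20, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1] + 1, n_hidden)) * 0.5
    Xa = np.hstack([X, np.ones((len(X), 1))])       # bias column
    H = np.tanh(Xa @ W1)                            # hidden activations
    Ha = np.hstack([H, np.ones((len(H), 1))])
    # Desired pre-activation of the tanh output unit (clipped for atanh).
    z = np.arctanh(np.clip(d, -1 + eps, 1 - eps))
    W2 = np.linalg.lstsq(Ha, z, rcond=None)[0]      # analytic LS solution
    return W1, W2

def forward(X, W1, W2):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    Ha = np.hstack([np.tanh(Xa @ W1), np.ones((len(X), 1))])
    return np.tanh(Ha @ W2)

# Toy usage: fit a scaled sine before any gradient descent happens.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
d = 0.8 * np.sin(np.pi * X[:, 0])
W1, W2 = ls_init_output_layer(X, d)
print("MSE at initialization:", np.mean((forward(X, W1, W2) - d) ** 2))
```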
59.
A new technique for designing filters with long time constants in the discrete-time domain is presented. The F&H (filter-and-hold) methodology halts the state of a continuous-time filter every T seconds, resulting in a filter implementation whose time constants can be controlled in three distinct ways: by the sampling period T, by the duty cycle k = τ/T, or by the time constant of the continuous-time filter prototype. The final filter can be constructed with a typical Gm-C technique with very low power consumption.
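The time-constant stretching is easy to verify numerically. The sketch below simulates a first-order prototype whose state integrates only during the track fraction k of each period and is held otherwise, then checks that the 63% rise time lands near τ/k. All component values are illustrative assumptions, not the paper's circuit parameters.

```python
# Filter-and-hold on a first-order prototype: integrating the state only
# k of the time stretches the effective time constant to tau / k.
import numpy as np

tau = 1e-3           # prototype time constant (s), assumed
T = 1e-3             # F&H sampling period (s), assumed
k = 0.01             # duty cycle k = tau_on / T, assumed
dt = 1e-6            # simulation step (s)

def fh_step_response(n_periods=200):
    y, out = 0.0, []
    steps = int(T / dt)
    on = int(k * steps)              # steps per period the filter runs
    for _ in range(n_periods):
        for i in range(steps):
            if i < on:               # track: integrate the prototype
                y += dt / tau * (1.0 - y)
            out.append(y)            # hold: state frozen otherwise
    return np.array(out)

y = fh_step_response()
t63 = np.argmax(y >= 0.632) * dt     # first crossing of 1 - 1/e
print(f"63% rise time: {t63:.3f} s (expected ~ {tau / k:.3f} s)")
```

A 1 ms prototype thus behaves like a 100 ms filter, which is the point of the technique: very long time constants without correspondingly large capacitors.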
60.
Training neural networks with additive noise in the desired signal (cited 5 times: 0 self-citations, 5 citations by others)
A global optimization strategy for training adaptive systems such as neural networks and adaptive filters (finite or infinite impulse response) is proposed. Instead of adding random noise to the weights, as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice, preserving the backpropagation algorithm and requiring only a single random-number generator, with a monotonically decreasing step size, per output channel. Hence, this is an ideal strategy to speed up supervised learning and to avoid entrapment in local minima when the noise variance is appropriately scheduled.
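A minimal sketch of the strategy on a plain LMS filter follows: zero-mean Gaussian noise with a decreasing standard deviation is added to the desired signal at each epoch, leaving the weight update itself untouched. The annealing schedule and the linear model are illustrative assumptions; the same recipe applies to backpropagation in an MLP.

```python
# Training with annealed additive noise in the desired signal: the noise
# perturbs the target, not the weights, and its standard deviation
# decreases monotonically over epochs (schedule is an assumption).
import numpy as np

def train_lms_noisy_target(X, d, epochs=100, mu=0.05, sigma0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for ep in range(epochs):
        sigma = sigma0 / (1 + ep)              # decreasing noise schedule
        for x, dn in zip(X, d):
            dn_noisy = dn + sigma * rng.standard_normal()
            e = dn_noisy - w @ x               # error w.r.t. perturbed target
            w += mu * e * x                    # standard LMS update
    return w

# Toy usage: identify a 4-tap linear channel from noisy data.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))
w_true = np.array([1.0, -0.5, 0.25, 0.1])
d = X @ w_true + 0.01 * rng.standard_normal(500)
print(train_lms_noisy_target(X, d))
```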