Search results: 10 articles found (search time: 31 ms)
1.
2.
Speeding up approximate pattern matching has been a line of research in stringology since the 80s. Practically fast approaches belong to the class of filtration algorithms, in which text regions dissimilar to the pattern are first excluded, and the remaining regions are then compared to the pattern by dynamic programming. Among the conditions used to test similarity between the regions and the pattern, many require a minimum number of common substrings between them. When only substitutions are taken into account for measuring dissimilarity, counting spaced subwords instead of substrings improves the filtration efficiency. However, a preprocessing step is required to design one or more patterns, called spaced seeds (or gapped seeds), for the subwords, depending on the search parameters. Two distinct lines of research appear in the literature: one with probabilistic formulations of seed design problems, in which one wishes for instance to compute a seed with the highest probability of detecting the desired similarities (lossy filtration), and a second line with combinatorial formulations, where the goal is to find a seed that detects all or a maximum number of similarities (both lossless and lossy filtration). We concentrate on combinatorial seed design problems and consider formulations in which the set of sought similarities is either listed explicitly (RSOS) or characterised by their length and maximal number of mismatches (Non-Detection). Several articles exhibit exponential algorithms for these problems. In this work, we provide hardness and inapproximability results for several seed design problems, thereby justifying the complexity of known algorithms. Moreover, we introduce a new formulation of seed design (MWLS), in which the weight of the seed has to be maximised, and show that it is as difficult to approximate as Maximum Independent Set.
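The lossless detection condition at the core of the Non-Detection formulation can be illustrated with a small brute-force check. The sketch below is purely illustrative (it is not an algorithm from the paper; the seed strings, window length m, and mismatch count k are hypothetical): a spaced seed given as a 0/1 string detects a length-m similarity with at most k substitutions if some placement of the seed puts all of its '1' positions on match positions.

```python
# Illustrative brute-force check of lossless spaced-seed detection.
# Assumptions (not from the paper): seed is a 0/1 string, m is the window
# length, k the number of substitutions; exponential in k, like Non-Detection.
from itertools import combinations

def seed_hits(seed, mismatches, m):
    """True if some placement of the seed inside a window of length m
    avoids every mismatch position at its '1' positions."""
    span = len(seed)
    for offset in range(m - span + 1):
        if all(seed[j] == '0' or (offset + j) not in mismatches
               for j in range(span)):
            return True
    return False

def detects_all(seed, m, k):
    """Lossless check: checking sets of exactly k mismatch positions suffices,
    since removing a mismatch can only help detection."""
    return all(seed_hits(seed, set(pos), m)
               for pos in combinations(range(m), k))

# Compare a contiguous seed and a spaced seed of the same weight.
print(detects_all("111", 7, 2), detects_all("1101", 7, 2))
```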
3.
Our aim is to stress the importance of Jacobian matrix conditioning for model validation. We also comment on Monari and Dreyfus (2002), where, following Rivals and Personnaz (2000), it is proposed to discard neural candidates that are likely to overfit and/or for which quantities of interest such as confidence intervals cannot be computed accurately. In Rivals and Personnaz (2000), we argued that such models are to be discarded on the basis of the condition number of their Jacobian matrix. Monari and Dreyfus (2002), however, suggest making the decision on the basis of the computed values of the leverages, the diagonal elements of the projection matrix on the range of the Jacobian, or "hat" matrix: they propose to discard a model if its computed leverages fall outside certain theoretical bounds, claiming that this is a symptom of rank deficiency of the Jacobian. We question this proposition because, theoretically, the hat matrix is defined whatever the rank of the Jacobian, and because, in practice, the computed leverages of very ill-conditioned networks may respect their theoretical bounds while confidence intervals cannot be estimated accurately enough, two facts that have escaped Monari and Dreyfus's attention. We also recall the most accurate way to estimate the leverages and the properties of these estimates. Finally, we make an additional comment concerning the performance estimation in Monari and Dreyfus (2002).
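As a concrete illustration of the two diagnostics discussed above, the following NumPy sketch (a generic illustration under assumed shapes, not the authors' code) computes the condition number of a Jacobian Z from its singular values and the leverages, i.e. the diagonal of the hat matrix H = Z (Z^T Z)^{-1} Z^T, from an orthonormal basis of the range of Z rather than by forming (Z^T Z)^{-1} explicitly.

```python
# Illustrative sketch (assumed Jacobian shape (N, q); not the paper's code).
import numpy as np

def jacobian_diagnostics(Z):
    # Condition number from the singular values of the Jacobian.
    s = np.linalg.svd(Z, compute_uv=False)
    kappa = s[0] / s[-1]
    # Leverages h_i = squared norm of row i of Q, where Z = Q R (thin QR);
    # each leverage lies in [0, 1] and they sum to the rank of Z.
    Q, _ = np.linalg.qr(Z, mode='reduced')
    leverages = np.sum(Q**2, axis=1)
    return kappa, leverages

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 4))          # hypothetical 50 examples, 4 parameters
kappa, h = jacobian_diagnostics(Z)
print(kappa, h.min(), h.max(), h.sum())
```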
4.
Neural-network construction and selection in nonlinear modeling (cited 3 times: 0 self-citations, 3 citations by others)
We study how statistical tools that are commonly used independently can advantageously be exploited together in order to improve neural network estimation and selection in nonlinear static modeling. The tools we consider are the analysis of the numerical conditioning of the neural network candidates, statistical hypothesis tests, and cross validation. We present and analyze each of these tools in order to justify at what stage of a construction and selection procedure each can be most useful. On the basis of this analysis, we then propose a novel and systematic construction and selection procedure for neural modeling. We finally illustrate its efficiency through large-scale simulation experiments and real-world modeling problems.
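As a rough illustration of how such tools can be combined when ranking fitted candidates, the sketch below (a hypothetical procedure, not the one proposed in the paper) discards candidates whose Jacobian is too ill-conditioned and scores the remaining ones with an approximate leave-one-out error computed from residuals and leverages, r_i / (1 - h_i), which is exact for linear models and approximate for neural ones.

```python
# Hypothetical candidate-ranking sketch combining conditioning analysis and
# an approximate leave-one-out score; not the procedure from the paper.
import numpy as np

def rank_candidates(candidates, cond_threshold=1e8):
    """candidates: iterable of (name, residuals r, Jacobian Z) of fitted models."""
    scored = []
    for name, r, Z in candidates:
        s = np.linalg.svd(Z, compute_uv=False)
        if s[-1] == 0 or s[0] / s[-1] > cond_threshold:
            continue                            # numerically unusable candidate
        Q, _ = np.linalg.qr(Z, mode='reduced')
        h = np.sum(Q**2, axis=1)                # leverages
        loo = np.mean((r / (1.0 - h))**2)       # approximate leave-one-out MSE
        scored.append((loo, name))
    return sorted(scored)                       # best (lowest) score first
```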
5.
No free lunch with the sandwich [sandwich estimator] (cited 1 time: 0 self-citations, 1 citation by others)
In nonlinear regression theory, the sandwich estimator of the covariance matrix of the model parameters is known to be a consistent estimator even when the parameterized model does not contain the regression. However, in the latter case, we emphasize the fact that the consistency of the sandwich holds only if the inputs of the training set are the values of independent, identically distributed random variables. Thus, in the frequent practical modeling situation involving a training set whose inputs are deliberately chosen and imposed by the designer, we question the advisability of using the sandwich estimator rather than the simple estimator based on the inverse of the squared Jacobian.
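For a least-squares fit with Jacobian J (N rows, q parameters) and residual vector r, the two estimators contrasted above take the standard textbook forms sketched below in NumPy (a generic illustration, not code from the paper): the simple estimator s^2 (J^T J)^{-1} with s^2 the residual variance, and the sandwich (J^T J)^{-1} J^T diag(r_i^2) J (J^T J)^{-1}.

```python
# Standard forms of the two covariance estimators (illustrative only).
import numpy as np

def simple_estimator(J, r):
    N, q = J.shape
    s2 = r @ r / (N - q)                   # residual variance estimate
    return s2 * np.linalg.inv(J.T @ J)     # s^2 (J^T J)^{-1}

def sandwich_estimator(J, r):
    bread = np.linalg.inv(J.T @ J)         # (J^T J)^{-1}
    meat = J.T @ (r[:, None]**2 * J)       # J^T diag(r_i^2) J
    return bread @ meat @ bread
```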
6.
Given a finite set of strings X, the Longest Common Subsequence problem (LCS) consists in finding a subsequence common to all strings in X that is of maximal length. LCS is a central problem in stringology and finds broad applications in text compression, the design of error-detecting codes, and biological sequence comparison. However, in numerous contexts, words represent cyclic or unoriented sequences of symbols, and LCS must be generalized to consider both orientations and/or all cyclic shifts of the strings involved. This occurs especially in computational biology when genetic material is sequenced from circular DNA or RNA molecules. In this work, we define three variants of LCS when the input words are unoriented and/or cyclic. We show that these problems are NP-hard, and W[1]-hard if parameterized in the number of input strings. These results still hold even if the three LCS variants are restricted to instances over a binary alphabet. We also settle the parameterized complexity of our problems for most relevant parameters. Moreover, we study the approximability of these problems: we discuss the existence of approximation bounds depending on the cardinality of the alphabet, on the length of the shortest sequence, and on the number of input sequences. To this end, we prove that Maximum Independent Set in r-uniform hypergraphs is W[1]-hard if parameterized in the cardinality of the sought independent set, and at least as hard to approximate as Maximum Independent Set in graphs.
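To make the problem statements concrete, the sketch below gives the textbook dynamic-programming LCS of two strings together with brute-force cyclic and unoriented variants obtained by maximizing over rotations and reversals; it only illustrates the definitions for two strings and says nothing about the hardness results above, which concern arbitrarily many input strings.

```python
# Textbook LCS plus brute-force cyclic / unoriented variants (illustration only).
def lcs(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def cyclic_lcs(a, b):
    # Longest subsequence common to some cyclic shift of a and of b.
    rot = lambda s, k: s[k:] + s[:k]
    return max(lcs(rot(a, i), rot(b, j))
               for i in range(len(a)) for j in range(len(b)))

def unoriented_cyclic_lcs(a, b):
    # Additionally allow reading one of the strings in reverse orientation.
    return max(cyclic_lcs(a, b), cyclic_lcs(a, b[::-1]))

print(cyclic_lcs("abcab", "cabba"), unoriented_cyclic_lcs("abcab", "cabba"))
```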
7.
In the framework of nonlinear process modeling, we propose training algorithms for feedback wavelet networks used as nonlinear dynamic models. An original initialization procedure is presented that takes the locality of the wavelet functions into account. Results obtained for the modeling of several processes are presented, and a comparison with networks of neurons with sigmoidal activation functions is performed.
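The sketch below shows a generic single-input wavelet network with the "Mexican hat" mother wavelet and an initialization that exploits the locality of the wavelets by spreading their translations over the observed input range; it is a simplified illustration under these assumptions, not the feedback architecture or the training algorithms proposed in the paper.

```python
# Generic wavelet-network illustration (not the paper's feedback architecture).
import numpy as np

def mexican_hat(u):
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

class WaveletNet:
    def __init__(self, x, n_wavelets):
        lo, hi = x.min(), x.max()
        # Locality-aware initialization: translations spread over the input
        # domain, dilations sized so each wavelet covers its share of it.
        self.t = np.linspace(lo, hi, n_wavelets)
        self.d = np.full(n_wavelets, (hi - lo) / n_wavelets)
        self.c = np.zeros(n_wavelets)     # linear output weights, to be trained

    def predict(self, x):
        u = (x[:, None] - self.t) / self.d
        return mexican_hat(u) @ self.c
```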
8.
We propose a design procedure for neural internal model control systems for stable processes with delay. We show that the design of such nonadaptive indirect control systems requires only the training of the inverse of the model deprived of its delay, and that the presence of the delay thus does not increase the order of the inverse. The controller is then obtained by cascading this inverse with a rallying model that imposes the regulation dynamic behavior and ensures the robustness of the stability. A change in the desired regulation dynamic behavior, or an improvement of the stability, can be obtained by simply tuning the rallying model, without retraining the whole model-reference controller. Because the robustness properties of internal model control systems are obtained when the inverse is perfect, we detail the precautions that must be taken during the training of the inverse so that it is accurate in the whole region visited during operation with the process. In the same spirit, we place emphasis on neural models that are affine in the control input, whose perfect inverse is derived without training. The control of simulated processes illustrates the proposed design procedure and the properties of the neural internal model control system for processes without and with delay.
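One step of the internal model control loop described above can be sketched schematically as follows (a hypothetical scalar setup with a first-order rallying filter and a given delay-free model inverse; it is not the paper's neural implementation): the process/model output discrepancy is fed back, the rallying model filters the resulting target, and the delay-free inverse produces the control input.

```python
# Schematic IMC step (hypothetical scalar signals; not the paper's neural code).
def imc_step(setpoint, y_process, y_model, filter_state, alpha, model_inverse):
    error = y_process - y_model                       # modeling error fed back
    target = setpoint - error                         # controller input
    filter_state = alpha * filter_state + (1 - alpha) * target  # rallying model
    u = model_inverse(filter_state)                   # delay-free model inverse
    return u, filter_state
```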
9.
The extended Kalman filter (EKF) is a well-known tool for the recursive parameter estimation of static and dynamic nonlinear models. In particular, the EKF has been applied to the estimation of the weights of feedforward and recurrent neural network models, i.e., to their training, and has been shown to be more efficient than recursive and nonrecursive first-order training algorithms; nevertheless, these first applications to the training of neural networks did not fully exploit the potential of the EKF. In this paper, we analyze the specific influence of the EKF parameters for modeling problems and propose a variant of this algorithm for the training of feedforward neural models that proves to be very efficient compared to nonrecursive second-order algorithms. We test the proposed EKF algorithm on several static and dynamic modeling problems, some of which are benchmark problems, that bring out the properties of the proposed algorithm.
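A standard weight-space EKF for a scalar-output feedforward model f(w, x) is sketched below in NumPy; it shows the generic update (the weights are the state, the network output is the nonlinear measurement) and is not necessarily the exact variant proposed in the paper. The measurement-noise variance R and the small process-noise term Q are assumed tuning parameters.

```python
# Generic weight-space EKF training sketch (not necessarily the paper's variant).
import numpy as np

def ekf_train(f, grad_f, w, P, data, R=1.0, Q=1e-6):
    """f(w, x): scalar prediction; grad_f(w, x): gradient of f w.r.t. w (length q)."""
    for x, y in data:
        H = grad_f(w, x)                      # measurement Jacobian (length q)
        S = H @ P @ H + R                     # innovation variance (scalar)
        K = P @ H / S                         # Kalman gain (length q)
        w = w + K * (y - f(w, x))             # weight (state) update
        P = P - np.outer(K, H @ P) + Q * np.eye(len(w))   # covariance update
    return w, P
```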
10.
On cross validation for model selection (cited 8 times: 0 self-citations, 8 citations by others)
In response to Zhu and Rohwer (1996), a recent communication (Goutte, 1997) established that leave-one-out cross validation is not subject to the "no-free-lunch" criticism. Despite this optimistic conclusion, we show here that cross validation performs very poorly for the selection of linear models as compared to classic statistical tests. We conclude that statistical tests are preferable to cross validation for linear as well as for nonlinear model selection.
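For nested linear models, the two selection tools compared above can be written down directly; the sketch below (hypothetical design matrices, not the paper's experiments) computes the exact leave-one-out score from residuals and leverages, and the p-value of the classical Fisher test of the extra regressors.

```python
# Leave-one-out score and Fisher test for nested linear models (illustrative).
import numpy as np
from scipy import stats

def loo_mse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    Q, _ = np.linalg.qr(X, mode='reduced')
    h = np.sum(Q**2, axis=1)                  # leverages
    return np.mean((r / (1.0 - h))**2)        # exact LOO for linear models

def f_test_pvalue(X_small, X_big, y):
    """p-value of the F-test that the extra columns of X_big are irrelevant."""
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0])**2)
    n, q1, q2 = len(y), X_small.shape[1], X_big.shape[1]
    F = ((rss(X_small) - rss(X_big)) / (q2 - q1)) / (rss(X_big) / (n - q2))
    return stats.f.sf(F, q2 - q1, n - q2)
```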