Full text (subscription): 40 articles
Free: 0 articles
By subject:
  Chemical industry: 3
  Metal technology: 1
  Radio electronics: 1
  General industrial technology: 7
  Automation technology: 28
By year:
  2021: 1
  2020: 1
  2014: 2
  2013: 3
  2012: 3
  2011: 3
  2009: 2
  2008: 4
  2007: 2
  2006: 1
  2005: 3
  2004: 3
  2003: 1
  2002: 2
  2001: 2
  2000: 2
  1996: 3
  1991: 2
Search results: 40 articles in total.
1.
Time series prediction is a complex problem that consists of forecasting the future behavior of a set of data using only the information contained in the previous data. The main problem is that most time series representing real phenomena include local behaviors that cannot be modelled by global approaches. This work presents a new procedure able to find predictable local behaviors and thus attain a better level of overall prediction. The new method is based on a division of the input space into Voronoi regions by means of Evolution Strategies. Our method has been tested on different time series domains: one representing the water demand in a water tank over a long period of time, and two well-known examples of chaotic (Mackey-Glass) and natural-phenomenon (Sunspot) time series. Results show that, in most cases, the proposed algorithm obtains better results than other commonly used algorithms.
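To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of predicting with local models over Voronoi regions whose centers are tuned by a simple (mu+lambda) Evolution Strategy; the delay embedding, the per-region mean predictor, and all parameters are illustrative assumptions.

```python
# Minimal sketch: delay-embedded inputs are assigned to the nearest of k
# "Voronoi" centers, each region uses its own local predictor (here, the mean
# next value), and a (mu+lambda) evolution strategy tunes the centers to
# minimize one-step prediction error. All names and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def embed(series, dim):
    """Build (input window, next value) pairs from a 1-D series."""
    X = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    y = np.array(series[dim:])
    return X, y

def region_error(centers, X, y):
    """One-step MSE when each Voronoi region predicts with its own mean."""
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    err = 0.0
    for r in range(len(centers)):
        mask = labels == r
        if mask.any():
            err += ((y[mask] - y[mask].mean()) ** 2).sum()
    return err / len(y)

def evolve_centers(X, y, k=4, mu=5, lam=20, gens=50, sigma=0.3):
    """(mu+lambda) evolution strategy over the region centers."""
    pop = [X[rng.choice(len(X), k)] for _ in range(mu)]
    for _ in range(gens):
        offspring = [p + rng.normal(0, sigma, p.shape) for p in
                     (pop[rng.integers(mu)] for _ in range(lam))]
        pop = sorted(pop + offspring, key=lambda c: region_error(c, X, y))[:mu]
    return pop[0]

# Toy usage on a noisy sine wave standing in for e.g. water-demand data.
series = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.05, 500)
X, y = embed(series, dim=3)
best = evolve_centers(X, y)
print("prediction MSE with evolved regions:", region_error(best, X, y))
```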
2.
This paper presents a new approach to Particle Swarm Optimization, called Michigan Approach PSO (MPSO), and its application to continuous classification problems as a Nearest Prototype (NP) classifier. In Nearest Prototype classifiers, a collection of prototypes has to be found that accurately represents the input patterns. The classifier then assigns classes based on the nearest prototype in this collection. The MPSO algorithm is used to process the training data to find those prototypes. In the MPSO algorithm, each particle in the swarm represents a single prototype in the solution, and the algorithm uses modified movement rules with particle competition and cooperation that ensure particle diversity. The proposed method is tested both on artificial problems and on real benchmark problems and compared with several algorithms of the same family. Results show that the particles are able to recognize clusters, find decision boundaries, and reach stable situations that also retain adaptation potential. The MPSO algorithm is able to improve the accuracy of 1-NN classifiers, obtains results comparable to the best among the other classifiers, and improves the accuracy reported in the literature for one of the problems.
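A heavily simplified sketch of the Michigan idea follows: the swarm as a whole is the nearest-prototype classifier and each particle is one labelled prototype with a local fitness. The movement rule here uses only the personal best, so the paper's competition and cooperation rules are not reproduced; all parameters are assumptions.

```python
# Simplified Michigan-style sketch (not the paper's exact rules): each particle
# encodes one prototype with a fixed class label, its fitness is the number of
# training patterns it wins (is nearest to) and classifies correctly minus
# those it wins and misclassifies, and particles move toward their personal best.
import numpy as np

rng = np.random.default_rng(1)

def nearest(protos, x):
    return int(np.argmin(((protos - x) ** 2).sum(axis=1)))

def particle_fitness(positions, labels, X, y):
    """Local fitness: +1 per pattern won with the right class, -1 otherwise."""
    fit = np.zeros(len(positions))
    for xi, yi in zip(X, y):
        w = nearest(positions, xi)
        fit[w] += 1.0 if labels[w] == yi else -1.0
    return fit

def train_mpso(X, y, particles_per_class=3, iters=100, w=0.7, c1=1.5):
    classes = np.unique(y)
    labels = np.repeat(classes, particles_per_class)
    pos = X[rng.choice(len(X), len(labels))] + rng.normal(0, 0.01, (len(labels), X.shape[1]))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), particle_fitness(pos, labels, X, y)
    for _ in range(iters):
        vel = w * vel + c1 * rng.random(pos.shape) * (pbest - pos)
        pos = pos + vel
        fit = particle_fitness(pos, labels, X, y)
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    return pbest, labels

def predict(protos, labels, X):
    return np.array([labels[nearest(protos, x)] for x in X])

# Toy usage: two Gaussian blobs.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos, labels = train_mpso(X, y)
print("training accuracy:", (predict(protos, labels, X) == y).mean())
```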
3.
Non-cryptographic hash functions (NCHFs) have an immense number of applications, ranging from compilers and databases to video games and computer networks. Some of the most important NCHFs have been used by major corporations in commercial products. This practical success demonstrates the ability of hashing systems to provide extremely efficient searches over unsorted sets. However, very little research has been devoted to the experimental evaluation of these functions. Therefore, we evaluated the most widely used NCHFs using four criteria: collision resistance, distribution of outputs, avalanche effect, and speed. We identified their strengths and weaknesses and found significant flaws in some cases. We also discuss our conclusions regarding general hashing considerations, such as the selection of the compression map. Our results should assist practitioners and engineers in making more informed choices regarding which function to use for a particular problem.
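As an illustration of one of the four criteria, the sketch below estimates the avalanche effect of the public 32-bit FNV-1a function by flipping single input bits and counting how many output bits change; the sampling setup is an assumption, not the paper's protocol.

```python
# Avalanche-effect sketch: flip one input bit and count changed output bits.
# An ideal hash flips about half of the 32 output bits on average.
import random

def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5                               # FNV-1a 32-bit offset basis
    for b in data:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF        # FNV-1a 32-bit prime
    return h

def avalanche_ratio(hash_fn, n_samples=2000, msg_len=8, seed=0):
    rng = random.Random(seed)
    flipped_bits = total = 0
    for _ in range(n_samples):
        msg = bytearray(rng.getrandbits(8) for _ in range(msg_len))
        h0 = hash_fn(bytes(msg))
        bit = rng.randrange(msg_len * 8)          # pick one input bit
        msg[bit // 8] ^= 1 << (bit % 8)           # flip it
        flipped_bits += bin(h0 ^ hash_fn(bytes(msg))).count("1")
        total += 32
    return flipped_bits / total                   # ~0.5 is ideal

print(f"FNV-1a avalanche ratio: {avalanche_ratio(fnv1a_32):.3f}")
```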
4.
This article introduces some relevant research works on computational intelligence applied to finance and economics. The objective is to offer an appropriate context and a starting point for those who are new to computational intelligence in finance and economics, and to give an overview of the most recent works. A classification into five main areas is presented. These areas cover different applications of the most modern computational intelligence techniques, showing a new perspective for approaching finance and economics problems. Each research area is described with several works and applications. Finally, a review of the research works selected for this special issue is given.
5.
The increasing use of auctions as a selling mechanism has led to growing interest in the subject, and both auction theory and experimental examinations of these theories are being developed. A recent method for carrying out examinations of auctions is the design of computational simulations. The aim of this article is to develop a genetic algorithm that automatically finds an optimal strategy for one bidder while the other players always bid sincerely. To this end, a specific dynamic multi-unit auction has been selected: the Ausubel auction, with private values, dropout information, and several rationing rules implemented. The method provides the bidding strategy (defined as the action to be taken under different auction conditions) that maximizes the bidder's payoff. The algorithm is tested under several experimental environments that differ in the elasticity of their demand curves, the number of bidders, and the quantity of lots auctioned. The results suggest that the approach leads to strategies that outperform sincere bidding when rationing is needed.
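The following is only a toy sketch of the approach: a genetic algorithm evolves one bidder's demand schedule against sincere rivals, but a plain uniform-price ascending-clock auction stands in for the Ausubel mechanism (no clinching and no rationing rules), and the values, price grid, and GA parameters are invented for illustration.

```python
# Toy GA sketch: evolve one bidder's demand schedule while rivals bid sincerely
# in a simplified uniform-price clock auction (a stand-in, not Ausubel's rules).
import random

random.seed(0)
PRICES = list(range(1, 21))                               # clock price steps
SUPPLY = 6
RIVAL_VALUES = [[18, 14, 9], [16, 12, 7], [15, 10, 5]]    # rivals' marginal values
MY_VALUES = [19, 13, 8]                                   # evolved bidder's values

def sincere_demand(values, price):
    return sum(1 for v in values if v >= price)

def run_auction(my_schedule):
    """Raise the price until aggregate demand fits the supply; all pay that price."""
    for step, price in enumerate(PRICES):
        my_d = my_schedule[step]
        rival_d = sum(sincere_demand(v, price) for v in RIVAL_VALUES)
        if my_d + rival_d <= SUPPLY:
            won = min(my_d, SUPPLY - rival_d)
            return sum(MY_VALUES[:won]) - price * won      # bidder's payoff
    return 0

def random_schedule():
    return sorted((random.randint(0, len(MY_VALUES)) for _ in PRICES), reverse=True)

def mutate(s):
    s = list(s)
    s[random.randrange(len(s))] = random.randint(0, len(MY_VALUES))
    return sorted(s, reverse=True)            # keep demand non-increasing in price

def evolve(pop_size=40, gens=60):
    pop = [random_schedule() for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=run_auction, reverse=True)[:pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return max(pop, key=run_auction)

best = evolve()
sincere = [sincere_demand(MY_VALUES, p) for p in PRICES]
print("evolved payoff:", run_auction(best), "vs sincere payoff:", run_auction(sincere))
```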
6.
Bankruptcy prediction has long been an active research field in finance. One of the main approaches to this issue is to treat it as a classification problem. Among the range of instruments available, we focus our attention on the Evolutionary Nearest Neighbor Classifier (ENPC). In this work we assess the performance of the ENPC by comparing it to six alternatives. The results suggest that this algorithm might be considered a good choice.
7.

Neuroevolution is the name given to a field of computer science that applies evolutionary computation to evolving some aspects of neural networks. After the AI Winter came to an end, neural networks reemerged to solve a great variety of problems. However, their usage requires designing their topology, a decision with a potentially high impact on performance. Whereas many works have tried to suggest rules of thumb for designing topologies, the truth is that there are no analytic procedures for determining the optimal one for a given problem, and trial and error is often used instead. Neuroevolution arose almost three decades ago, with some works focusing on the evolutionary design of the topology and most works describing techniques for learning connection weights. Since then, evolutionary computation has proved to be a convenient approach for determining the topology and weights of neural networks, and neuroevolution has been applied to a great variety of fields. However, for more than two decades neuroevolution mainly focused on simple artificial neural network models, far from today's deep learning standards. This is insufficient for determining good architectures for the networks extensively used nowadays, which involve multiple hidden layers, recurrent cells, etc. More importantly, deep and convolutional neural networks have become a de facto standard in representation learning for solving many different problems, and neuroevolution has only focused on this kind of network in very recent years, with many works being presented from 2017 onward. In this paper, we review the field of neuroevolution during the last three decades. We put the focus on very recent works on the evolution of deep and convolutional neural networks, which is a new but growing field of study. To the best of our knowledge, this is the most complete survey of the literature in this field, and we have described the features of each work as well as their performance on well-known databases when available. This work aims to provide a complete reference of all works related to neuroevolution of convolutional neural networks to date. Finally, we provide some future directions for the advancement of this research area.

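For readers new to the field, the classic "evolve the weights of a fixed topology" flavour of neuroevolution can be sketched in a few lines; the 2-2-1 network, the (mu+lambda) strategy, and the XOR task below are illustrative assumptions, far removed from the deep-network methods surveyed.

```python
# Weight-evolution sketch: a (mu+lambda) strategy perturbs the weights of a
# fixed 2-2-1 feedforward network until it approximates XOR.
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(genome, x):
    """2-2-1 network: genome packs W1 (2x2), b1 (2), w2 (2), b2 (1)."""
    W1, b1, w2, b2 = genome[:4].reshape(2, 2), genome[4:6], genome[6:8], genome[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def fitness(genome):
    preds = np.array([forward(genome, x) for x in X])
    return -((preds - Y) ** 2).mean()            # negative MSE, higher is better

def evolve(mu=5, lam=20, gens=300, sigma=0.5):
    pop = [rng.normal(0, 1, 9) for _ in range(mu)]
    for _ in range(gens):
        children = [pop[rng.integers(mu)] + rng.normal(0, sigma, 9) for _ in range(lam)]
        pop = sorted(pop + children, key=fitness, reverse=True)[:mu]
    return pop[0]

best = evolve()
print("XOR outputs:", np.round([forward(best, x) for x in X], 2))
```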
8.
Multi-step prediction is a difficult task that has attracted increasing interest in recent years. It aims to predict several steps into the future starting from current information. The interest of this work is the development of nonlinear neural models for the purpose of building multi-step time series prediction schemes. In that context, the most popular neural models are based on traditional feedforward neural networks. However, this kind of model may present some disadvantages when a long-term prediction problem is formulated, because it is trained to predict only the next sampling time. In this paper, a neural model based on a partially recurrent neural network is proposed as a better alternative. For the recurrent model, a learning phase oriented to long-term prediction is imposed, which allows better predictions of the time series further into the future. In order to validate the performance of the recurrent neural model in predicting the future dynamic behaviour of the series, three different time series have been used as case studies: an artificial time series (the logistic map) and two real time series (sunspots and laser data). Models based on feedforward neural networks have also been used and compared against the proposed model. The results suggest that the recurrent model can help to improve prediction accuracy.
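A minimal sketch of the multi-step setting follows, using a plain linear autoregressive model instead of the partially recurrent network: a one-step model applied recursively (feeding its own predictions back) is contrasted with a model fitted directly for the h-step-ahead target. The data, model order, and horizon are assumptions.

```python
# Recursive vs direct multi-step forecasting with a linear AR model
# (a stand-in for the neural models discussed in the abstract).
import numpy as np

rng = np.random.default_rng(3)
series = np.sin(np.linspace(0, 30, 600)) + 0.05 * rng.normal(size=600)
ORDER, H = 5, 10                      # AR order and prediction horizon

def design(series, order, ahead):
    """Rows are lag windows, targets are the value `ahead` steps later."""
    X = np.array([series[i:i + order] for i in range(len(series) - order - ahead + 1)])
    y = series[order + ahead - 1:]
    return X, y

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

train, test = series[:500], series[500:]
w1 = fit(*design(train, ORDER, 1))    # one-step model, applied recursively
wh = fit(*design(train, ORDER, H))    # direct h-step model

window = list(train[-ORDER:])
for _ in range(H):                    # recursive forecast: feed predictions back
    window.append(np.dot(window[-ORDER:], w1))
recursive_pred = window[-1]
direct_pred = np.dot(train[-ORDER:], wh)

true = test[H - 1]
print(f"true={true:.3f} recursive={recursive_pred:.3f} direct={direct_pred:.3f}")
```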
9.
The navigation problem involves reaching a goal while avoiding obstacles in dynamic environments. This problem can be faced by considering both reactions and sequences of actions. Classifier systems (CSs) have proven their ability for continuous learning; however, they present some problems in reactive systems. A modified CS, namely a reactive classifier system (RCS), is proposed to overcome those problems. Two special mechanisms are included in the RCS: the absence of internal cycles inside the CS (no internal cycles) and the fusion of the environmental message with the messages posted to the message list in the previous instant (list generation through fusion). These mechanisms allow the learning of both reactions and sequences of actions. This learning process involves two main tasks: first, discriminating between rules, and second, discovering new rules to achieve successful operation in dynamic environments. Different experiments have been carried out using a Khepera mini-robot in order to find a generalized solution. The results show the ability of the system for continuous learning and adaptation to new situations.
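The matching step at the core of any classifier system (not the RCS-specific mechanisms) can be sketched as follows; the sensor encoding, rules, and strengths are illustrative assumptions.

```python
# Classifier-system matching sketch: rules have ternary conditions over
# {'0', '1', '#'} (don't-care), the environmental message is a bit string,
# and the strongest matching rule fires.
from dataclasses import dataclass

@dataclass
class Rule:
    condition: str   # e.g. "1#0" matches messages whose 1st bit is 1 and 3rd is 0
    action: str
    strength: float

def matches(condition: str, message: str) -> bool:
    return all(c in ('#', m) for c, m in zip(condition, message))

def react(rules, message):
    """Return the action of the strongest rule matching the sensor message."""
    candidates = [r for r in rules if matches(r.condition, message)]
    return max(candidates, key=lambda r: r.strength).action if candidates else "explore"

rules = [
    Rule("1##", "turn_left", 0.8),    # hypothetical: obstacle on the right sensor
    Rule("#1#", "go_forward", 0.5),   # hypothetical: goal ahead
    Rule("##1", "turn_right", 0.8),   # hypothetical: obstacle on the left sensor
]
print(react(rules, "010"))   # -> go_forward
print(react(rules, "110"))   # -> turn_left (higher strength wins)
```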
10.
Evolutionary Computation encompasses computational models that follow a biological evolution metaphor. The success of these techniques rests on maintaining genetic diversity, for which it is necessary to work with large populations. However, it is not always possible to deal with such large populations, for instance when the fitness values must be estimated by a human being (Interactive Evolutionary Computation, IEC). This work introduces a new algorithm that performs very well with a very low number of individuals (micropopulations), which speeds up convergence when solving problems with complex evaluation functions. The new algorithm is compared with the canonical genetic algorithm in order to validate its efficiency. Two experimental frameworks have been chosen: table and logotype designs. An objective evaluation measure has been proposed to avoid user interaction in the experiments. In both cases the results show the efficiency of the new algorithm in terms of solution quality and convergence speed, two key issues in decreasing user fatigue.
Yago Saez: He received the Computer Engineering degree from the Universidad Pontificia de Salamanca, Spain, in 1999. He is now a Ph.D. student and works as an assistant professor in the EVANNAI Group at the Computer Science Department of Universidad Carlos III de Madrid, Spain. His main research areas encompass interactive evolutionary computation, design applications, and optimization problems.
Pedro Isasi, Ph.D.: He received the Computer Science degree and Ph.D. degree from the Universidad Politécnica de Madrid (UPM), Spain, in 1994. He is now a professor in the EVANNAI Group at the Computer Science Department of Universidad Carlos III de Madrid, Spain. His main research areas are machine learning, evolutionary computation, neural networks, and applications to optimization problems.
Javier Segovia, Ph.D.: He is a physicist and received a Ph.D. degree in Computer Science (with honours) from the Universidad Politécnica de Madrid (UPM). He is currently Dean of the UPM School of Computer Science and is editor and/or author of more than 70 scientific publications in the fields of genetic algorithms, data and web mining, artificial intelligence, and intelligent interfaces.
Julio C. Hernandez, Ph.D.: He received a degree in Mathematics and a Ph.D. degree in Computer Science. His main research area is artificial intelligence applied to cryptography and network security. His unofficial hobbies are chess and Go. He is currently working as an invited researcher at INRIA, France.
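A generic sketch of evolving with micropopulations follows, in the spirit of a classic micro-GA rather than the authors' algorithm: five individuals are evolved until the population converges, then the best one is kept and the rest are re-seeded at random. The one-max objective stands in for the human evaluation used in IEC; encoding and parameters are assumptions.

```python
# Micropopulation GA sketch: a 5-individual population with elitism and
# diversity restarts instead of a large, diverse population.
import random

random.seed(4)
GENES, POP, GENS = 20, 5, 200

def fitness(ind):                 # stand-in objective: one-max
    return sum(ind)

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def converged(pop):
    return sum(ind != pop[0] for ind in pop) <= 1

def micro_ga():
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        if converged(pop):        # restart: keep the elite, re-seed the rest
            pop = [elite] + [[random.randint(0, 1) for _ in range(GENES)]
                             for _ in range(POP - 1)]
            continue
        pop = [elite] + [crossover(*random.sample(pop[:3], 2)) for _ in range(POP - 1)]
    return max(pop, key=fitness)

best = micro_ga()
print("best fitness with a 5-individual population:", fitness(best))
```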