Similar Documents
20 similar documents found (search time: 31 ms)
1.
Artificial neural networks have been used to support applications across a variety of business and scientific disciplines in recent years. Artificial neural network applications are frequently viewed as black boxes which mystically determine complex patterns in data. Contrary to this popular view, neural network designers typically perform extensive knowledge engineering and incorporate a significant amount of domain knowledge into artificial neural networks. This paper details heuristics that utilize domain knowledge to produce an artificial neural network with optimal output performance. The effect of using the heuristics on neural network performance is illustrated by examining several applied artificial neural network systems. Identification of an optimal-performance artificial neural network requires that a full factorial design, with respect to the quantity of input nodes, hidden nodes, hidden layers, and learning algorithm, be performed. The heuristic methods discussed in this paper produce optimal or near-optimal performance artificial neural networks using only a fraction of the time needed for a full factorial design.
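The full factorial design this abstract refers to can be made concrete with a small sketch. The node counts, layer counts, and algorithm names below are hypothetical placeholders, and the training/scoring of each candidate network is omitted:

```python
from itertools import product

def full_factorial_configs(input_nodes, hidden_nodes, hidden_layers, algorithms):
    """Enumerate every architecture combination, as in a full factorial design."""
    return [
        {"inputs": i, "hidden": h, "layers": l, "algorithm": a}
        for i, h, l, a in product(input_nodes, hidden_nodes, hidden_layers, algorithms)
    ]

configs = full_factorial_configs(
    input_nodes=[5, 10], hidden_nodes=[4, 8, 16],
    hidden_layers=[1, 2], algorithms=["backprop", "rprop"],
)
print(len(configs))  # 2 * 3 * 2 * 2 = 24 candidate networks to train and compare
```

Each of the 24 configurations would then be trained and evaluated, which is exactly the combinatorial cost the paper's heuristics aim to avoid.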

2.
General-purpose generative planners use domain-independent search heuristics to generate solutions for problems in a variety of domains. However, in some situations these heuristics force the planner to perform inefficiently or obtain solutions of poor quality. Learning from experience can help to identify the particular situations for which the domain-independent heuristics need to be overridden. Most of the past learning approaches are fully deductive and eagerly acquire correct control knowledge from a necessarily complete domain theory and a few examples to focus their scope. These learning strategies are hard to generalize in the case of nonlinear planning, where it is difficult to capture correct explanations of the interactions among goals, multiple planning operator choices, and situational data. In this article, we present a lazy learning method that combines a deductive and an inductive strategy to efficiently learn control knowledge incrementally with experience. We present hamlet, a system we developed that learns control knowledge to improve both search efficiency and the quality of the solutions generated by a nonlinear planner, namely prodigy4.0. We have identified three lazy aspects of our approach from which we believe hamlet greatly benefits: lazy explanation of successes, incremental refinement of acquired knowledge, and lazy learning to override only the default behavior of the problem solver. We show empirical results that support the effectiveness of this overall lazy learning approach, in terms of improving the efficiency of the problem solver and the quality of the solutions produced.

3.
A case-based approach to heuristic planning
Most of the great success of heuristic search as an approach to AI Planning is due to the right design of domain-independent heuristics. Although many heuristic planners perform reasonably well, the computational cost of computing the heuristic function in every search node is very high, causing the planner to scale poorly as the size of the planning tasks increases. To tackle this problem, planners can incorporate additional domain-dependent heuristics in order to improve their performance. Learning-based planners try to automatically acquire these domain-dependent heuristics using previously solved problems. In this work, we present a case-based reasoning approach that learns abstracted state transitions that serve as domain control knowledge for improving the planning process. The recommendations from the retrieved cases are used as guidance for pruning or ordering nodes in different heuristic search algorithms applied to planning tasks. We show that the CBR guidance is appropriate for a considerable number of planning benchmarks.

4.
Some of the current best conformant probabilistic planners focus on finding a fixed-length plan with maximal probability. While these approaches can find optimal solutions, they often do not scale for large problems or plan lengths. As has been shown in classical planning, heuristic search outperforms bounded-length search (especially when an appropriate plan length is not given a priori). The problem with applying heuristic search in probabilistic planning is that effective heuristics are as yet lacking. In this work, we apply heuristic search to conformant probabilistic planning by adapting planning graph heuristics developed for non-deterministic planning. We evaluate a straightforward application of these planning graph techniques, which amounts to exactly computing a distribution over many relaxed planning graphs (one planning graph for each joint outcome of uncertain actions at each time step). Computing this distribution is costly, so we apply Sequential Monte Carlo (SMC) to approximate it. One important issue that we explore in this work is how to automatically determine the number of samples required for effective heuristic computation. We empirically demonstrate on several domains how our efficient, but sometimes suboptimal, approach enables our planner to solve much larger problems than an existing optimal bounded-length probabilistic planner and still find reasonable-quality solutions.

5.
For a given prediction model, some predictions may be reliable while others may be unreliable. The average accuracy of the system cannot provide a reliability estimate for a single particular prediction. A measure of individual prediction reliability can be important information in risk-sensitive applications of machine learning (e.g. medicine, engineering, business). We define empirical measures for the estimation of prediction accuracy in regression. The presented measures are based on sensitivity analysis of regression models: they estimate the reliability of each individual regression prediction, in contrast to the average prediction reliability of the given regression model. We study the empirical sensitivity properties of five regression models (linear regression, locally weighted regression, regression trees, neural networks, and support vector machines) and the relation between the reliability measures and the distribution of learning examples with prediction errors for all five regression models. We show that the suggested methodology is appropriate only for three of the studied models (regression trees, neural networks, and support vector machines), and test the proposed estimates with these three models. The results of our experiments on 48 data sets indicate significant correlations of the proposed measures with the prediction error.
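The sensitivity idea can be sketched minimally: perturb each input feature of one example and measure how far the prediction moves. The one-dimensional piecewise model below is a hypothetical stand-in, not one of the paper's actual estimators:

```python
def sensitivity_reliability(model, x, eps=0.05):
    """Estimate local (un)reliability of one prediction by perturbing each
    input feature by +/-eps and taking the largest prediction shift.
    Smaller values suggest a more reliable individual prediction."""
    base = model(x)
    spread = 0.0
    for i in range(len(x)):
        for delta in (-eps, eps):
            xp = list(x)
            xp[i] += delta
            spread = max(spread, abs(model(xp) - base))
    return spread

# Hypothetical model: flat region below 0.5, steep region above it.
model = lambda x: 1.0 if x[0] < 0.5 else 10.0 * x[0]
print(sensitivity_reliability(model, [0.2]))  # flat region: 0.0 (reliable)
print(sensitivity_reliability(model, [0.9]))  # steep region: ~0.5 (less reliable)
```

The same probe applies unchanged to any black-box regressor, which is why the paper can compare it across trees, networks, and SVMs.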

6.
In this paper, different types of learning networks, such as artificial neural networks (ANNs), Bayesian neural networks (BNNs), support vector machines (SVMs) and minimax probability machines (MPMs), are applied to tornado detection. The last two approaches utilize kernel methods to address the non-linearity of the data in the input space. All methods are applied to detect when tornadoes occur, using variables based on radar-derived velocity data and month number. Computational results indicate that BNNs are more accurate for tornado detection over a suite of forecast evaluation indices.

7.
Declarative problem solving, such as planning, poses interesting challenges for Genetic Programming (GP). There have been recent attempts to apply GP to planning that fit two approaches: (a) using GP to search in plan space or (b) using GP to evolve a planner. In this article, we propose to evolve only the heuristics to make a particular planner more efficient. This approach is more feasible than (b) because it does not have to build a planner from scratch but can take advantage of already existing planning systems. It is also more efficient than (a) because once the heuristics have been evolved, they can be used to solve a whole class of different planning problems in a planning domain, instead of running GP for every new planning problem. Empirical results show that our approach (EvoCK) is able to evolve heuristics in two planning domains (the blocks world and the logistics domain) that improve PRODIGY4.0 performance. Additionally, we experiment with a new genetic operator, Instance-Based Crossover, that is able to use traces of the base planner as raw genetic material to be injected into the evolving population.

8.
To address the non-convex optimization problem that semi-supervised support vector machines face when classifying labeled and unlabeled samples under the margin-maximization principle, this paper proposes EDA_S3VM, a method that optimizes semi-supervised support vector machines with an estimation of distribution algorithm (EDA). The method treats the labels of the unlabeled samples as parameters to be optimized, yielding a combinatorial optimization problem over a standard support vector machine, which is then solved by an EDA through the learning and sampling of a probabilistic model. Experimental results on both artificial and public data sets show that EDA_S3VM achieves higher classification accuracy than several other semi-supervised support vector machine algorithms.
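The EDA step can be sketched with a PBIL-style probability model over the binary labels of the unlabeled samples. The fitness below is a toy stand-in for the (negated) S3VM objective, which would normally require training a support vector machine for each candidate labeling:

```python
import random

def eda_optimize(n_bits, fitness, pop=30, elite=10, iters=60, seed=0):
    """PBIL-style estimation-of-distribution search over binary strings.
    In EDA_S3VM the bits would be the labels of the unlabeled examples."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # probability model: P(bit_i = 1)
    for _ in range(iters):
        samples = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop)]
        samples.sort(key=fitness, reverse=True)
        best = samples[:elite]
        for i in range(n_bits):              # shift the model toward the elite
            freq = sum(s[i] for s in best) / elite
            p[i] = 0.7 * p[i] + 0.3 * freq
    return [1 if pi >= 0.5 else 0 for pi in p]

target = [1, 0, 1, 1, 0, 0, 1, 0]            # stand-in for the unknown best labeling
fitness = lambda s: sum(a == b for a, b in zip(s, target))
print(eda_optimize(len(target), fitness))
```

Only the fitness function would change to turn this sketch into a semi-supervised SVM optimizer; the sample/score/update loop is the EDA itself.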

9.
We consider the problem of scheduling a number of jobs on a number of unrelated parallel machines in order to minimize the makespan. We develop three heuristic approaches, i.e., a genetic algorithm, a tabu search algorithm and a hybridization of these heuristics with a truncated branch-and-bound procedure. This hybridization is made in order to accelerate the search process to near-optimal solutions. The branch-and-bound procedure will check whether the solutions obtained by the meta-heuristics can be scheduled within a tight upper bound. We compare the performances of these heuristics on a standard dataset available in the literature. Moreover, the influence of the different heuristic parameters is examined as well. The computational experiments reveal that the hybrid heuristics are able to compete with the best known results from the literature.
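The constructive baseline such meta-heuristics typically improve upon can be sketched as a simple greedy rule (this is an illustrative list-scheduling heuristic, not the paper's GA, tabu search, or branch-and-bound procedures):

```python
def greedy_makespan(proc_times):
    """proc_times[j][m] = processing time of job j on machine m (unrelated machines).
    Greedily place each job on the machine minimizing its resulting load."""
    n_machines = len(proc_times[0])
    load = [0] * n_machines
    assignment = []
    for times in proc_times:
        m = min(range(n_machines), key=lambda k: load[k] + times[k])
        load[m] += times[m]
        assignment.append(m)
    return max(load), assignment             # makespan and job-to-machine map

jobs = [[2, 4], [3, 1], [2, 5], [4, 2]]      # 4 jobs, 2 machines
makespan, assign = greedy_makespan(jobs)
print(makespan, assign)                      # 4 [0, 1, 0, 1]
```

A genetic or tabu search would start from (or compete with) assignments like this one and perturb them to shrink the makespan further.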

10.
Analysis of cancer data: a data mining approach
Even though cancer research has traditionally been clinical and biological in nature, in recent years data-driven analytic studies have become a common complement. In medical domains where data- and analytics-driven research is successfully applied, new and novel research directions are identified to further advance the clinical and biological studies. In this research, we used three popular data mining techniques (decision trees, artificial neural networks and support vector machines) along with the most commonly used statistical analysis technique, logistic regression, to develop prediction models for prostate cancer survivability. The data set contained around 120,000 records and 77 variables. A k-fold cross-validation methodology was used in model building, evaluation and comparison. The results showed that support vector machines are the most accurate predictor (with a test set accuracy of 92.85%) for this domain, followed by artificial neural networks and decision trees.
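The k-fold cross-validation protocol used for model building and comparison can be sketched as an index-splitting routine (a generic sketch, not tied to the paper's data or models):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal folds, returning a
    (train, test) index pair per fold; every example is tested exactly once."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, test))
    return splits

splits = k_fold_indices(10, 5)
print(len(splits))                           # 5 folds
print(sorted(splits[0][0] + splits[0][1]))   # each split covers all 10 examples
```

Each competing model is then trained on the train indices and scored on the held-out test indices of every fold, and the fold scores are averaged for a fair comparison.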

11.
A face recognition method combining convolutional neural networks and extreme learning machines
A convolutional neural network is a good feature extractor but not an optimal classifier, while an extreme learning machine classifies well but cannot learn complex features. Drawing on the strengths and weaknesses of the two, this paper combines them into a new face recognition method: the convolutional neural network extracts facial features, and the extreme learning machine performs recognition based on those features. The paper also proposes fixing some of the convolutional kernels of the network to reduce the number of training parameters and thereby improve recognition accuracy. Tests on the ORL and XM2VTS face databases show that the combined method effectively improves the face recognition rate, and that fixing some of the convolutional kernels is advantageous when training samples are scarce.

12.
This paper presents a novel feature selection approach for backpropagation neural networks (NNs). Previously, a feature selection technique known as the wrapper model was shown to be effective for decision tree induction. However, it is prohibitively expensive when applied to real-world neural net training, which is characterized by large volumes of data and many feature choices. Our approach incorporates a weight-analysis-based heuristic called artificial neural net input gain measurement approximation (ANNIGMA) to direct the search in the wrapper model, making effective feature selection feasible for neural net applications. Experimental results on standard datasets show that this approach can efficiently reduce the number of features while maintaining or even improving the accuracy. We also report two successful applications of our approach in helicopter maintenance.

13.
Context: In the software industry, project managers usually rely on their previous experience to estimate the number of man-hours required for each software project. The accuracy of such estimates is a key factor for the efficient application of human resources. Machine learning techniques such as radial basis function (RBF) neural networks, multi-layer perceptron (MLP) neural networks, support vector regression (SVR), bagging predictors and regression-based trees have recently been applied for estimating software development effort. Some works have demonstrated that the level of accuracy in software effort estimates strongly depends on the values of the parameters of these methods. In addition, it has been shown that the selection of the input features may also have an important influence on estimation accuracy. Objective: This paper proposes and investigates the use of a genetic algorithm to simultaneously (1) select an optimal input feature subset and (2) optimize the parameters of machine learning methods, aiming at a higher accuracy level for the software effort estimates. Method: Simulations are carried out using six benchmark data sets of software projects, namely Desharnais, NASA, COCOMO, Albrecht, Kemerer, and Koten and Gray. The results are compared to those obtained by methods proposed in the literature using neural networks, support vector machines, multiple additive regression trees, bagging, and Bayesian statistical models. Results: In all data sets, the simulations have shown that the proposed GA-based method was able to improve the performance of the machine learning methods. The simulations have also demonstrated that the proposed method outperforms some methods reported in the recent literature for software effort estimation. Furthermore, the use of the GA for feature selection considerably reduced the number of input features for five of the data sets used in our analysis. Conclusions: The combination of input feature selection and parameter optimization of machine learning methods improves the accuracy of software development effort estimates. In addition, it reduces model complexity, which may help in understanding the relevance of each input feature. Therefore, some input parameters can be ignored without loss of accuracy in the estimations.
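One plausible chromosome encoding for such a GA puts feature-inclusion bits first, followed by integer genes for the learner's hyperparameters. The parameter names and gene-to-value mappings below are illustrative assumptions, not the paper's actual encoding:

```python
def decode(chromosome, n_features):
    """First n_features genes are feature-inclusion bits; the remaining two
    encode hypothetical SVR-style hyperparameters via simple mappings."""
    mask = chromosome[:n_features]
    c_gene, gamma_gene = chromosome[n_features:]
    return {
        "features": [i for i, bit in enumerate(mask) if bit],
        "C": 2 ** c_gene,          # integer gene -> exponentially spaced values
        "gamma": 10 ** -gamma_gene,
    }

individual = [1, 0, 1, 1, 0, 3, 2]   # 5 feature bits + 2 hyperparameter genes
print(decode(individual, n_features=5))
```

The GA's fitness function would decode each individual this way, train the learner on the selected features with the decoded parameters, and return the cross-validated estimation error.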

14.
This paper proposes a multi-pose face recognition method based on neural networks and hierarchical support vector machines. In the training stage, a neural network first maps the features of posed face images to the features of quasi-standard face images, and support vector machines are then trained according to the clustering results. In the recognition stage, the neural network transform produces the quasi-standard image features corresponding to the image to be recognized, a hierarchical support vector machine makes a preliminary judgment of the person to whom the image most likely belongs, and finally a rejection algorithm confirms the face image. Experiments show that the algorithm performs well.

15.
Motion planning is one of the most significant technologies for autonomous driving. To make motion planning models able to learn from the environment and to deal with emergency situations, a new motion planning framework called "parallel planning" is proposed in this paper. In order to generate sufficient and varied training samples, artificial traffic scenes are first constructed based on knowledge from reality. A deep planning model which combines a convolutional neural network (CNN) with a Long Short-Term Memory module (LSTM) is developed to make planning decisions in an end-to-end mode. This model can learn from both real and artificial traffic scenes and imitate the driving style of human drivers. Moreover, a parallel deep reinforcement learning approach is also presented to improve the robustness of the planning model and reduce the error rate. To handle emergency situations, a hybrid generative model including a variational auto-encoder (VAE) and a generative adversarial network (GAN) is utilized to learn from virtual emergencies generated in artificial traffic scenes. While an autonomous vehicle is moving, the hybrid generative model generates multiple video clips in parallel, which correspond to different potential emergency scenarios. Simultaneously, the deep planning model makes planning decisions for both virtual and current real scenes. The final planning decision is determined by analysis of real observations. Leveraging the parallel planning approach, the planner is able to make rational decisions without a heavy computational burden when an emergency occurs.

16.
The goal of this paper is to investigate the application of parallel programming techniques to boost the performance of heuristic search-based planning systems in various respects. It shows that an appropriate parallelization of a sequential planning system often brings gains in performance and/or scalability. We start by describing general schemes for parallelizing the construction of a plan. We then discuss the application of these techniques to two domain-independent heuristic search-based planners: a competitive conformant planner (CPA) and a state-of-the-art classical planner (FF). We present experimental results, on both shared-memory and distributed-memory platforms, which show that performance improvements and scalability are obtained in both cases. Finally, we discuss the issues that should be taken into consideration when designing a parallel planning system and relate our work to the existing literature. Copyright © 2009 John Wiley & Sons, Ltd.

17.
This paper proposes a multi-pose face recognition method based on neural networks and hierarchical support vector machines. In the training stage, a neural network first maps the features of posed face images to the features of quasi-standard face images, and support vector machines are then trained according to the clustering results. In the recognition stage, the neural network transform first produces the quasi-standard image features corresponding to the image to be recognized, a hierarchical support vector machine then makes a preliminary judgment of the person to whom the image most likely belongs, and finally a rejection algorithm confirms the face image. Experiments show that the algorithm performs well.

18.
The linear separability problem: some testing methods
The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept include neural networks (the single-layer perceptron and the recursive deterministic perceptron) and kernel machines (support vector machines). This paper presents an overview of several methods for testing linear separability between two classes. The methods are divided into four groups: those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included.
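Of the four groups of tests, the neural-network-based one is easy to sketch: the perceptron convergence theorem guarantees convergence exactly when the two classes are linearly separable, so a bounded-epoch run gives a practical (one-sided) test. This is a generic sketch of that idea, not the paper's specific algorithm:

```python
def perceptron_separable(points, labels, epochs=100):
    """Perceptron-based separability test: if the data are separable some
    epoch will make no updates (converged); otherwise the updates keep
    cycling until the epoch budget runs out, so False is only a strong hint."""
    d = len(points[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):     # labels in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updated = True
        if not updated:
            return True                      # converged: linearly separable
    return False                             # budget exhausted: likely not separable

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(perceptron_separable(data, [-1, -1, -1, 1]))   # AND labels: True
print(perceptron_separable(data, [-1, 1, 1, -1]))    # XOR labels: False
```

The linear-programming group of tests answers the same question exactly, by asking whether the system of margin inequalities has a feasible solution.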

19.
A hyper-heuristic often represents a heuristic search method that operates over a space of heuristic rules. It can be thought of as a high-level search methodology for choosing lower-level heuristics. Nearly 200 papers on hyper-heuristics have recently appeared in the literature. A common theme in this body of literature is an attempt to solve the problems at hand in the following way: at each decision point, first employ the chosen heuristic(s) to generate a solution, then calculate the objective value of the solution by taking into account all the constraints involved. However, empirical studies from our previous research have revealed that, under many circumstances, there is no need to carry out this costly two-stage determination and evaluation at all times. This is because many problems in the real world are highly constrained, with the characteristic that the regions of feasible solutions are rather scattered and small. Motivated by this observation, and with the aim of making the hyper-heuristic search more efficient and more effective, this paper investigates two fundamentally different data mining techniques, namely artificial neural networks and binary logistic regression. By learning from examples, these techniques are able to find global patterns hidden in large data sets and achieve the goal of appropriately classifying the data. With the trained classification rules or estimated parameters, the performance (i.e. the worth of acceptance or rejection) of a resulting solution during the hyper-heuristic search can be predicted without the need to undertake the computationally expensive two-stage determination and calculation. We evaluate our approaches on the solutions (i.e. the sequences of heuristic rules) generated by a graph-based hyper-heuristic proposed for exam timetabling problems. Time complexity analysis demonstrates that the neural network and the logistic regression method can speed up the search significantly. We believe that our work sheds light on the development of more advanced knowledge-based decision support systems.

20.
As a broad subfield of artificial intelligence, machine learning is concerned with the development of algorithms and techniques that allow computers to learn. Methods such as fuzzy logic, neural networks, support vector machines, decision trees and Bayesian learning have been applied to learn meaningful rules; however, a common drawback of these methods is that they often get trapped in local optima. In contrast with these machine learning methods, a genetic algorithm (GA) promises better results through its simulated natural evolution and global search. The GA has given rise to two new fields of research where global optimization is of crucial importance: genetics-based machine learning (GBML) and genetic programming (GP). This article adopts the GBML technique to provide a three-phase knowledge extraction methodology, which performs continuous and incremental learning while integrating multiple rule sets into a centralized knowledge base. Moreover, both the proposed system and GP are applied in theoretical and empirical experiments, and the results of the two approaches are presented and compared. This paper makes two important contributions: (1) it uses three criteria (accuracy, coverage, and fitness) in the knowledge extraction process, which is very effective in selecting an optimal set of rules from a large population; (2) the experiments show that the rule sets derived by the proposed approach are more accurate than those derived by GP.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)