20 similar documents found (search time: 15 ms)
1.
As part of the operation of an Expert System, a deductive component accesses a database of facts to help simulate the behavior of a human expert in a particular problem domain. The nature of this access is examined, and four access strategies are identified. Features of each of these strategies are addressed within the framework of a logic-based deductive component and the relational model of data.
2.
Roberto P. Domingos Gustavo H.F. Caldas Cláudio M.N.A. Pereira Roberto Schirru 《Applied Soft Computing》2003,3(4):317
This work proposes the use of genetic programming (GP) for the automatic design of a fuzzy expert system aimed at controlling axial xenon oscillations in pressurized water reactors (PWRs). The control methodology is based on three axial offsets, of xenon (AOx), iodine (AOi) and neutron flux (AOf), used effectively in earlier work. Simulations were made using a two-point xenon oscillation model, which employs the non-linear xenon and iodine balance equations and the one-group, one-dimensional neutron diffusion equation with non-linear power reactivity feedback, also proposed in the literature. Results have demonstrated the ability of GP to find a good fuzzy strategy that can effectively control the axial xenon oscillations.
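Below is a minimal Python sketch of the three axial-offset inputs such a controller operates on, assuming a two-point (top/bottom) core representation; the state values and the placeholder control rule are illustrative assumptions, not taken from the paper.

# Minimal sketch of the three axial-offset inputs used by the fuzzy
# controller, assuming a two-point (top/bottom) core model.
# All values and the control rule below are illustrative.

def axial_offset(top: float, bottom: float) -> float:
    """Normalized top/bottom asymmetry: (top - bottom) / (top + bottom)."""
    return (top - bottom) / (top + bottom)

# Hypothetical two-point state: (top, bottom) pairs for flux, iodine, xenon.
flux = (0.55, 0.45)      # normalized neutron flux
iodine = (0.52, 0.48)    # iodine concentration
xenon = (0.47, 0.53)     # xenon concentration

AOf = axial_offset(*flux)
AOi = axial_offset(*iodine)
AOx = axial_offset(*xenon)

# A GP-evolved fuzzy strategy would map (AOf, AOi, AOx) to a control
# action (e.g. control-rod movement); here a placeholder linear rule.
control_action = -(0.5 * AOf + 0.3 * AOi + 0.2 * AOx)
print(f"AOf={AOf:+.3f} AOi={AOi:+.3f} AOx={AOx:+.3f} u={control_action:+.3f}")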
3.
4.
Energy loss through optically thin radiative cooling plays an important part in the evolution of astrophysical gas dynamics and should therefore be considered a necessary element in any numerical simulation. Although the addition of this physical process to the equations of hydrodynamics is straightforward, it does create numerical challenges that have to be overcome in order to ensure the physical correctness of the simulation. First, the cooling has to be treated (semi-)implicitly, owing to the discrepancies between the cooling timescale and the typical timesteps of the simulation. Secondly, because of its dependence on a tabulated cooling curve, the introduction of radiative cooling creates the necessity for an interpolation scheme. In particular, we will argue that the addition of radiative cooling to a numerical simulation creates the need for extremely high resolution, which can only be fully met through the use of adaptive mesh refinement.
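The two numerical challenges named above can be made concrete with a small sketch: a semi-implicit (backward-Euler) cooling update against a tabulated cooling curve with log-log interpolation. The table values, the ideal-gas temperature relation and the bisection solver are illustrative assumptions, not the paper's scheme.

# Minimal sketch of a semi-implicit (backward-Euler) optically thin
# cooling update with log-log interpolation of a tabulated cooling
# curve Lambda(T). Table values and the solver are illustrative.
import numpy as np

# Hypothetical tabulated cooling curve: temperature [K] vs Lambda [erg cm^3/s]
T_tab = np.array([1e4, 1e5, 1e6, 1e7, 1e8])
L_tab = np.array([1e-24, 1e-21, 1e-22, 1e-23, 1e-23])

def cooling_rate(T: float) -> float:
    """Log-log linear interpolation of the tabulated curve."""
    return 10.0 ** np.interp(np.log10(T), np.log10(T_tab), np.log10(L_tab))

def implicit_cooling_step(e: float, n: float, dt: float, k_B=1.38e-16):
    """Solve e_new = e - dt * n^2 * Lambda(T(e_new)) by bisection.

    T(e) assumes an ideal monatomic gas: e = 1.5 * n * k_B * T.
    Bisection stays robust even when dt far exceeds the cooling time.
    """
    def residual(e_new):
        T = e_new / (1.5 * n * k_B)
        return e_new - e + dt * n * n * cooling_rate(T)
    lo, hi = 1e-6 * e, e            # cooled energy lies between ~0 and e
    for _ in range(60):             # bisection to high precision
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

e0 = 1.5 * 1.0 * 1.38e-16 * 1e6    # n = 1 cm^-3, T = 1e6 K
print(implicit_cooling_step(e0, n=1.0, dt=3.15e13))  # ~1 Myr step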
5.
Genetic programming is a systematic method for getting computers to automatically solve problems. Genetic programming starts from a high-level statement of what needs to be done and automatically creates a computer program to solve the problem by means of a simulated evolutionary process. The paper demonstrates that genetic programming (1) now routinely delivers high-return human-competitive machine intelligence; (2) is an automated invention machine; (3) can automatically create a general solution to a problem in the form of a parameterized topology; and (4) has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time. These points are illustrated by a group of recent results involving the automatic synthesis of the topology and sizing of analog electrical circuits, the automatic synthesis of placement and routing of circuits, and the automatic synthesis of controllers, as well as references to work involving the automatic synthesis of antennas, networks of chemical reactions (metabolic pathways), genetic networks, mathematical algorithms, and protein classifiers.
6.
Patrick Link Miltiadis Poursanidis Jochen Schmid Rebekka Zache Martin von Kurnatowski Uwe Teicher Steffen Ihlenfeldt 《Journal of Intelligent Manufacturing》2022,33(7):2129-2142
Increasing digitalization enables the use of machine learning (ML) methods for analyzing and optimizing manufacturing processes. A main application of ML is...
7.
Benyamin Grosman 《Automatica》2009,45(1):252-256
This contribution describes an automatic technique to detect suitable Lyapunov functions for nonlinear systems. The theoretical basis for the work is Lyapunov’s Direct Method, which provides sufficient conditions for stability of equilibrium points. In our proposed approach, genetic programming (GP) is used to search for suitable Lyapunov functions, that is, those that best predict the true domain of attraction. In the work presented here, our GP approach has been extended by defining a target function accounting for the Lyapunov function level sets.
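As a rough illustration of how a candidate Lyapunov function could be scored, the sketch below checks Lyapunov's conditions on sampled points and estimates the predicted domain of attraction from level sets. The example system, the candidate V and the scoring rule are illustrative assumptions, not the paper's target function.

# Minimal sketch of scoring a candidate Lyapunov function V for
# dx/dt = f(x) by Lyapunov's Direct Method on sampled points.
import numpy as np

def f(x):
    """Example nonlinear system: damped oscillator with cubic stiffness."""
    x1, x2 = x
    return np.array([x2, -x1 - x1**3 - 0.5 * x2])

def V(x):
    """Candidate Lyapunov function (one individual a GP run might propose)."""
    x1, x2 = x
    return 0.5 * x1**2 + 0.25 * x1**4 + 0.5 * x2**2

def Vdot(x, h=1e-6):
    """Directional derivative of V along f, via central differences."""
    grad = np.array([
        (V(x + np.array([h, 0])) - V(x - np.array([h, 0]))) / (2 * h),
        (V(x + np.array([0, h])) - V(x - np.array([0, h]))) / (2 * h),
    ])
    return grad @ f(x)

rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(2000, 2))
pts = pts[np.linalg.norm(pts, axis=1) > 1e-3]        # exclude the origin

# Largest level set c such that V > 0 and Vdot < 0 on all samples inside:
ok = np.array([V(p) > 0 and Vdot(p) < 0 for p in pts])
levels = np.array([V(p) for p in pts])
c = levels[~ok].min() if (~ok).any() else levels.max()
print(f"predicted domain of attraction: {{x : V(x) < {c:.3f}}}")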
8.
Leonardo Vanneschi Mauro Castelli Sara Silva 《Genetic Programming and Evolvable Machines》2014,15(2):195-214
Several methods to incorporate semantic awareness into genetic programming have been proposed in the last few years. These methods cover fundamental parts of the evolutionary process: from population initialization, through different ways of modifying or extending the existing genetic operators and formal methods, to the definition of completely new genetic operators. The objectives are also distinct: from the maintenance of semantic diversity to the study of semantic locality; from the use of semantics for constructing solutions that obey certain constraints to the exploitation of the geometry of the semantic topological space aimed at defining easy-to-search fitness landscapes. All these approaches have shown, in different ways and to different extents, that incorporating semantic awareness can help improve the power of genetic programming. This survey analyzes and discusses the state of the art in the field, organizing the existing methods into different categories. It restricts itself to studies where semantics is intended as the set of output values of a program on the training data, a definition that is common to a rather large set of recent contributions. It does not discuss methods for incorporating semantic information into grammar-based genetic programming, or approaches based on formal methods. The objective is to keep the community updated on this interesting research track, hoping to motivate new and stimulating contributions.
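To make the definition of semantics adopted by this survey concrete, the following sketch treats a program's semantics as its vector of outputs on the training data and measures the semantic distance between programs. The programs and training inputs are illustrative.

# Minimal sketch of "semantics" as the vector of a program's outputs
# on the training data, and of semantic distance between programs.
import numpy as np

train_X = np.linspace(-1.0, 1.0, 20)          # hypothetical training inputs

def semantics(program, X=train_X):
    """Semantics of a program = its outputs on the training cases."""
    return np.array([program(x) for x in X])

def semantic_distance(p, q):
    """Euclidean distance in semantic space; 0 means semantically equal."""
    return float(np.linalg.norm(semantics(p) - semantics(q)))

# Two syntactically different but semantically identical programs ...
p1 = lambda x: (x + 1) * (x - 1)
p2 = lambda x: x * x - 1
# ... and a semantically distinct one.
p3 = lambda x: x * x + 1

print(semantic_distance(p1, p2))  # 0.0  -> crossover between them is wasted
print(semantic_distance(p1, p3))  # > 0  -> genuine semantic diversity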
9.
Declarative problem solving, such as planning, poses interesting challenges for Genetic Programming (GP). Recent attempts to apply GP to planning fit two approaches: (a) using GP to search in plan space or (b) using GP to evolve a planner. In this article, we propose to evolve only the heuristics to make a particular planner more efficient. This approach is more feasible than (b) because it does not have to build a planner from scratch but can take advantage of already existing planning systems. It is also more efficient than (a) because once the heuristics have been evolved, they can be used to solve a whole class of different planning problems in a planning domain, instead of running GP for every new planning problem. Empirical results show that our approach (EvoCK) is able to evolve heuristics in two planning domains (the blocks world and the logistics domain) that improve PRODIGY4.0 performance. Additionally, we experiment with a new genetic operator, Instance-Based Crossover, that is able to use traces of the base planner as raw genetic material to be injected into the evolving population.
10.
The road safety performance of countries is assessed by combining seven main risk indicators into one index using a particular weighting and aggregation method. Weights can be determined according to the assumed importance of each indicator, whereas aggregation operators can be used to stress better performances differently from worse ones, irrespective of the indicator’s meaning. In this research, both expert weights and ordered weighted averaging (OWA) operators are explored, evaluated and integrated, resulting in a ranking of countries based on a road safety index.
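A minimal sketch of such an index computation follows, assuming hypothetical indicator scores, expert weights and OWA weights.

# Minimal sketch of combining seven risk indicators into one road
# safety index with expert weights and an ordered weighted averaging
# (OWA) operator. All numbers are illustrative.
import numpy as np

def owa(values, owa_weights):
    """OWA: weights apply to values sorted from best (highest) to worst,
    so aggregation can stress good or bad performances irrespective of
    which indicator produced them."""
    return float(np.sort(values)[::-1] @ owa_weights)

# Hypothetical normalized indicator scores for one country (1 = best).
indicators = np.array([0.9, 0.7, 0.8, 0.4, 0.6, 0.85, 0.5])

# Expert weights reflect the assumed importance of each indicator.
expert_w = np.array([0.25, 0.10, 0.15, 0.15, 0.10, 0.15, 0.10])
weighted = indicators * expert_w * len(indicators)   # importance-scaled scores

# OWA weights stressing the worst performances (last positions after sorting).
owa_w = np.array([0.05, 0.05, 0.10, 0.10, 0.15, 0.25, 0.30])
index = owa(weighted, owa_w)
print(f"road safety index: {index:.3f}")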
11.
The limited battery life of modern mobile devices is one of the key problems limiting their use. Even though offloading computation onto cloud computing platforms can considerably extend battery duration, it is hard both to identify the cases in which offloading guarantees real advantages, given the application’s requirements in terms of data transfer, computing power and so on, and to evaluate whether user requirements (e.g. the cost of using the cloud services, or a required level of QoS) are satisfied. To this aim, this paper presents a framework for generating models to make automatic decisions on the offloading of mobile applications using a genetic programming (GP) approach. The GP system is designed using a taxonomy of the properties useful to the offloading process, concerning the user, the network, the data and the application. The fitness function adopted permits different weights to be given to the four categories considered during the process of building the model. Experimental results, obtained on datasets representing different categories of mobile applications, permit the analysis of the behavior of our algorithm in different applicative contexts. Finally, a comparison with state-of-the-art classification algorithms establishes the effectiveness of the approach in modeling the offloading process.
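The following sketch illustrates one way a fitness function could weight the four property categories when scoring a candidate decision model; the weights, category names and accuracy figures are assumptions, not the paper's actual fitness definition.

# Minimal sketch of a fitness function giving different weights to the
# four property categories (user, network, data, application) when
# scoring an offloading-decision model. All numbers are illustrative.

CATEGORY_WEIGHTS = {"user": 0.3, "network": 0.3, "data": 0.2, "application": 0.2}

def fitness(per_category_accuracy: dict) -> float:
    """Weighted accuracy of a candidate GP decision model, where each
    entry is the model's accuracy on cases dominated by that category."""
    return sum(CATEGORY_WEIGHTS[c] * acc for c, acc in per_category_accuracy.items())

candidate = {"user": 0.80, "network": 0.92, "data": 0.75, "application": 0.88}
print(f"fitness = {fitness(candidate):.3f}")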
12.
13.
Fitsum Meshesha Kifetew Roberto Tiella Paolo Tonella 《Empirical Software Engineering》2017,22(2):928-961
Automated generation of system-level tests for grammar-based systems requires the generation of complex and highly structured inputs, which must typically satisfy some formal grammar. In our previous work, we showed that genetic programming combined with probabilities learned from corpora gives significantly better results than the baseline (random) strategy. In this work, we extend our previous work by introducing grammar annotations as an alternative to learned probabilities, to be used when finding and preparing the corpus required for learning is not affordable. Experimental results carried out on six grammar-based systems of varying levels of complexity show that grammar annotations produce a higher number of valid sentences and achieve similar levels of coverage and fault detection as learned probabilities.
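A minimal sketch of generation from an annotated grammar follows, where each production carries a weight playing the role of an annotation; the grammar and weights are illustrative.

# Minimal sketch of sentence generation from an annotated grammar:
# each production carries a weight (an "annotation") instead of a
# probability learned from a corpus.
import random

# Hypothetical expression grammar: nonterminal -> [(weight, expansion), ...]
GRAMMAR = {
    "<expr>": [(3, ["<term>"]), (1, ["<expr>", "+", "<term>"])],
    "<term>": [(3, ["<num>"]), (1, ["(", "<expr>", ")"])],
    "<num>":  [(1, ["0"]), (1, ["1"]), (1, ["42"])],
}

def generate(symbol="<expr>", rng=random.Random(0), depth=0, max_depth=8):
    if symbol not in GRAMMAR:
        return symbol                                  # terminal
    rules = GRAMMAR[symbol]
    if depth >= max_depth:                             # force termination
        rules = [rules[0]]                             # first rule = shortest
    weights = [w for w, _ in rules]
    _, expansion = rng.choices(rules, weights=weights, k=1)[0]
    return "".join(generate(s, rng, depth + 1, max_depth) for s in expansion)

for _ in range(3):
    print(generate())   # syntactically valid inputs for grammar-based testing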
14.
In this paper, attention is restricted to mesh adaptivity. Traditionally, the most common mesh adaptive strategies for linear problems are used to reach a prescribed accuracy. This goal is best met with an h-adaptive scheme in combination with an error estimator. In an industrial context, however, the aim of mechanical simulations in engineering design is often not to obtain the greatest quality but rather a compromise between the desired quality and the computation cost (CPU time, storage, software, competence, human cost, computer used). In this paper we propose the use of alternative mesh refinement criteria with an h-adaptive procedure for 3D elastic problems. The alternative mesh refinement criteria make it possible to obtain the maximum accuracy for a prescribed cost. These adaptive strategies are based on the error in constitutive relation technique (the process could be used with other error estimators) and an efficient adaptive technique which automatically takes steep-gradient areas into account. This work proposes a 3D adaptivity method using the latest version of the INRIA automatic mesh generator GAMHIC3D.
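A toy 1D sketch of the "maximum accuracy for a prescribed cost" idea: repeatedly split the element with the largest estimated error until an element budget is exhausted. The error indicator below is a crude stand-in for the error in constitutive relation used in the paper.

# Minimal 1D sketch of h-refinement targeting maximum accuracy for a
# prescribed cost: split the elements with the largest estimated error
# until an element budget is reached. The indicator is illustrative.
import numpy as np

def solution(x):              # hypothetical exact field with a steep gradient
    return np.tanh(20 * (x - 0.5))

def element_error(a, b):      # crude indicator: field jump across the element
    return abs(solution(b) - solution(a))

nodes = list(np.linspace(0.0, 1.0, 11))      # initial uniform mesh
BUDGET = 40                                  # prescribed cost: max elements

while len(nodes) - 1 < BUDGET:
    errs = [element_error(a, b) for a, b in zip(nodes, nodes[1:])]
    worst = int(np.argmax(errs))             # steep-gradient area wins
    nodes.insert(worst + 1, 0.5 * (nodes[worst] + nodes[worst + 1]))

sizes = np.diff(nodes)
print(f"{len(sizes)} elements; smallest h = {sizes.min():.4f} near x = "
      f"{nodes[int(np.argmin(sizes))]:.2f}")   # clustered near x = 0.5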
15.
Population initialisation in genetic programming is both easy, because random combinations of syntax can be generated straightforwardly, and hard, because these random combinations of syntax do not always produce random and diverse program behaviours. In this paper we perform analyses of behavioural diversity, the size and shape of starting populations, the effects of purely semantic program initialisation and the importance of tree shape in the context of program initialisation. To achieve this, we create four different algorithms, in addition to using the traditional ramped half and half technique, applied to seven genetic programming problems. We present results to show that varying the choice and design of program initialisation can dramatically influence the performance of genetic programming. In particular, program behaviour and evolvable tree shape can have dramatic effects on the performance of genetic programming. The four algorithms we present have different rates of success on different problems.
Colin G. Johnson
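For reference, a minimal sketch of the traditional ramped half and half technique mentioned in the abstract above, with illustrative function and terminal sets:

# Minimal sketch of ramped half-and-half initialisation: half the trees
# are built with "full" (all branches reach max depth) and half with
# "grow" (branches may stop early), ramping the depth limit.
import random

FUNCTIONS = ["+", "-", "*"]          # arity-2 internal nodes (illustrative)
TERMINALS = ["x", "1"]

def full(depth, rng):
    if depth == 0:
        return rng.choice(TERMINALS)
    return [rng.choice(FUNCTIONS), full(depth - 1, rng), full(depth - 1, rng)]

def grow(depth, rng, p_term=0.3):
    if depth == 0 or rng.random() < p_term:
        return rng.choice(TERMINALS)
    return [rng.choice(FUNCTIONS), grow(depth - 1, rng, p_term),
            grow(depth - 1, rng, p_term)]

def ramped_half_and_half(pop_size, min_depth=2, max_depth=6, seed=0):
    rng = random.Random(seed)
    depths = list(range(min_depth, max_depth + 1))
    pop = []
    for i in range(pop_size):
        depth = depths[i % len(depths)]              # ramp the depth limit
        builder = full if i % 2 == 0 else grow       # alternate methods
        pop.append(builder(depth, rng))
    return pop

for tree in ramped_half_and_half(6):
    print(tree)                      # trees as nested lists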
16.
A comparison of bloat control methods for genetic programming
Genetic programming has highlighted the problem of bloat, the uncontrolled growth of the average size of an individual in the population. The most common approach to dealing with bloat in tree-based genetic programming is to limit the maximal allowed depth of individuals. An alternative to depth limiting is to punish individuals in some way based on excess size, and our experiments have shown that the combination of depth limiting with such a punitive method is generally more effective than either alone. Which such combinations are most effective at reducing bloat? In this article we augment depth limiting with nine bloat control methods and compare them with one another. These methods are chosen from past literature and from techniques of our own devising. Testing with four genetic programming problems, we identify where each bloat control method performs well on a per-problem basis, and under what settings various methods are effective independent of problem. We report on the results of these tests, and discover an unexpected winner in the cross-platform category.
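A minimal sketch of combining depth limiting with one punitive method (parsimony pressure) follows; the depth limit, penalty coefficient and tree representation are illustrative, not the paper's specific methods.

# Minimal sketch of depth limiting combined with a punitive bloat
# control method (parsimony pressure): oversized offspring are rejected,
# and fitness is penalised in proportion to tree size.

MAX_DEPTH = 17                  # the classic depth limit for tree GP
PARSIMONY = 0.01                # penalty per node (illustrative coefficient)

def depth(tree):
    return 0 if not isinstance(tree, list) else 1 + max(depth(c) for c in tree[1:])

def size(tree):
    return 1 if not isinstance(tree, list) else 1 + sum(size(c) for c in tree[1:])

def accept_offspring(child, parent):
    """Depth limiting: keep the parent if the child breaks the limit."""
    return child if depth(child) <= MAX_DEPTH else parent

def penalised_fitness(raw_fitness, tree):
    """Parsimony pressure: subtract a size-proportional penalty
    (assumes higher raw fitness is better)."""
    return raw_fitness - PARSIMONY * size(tree)

tree = ["+", "x", ["*", "x", "x"]]      # (x + x*x) as a nested list
print(depth(tree), size(tree), penalised_fitness(0.9, tree))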
17.
We report a series of experiments that use semantic-based local search within a multiobjective genetic programming (GP) framework. We compare various ways of selecting target subtrees for local search as well as different methods for performing that search; we have also made a comparison with the random desired operator of Pawlak et al. using statistical hypothesis testing. We find that a standard steady-state or generational GP followed by a carefully designed single-objective GP implementing semantic-based local search produces models that are more accurate and have statistically smaller (or equal) tree sizes than those generated by the corresponding baseline GP algorithms. The depth fair selection strategy of Ito et al. is found to perform best compared with other subtree selection methods in the model refinement.
18.
César Guerra-García Ismael Caballero Mario Piattini 《Information Systems Frontiers》2013,15(3):433-445
The number of Web applications forming part of Business Intelligence (BI) applications has grown exponentially in recent years, as has their complexity. Consequently, the amount of data used by these applications has also increased. The larger the amount of data used, the greater the chance of errors. That being the case, managing data with an acceptable level of quality is paramount to success in any organizational business process. In order to raise and maintain adequate levels of Data Quality (DQ), it is indispensable for Web applications to be able to satisfy specific DQ requirements. To do so, DQ requirements should be captured and introduced into the development process of the Web application, together with the other software requirements needed in the applications. In the field of Web application development, however, there appears to be a lack of proposals aimed at managing specific DQ software requirements. This paper considers the MDA (Model Driven Architecture) approach and, principally, the benefits provided by Model Driven Web Engineering (MDWE), putting forward a proposal for two artifacts: a metamodel and a UML profile for the management of Data Quality Software Requirements for Web Applications (DQ_WebRE).
19.
Organizing the high-performance execution of a fragmented program raises the problem of choosing an acceptable way to execute it. Opportunities for optimizing execution at the stages of fragmented program development, compilation and execution are considered. Methods and algorithms for such optimization are proposed for inclusion in the LuNA fragmented programming language, compiler, generator and run-time system.
20.
Effective land cover mapping often requires the use of multiple data sources and data interpretation methods, particularly when no single data source or interpretation method provides sufficiently good results. Method-oriented approaches are often only effective for specific land cover class/data source combinations, and cannot be applied when different classification systems or data sources are required or available. Here we present a method, based on Endorsement Theory, of pooling evidence from multiple expert systems and spatial datasets to produce land cover maps. Individual ‘experts’ are trained to produce evidence for or against a class, with this evidence being categorised according to strength. An evidence integration rule set is applied to the evidence lists to produce conclusions of different strength regarding individual classes, and the most likely class is identified. The only expert system design currently implemented within the methodology is a neural network model, although the system has been designed to accept information from decision trees, fuzzy k-means and Bayesian statistics as well. We have used the technique to produce land cover maps of Scotland using three classification systems of varying complexity. Mapping accuracy varied between 52.6% for a map with 96 classes and 88.8% for a map with eight classes. The accuracy of the maps generated is higher than when individual datasets are used, showing that the evidence integration method applied is suitable for improving land cover mapping accuracy. We showed that imagery was not necessarily the most important data source for mapping where a large number of classes are used, and also that even data sources that produce low accuracy scores when used for mapping by themselves improve the accuracy of maps produced using this integrative approach. Future work in developing the method is identified, including the inclusion of additional expert systems, improvement of the evidence integration, and an evaluation of the overall effectiveness of the approach.
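A toy sketch of pooling categorised evidence for and against classes in the spirit of Endorsement Theory follows; the strength categories and the integration rule are illustrative simplifications of the rule set described above.

# Minimal sketch of pooling categorised evidence for/against land cover
# classes from multiple 'experts'. The strengths and the integration
# rule are illustrative, not the paper's rule set.
from collections import defaultdict

STRENGTH = {"weak": 1, "moderate": 2, "strong": 3}

# Hypothetical evidence lists: (class, for/against, strength) per expert.
evidence = [
    ("woodland", "for", "strong"),      # e.g. a neural net on imagery
    ("woodland", "for", "moderate"),    # e.g. a terrain-based expert
    ("woodland", "against", "weak"),    # e.g. a soils-based expert
    ("heather", "for", "weak"),
    ("heather", "against", "moderate"),
]

def integrate(evidence):
    """Toy integration rule: net strength = sum(for) - sum(against);
    a class is 'endorsed' only if some evidence supports it and the
    net strength is positive. Returns the most strongly endorsed class."""
    net = defaultdict(int)
    supported = set()
    for cls, polarity, strength in evidence:
        sign = 1 if polarity == "for" else -1
        net[cls] += sign * STRENGTH[strength]
        if polarity == "for":
            supported.add(cls)
    endorsed = {c: s for c, s in net.items() if c in supported and s > 0}
    return max(endorsed, key=endorsed.get) if endorsed else None

print(integrate(evidence))   # -> 'woodland', the most strongly endorsed class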