Similar Documents
20 similar documents found (search time: 15 ms)
1.
Intuition is the human capacity to make decisions under novel, complex situations where knowledge is incomplete and of variable levels of certainty. We take the view that intuition can be modeled as a rational and deductive mode of information processing which is suited to novel, complex situations. In this research, a computational algorithm, or “intuitive reasoner”, is proposed which mimics some aspects of human intuition by combining established mathematical tools, such as fuzzy set theory, and some novel innovations. A rule-based scheme is followed and a rule-learning module that allows rules to be learned from incomplete datasets is developed. The input and the rules drawn by the reasoner are allowed to be fuzzy, multi-valued, and low in certainty. A measure of the certainty level, Strength of Belief, is attached to each input as well as each rule. Solutions are formulated through iterations of consolidating intermediate reasoning results, during which the Strength of Belief of corroborating intermediate results is combined. An experimental implementation of the proposed intuitive reasoner is reported, in which the reasoner was used to solve a classification problem. The results showed that, when given increasingly sparse input data, the rule-learning module generated more rules of lower associated certainty than when presented with more complete data. The intuitive reasoner was able to make use of these low-certainty rules to solve the classification problems with an accuracy that compared favorably to that of traditional methods based on complete datasets.
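The consolidation step can be pictured with a small sketch. The abstract does not give the combination formula, so the noisy-OR style rule below (and the function name) is an assumption for illustration only:

```python
def combine_strength_of_belief(strengths):
    # Combine the Strength of Belief of corroborating intermediate
    # results, treating each as independent supporting evidence.
    # This noisy-OR style rule is an assumption; the paper's exact
    # combination formula may differ.
    disbelief = 1.0
    for s in strengths:
        disbelief *= (1.0 - s)
    return 1.0 - disbelief

# Two corroborating results of moderate certainty yield a stronger belief.
print(combine_strength_of_belief([0.6, 0.5]))  # 0.8
```

Under this rule, adding any corroborating result can only raise the combined certainty, which matches the intuition that agreement between independent lines of reasoning strengthens belief.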

2.
This paper proposes an approach to investigate norm-governed learning agents which combines a logic-based formalism with an equation-based counterpart. This dual formalism enables us to describe the reasoning of such agents and their interactions using argumentation, and, at the same time, to capture systemic features using equations. The approach is applied to norm emergence and internalisation in systems of learning agents. The logical formalism is rooted in a probabilistic defeasible logic instantiating Dung's argumentation framework. Rules of this logic are annotated with probabilities to describe the agents' minds and behaviours as well as uncertain environments. Then, the equation-based model for reinforcement learning, defined over this probability distribution, allows agents to adapt to their environment and self-organise.

3.
Support vector learning for fuzzy rule-based classification systems
Designing a fuzzy rule-based classification system (fuzzy classifier) with good generalization ability in a high dimensional feature space has been an active research topic for a long time. As a powerful machine learning approach for pattern recognition problems, the support vector machine (SVM) is known to have good generalization ability. More importantly, an SVM can work very well on a high- (or even infinite) dimensional feature space. This paper investigates the connection between fuzzy classifiers and kernel machines, establishes a link between fuzzy rules and kernels, and proposes a learning algorithm for fuzzy classifiers. We first show that a fuzzy classifier implicitly defines a translation invariant kernel under the assumption that all membership functions associated with the same input variable are generated from location transformation of a reference function. Fuzzy inference on the IF-part of a fuzzy rule can be viewed as evaluating the kernel function. The kernel function is then proven to be a Mercer kernel if the reference functions meet a certain spectral requirement. The corresponding fuzzy classifier is named positive definite fuzzy classifier (PDFC). A PDFC can be built from the given training samples based on a support vector learning approach with the IF-part fuzzy rules given by the support vectors. Since the learning process minimizes an upper bound on the expected risk (expected prediction error) instead of the empirical risk (training error), the resulting PDFC usually has good generalization. Moreover, because of the sparsity properties of the SVMs, the number of fuzzy rules is independent of the dimension of the input space. In this sense, we avoid the "curse of dimensionality." Finally, PDFCs with different reference functions are constructed using the support vector learning approach. The performance of the PDFCs is illustrated by extensive experimental results. Comparisons with other methods are also provided.
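The rule–kernel link can be made concrete with a toy sketch. Assuming a Gaussian reference function (one choice satisfying the Mercer condition mentioned above; function names are illustrative), the firing strength of a rule's IF-part is exactly a translation-invariant kernel evaluation:

```python
import math

def gaussian_ref(u):
    # Reference membership function a(u) = exp(-u^2); location
    # transforms a(u - z_i) give the per-variable memberships.
    return math.exp(-u * u)

def rule_firing(x, center):
    # Product conjunction of per-variable memberships equals a
    # translation-invariant kernel k(x, z) = prod_i a(x_i - z_i).
    prod = 1.0
    for xi, zi in zip(x, center):
        prod *= gaussian_ref(xi - zi)
    return prod

# Firing strength of the IF-part equals the kernel value; it peaks
# at the rule center (a support vector, in the PDFC construction).
print(rule_firing([1.0, 2.0], [1.0, 2.0]))  # 1.0 at the rule center
```

With this reference function the implied kernel is the familiar Gaussian kernel, which is why standard support vector learning can be reused to pick the rule centers.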

4.
Abstract: Rule-based and case-based reasoning are two popular approaches used in intelligent systems. Rules usually represent general knowledge, whereas cases encompass knowledge accumulated from specific (specialized) situations. Each approach has advantages and disadvantages, which prove to be largely complementary. It is therefore well justified to combine rules and cases to produce effective hybrid approaches, surpassing the disadvantages of each component method. In this paper, we first present advantages and disadvantages of rule-based and case-based reasoning and show that they are complementary. We then discuss the deficiencies of existing categorization schemes for integrations of rule-based and case-based representations. To deal with these deficiencies, we introduce a new categorization scheme. Finally, we briefly present representative approaches for the final categories of our scheme.

5.
Abstract

One method to overcome the notorious efficiency problems of logical reasoning algorithms in AI has been to combine a general-purpose reasoner with several special-purpose reasoners for commonly used subtasks. In this paper we use Schubert's method (Schubert et al. 1983, 1987) of implementing a special-purpose class reasoner. We show that it is possible to replace Schubert's preorder number class tree by a preorder number list without loss of functionality. This form of the algorithm lends itself perfectly to a parallel implementation, and we describe the design, coding and testing of such an implementation. Our algorithm is practically independent of the size of the class list, and even with several thousand nodes learning times are under a second and retrieval times are under 500 ms.
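A minimal sequential sketch of the preorder-numbering idea behind such a class reasoner (the data layout is illustrative, not Schubert's actual encoding): each class gets its preorder number plus the largest preorder number among its descendants, and subsumption reduces to an interval containment test.

```python
def assign_preorder(tree, root):
    # Assign each class a (preorder, max-descendant-preorder) pair
    # by a depth-first walk; `tree` maps a class to its subclasses.
    numbers, counter = {}, [0]

    def visit(node):
        pre = counter[0]
        counter[0] += 1
        for child in tree.get(node, []):
            visit(child)
        numbers[node] = (pre, counter[0] - 1)

    visit(root)
    return numbers

def subsumes(numbers, a, b):
    # a subsumes b iff b's preorder number falls in a's interval.
    pa, ma = numbers[a]
    pb, _ = numbers[b]
    return pa <= pb <= ma

taxonomy = {"thing": ["animal", "plant"], "animal": ["dog", "cat"]}
nums = assign_preorder(taxonomy, "thing")
print(subsumes(nums, "animal", "dog"))  # True
print(subsumes(nums, "plant", "dog"))   # False
```

The query cost is a constant-time comparison regardless of taxonomy size, which is consistent with the near size-independent retrieval times reported above.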

6.
This paper describes a fuzzy modeling framework based on the support vector machine, a rule-based framework that explicitly characterizes the representation in the fuzzy inference procedure. The support vector learning mechanism provides an architecture to extract support vectors for generating fuzzy IF-THEN rules from the training data set, and a method to describe the fuzzy system in terms of kernel functions. Thus, it has the inherent advantage that the model does not have to determine the number of rules in advance, and the overall fuzzy inference system can be represented as a series expansion of fuzzy basis functions. The performance of the proposed approach is compared to other fuzzy rule-based modeling methods using four data sets.

7.
Software effort estimation is an important but difficult task. Existing algorithmic models often fail to predict effort accurately and consistently. To address this, we developed a computational approach to software effort estimation. cEstor is a case-based reasoning engine developed from an analysis of expert reasoning. cEstor's architecture explicitly separates case-independent productivity adaptation knowledge (rules) from case-specific representations of prior projects encountered (cases). Using new data from actual projects, uncalibrated cEstor generated estimates which compare favorably to those of the referent expert, calibrated Function Points and calibrated COCOMO. The estimates were better than those produced by uncalibrated Basic COCOMO and Intermediate COCOMO. The roles of specific knowledge components in cEstor (cases, adaptation rules, and retrieval heuristics) were also examined. The results indicate that case-independent productivity adaptation rules affect the consistency of estimates and appropriate case selection affects the accuracy of estimates, but the combination of an adaptation rule set and unrestricted case base can yield the best estimates. Retrieval heuristics based on source lines of code and a Function Count heuristic based on summing over differences in parameter values were found to be equivalent in accuracy and consistency, and both performed better than a heuristic based on Function Count totals.
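The separation of cases from adaptation rules can be sketched as follows. The size-based retrieval, the linear scaling, and the example rule are all assumptions for illustration, not cEstor's actual procedure:

```python
def estimate_effort(cases, new_size, adaptation_rules):
    # Case-based estimate: retrieve the prior project closest in
    # size (a SLOC-style retrieval heuristic), then scale its effort
    # and apply case-independent productivity adaptation rules.
    best = min(cases, key=lambda c: abs(c["size"] - new_size))
    effort = best["effort"] * (new_size / best["size"])
    for rule in adaptation_rules:
        effort = rule(effort)
    return effort

cases = [{"size": 10_000, "effort": 20.0}, {"size": 50_000, "effort": 120.0}]
# Hypothetical adaptation rule: an inexperienced team raises effort by 20%.
rules = [lambda e: e * 1.2]
print(estimate_effort(cases, 25_000, rules))
```

Because the adaptation rules live outside any single case, swapping the rule set or the case base independently lets one probe exactly the knowledge-component questions the study examines.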

8.
Ontology classification, the problem of computing the subsumption hierarchies for classes (atomic concepts), is a core reasoning service provided by Web Ontology Language (OWL) reasoners. Although general-purpose OWL 2 reasoners employ sophisticated optimizations for classification, they are still not efficient owing to the high complexity of tableau algorithms for expressive ontologies. Profile-specific OWL 2 EL reasoners are efficient; however, they become incomplete even if the ontology contains only a small number of axioms that are outside the OWL 2 EL fragment. In this paper, we present a technique that combines an OWL 2 EL reasoner with an OWL 2 reasoner for ontology classification of expressive SROIQ. To optimize the workload, we propose a task decomposition strategy for identifying the minimal non-EL subontology that contains only the axioms necessary to ensure completeness. During the ontology classification, the bulk of the workload is delegated to an efficient OWL 2 EL reasoner and only the minimal non-EL subontology is handled by a less efficient OWL 2 reasoner. The proposed approach is implemented in a prototype ComR, and experimental results show that our approach offers a substantial speedup in ontology classification. For the well-known ontology NCI, the classification time is reduced by 96.9% (resp. 83.7%) compared against the standard reasoner Pellet (resp. the modular reasoner MORe).

9.
Grid computing is increasingly emerging as a promising platform for large-scale problem solving in science, engineering and technology. Nevertheless, a major effort is still required to harness the high potential performance of such a computational framework and, in this sense, an important challenge is to develop new strategies that efficiently address scheduling on the distributed, heterogeneous and shared environment of grids. Fuzzy rule-based systems (FRBSs) models are dynamic and are currently attracting the interest of the scheduling research community to obtain near-optimal solutions on grids. However, FRBSs performance is strongly related to the quality of their knowledge bases and thus to the knowledge acquisition process. Due to the inherent dynamic nature and the typical complex search spaces of grids, automatically finding a high-quality knowledge base that accurately describes the fuzzy system is extremely relevant. In this work, we propose a scheduling system for grids considering a novel learning strategy inspired by the Michigan and Pittsburgh approaches that applies genetic algorithms (GAs) to evolve the fuzzy rule bases and improves the classical learning strategies in terms of computational effort and convergence behaviour. In addition, experimental results show that the proposed scheme significantly outperforms other extensively used scheduling strategies.
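As a rough illustration of the Pittsburgh side of such a learning strategy, the sketch below evolves whole rule bases with a genetic algorithm. All operators, sizes and the toy fitness are invented and stand in for the paper's grid-scheduling objective:

```python
import random

def evolve_rule_bases(fitness, rule_space, base_size=3, pop=20, gens=30, seed=1):
    # Pittsburgh-style GA: each individual is an entire rule base
    # (a fixed-size list of rules drawn from rule_space), evolved
    # with tournament selection, one-point crossover and mutation.
    rng = random.Random(seed)
    population = [[rng.choice(rule_space) for _ in range(base_size)]
                  for _ in range(pop)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, base_size)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                 # mutation: swap one rule
                child[rng.randrange(base_size)] = rng.choice(rule_space)
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)

# Toy fitness: prefer rule bases of distinct, low-numbered rules.
rules = list(range(10))
best = evolve_rule_bases(lambda rb: len(set(rb)) - sum(rb), rules)
print(best)
```

In a Michigan-style variant, by contrast, each individual would be a single rule and the population as a whole would form the rule base; the paper's strategy draws on both views.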

10.
Short message service (SMS) is a widely used service in modern mobile phones that allows users to send or receive short text messages. Current SMS, however, has two problems: inconvenient input and short message length. These problems can be resolved if a phone has an ability of automatic word spacing, because users then need not put spaces in sending messages, and longer messages become possible as they contain no spaces. Thus, automatic word spacing would be a very useful tool for SMS if it could be offered commercially. The practical issues of implementing it on devices such as mobile phones are their small memory and low computing power. To tackle these problems, this paper proposes a combined model of rule-based learning and memory-based learning. According to the experimental results, the model shows higher accuracy than rule-based learning or memory-based learning alone. In addition, the generated rules are so small and simple that the proposed model is appropriate for small-memory devices.
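One way such a combined model can be pictured is as a decision cascade for a single character boundary: compact learned rules fire first, and a memory of previously seen character pairs acts as the fallback. The concrete rules and memory entries below are invented for illustration:

```python
def predict_space(left, right, rules, memory):
    # Hybrid word-spacing decision for one character boundary:
    # rule-based component first, memory-based fallback second.
    # Returns True if a space should be inserted between the chars.
    for condition, decision in rules:
        if condition(left, right):
            return decision
    return memory.get((left, right), False)

rules = [(lambda l, r: r in ".,!?", False)]    # never space before punctuation
memory = {("o", "w"): True}                    # this bigram was seen spaced
print(predict_space("o", "w", rules, memory))  # True
print(predict_space("o", ",", rules, memory))  # False
```

A handful of rules covers the frequent regular cases, so the memory table stays small, which is what makes the combination attractive on low-memory devices.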

11.
The development of geometry knowledge requires integration of intuitive and novel concepts. While instruction may take many representational forms, we argue that grounding novel information in perception and action systems in the context of challenging activities will promote deeper learning. To facilitate learning we introduce a grounded integration pattern of instruction, focusing on (1) eliciting intuitive concepts, (2) introducing novel grounding metaphors, and (3) embedding challenges to promote distinguishing between ideas. To investigate this pattern we compared elementary school children in two conditions who engaged in variations of a computer-based dynamic geometry learning environment that was intended to elicit intuitive concepts of shapes. In the grounded integration condition children performed a procedure of explicitly identifying defining features of shapes (e.g. right angles) with the assistance of animated depictions of spatially-meaningful gestures (e.g. hands forming right angles). In a numerical integration condition children identified defining features with the assistance of a numerical representation. Children in the grounded integration condition were more likely to accurately identify target shapes in a posttest identification task. We discuss the relevance of the grounded integration pattern to the development of instructional tools.

12.
In this paper, we discuss a rule-based incremental control program which has been used for controlling a laser cutting robot and, in simulation, for driving a car on a track, for a car parking manoeuvre, and for parking a truck with one trailer. The core of the paper concerns a learning program, Candide, which learns to control a process without a priori knowledge about the process, by observing random initial evolutions of the process and acquiring a qualitative model. Monotonic or derivative relationships between inputs and outputs are recognized, and then a rule-based incremental controller is deduced from this model.

13.
Rule selection has long been a problem of great challenge that has to be solved when developing a rule-based knowledge learning system. Many methods have been proposed to evaluate the eligibility of a single rule based on some criteria. However, in a knowledge learning system there is usually a set of rules. These rules are not independent but interactive; they tend to affect each other and form a rule system. In such cases, it is no longer reasonable to isolate each rule from the others for evaluation. The best rule according to a certain criterion is not always the best one for the whole system. Furthermore, the real-world data from which people want to create their learning systems are often ill-defined and inconsistent. In this case, the completeness and consistency criteria for rule selection are no longer essential. In this paper, some ideas about how to solve the rule-selection problem in a systematic way are proposed. These ideas have been applied in the design of a Chinese business card layout analysis system and achieved good results on a training data set of 425 images. The implementation of the system and the results are presented in this paper.
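One simple way to realise system-level (rather than per-rule) evaluation is a greedy search that admits a rule only if it improves the whole rule set. The sketch below uses a toy coverage-based objective and is not the paper's actual procedure:

```python
def select_rules(rules, evaluate_system):
    # Greedy system-level rule selection: a rule is kept only if it
    # raises the score of the whole selected set, instead of being
    # ranked in isolation against other rules.
    selected, best = [], evaluate_system([])
    for rule in rules:
        score = evaluate_system(selected + [rule])
        if score > best:
            selected.append(rule)
            best = score
    return selected

# Toy objective: number of distinct cases the rule set covers.
coverage = {"a": {1, 2}, "b": {3}, "c": {1}}

def evaluate_system(subset):
    covered = set().union(*(coverage[r] for r in subset)) if subset else set()
    return len(covered)

print(select_rules(["a", "b", "c"], evaluate_system))  # ['a', 'b']
```

Note that rule "c" would look acceptable under a per-rule criterion (it covers a case), yet it is rejected because it adds nothing to the system once "a" is in place, which is precisely the point the abstract makes.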

14.
One of the major duties of financial analysts is technical analysis. It is necessary to locate the technical patterns in the stock price movement charts to analyze the market behavior. Indeed, there are two main problems: how to define those preferred patterns (technical patterns) for query and how to match the defined pattern templates in different resolutions. As we can see, defining the similarity between time series (or time series subsequences) is of fundamental importance. By identifying the perceptually important points (PIPs) directly from the time domain, time series and templates of different lengths can be compared. Three ways of distance measure, including Euclidean distance (PIP-ED), perpendicular distance (PIP-PD) and vertical distance (PIP-VD), for PIP identification are compared in this paper. After the PIP identification process, both template- and rule-based pattern-matching approaches are introduced. The proposed methods are distinctive in their intuitiveness, making them particularly user friendly to ordinary data analysts like stock market investors. As demonstrated by the experiments, the template- and the rule-based time series matching and subsequence searching approaches provide different directions to achieve the goal of pattern identification.
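The PIP identification scheme with vertical distance (PIP-VD) can be sketched as follows: start from the two endpoints and repeatedly promote the point that deviates most, vertically, from the chord joining its adjacent PIPs (a standard formulation; implementation details may differ from the paper's):

```python
def identify_pips(series, n_pips):
    # Perceptually important points by vertical distance (PIP-VD).
    pips = [0, len(series) - 1]
    while len(pips) < n_pips:
        best_idx, best_dist = None, -1.0
        for a, b in zip(pips, pips[1:]):
            for i in range(a + 1, b):
                # Vertical distance from point i to the chord a->b.
                interp = series[a] + (series[b] - series[a]) * (i - a) / (b - a)
                d = abs(series[i] - interp)
                if d > best_dist:
                    best_idx, best_dist = i, d
        if best_idx is None:
            break  # no interior points left to promote
        pips.append(best_idx)
        pips.sort()
    return pips

prices = [1.0, 3.0, 2.0, 5.0, 4.0]
print(identify_pips(prices, 3))  # [0, 3, 4] — endpoints plus the peak
```

PIP-ED and PIP-PD differ only in the distance computed inside the inner loop (straight Euclidean distance to the two neighbours, or perpendicular distance to the chord, respectively).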

15.
In supervised classification, data representation is usually considered at the dataset level: one looks for the "best" representation of data assuming it to be the same for all the data in the data space. We propose a different approach where the representations used for classification are tailored to each datum in the data space. One immediate goal is to obtain sparse datum-wise representations: our approach learns to build a representation specific to each datum that contains only a small subset of the features, thus allowing classification to be fast and efficient. This representation is obtained by way of a sequential decision process that sequentially chooses which features to acquire before classifying a particular point; this process is learned through algorithms based on Reinforcement Learning. The proposed method performs well on an ensemble of medium-sized sparse classification problems. It offers an alternative to global sparsity approaches, and is a natural framework for sequential classification problems. The method extends easily to a whole family of sparsity-related problems which would otherwise require developing specific solutions. This is the case in particular for cost-sensitive and limited-budget classification, where feature acquisition is costly and is often performed sequentially. Finally, our approach can handle non-differentiable loss functions or combinatorial optimization encountered in more complex feature selection problems.

16.
The concept of similarity plays a fundamental role in case-based reasoning. However, the meaning of “similarity” can vary in situations and is largely domain dependent. This paper proposes a novel similarity model consisting of linguistic fuzzy rules as the knowledge container. We believe that fuzzy rule representation offers a more flexible means to express the knowledge and criteria for similarity assessment than traditional similarity metrics. The learning of fuzzy similarity rules is performed by exploiting the case base, which is utilized as a valuable resource with hidden knowledge for similarity learning. A sample of similarity is created from a pair of known cases, in which the vicinity of case solutions reveals the similarity of case problems. We do pair-wise comparisons of cases in the case base to derive adequate training examples for learning fuzzy similarity rules. The empirical studies have demonstrated that the proposed approach is capable of discovering fuzzy similarity knowledge from a rather low number of cases, enabling CBR systems to work with a small case library.
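The pair-wise sample-generation step can be sketched directly: for every pair of cases, the closeness of the two solutions becomes the target similarity of the two problems. The linear closeness measure and all names below are assumptions for illustration:

```python
from itertools import combinations

def similarity_training_samples(case_base, solution_scale):
    # Derive training examples for similarity learning from a case
    # base: each case is a (problem, solution) pair, and the
    # vicinity of solutions becomes the target problem similarity.
    samples = []
    for (p1, s1), (p2, s2) in combinations(case_base, 2):
        target = max(0.0, 1.0 - abs(s1 - s2) / solution_scale)
        samples.append((p1, p2, target))
    return samples

cases = [((1.0,), 10.0), ((1.1,), 11.0), ((5.0,), 40.0)]
for p1, p2, t in similarity_training_samples(cases, 30.0):
    print(p1, p2, round(t, 2))
```

An n-case base yields n(n-1)/2 such examples, which is why a fuzzy rule learner can be trained even from a rather small case library, as the abstract notes.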

17.
18.
This paper discusses ground structure approaches for topology optimization of trusses. These topology optimization methods select an optimal subset of bars from the set of all possible bars defined on a discrete grid. The objectives used are based either on minimum compliance or on minimum volume. Advantages and disadvantages are discussed and it is shown that constraints exist where the formulations become equivalent. The incorporation of stability constraints (buckling) into topology design is important. The influence of buckling on the optimal layout is demonstrated by a bridge design example. A second example shows the applicability of truss topology optimization to a real engineering stiffened membrane problem.

19.
20.
In this paper, we propose a software defect prediction model learning problem (SDPMLP) in which a classification model selects appropriate relevant inputs, from a set of all available inputs, and learns the classification function. We show that the SDPMLP is a combinatorial optimization problem with factorial complexity, and propose two hybrid procedures to solve it: exhaustive search combined with a probabilistic neural network (PNN), and simulated annealing (SA) combined with a PNN. For small SDPMLP instances, exhaustive search with the PNN works well and provides all optimal solutions. However, for large instances the exhaustive search approach is not pragmatic, and only SA–PNN allows us to solve the SDPMLP within a practical time limit. We compare the performance of our hybrid approaches with traditional classification algorithms and find that our hybrid approaches perform better.
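The SA part of the search can be sketched over binary feature masks. The schedule parameters are illustrative, and a toy objective stands in for the PNN classification accuracy used in the paper:

```python
import math
import random

def sa_feature_selection(n_features, evaluate, steps=200, temp=1.0,
                         cooling=0.97, seed=0):
    # Simulated annealing over feature subsets represented as bit
    # masks; `evaluate` scores a subset (in the paper, this role is
    # played by PNN accuracy on the defect data).
    rng = random.Random(seed)
    current = [rng.random() < 0.5 for _ in range(n_features)]
    score = evaluate(current)
    best, best_score = current[:], score
    for _ in range(steps):
        candidate = current[:]
        flip = rng.randrange(n_features)      # toggle one feature in/out
        candidate[flip] = not candidate[flip]
        cand_score = evaluate(candidate)
        # Accept improvements always, worsenings with Boltzmann probability.
        if (cand_score >= score
                or rng.random() < math.exp((cand_score - score) / temp)):
            current, score = candidate, cand_score
            if score > best_score:
                best, best_score = current[:], score
        temp *= cooling
    return best, best_score

# Toy objective: features 0 and 2 are relevant, every feature costs 0.5.
def evaluate(mask):
    return (2 if mask[0] else 0) + (2 if mask[2] else 0) - 0.5 * sum(mask)

best, score = sa_feature_selection(4, evaluate)
print(best, score)
```

The factorial/exponential size of the subset space is what rules out exhaustive search on large instances; SA trades the optimality guarantee for a practical time bound, as the abstract reports.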


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号