Similar Documents
20 similar documents found (search time: 27 ms)
1.
Transformational approaches to generating design and implementation models from requirements can bring effectiveness and quality to software development. In this paper we present a framework and associated techniques to generate the process model of a service composition from a set of temporal business rules. Dedicated techniques, including path-finding, branching-structure identification and parallel-structure identification, are used to semi-automatically synthesize the process model from the semantically equivalent finite state automata of the rules. The resulting process models naturally satisfy the behavioral constraints prescribed by the rules. With the domain knowledge encoded in the temporal business rules, an executable service composition program, e.g., a BPEL program, can be further generated from the process models. A running example from the e-business domain illustrates our approach throughout the paper.
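The path-finding step can be pictured as a graph search over the rules' automaton. Below is a minimal sketch (not the paper's algorithm) that recovers one activity sequence from a hypothetical FSA for an ordering rule; all state and activity names are invented for illustration:

```python
from collections import deque

# Hypothetical FSA for a purchase-ordering rule: state -> {activity: next state}.
fsa = {
    "start":   {"receiveOrder": "ordered"},
    "ordered": {"checkStock": "checked"},
    "checked": {"ship": "shipped", "reject": "rejected"},
}
accepting = {"shipped"}

def find_path(fsa, start, accepting):
    """BFS over the FSA to recover one activity sequence reaching an accepting state."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state in accepting:
            return path
        for activity, nxt in fsa.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [activity]))
    return None

print(find_path(fsa, "start", accepting))
```

Such a path would then be one candidate activity ordering for the synthesized process model.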

2.
In this paper, a hybrid approach termed Biased Dynamic Self-Generated Fuzzy Q-Learning (BDSGFQL) for automatically generating Fuzzy Neural Networks (FNNs) is proposed. In the proposed method, an FNN is generated via Q-learning and embedded human expert knowledge. The human expert knowledge is embedded as a bias of the system according to the confidence level of the knowledge. The novel BDSGFQL methodology can also automatically create, delete and adjust fuzzy rules according to the evaluations of the entire system as well as of the individual fuzzy rules. The salient characteristics of the BDSGFQL approach are: 1) Capable of embedding expert knowledge according to its confidence level; 2) Capable of structure self-identification and automatic parameter estimation and modification; 3) FNNs can be quickly generated without supervised learning; 4) Fuzzy rules can be created, adjusted and deleted dynamically; 5) Membership functions of an FNN can be dynamically adjusted according to the evaluation of reinforcement learning. Simulation studies of a wall-following task by a mobile robot demonstrate the superiority of the proposed method.
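The core Q-learning update with an expert-knowledge bias can be sketched roughly as follows; the states, actions, bias values and confidence level are hypothetical, and this omits the fuzzy rule machinery of the actual BDSGFQL method:

```python
ACTIONS = ["left", "forward", "right"]
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor

# Hypothetical expert bias: preferred actions per state, scaled by a confidence level.
expert_bias = {"near_wall": {"forward": 1.0}}
confidence = 0.5

def init_q(states):
    """Initialize Q-values with the expert bias weighted by its confidence."""
    return {s: {a: confidence * expert_bias.get(s, {}).get(a, 0.0)
                for a in ACTIONS}
            for s in states}

def update(q, s, a, reward, s_next):
    """Standard one-step Q-learning update."""
    best_next = max(q[s_next].values())
    q[s][a] += ALPHA * (reward + GAMMA * best_next - q[s][a])

q = init_q(["near_wall", "open"])
update(q, "near_wall", "forward", 1.0, "open")
print(q["near_wall"]["forward"])  # biased start, nudged toward the observed reward
```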

3.
This paper presents some new algorithms to efficiently mine max frequent generalized itemsets (g-itemsets) and essential generalized association rules (g-rules). These are compact and general representations for all frequent patterns and all strong association rules in the generalized environment. Our results fill an important gap among algorithms for frequent patterns and association rules by combining two concepts. First, generalized itemsets employ a taxonomy of items, rather than a flat list of items. This produces more natural frequent itemsets and associations such as (meat, milk) instead of (beef, milk), (chicken, milk), etc. Second, compact representations of frequent itemsets and strong rules, whose result size is exponentially smaller, can solve a standard dilemma in mining patterns: with small threshold values for support and confidence, the user is overwhelmed by the extraordinary number of identified patterns and associations; but with large threshold values, some interesting patterns and associations fail to be identified. Our algorithms can also expand those max frequent g-itemsets and essential g-rules into the much larger set of ordinary frequent g-itemsets and strong g-rules. While that expansion is not recommended in most practical cases, we do so in order to present a comparison with existing algorithms that only handle ordinary frequent g-itemsets. In this case, the new algorithm is shown to be thousands, and in some cases millions, of times faster than previous algorithms. Further, the new algorithm succeeds in analyzing deeper taxonomies, with depths of seven or more; experimental results for previous algorithms were limited to taxonomies of depth at most three or four. For each of the two problems, a straightforward lattice-based approach is briefly discussed and then a classification-based algorithm is developed. In particular, the two classification-based algorithms are MFGI_class for mining max frequent g-itemsets and EGR_class for mining essential g-rules. The classification-based algorithms feature conceptual classification trees and dynamic generation and pruning algorithms.
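Counting support for a generalized itemset such as (meat, milk) amounts to extending each transaction with the ancestors of its items before matching. A toy sketch (taxonomy and transactions invented for illustration):

```python
taxonomy = {"beef": "meat", "chicken": "meat"}  # child -> parent in the item taxonomy

transactions = [
    {"beef", "milk"},
    {"chicken", "milk"},
    {"beef", "bread"},
]

def extend(txn):
    """Add every ancestor of each item, so generalized itemsets can be matched."""
    out = set(txn)
    for item in txn:
        while item in taxonomy:
            item = taxonomy[item]
            out.add(item)
    return out

def support(itemset, txns):
    """Fraction of (ancestor-extended) transactions containing the itemset."""
    return sum(itemset <= extend(t) for t in txns) / len(txns)

print(support({"meat", "milk"}, transactions))  # covers both beef and chicken sales
```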

4.
The information content of rules and rule sets and its application (Total citations: 1; self-citations: 1; others: 0)
The information content of rules is categorized into inner mutual information content and outer impartation information content. The conventional objective interestingness measures based on information theory are in fact all inner mutual information, representing the confidence of rules and the mutual information between antecedent and consequent. Moreover, almost all of these measures lose sight of the outer impartation information, which is conveyed to the user and helps the user make decisions. We put forward the viewpoint that the outer impartation information content of rules and rule sets can be represented by relations from the input universe to the output universe. With binary relations, the interaction of rules in a rule set can be easily represented by the union and intersection operators. Based on the entropy of relations, the outer impartation information content of rules and rule sets is well measured. Then, the conditional information content of rules and rule sets, the independence of rules and rule sets, and the inconsistent knowledge of rule sets are defined and measured. The properties of these new measures are discussed and some interesting results are proven, such as that the information content of a rule set may be bigger than the sum of the information content of the rules in the rule set, and that the conditional information content of rules may be negative. Finally, applications of these new measures are discussed: a new method for appraising rule mining algorithms, and two rule pruning algorithms, λ-choice and RPClC, are put forward. These new methods and algorithms are better able to satisfy the need for efficient decision information.
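The "inner mutual information" style of measure mentioned above is standard information theory. For a rule A → B it can be illustrated with a toy contingency count (all numbers hypothetical; this is not the paper's outer-impartation measure):

```python
from math import log2

# Toy contingency counts for rule A -> B over n transactions (hypothetical data).
n = 100     # total transactions
n_a = 40    # transactions containing A
n_b = 50    # transactions containing B
n_ab = 30   # transactions containing both A and B

confidence = n_ab / n_a                 # P(B | A), the rule's confidence
p_a, p_b, p_ab = n_a / n, n_b / n, n_ab / n
pmi = log2(p_ab / (p_a * p_b))          # pointwise mutual information of A and B

print(confidence, pmi)                  # pmi > 0 means A and B co-occur more than chance
```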

5.
Mapping PUNITY to UniNet (Total citations: 4; self-citations: 0; others: 4)
To address the interleaving assumption in PUNITY (Petri net and UNITY) and the single-resource limitation in Petri nets, this paper proposes a set of mapping rules from PUNITY to UniNet. Based on these rules, problems in one field can be transformed into problems in the other, and the powerful tools of both Petri nets and UNITY can be used. The paper gives a sketch of the mapping rules and applies them to an example. The mapping rules can also help a computer translate PUNITY to UniNet easily.

6.
Research on the Application of Web Usage Mining (Total citations: 6; self-citations: 0; others: 6)
Effective and efficient knowledge patterns can be gained by searching, integrating, mining and analyzing the Web. These useful knowledge patterns can help us build efficient Web sites so that the WWW serves people well. In this paper we point out that the Web usage mining process is influenced by a Web site's structure and content, and we introduce the application of Web usage mining in e-commerce. Finally, an example of Web usage mining is given.

7.
In this paper, the authors propose a method that incorporates mechanisms for handling ambiguity in speech and the human ability to create associations, and for formulating conversations based on rule-based knowledge and common knowledge, going beyond the level that can be achieved using only conventional natural language processing and vast repositories of sample patterns. The authors then propose a method for generating computer conversation sentences, using newspaper headlines as an example of how common knowledge and associative ability are applied.

8.
Rule selection has long been a challenging problem that must be solved when developing a rule-based knowledge learning system. Many methods have been proposed to evaluate the eligibility of a single rule based on some criteria. However, in a knowledge learning system there is usually a set of rules. These rules are not independent but interactive; they tend to affect each other and form a rule system. In such cases it is no longer reasonable to isolate each rule from the others for evaluation: a rule that is best according to a certain criterion is not always best for the whole system. Furthermore, the real-world data from which people want to create their learning systems are often ill-defined and inconsistent, so the completeness and consistency criteria for rule selection are no longer essential. In this paper, some ideas about how to solve the rule-selection problem in a systematic way are proposed. These ideas have been applied in the design of a Chinese business card layout analysis system, yielding good results on a training set of 425 images. The implementation of the system and the results are presented in this paper.

9.
I. Combination of Data Mining Techniques and CRM
A. Data mining techniques used in CRM
Data mining discovers patterns and relationships hidden in data, and is actually part of a larger process called "knowledge discovery" which describes the steps that must be taken to ensure meaningful results. Data mining software does not, however, eliminate the need to know the business, understand the data, or be aware of general statistical methods. Data mining does not find patterns an…

10.
One of the important topics in knowledge base revision is finding an efficient implementation algorithm. Algebraic approaches have good characteristics and implementation methods, and may be a way to solve the problem. This paper presents an algebraic approach to revising propositional rule-based knowledge bases. A way is first introduced to transform a propositional rule-based knowledge base into a Petri net: the knowledge base is represented by a Petri net, and facts are represented by the initial marking. Thus, the consistency check of a knowledge base is equivalent to the reachability problem of Petri nets. The reachability of Petri nets can be decided by whether the state equation has a solution; hence the consistency check can also be implemented algebraically. Furthermore, algorithms are introduced to revise a propositional rule-based knowledge base, as well as extended logic programming. Compared with related works, the algorithms presented in this paper are efficient, with polynomial time complexities.
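The state-equation check behind this consistency test can be sketched as follows: a marking M is reachable from M0 only if M = M0 + C·x has a non-negative integer solution x (a necessary condition). The toy net below is invented for illustration:

```python
# Incidence matrix C (places x transitions) for a two-place net:
# t1 moves a token p1 -> p2, t2 moves it back.
C = [[-1, 1],
     [1, -1]]
M0 = [1, 0]        # initial marking: one token in p1
M_target = [0, 1]  # marking to test: token in p2

def satisfies_state_equation(x):
    """Check M_target = M0 + C @ x for a candidate firing-count vector x."""
    M = [M0[p] + sum(C[p][t] * x[t] for t in range(len(x)))
         for p in range(len(M0))]
    return M == M_target

print(satisfies_state_equation([1, 0]))  # firing t1 once reaches the target marking
```

In the paper's setting, solvability of this equation stands in for reachability, which is what makes the consistency check algebraic.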

11.
Based on an analysis of current intrusion detection technologies, this paper introduces data mining technology into the Intrusion Detection System (IDS), and proposes a system architecture as well as an automatic pattern-update strategy. By adopting data mining, frequent patterns can be mined from large numbers of network events, so effective detection rules can be discovered and used to guide the IDS's analysis of network intrusions. Meanwhile, the automatic pattern-update strategy, which adopts real-time network analysis, greatly improves the efficiency and accuracy of the mining. Together, they are effective in solving the problems of high missed-alert and false-alert rates in traditional intrusion detection systems.

12.
This paper describes the formal verification of the Merchant Registration phase of the Secure Electronic Transactions (SET) protocol, a realistic electronic transaction security protocol which is used to protect the secrecy of online purchases. A number of concepts, notations, functions, predicates, assumptions and rules are introduced. We describe the knowledge of all legal participants, and of a malicious spy, to assess the security of the sub-protocol. By avoiding search in a large state space, the method converges very quickly. We implemented our method in the Isabelle/Isar automated reasoning environment, so the whole verification process can be executed mechanically and efficiently.

13.
Knowledge is essential for the competitiveness of individuals as well as organizations. Thus the latest methodologies and technologies are utilized to support knowledge acquisition, warehousing, distribution, and transfer, and the means and methods of Web 2.0 are useful in supporting this procedure. In particular, highly complex and very dynamic knowledge domains have to be accessible and applicable in the framework of learning network communities, including the stakeholders of training and education. Mechatronics, for example, is such an interdisciplinary, dynamic field of research and application. Based on intelligence, software, and hardware, it requires special approaches for developing a courseware-based learning and knowledge transfer environment. After defining the specifics of mechatronics education and postgraduate training in the context of e-education, concepts for the development and utilization of mechatronic courseware can be deduced from e-learning 2.0 and mobile learning facilities, possibilities, and abilities. Mechatronic courseware will be developed using authoring software and embedding the material into learning management systems, with respect to general methods and rules of modern system and software development. As an example, the courseware is used for vocational training and further education, especially in cooperation networks of educational institutions and SMEs.

14.
A Reduction Algorithm Meeting Users' Requirements (Total citations: 9; self-citations: 0; others: 9)
Generally a database encompasses various kinds of knowledge and is shared by many users. Different users may prefer different kinds of knowledge, so it is important for a data mining algorithm to output specific knowledge according to users' current requirements (preferences). We call this kind of data mining requirement-oriented knowledge discovery (ROKD). When rough set theory is used in data mining, the ROKD problem is how to find a reduct and corresponding rules interesting to the user. Since reducts and rules are generated in the same way, this paper concerns only how to find a particular reduct. The user's requirement is described by an order of attributes, called an attribute order, which expresses the importance of attributes to the user: more important attributes are located before less important ones. The problem then becomes how to find a reduct that includes the attributes appearing early in the attribute order. An approach to this problem is proposed, and its completeness for reducts is proved. After that, three kinds of attribute order are developed to describe various user requirements.
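One simple way (not the paper's algorithm) to bias a reduct toward the user's attribute order is a greedy pass that tries to drop the least important attributes first, keeping an attribute only if removing it breaks the table's consistency. The decision table below is invented:

```python
# Toy decision table: each row is (attribute values..., decision).
rows = [
    ((0, 0, 1), "no"),
    ((0, 1, 1), "yes"),
    ((1, 0, 0), "no"),
    ((1, 1, 0), "yes"),
]

def consistent(attrs):
    """True if objects equal on `attrs` always share the same decision."""
    seen = {}
    for values, dec in rows:
        key = tuple(values[a] for a in attrs)
        if seen.setdefault(key, dec) != dec:
            return False
    return True

def reduct(order):
    """Greedily drop attributes, trying the least important (last in order) first."""
    kept = list(order)
    for a in reversed(order):
        trial = [x for x in kept if x != a]
        if consistent(trial):
            kept = trial
    return kept

print(reduct([1, 0, 2]))  # attributes early in the order survive when possible
```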

15.
Data analysis and automatic processing is often interpreted as knowledge acquisition. In many cases it is necessary to somehow classify data or find regularities in them. Results obtained in the search for regularities in intelligent data-analyzing applications are mostly represented with the help of IF-THEN rules, with which the following tasks are solved: prediction, classification, pattern recognition and others. Using different approaches (clustering algorithms, neural network methods, fuzzy rule processing methods) we can extract rules that characterize the data in an understandable language. This allows interpreting the data, finding relationships in the data and extracting new rules that characterize them. Knowledge acquisition in this paper is defined as the process of extracting knowledge from numerical data in the form of rules. Extraction of rules in this context is based on the clustering methods K-means and fuzzy C-means. With the assistance of the K-means clustering algorithm, rules are derived from trained neural networks; fuzzy C-means is used in a fuzzy rule-based design method. The rule extraction methodology is demonstrated on samples from Fisher's Iris flower data set, and the effectiveness of the extracted rules is evaluated. The clustering and rule extraction methodology can be widely used in evaluating and analyzing various economic and financial processes.
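A bare-bones illustration of the K-means side of such a pipeline: cluster a few 2-D samples (values loosely inspired by Iris petal measurements, but invented) and read crude IF-THEN rules off the cluster centers:

```python
# Toy 2-D samples (length, width); hypothetical values, not the real Iris data.
data = [(1.4, 0.2), (1.3, 0.2), (1.5, 0.3),   # small petals
        (4.7, 1.4), (4.9, 1.5), (5.1, 1.8)]   # large petals

def kmeans(points, centers, iters=10):
    """Plain Lloyd's algorithm: assign points to nearest center, recompute centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(v) / len(c) for v in zip(*c)) for c in clusters]
    return centers

centers = kmeans(data, [data[0], data[-1]])

# Turn each center into a crude IF-THEN rule around the cluster's coordinates.
for cx, cy in centers:
    print(f"IF length ~ {cx:.2f} AND width ~ {cy:.2f} THEN this cluster's class")
```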

16.
Recent pervasive systems are designed to be context-aware so that they are able to adapt to continual changes in their environments. Rule-based adaptation, which is commonly adopted by these applications, introduces new challenges in software design and verification. Recent research results have identified faulty or unwanted adaptations caused by factors such as asynchronous context updating and missing or faulty context readings. In addition, adaptation rules based on simple event models and propositional logic are not expressive enough to address these factors and to satisfy users' expectations in the design. We tackle these challenges at the design stage by introducing sequential event patterns in adaptation rules to eliminate faulty and unwanted adaptations, with features provided by the event pattern query language. We illustrate our approach using recently published examples of adaptive applications, and show that it is promising for designing more reliable context-aware adaptive applications. We also introduce adaptive rule specification patterns to guide the design of adaptation rules.
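A sequential event pattern of the kind described (events must occur in order, with gaps allowed) can be matched with a few lines; the event names are hypothetical:

```python
def matches(pattern, trace):
    """True if the events of `pattern` occur in `trace` in order (gaps allowed)."""
    it = iter(trace)
    # `ev in it` advances the iterator, so each event must appear after the previous one.
    return all(ev in it for ev in pattern)

trace = ["enterRoom", "lightOn", "sitDown", "lightOff"]
print(matches(["enterRoom", "lightOff"], trace))  # in order: matches
print(matches(["lightOff", "enterRoom"], trace))  # out of order: does not match
```

An adaptation rule could then fire only when its required event sequence has actually been observed, rather than on a single (possibly stale) context reading.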

17.
A form evaluation system for brush-written Chinese characters is developed. Calligraphic knowledge used in the system is represented in the form of rules with the help of a data structure proposed in this paper. Reflecting the specific hierarchical relations among radicals and strokes of Chinese characters, the proposed data structure is based upon a character model that can generate brush-written Chinese characters on a computer. Evaluation experiments using the developed system show that representation of calligraphic knowledge and form evaluation of Chinese characters can be smoothly realized if the data structure is utilized.

18.
In formal concept analysis, the concept lattice, as the fundamental data structure, can be constructed from a formal context. However, it is required that the relation between object and feature in the formal context be certain. For uncertain relations, this paper uses the ideas of upper and lower approximation from rough set theory, and gives corresponding definitions of a missing-value context and a rough formal concept. Based on these, the paper employs the rough concept lattice, formed by rough formal concepts and the partial order relation on them, as the basic data structure for concept analysis and knowledge acquisition. A theorem is then presented describing the method for extracting rules from the constructed rough formal concept lattice, and the semantic interpretation of the discovered rules is explained.
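In the certain (non-rough) case, the extent/intent derivation operators that underlie concept-lattice construction look like this; the context below is a toy example:

```python
# Toy formal context: object -> set of attributes it has.
context = {
    "o1": {"a", "b"},
    "o2": {"b", "c"},
    "o3": {"a", "b", "c"},
}

def extent(attrs):
    """Objects possessing every attribute in `attrs`."""
    return {o for o, atts in context.items() if attrs <= atts}

def intent(objs):
    """Attributes shared by every object in `objs` (all attributes if `objs` is empty)."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*context.values())

# A formal concept is a pair (O, A) with extent(A) == O and intent(O) == A.
objs = extent({"b"})
print(sorted(objs), sorted(intent(objs)))  # a concept of this context
```

The rough variant in the paper replaces this single derivation with upper- and lower-approximation versions for uncertain object/feature relations.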

19.
Knowledge Representation in KDD Based on Linguistic Atoms (Total citations: 11; self-citations: 0; others: 11)

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号