Similar Documents
 20 similar documents found (search time: 18 ms)
1.
Fuzzy prolog     
Various methods of representing uncertainty are discussed, including some fuzzy methods. Representation and calculation of fuzzy expressions are discussed, and a symbolic representation of fuzzy quantities coupled with axiomatic evaluation is proposed. This is incorporated into the PROLOG language to produce a fuzzy version. Apart from enabling imprecise facts and rules to be expressed, a natural method of controlling the search is introduced, making the search tree admissible. Formal expression of heuristic information in the same language, FUZZY PROLOG, as the main problem language follows naturally and therefore allows the same executor to evaluate in both “problem” space and “heuristic” space. In addition, the use of variable functors in the specification of bidirectional logic is discussed. The paper shows two areas of application of higher-order fuzzy predicates. As an introduction, Warren's examples are outlined and used with variable functors to illustrate their use in describing some relatively conventional applications. Translation of English into Horn clause format is described and is used to illustrate the simplicity of representation using variable functors. Alternative formulations are also explored, typically the use of the “meta-variable” in MICRO-PROLOG and of the “univ” operator. Representation of rule generation and inference is addressed. Examples are given in which the expression of meta-rules in standard PROLOG is compared with the expression of the same rules using “variable” predicate symbols. Some of the meta-rules illustrated are clearly not universally valid, and this leads to the addition of fuzzy tokens.
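As an informal illustration of the fuzzy-Prolog idea, the Python sketch below evaluates fuzzy facts and rules using min for conjunction and max over alternative derivations; the predicates and grades are invented for the example, and the paper's own symbolic representation with axiomatic evaluation is richer than this numeric simplification.

# A minimal sketch of fuzzy fact/rule evaluation in the spirit of a fuzzy
# Prolog: conjunctions are combined with min and alternative derivations
# with max. Illustrative only; predicates and grades are made up.

facts = {                      # ground predicate -> truth degree in [0, 1]
    ("tall", "tom"): 0.7,
    ("strong", "tom"): 0.9,
}

rules = {                      # head -> list of bodies (each body is a conjunction)
    ("athletic", "tom"): [[("tall", "tom"), ("strong", "tom")]],
}

def truth(goal):
    """Truth degree of a ground goal: max over derivations, min over each body."""
    degree = facts.get(goal, 0.0)
    for body in rules.get(goal, []):
        degree = max(degree, min(truth(g) for g in body))
    return degree

print(truth(("athletic", "tom")))   # -> 0.7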

2.
The Concurrent Constraint Programming Language COPS and Its Execution Model
Constraint programming, in particular constraint logic programming and concurrent constraint programming, occupies an increasingly important place in AI programming. The "computation as theorem proving" style of traditional logic programming yields concise and elegant operational semantics, but at the cost of low execution efficiency: as application systems grow in scale, performance degrades severely and may even collapse. To address this scalability problem of traditional logic programming, a declarative language, COPS, based on concurrent constraint programming concepts is designed, aiming to reduce the nondeterminism of declarative programs and to improve search and runtime efficiency from two sides, language design and the execution model. On the language-design side, deterministic language constructs are introduced to avoid the overhead wasted when nondeterministic computation is applied to deterministic goals. On the execution-model side, on top of concurrent interleaved goal execution and a data-driven concurrent synchronization mechanism, the "execute deterministic goals first" strategy and the "least assumption" strategy are implemented as an extension of constraint propagation, pruning the search space as much as possible and reducing search complexity. The knowledge representation, reasoning, and concurrency mechanisms provided by COPS make it an ideal language for constructing agent programs. The paper gives the syntax specification of the COPS language and an operational-semantics description of its execution model.
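The following toy Python sketch illustrates the "execute deterministic goals first" and "least assumption" strategies described above; it is not the COPS implementation, and the goal representation (a name mapped to its remaining alternatives) is an assumption made only for this example.

# Illustrative scheduler sketch: a goal is represented by the list of
# alternatives still applicable to it. Goals with exactly one alternative are
# deterministic and run eagerly (propagation); when a choice is unavoidable,
# the goal with the fewest alternatives is branched on first ("least assumption").

def schedule(goals):
    """goals: dict goal_name -> list of alternative continuations (callables)."""
    trace = []
    while goals:
        # 1. Execute every deterministic goal before making any choice.
        deterministic = [g for g, alts in goals.items() if len(alts) == 1]
        if deterministic:
            g = deterministic[0]
            goals.pop(g)[0]()            # run the single alternative
            trace.append((g, "deterministic"))
            continue
        # 2. Otherwise branch on the goal with the fewest alternatives.
        g = min(goals, key=lambda name: len(goals[name]))
        goals.pop(g)[0]()                # naively take the first choice here
        trace.append((g, "choice"))
    return trace

demo = {
    "constrain_x": [lambda: None],                   # deterministic goal
    "label_y":     [lambda: None, lambda: None],     # two alternatives
}
print(schedule(demo))   # constrain_x runs before label_y is branched on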

3.
Abstract

A computational verb set (verb set for short) consists of a computational verb and a crisp set or a fuzzy set. Verb sets generalize crisp and fuzzy sets from the linguistic structure “BE + statement” to “verb + statement”. Verb sets are strongly connected to computational verbs and their contexts. In this paper, the framework of verb sets and some basic concepts and properties of verb sets are given. An application of verb sets to chaotic cryptanalysis is presented to illustrate how verb sets are constructed in real-life applications.

4.
Abstract

Starting from individual fuzzy preference relations, some (sets of) socially best acceptable options are determined, directly or via a social fuzzy preference relation. An assumed fuzzy majority rule is given by a fuzzy linguistic quantifier, e.g., “most.” Here, as opposed to Part I, where we used a consensory-like pooling of individual opinions, we use an approach to linguistic quantifiers that leads to a competitive-like pooling. Some solution concepts are considered: cores, minimax (opposition) sets, consensus winners, and so forth.
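As a hedged illustration of how a fuzzy linguistic quantifier can act as a fuzzy majority, the sketch below computes the degree to which “most” individuals prefer one option over another; the piecewise-linear definition of “most” is a common textbook choice and not necessarily the one used in the paper.

# Sketch: quantifier-guided aggregation of individual preference degrees.
# The breakpoints 0.3 and 0.8 for "most" are assumed values.

def most(r):
    """Membership of the relative proportion r in the quantifier 'most'."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

def degree_preferred_by_most(preferences):
    """preferences: degrees in [0, 1] to which each individual prefers option a over b."""
    mean_support = sum(preferences) / len(preferences)
    return most(mean_support)

print(round(degree_preferred_by_most([1.0, 0.8, 0.6, 0.2]), 3))  # mean support 0.65 -> most(0.65) = 0.7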

5.
From the point of view of distributed programming, one of the most interesting communication mechanisms is associative tuple matching in a shared dataspace, as exemplified in the Linda coordination language. Linda has been used as a coordination layer to parallelize several sequential programming languages, such as C and Scheme. In this paper we study the combination of Linda with a logic language; the result is the language Extended Shared Prolog (ESP). We show that ESP is based on a new programming model called PoliS, which extends Linda with Multiple Tuple Spaces. A class of applications for ESP is discussed, introducing the concept of “open multiple tuple spaces”. Finally, we show how the distributed implementation of ESP uses the network version of Linda’s tuple space.
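A minimal sketch of Linda-style associative tuple matching is given below, assuming a simple template convention in which None acts as a formal (wildcard) field; it shows the coordination primitive only, not ESP's multiple tuple spaces or its logic layer.

# Sketch of associative matching on a shared tuple space. The tuples and the
# rd/inp names follow common Linda terminology; this is not ESP's implementation.

tuple_space = [("temp", "room1", 21.5), ("temp", "room2", 19.0), ("door", "room1", "open")]

def matches(template, tup):
    """A template matches a tuple of the same length when every concrete field is equal."""
    return len(template) == len(tup) and all(
        t is None or t == v for t, v in zip(template, tup)
    )

def rd(template):
    """Non-destructive read: return the first matching tuple, or None."""
    return next((t for t in tuple_space if matches(template, t)), None)

def inp(template):
    """Destructive read: remove and return the first matching tuple, or None."""
    t = rd(template)
    if t is not None:
        tuple_space.remove(t)
    return t

print(rd(("temp", "room2", None)))   # ('temp', 'room2', 19.0)
print(inp(("door", None, "open")))   # ('door', 'room1', 'open'), and it is removed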

6.
Abstract

Many industrial processes are examples of Zadeh's “principle of incompatibility”, which states that as a system becomes more complex it becomes increasingly difficult to make mathematical statements about it that are both meaningful and precise. Thus, if a model of such a process is required, a fuzzy model may be the “best” that can be achieved. This paper considers the problems of building such models. It introduces a definition of a fuzzy model, discusses the assessment of its quality, and outlines a systematic procedure for deriving a model from input-output data.

7.
Outranking analysis has frequently been used to deal with complex decisions involving qualitative criteria and imprecise data. So far, various versions of ELECTRE have been proposed for ranking alternatives in outranking analysis; among others, ELECTRE III has been widely used. In ELECTRE III, a distillation procedure using a qualification index ranks the alternatives from the fuzzy outranking relation. A weakness of ELECTRE III, however, is the arbitrariness involved in selecting the discrimination threshold function for the distillation procedure.

On the other hand, various variants of PROMETHEE have also been proposed for outranking analysis. PROMETHEE is intended to be simple and easy to understand. A deficiency of PROMETHEE is that it does not take discordance into account when the outranking relation matrix is constructed.

This note revisits an eigenvector-based exploitation procedure that uses the “weighted” preference in- and out-flows of each alternative in outranking analysis. The basic idea is that, in a PROMETHEE context, it is better to outrank a “strong” alternative than a “weak” one and, conversely, it is less serious to be outranked by a “strong” alternative than by a “weak” one. The interpretation is completely different from that of the AHP (Analytic Hierarchy Process), since the components of the fuzzy (valued) outranking relation matrix are neither ratios nor reciprocals as in the AHP.
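To make the flow idea concrete, the sketch below computes PROMETHEE-style leaving and entering flows from a valued outranking matrix, together with a simple eigenvector-style score obtained by power iteration; the matrix values are invented, and the exact weighted-flow exploitation procedure of the note may differ in detail.

import numpy as np

P = np.array([              # P[i, j]: degree to which alternative i outranks alternative j (assumed data)
    [0.0, 0.7, 0.9],
    [0.3, 0.0, 0.6],
    [0.1, 0.4, 0.0],
])

n = P.shape[0]
phi_plus = P.sum(axis=1) / (n - 1)   # leaving (out-) flow: how strongly i outranks the rest
phi_minus = P.sum(axis=0) / (n - 1)  # entering (in-) flow: how strongly i is outranked
phi_net = phi_plus - phi_minus       # PROMETHEE II net flow

score = np.ones(n) / n               # eigenvector-style score by power iteration on P:
for _ in range(100):                 # an alternative gains more credit for outranking "strong" alternatives
    score = P @ score
    score /= score.sum()

print("net flows:  ", np.round(phi_net, 3))
print("eigen score:", np.round(score, 3))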

8.
9.

Artificial intelligence (AI) is the use of computational techniques to simulate human intellectual skills and to tackle complex medical problems involving complicated genetic defects such as cancer. The rapid expansion of AI in recent years has paved the way to better decision-making in settings where the human brain is constrained in handling large amounts of information in a limited time. Cancer is a complex disease involving many genomic variants. AI-centred systems have enormous potential for detecting these genomic alterations and abnormal protein interactions at a very early stage. Contemporary biomedical research is also dedicated to bringing AI expertise into hospitals securely and ethically. AI-centred support for diagnosticians and doctors can be a major step forward in forecasting disease risk and in identification, diagnosis, and therapy. Applications of AI and Machine Learning (ML) to the identification and treatment of cancer have the potential to provide therapeutic support for quicker planning of a novel therapy for each person. Through AI-based methods, scientists can collaborate in real time and share their expertise digitally to potentially treat billions. This review focuses on linking biology with AI and describes how AI-centred support could assist oncologists in delivering accurate therapy. It is essential to identify new biomarkers that drive drug resistance and to discover therapeutic targets in order to improve treatment methods. The advent of “next-generation sequencing” (NGS) programs addresses these challenges and has transformed the prospects of “Precision Oncology” (PO). NGS delivers numerous clinical functions that are vital for risk prediction, early diagnosis of disease, “Sequence” identification and “Medical Imaging” (MI), precise diagnosis, “biomarker” detection, and recognition of therapeutic targets for innovation in medicine. NGS creates a huge repository that requires specific “bioinformatics” resources to examine the information that is pertinent and medically important. Cancer diagnostics and prognostic forecasting are improved by NGS and MI, which provide superior-quality images via AI technology. Irrespective of these technological advances, AI still faces problems and constraints, and the clinical application of NGS continues to be validated. Through steady progress in invention and expertise, the prospects of AI and PO look promising. The purpose of this review was to assess, evaluate, classify, and address present developments in cancer diagnosis using AI methods for breast, lung, liver, and skin cancer and for leukaemia. The review emphasizes how cancer identification and the treatment procedure are aided by AI with supervised, unsupervised, and deep learning (DL) methods. Numerous AI methods were assessed on benchmark datasets with respect to “accuracy”, “sensitivity”, “specificity”, and “false-positive” (FP) metrics. Lastly, challenges and future work are discussed.


10.
A theory of type polymorphism in programming
The aim of this work is largely a practical one. A widely employed style of programming, particularly in structure-processing languages which impose no discipline of types, entails defining procedures which work well on objects of a wide variety. We present a formal type discipline for such polymorphic procedures in the context of a simple programming language, and a compile-time type-checking algorithm W which enforces the discipline. A Semantic Soundness Theorem (based on a formal semantics for the language) states that well-typed programs cannot “go wrong”, and a Syntactic Soundness Theorem states that if W accepts a program then it is well typed. We also discuss extending these results to richer languages; a type-checking algorithm based on W is in fact already implemented and working, for the metalanguage ML in the Edinburgh LCF system.
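The sketch below shows only the unification step that sits at the heart of a type checker in the style of algorithm W, with type terms encoded as strings (type variables) and tuples (type constructors); it is an illustrative fragment, not Milner's full inference algorithm.

# Sketch of first-order unification over type terms. A term is either a type
# variable ("a", "b", ...) or a tuple like ("->", t1, t2); a substitution maps
# variable names to terms.

def apply(subst, t):
    """Apply a substitution to a type term."""
    if isinstance(t, str):
        return apply(subst, subst[t]) if t in subst else t
    return (t[0],) + tuple(apply(subst, arg) for arg in t[1:])

def occurs(var, t):
    """Occurs check: does variable var appear inside term t?"""
    return t == var if isinstance(t, str) else any(occurs(var, a) for a in t[1:])

def unify(t1, t2, subst=None):
    subst = dict(subst or {})
    t1, t2 = apply(subst, t1), apply(subst, t2)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        if occurs(t1, t2):
            raise TypeError("occurs check failed")
        subst[t1] = t2
        return subst
    if isinstance(t2, str):
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):
        raise TypeError(f"cannot unify {t1} with {t2}")
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
    return subst

# unify the type of the identity function, a -> a, with int -> b:
print(unify(("->", "a", "a"), ("->", ("int",), "b")))   # {'a': ('int',), 'b': ('int',)}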

11.
On-line process fault diagnosis using fuzzy neural networks is described in this paper. The fuzzy neural network is obtained by adding a fuzzification layer to a conventional feedforward neural network. The fuzzification layer converts increments in on-line measurements and controller outputs into three fuzzy sets: “increase”, “steady”, and “decrease”. Abnormalities in a process are represented by qualitative increments in on-line measurements and controller outputs, which are classified into various categories by the network. By representing abnormalities in qualitative form, training data can be condensed. The fuzzy approach ensures smooth transitions from one fuzzy set to another and, hence, enhances robustness to measurement noise. The technique has been successfully applied to a CSTR system.
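A hedged sketch of such a fuzzification layer is given below: an increment is mapped to membership degrees in “decrease”, “steady”, and “increase”; the triangular shapes and the ±1.0 breakpoints are assumed values, not the paper's exact choices.

# Sketch of fuzzifying a measurement (or controller-output) increment.

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def open_left(x, a, b):
    """Shoulder function: 1 below a, falling linearly to 0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def fuzzify(increment):
    return {
        "decrease": open_left(increment, -1.0, 0.0),
        "steady":   tri(increment, -1.0, 0.0, 1.0),
        "increase": open_left(-increment, -1.0, 0.0),   # mirror image of "decrease"
    }

print(fuzzify(0.4))    # {'decrease': 0.0, 'steady': 0.6, 'increase': 0.4}
print(fuzzify(-1.5))   # {'decrease': 1.0, 'steady': 0.0, 'increase': 0.0}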

12.
In this paper, we present a thorough integration of qualitative representations and reasoning about positional information for domestic service robotics domains into our high-level robot control. In domestic settings for service robots, such as the RoboCup@Home competitions, complex tasks such as “get the cup from the kitchen and bring it to the living room” or “find me this and that object in the apartment” have to be accomplished. At these competitions the robots may only be instructed by natural language. As humans use qualitative concepts such as “near” or “far”, the robot needs to cope with them, too. For our domestic robot, we use the robot programming and plan language Readylog, our variant of Golog. In previous work we extended the action language Golog, which was developed for the high-level control of agents and robots, with fuzzy set-based qualitative concepts. We now extend our framework to positional fuzzy fluents with an associated positional context called frames. With that and our underlying reasoning mechanism we can transform qualitative positional information from one context to another to account for changes in context such as the point of view or the scale. We demonstrate how qualitative positional fluents based on a fuzzy set semantics can be deployed in domestic domains and showcase how reasoning with these qualitative notions can seamlessly be applied to a fetch-and-carry task in a RoboCup@Home scenario.
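As an illustration of fuzzy qualitative distance and a simple change of frame, the sketch below defines “near” and “far” memberships over a metric distance and rescales the domain when the context changes; the breakpoints and scale factors are assumptions, and this is not the Readylog implementation.

# Sketch: "near"/"far" as fuzzy memberships over distance, with a frame change
# modelled as rescaling the metric domain (e.g. room-sized vs apartment-sized).

def near(d, scale=1.0):
    d = d / scale
    if d <= 1.0:
        return 1.0
    if d >= 3.0:
        return 0.0
    return (3.0 - d) / 2.0

def far(d, scale=1.0):
    return 1.0 - near(d, scale)

d = 2.0  # metres
print(near(d), far(d))                         # room frame:      0.5 0.5
print(near(d, scale=4.0), far(d, scale=4.0))   # apartment frame: 1.0 0.0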

13.
This paper describes an implementation of a system for fuzzy sets manipulation which is based on fstds (Fuzzy-Set-Theoretic Data Structure), an extended version of Childs's stds (Set-Theoretic Data Structure). The fstds language is considered as a fuzzy-set-theoretically oriented language which can deal, for example, with ordinary sets, ordinary relations, fuzzy sets, fuzzy relations, L-fuzzy sets, level-m fuzzy sets and type-n fuzzy sets. The system consists of an interpreter, a collection of fuzzy-set operations and the data structure, fstds, for representing fuzzy sets. fstds is made up of eight areas, namely, the fuzzy-set area, fuzzy-set representation area, grade area, grade-tuple area, element area, element-tuple area, fuzzy-set name area and fuzzy-set operator name area. The fstds system, in which 52 fuzzy-set operations are available, is implemented in fortran, and is currently running on a FACOM 230-45S computer.
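A small sketch of the kind of fuzzy-set operations such a system provides is shown below, with fuzzy sets stored as Python dictionaries mapping elements to grades; it uses only the standard max/min/complement definitions, whereas fstds itself offers 52 operations and richer structures (L-fuzzy, level-m, type-n sets) not covered here.

# Sketch: basic fuzzy-set operations on dictionary-based fuzzy sets (assumed data).

A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}
B = {"x2": 0.5, "x3": 0.4, "x4": 0.9}

def union(f, g):
    return {e: max(f.get(e, 0.0), g.get(e, 0.0)) for e in sorted(set(f) | set(g))}

def intersection(f, g):
    return {e: min(f.get(e, 0.0), g.get(e, 0.0)) for e in sorted(set(f) | set(g))}

def complement(f):
    return {e: 1.0 - m for e, m in f.items()}

print(union(A, B))         # {'x1': 0.2, 'x2': 0.7, 'x3': 1.0, 'x4': 0.9}
print(intersection(A, B))  # {'x1': 0.0, 'x2': 0.5, 'x3': 0.4, 'x4': 0.0}
print(complement(A))       # pointwise 1 - grade of A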

14.
FUZZY SETS AND SYSTEMS
The notion of fuzziness as defined in this paper relates to situations in which the source of imprecision is not a random variable or a stochastic process, but rather a class or classes which do not possess sharply defined boundaries, e.g., the “class of bald men,” or the “class of numbers which are much greater than 10,” or the “class of adaptive systems,” etc.

A basic concept which makes it possible to treat fuzziness in a quantitative manner is that of a fuzzy set, that is, a class in which there may be grades of membership intermediate between full membership and non-membership. Thus, a fuzzy set is characterized by a membership function which assigns to each object its grade of membership (a number lying between 0 and 1) in the fuzzy set.

After a review of some of the relevant properties of fuzzy sets, the notions of a fuzzy system and a fuzzy class of systems are introduced and briefly analyzed. The paper closes with a section dealing with optimization under fuzzy constraints in which an approach to problems of this type is briefly sketched.
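As a worked example of the membership-function idea, the sketch below gives one possible (assumed) membership function for the fuzzy class “numbers which are much greater than 10”, increasing from 0 to 1 as the number grows past 10.

# Sketch: a membership function for "much greater than 10" (the exact shape is an
# illustrative assumption, not a definition from the paper).

def much_greater_than_10(x):
    if x <= 10:
        return 0.0
    return 1.0 - 1.0 / (1.0 + ((x - 10) / 10.0) ** 2)

for x in (10, 15, 20, 50, 200):
    print(x, round(much_greater_than_10(x), 3))
# 10 -> 0.0, 15 -> 0.2, 20 -> 0.5, 50 -> 0.941, 200 -> 0.997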


15.
In this paper, we propose a new method to present a fuzzy trapezoidal solution, namely a “suitable solution”, for a fully fuzzy linear system (FFLS), based on solving two fully interval linear systems (FILSs) that are the 1-cut and 0-cut of the related FFLS. After some manipulations, the two FILSs are transformed into 2n crisp linear equations, 4n crisp linear non-equations, and n crisp nonlinear equations. Then, we propose a nonlinear programming problem (NLP) to compute the simultaneous (synchronic) equations and non-equations. Moreover, we define two other new solutions for an FFLS, namely the “fuzzy surrounding solution” and the “fuzzy peripheral solution”. It is shown that the fuzzy surrounding solution lies in the tolerable fuzzy solution set and the fuzzy peripheral solution lies in the controllable fuzzy solution set. Finally, some numerical examples are given to illustrate the ability of the proposed methods.

16.
The use of fuzzy decision tables as a programming language for representing both the knowledge and the procedures in expert systems is discussed. Examples of their use for the generation of procedural code and for the generation of if-then rules are given.
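A hedged sketch of evaluating a fuzzy decision table is shown below: the conditions in a column are combined with min and the action of the best-matching column is selected; the condition grades and actions are invented, and the generation of procedural code or if-then rules from such tables is not shown.

# Sketch: selecting an action from a fuzzy decision table (assumed grades/actions).

situation = {"temperature_high": 0.8, "pressure_low": 0.3}   # current condition grades

columns = [   # each column lists the conditions it requires and its action
    {"conditions": ["temperature_high", "pressure_low"], "action": "open_valve"},
    {"conditions": ["temperature_high"],                 "action": "increase_cooling"},
]

def select_action(situation, columns):
    scored = [
        (min(situation.get(c, 0.0) for c in col["conditions"]), col["action"])
        for col in columns
    ]
    return max(scored)   # (firing strength, action) of the best-matching column

print(select_action(situation, columns))   # (0.8, 'increase_cooling')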

17.
ABSTRACT

Signal-to-noise ratios (SNRs) are widely used in quality engineering to improve product quality. In real-world applications, the observations are sometimes described in linguistic terms or are only approximately known, rather than being attributable to randomness. To deal with such imprecise data, the notion of fuzziness was introduced. This paper develops a procedure to calculate the SNR with fuzzy observations. The idea is based on the extension principle. A pair of mathematical programs is formulated to calculate the lower and upper bounds of the fuzzy SNR at possibility level α. From different values of α, the membership function of the SNR is approximated. Three different types of SNRs are discussed: “higher the better,” “lower the better,” and “nominal the best.” Because the SNR is expressed by a membership function rather than by a crisp value, more information is provided for decision making.
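For the “higher the better” case the α-cut bounds can be written down directly, because that SNR is monotonically increasing in each positive observation; the sketch below uses this shortcut with assumed triangular fuzzy data, whereas the paper's general procedure solves a pair of mathematical programs.

import math

# Sketch: alpha-cut bounds of the "higher the better" SNR,
# SNR = -10 * log10( (1/n) * sum(1/y_i^2) ), for triangular fuzzy observations (l, m, u).

def alpha_cut(tri, alpha):
    l, m, u = tri
    return (l + alpha * (m - l), u - alpha * (u - m))

def snr_higher_the_better(ys):
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

def fuzzy_snr_bounds(fuzzy_obs, alpha):
    cuts = [alpha_cut(t, alpha) for t in fuzzy_obs]
    lower = snr_higher_the_better([c[0] for c in cuts])   # left ends give the lower bound
    upper = snr_higher_the_better([c[1] for c in cuts])   # right ends give the upper bound
    return lower, upper

obs = [(4.5, 5.0, 5.6), (4.8, 5.2, 5.9), (4.2, 4.7, 5.1)]   # assumed fuzzy observations
print(fuzzy_snr_bounds(obs, alpha=0.0))   # widest interval
print(fuzzy_snr_bounds(obs, alpha=1.0))   # crisp SNR of the modal values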

18.
Abstract

We administered the Verbal IQ (VIQ) part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 artificial intelligence (AI) system. The test questions (e.g. “Why do we shake hands?”) were translated into ConceptNet 4 inputs using a combination of the simple natural language processing tools that come with ConceptNet together with short Python programs that we wrote. The question answering used a version of ConceptNet based on spectral methods. The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5–7 year olds. Large variations among subtests indicate potential areas of improvement. In particular, results were strongest for the Vocabulary and Similarities subtests, intermediate for the Information subtest and lowest for the Comprehension and Word Reasoning subtests. Comprehension is the subtest most strongly associated with common sense. The large variations among subtests and ordinary common sense strongly suggest that the WPPSI-III VIQ results do not show that “ConceptNet has the verbal abilities of a four-year-old”. Rather, children’s IQ tests offer one objective metric for the evaluation and comparison of AI systems. Also, this work continues previous research on psychometric AI.

19.
Mangasarian(5) has proposed an interesting method of pattern separation called the “multisurface method”. In this method, linear programming problems are solved recursively, and the correct classification of any disjoint pattern sets is basically possible. However, solving linear programming problems recursively leads to long computation times and large memory requirements on the computer in use. This paper describes a learning procedure for the multisurface method that replaces linear programming and avoids these drawbacks. The proposed learning algorithm requires only repetitive simple calculations. Experimental results show that the computation times required by the proposed learning procedure are shorter than those required by linear programming.

20.
Traditional Importance–Performance Analysis distributes a given set of attributes among four sets, “Keep up the good work”, “Concentrate here”, “Low priority” and “Possible overkill”, corresponding to the four possibilities, high–high, low–high, low–low and high–low, of the performance–importance pair. This can lead to ambiguities, contradictions or non-intuitive results, especially because most real-world classes are fuzzy rather than crisp. Fuzzy clustering is an important tool for identifying structure in data, so we apply the Fuzzy C-Means algorithm to obtain a fuzzy partition of a set of attributes. A membership degree of every attribute in each of the sets mentioned above is determined, in contrast to the forced categorization of traditional Importance–Performance Analysis. The main benefit is that the resulting managerial decisions become more refined due to the fuzzy approach. In addition, the development priorities, and the directions in which the effort of an economic or non-economic entity would be useless or even dangerous, are identified on a rigorous basis, taking into account only the internal structure of the input data.
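A compact sketch of the Fuzzy C-Means step is given below, with importance–performance pairs as data points and c = 4 clusters playing the role of the four quadrants; the data, the random initialization, and the fuzzifier m = 2 are illustrative assumptions.

import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-Means: returns cluster centers and the fuzzy partition U (c x n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                               # random initial fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=0)                           # normalise memberships per attribute
    return centers, U

# importance (x) and performance (y) of six attributes, both scaled to [0, 1] (assumed data)
X = np.array([[0.9, 0.8], [0.85, 0.2], [0.2, 0.15], [0.15, 0.9], [0.8, 0.75], [0.3, 0.25]])
centers, U = fuzzy_c_means(X)
print(np.round(centers, 2))   # cluster prototypes
print(np.round(U.T, 2))       # membership degree of each attribute in each cluster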
