Similar Documents
20 similar documents found (search time: 31 ms)
1.
Several extensions and generalizations of fuzzy sets have been introduced in the literature, for example, Atanassov's intuitionistic fuzzy sets, type-2 fuzzy sets, and fuzzy multisets. In this paper, we propose hesitant fuzzy sets. Although, from a formal point of view, they can be seen as fuzzy multisets, we show that their interpretation differs from the two existing approaches for fuzzy multisets. Because of this, together with their definition, we also introduce some basic operations. In addition, we study their relationship with intuitionistic fuzzy sets. We prove that the envelope of a hesitant fuzzy set is an intuitionistic fuzzy set, and that the operations we propose are consistent with those of intuitionistic fuzzy sets when applied to the envelope of a hesitant fuzzy set. © 2010 Wiley Periodicals, Inc.
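The envelope construction mentioned in the abstract can be stated concretely. As a minimal sketch (assuming the now-standard definition, in which a hesitant fuzzy element is a finite nonempty set of membership degrees, and its envelope is the intuitionistic fuzzy value whose membership is the minimum and whose nonmembership is one minus the maximum):

```python
def hfe_envelope(h):
    """Envelope of a hesitant fuzzy element h (a finite nonempty set of
    membership degrees in [0, 1]) as an intuitionistic fuzzy value
    (mu, nu) with mu = min(h) and nu = 1 - max(h)."""
    return (min(h), 1.0 - max(h))

# Three hesitant membership degrees for one alternative.
print(hfe_envelope({0.25, 0.5, 0.75}))  # (0.25, 0.25)
```

Note that mu + nu <= 1 always holds for the resulting pair, so it is a valid intuitionistic fuzzy value, which is the consistency property the abstract refers to.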

2.
Shadowed sets: representing and processing fuzzy sets
This study introduces a new concept of shadowed sets, which can be regarded as an operational framework that simplifies processing carried out with the aid of fuzzy sets and enhances the interpretation of the results obtained. Some conceptual links between this idea and others known in the literature are established. In particular, it is demonstrated how fuzzy sets can induce shadowed sets. Shadowed sets then reveal interesting conceptual and algorithmic relationships between rough sets and fuzzy sets. Detailed computational aspects of shadowed sets are discussed, and several illustrative examples are provided.
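The induction of a shadowed set from a fuzzy set can be illustrated with a short sketch of Pedrycz's three-valued mapping (the threshold alpha below is an illustrative free parameter, not a value taken from this abstract): membership degrees close to 1 are elevated to full membership, degrees close to 0 are reduced to full exclusion, and the intermediate zone becomes the "shadow":

```python
def shadowed(mu, alpha=0.3):
    """Three-valued shadowed-set mapping of a fuzzy membership degree mu:
    1 if mu >= 1 - alpha, 0 if mu <= alpha, and an uncertain 'shadow'
    in between. alpha in (0, 0.5) is an illustrative choice here."""
    if mu >= 1 - alpha:
        return 1
    if mu <= alpha:
        return 0
    return "shadow"

print([shadowed(m) for m in (0.05, 0.5, 0.95)])  # [0, 'shadow', 1]
```

In the full framework, alpha is chosen so that the area of the shadow balances the membership mass lost by elevation and reduction; here it is simply fixed for illustration.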

3.
Rough sets
We investigate in this paper approximate operations on sets, approximate equality of sets, and approximate inclusion of sets. The presented approach may be considered an alternative to fuzzy set theory and tolerance theory. Some applications are outlined.
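The approximate operations the abstract refers to are Pawlak's lower and upper approximations, which can be sketched directly (encoding the partition by a class-labelling function is our implementation choice, not the paper's notation):

```python
def rough_approximations(universe, equiv, X):
    """Pawlak lower and upper approximations of X under the equivalence
    relation induced by the class-labelling function equiv."""
    classes = {}
    for u in universe:
        classes.setdefault(equiv(u), set()).add(u)
    lower = {u for u in universe if classes[equiv(u)] <= X}  # certainly in X
    upper = {u for u in universe if classes[equiv(u)] & X}   # possibly in X
    return lower, upper

U = {1, 2, 3, 4, 5, 6}
low, up = rough_approximations(U, lambda u: u % 3, {1, 4, 5})
print(sorted(low), sorted(up))  # [1, 4] [1, 2, 4, 5]
```

Approximate equality and inclusion then reduce to comparing these two sets: two sets are roughly equal when they share both approximations.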

4.
In this paper we present a mathematical tool that helps us to comprehend certain natural phenomena. The main idea of this tool is a possible generalization of approximations of sets relying on the partial covering of the universe of discourse. Our starting point will be an arbitrary nonempty family B of subsets of an arbitrary nonempty universe U. On the analogy of the definition of Pawlak's type σ-algebra σ(U/ε) over a finite universe, let DB denote the family of subsets of U which contains the empty set and every set in B and is closed under unions. However, in general DB neither covers the universe nor is closed under intersections. Our notions of lower and upper approximations are straightforward point-free generalizations of Pawlak's approximations, imitating their ε-equivalence-class-based formulations; both of them belong to DB. Our discussion will be within an overall approximation framework along which the common features of rough set theory and our approach can be treated uniformly. To demonstrate the relationship of our approach with natural computing, we show an example relying on the so-called MÉTA program, a recognition and evaluation program of the actual state of the natural and semi-natural vegetation heritage of Hungary.

5.
Population models are widely applied in biomedical data analysis since they characterize both the average and individual responses of a population of subjects. In the absence of a reliable mechanistic model, one can resort to the Bayesian nonparametric approach, which models the individual curves as Gaussian processes. This paper develops an efficient computational scheme for estimating the average and individual curves from large data sets collected in standardized experiments, i.e. with a fixed sampling schedule. It is shown that the overall scheme exhibits a "client-server" architecture. The server is in charge of handling and processing the collective database of past experiments. The clients ask the server for the information needed to reconstruct the individual curve in a single new experiment. This architecture allows the clients to take advantage of the overall data set without violating possible privacy and confidentiality constraints and with negligible computational effort.

6.
Soft sets combined with fuzzy sets and rough sets: a tentative approach
Theories of fuzzy sets and rough sets are powerful mathematical tools for modelling various types of uncertainty. Dubois and Prade investigated the problem of combining fuzzy sets with rough sets. Soft set theory was proposed by Molodtsov as a general framework for reasoning about vague concepts. The present paper is devoted to a possible fusion of these distinct but closely related soft computing approaches. Based on a Pawlak approximation space, the approximation of a soft set is proposed to obtain a hybrid model called rough soft sets. Alternatively, a soft set instead of an equivalence relation can be used to granulate the universe. This leads to a deviation of Pawlak approximation space called a soft approximation space, in which soft rough approximations and soft rough sets can be introduced accordingly. Furthermore, we also consider approximation of a fuzzy set in a soft approximation space, and initiate a concept called soft–rough fuzzy sets, which extends Dubois and Prade's rough fuzzy sets. Further research will be needed to establish whether the notions put forth in this paper may lead to a fruitful theory.
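As a hedged sketch of the soft rough approximations described above (following the set-based formulation we infer from the abstract, not a verbatim definition from the paper): a point belongs to the lower approximation if some parameter's value set contains it and lies inside X, and to the upper approximation if some parameter's value set contains it and meets X:

```python
def soft_rough(F, X):
    """Soft rough lower/upper approximations of X in the soft
    approximation space induced by the soft set F, given as a dict
    mapping each parameter to the subset of the universe it describes."""
    lower = {u for fa in F.values() if fa <= X for u in fa}
    upper = {u for fa in F.values() if fa & X for u in fa}
    return lower, upper

F = {'a': {1, 2}, 'b': {2, 3}, 'c': {4}}
low, up = soft_rough(F, {2, 4})
print(sorted(low), sorted(up))  # [4] [1, 2, 3, 4]
```

Unlike the Pawlak case, points lying outside every F(a) fall in neither approximation, reflecting the fact that a soft set need not cover the universe.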

7.
Rough Sets, Their Extensions and Applications
Rough set theory provides a useful mathematical foundation for developing automated computational systems that can help understand and make use of imperfect knowledge. Despite its recency, the theory and its extensions have been widely applied to many problems, including decision analysis, data mining, intelligent control, and pattern recognition. This paper presents an outline of the basic concepts of rough sets and their major extensions, covering variable precision, tolerance, and fuzzy rough sets. It also shows the diversity of successful applications these theories have enabled, ranging from finance and business, through biology and medicine, to physics, art, and meteorology.

8.
The purpose of this paper is to introduce a theory of fuzzily defined complement operations on nonempty sets equipped with fuzzily defined ordering relations. Many-valued equivalence relation-based fuzzy ordering relations (also called vague ordering relations) provide a powerful and comprehensive mathematical modelling of fuzzily defined partial ordering relations. For this reason, starting with a nonempty set X equipped with a many-valued equivalence relation and a vague ordering relation, a fuzzily defined complement operation (called a vague complement operation) on X is formulated by means of the underlying many-valued equivalence relation and vague ordering relation. Because the practical implementations of vague complement operations basically depend on their representation properties, a considerable part of this paper is devoted to the representations of vague complement operations. In addition, the paper provides various nontrivial examples of vague complements and introduces a many-valued logical interpretation of quantum logic as a real application of vague complements.

9.
This paper studies graphoid properties for epistemic irrelevance in sets of desirable gambles. To that end, the basic operations of conditioning and marginalization are expressed in terms of variables. It is then shown that epistemic irrelevance is an asymmetric graphoid. In probability theory, the intersection property holds only when the global probability distribution is positive everywhere; here it always holds, owing to the way zero probabilities are handled in sets of gambles. An asymmetric D-separation principle is also presented, by which this type of independence relationship can be represented in directed acyclic graphs.

10.
Decision making and uncertainty management in a 3D reconstruction system
This paper presents a control structure for a general-purpose image understanding system. It addresses the high level of uncertainty in local hypotheses and the computational complexity of image interpretation. The control of vision algorithms is done by an independent subsystem that uses Bayesian networks and utility theory to compute the marginal value of information and selects the algorithm with the highest value of information. It is shown that the knowledge base can be acquired using learning techniques and that the value-driven approach to the selection of vision algorithms leads to performance gains.

11.
Talking entails costs of production and time, although some of the information sent to hearers will generally be of value to them. We believe that the question of why we talk at all is key to the origin of language, and the answer will shed some light on the mystery of human identity. This article focuses on altruism in communication and aims to demonstrate evolutionary scenarios based on multilevel selection. We constructed a computational model to examine these scenarios. The evolutionary experiments showed that in an unstructured population, a linguistic system hardly emerged, owing to the dynamics between interpretable utterances that impose a penalty and correct interpretations that yield a reward, similar to prey-predator dynamics. In a multigroup population, however, a linguistic system emerged owing to multilevel selection among the groups. In addition, the probability of success in conversation was higher in groups under more severe environmental conditions. This result supports Bickerton's hypothesis based on the ecological gap between human ancestors and other ape species.

12.
After more than 60 years, Shannon's research continues to raise fundamental questions, such as the one formulated by R. Luce, which is still unanswered: "Why is information theory not very applicable to psychological problems, despite apparent similarities of concepts?" On this topic, S. Pinker, one of the foremost defenders of the widespread computational theory of mind, has argued that thought is simply a type of computation and that the gap between human cognition and computational models may be illusory. In this context, in his latest book, Thinking, Fast and Slow, D. Kahneman provides further theoretical interpretation by differentiating the two assumed systems of the cognitive functioning of the human mind. He calls them intuition (system 1), characterized as an associative (automatic, fast, and perceptual) machine, and reasoning (system 2), which is voluntary and operates logico-deductively. In this paper, we propose a mathematical approach inspired by Ausubel's meaningful learning theory for investigating, from the constructivist perspective, information processing in the working memory of cognizers. Specifically, a thought experiment is performed utilizing the mind of a dual-natured creature known as Maxwell's demon: a tiny "man–machine" solely equipped with the characteristics of system 1, which prevents it from reasoning. The calculation presented here shows that the Ausubelian learning schema, when inserted into the creature's memory, leads to a Shannon–Hartley-like model that, in turn, converges exactly to the fundamental thermodynamic principle of computation known as the Landauer limit. This result indicates that when system 2 is shut down, an intelligent being and a binary machine incur the same minimum energy cost per unit of information (knowledge) processed (acquired), which mathematically exhibits the computational attribute of system 1, as Kahneman theorized.
This finding links information theory to human psychological features and opens the possibility of experimentally testing the computational theory of mind by means of Landauer's energy cost, which can pave the way toward the conception of a multi-bit reasoning machine.
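The Landauer limit invoked above is a one-line computation, E = k_B * T * ln 2 per bit, sketched here with the SI value of Boltzmann's constant:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_limit(temperature_kelvin):
    """Minimum energy in joules dissipated per bit of information
    erased: E = k_B * T * ln 2 (the Landauer limit)."""
    return K_B * temperature_kelvin * math.log(2)

# About 2.87e-21 J per bit near room temperature (300 K).
print(f"{landauer_limit(300):.3e} J/bit")
```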

13.
This note makes two contributions. First, it clarifies and unifies the two design approaches that generate the general state observer and the identity state observer, respectively; even a very recent paper failed to clarify the first approach. Second, it shows that, due to computational error, the general state observer will have an estimation error, while the identity state observer will have an error in its desired poles.

14.
Research and Design of Redundant Data Reduction
Rough set theory, proposed by Z. Pawlak in 1982, effectively analyzes incomplete information that is uncertain, imprecise, or inconsistent. Its advantage is that it requires no preliminary or additional information about the data, such as the probability distributions used in statistics. This paper introduces the application of basic rough set theory to data reduction. Building on an analysis of rough set theory over information systems, it describes a reduction algorithm based on the core and attribute significance; to lower the computational complexity of reduction, the attribute reduction algorithm is modified, and the complexity before and after the modification is calculated. Experimental results show that the modified algorithm reduces time complexity while yielding a reduct of a suboptimal attribute set.
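The core-and-significance idea behind such reduction algorithms can be sketched briefly. As a hedged illustration (this is the textbook dependency-degree formulation, not the paper's specific modified algorithm): the dependency degree gamma of the decision attribute on a set of condition attributes is the fraction of objects whose equivalence class is decision-pure, and the core consists of the attributes whose removal lowers gamma:

```python
def gamma(rows, attrs, decision):
    """Dependency degree of the decision attribute on attrs: the fraction
    of rows whose attrs-equivalence class has a single decision value."""
    blocks = {}
    for r in rows:
        blocks.setdefault(tuple(r[a] for a in attrs), set()).add(r[decision])
    pure = {k for k, decisions in blocks.items() if len(decisions) == 1}
    return sum(tuple(r[a] for a in attrs) in pure for r in rows) / len(rows)

def core(rows, attrs, decision):
    """Attributes that are indispensable: removing them lowers gamma."""
    full = gamma(rows, attrs, decision)
    return [a for a in attrs
            if gamma(rows, [b for b in attrs if b != a], decision) < full]

rows = [{'a': 0, 'b': 0, 'd': 0},
        {'a': 0, 'b': 1, 'd': 1},
        {'a': 1, 'b': 0, 'd': 1}]
print(core(rows, ['a', 'b'], 'd'))  # ['a', 'b']
```

A significance-guided reduct search then greedily adds the attribute whose inclusion raises gamma the most, starting from the core.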

15.
Random sets form a well-established, general tool for modelling epistemic uncertainty in engineering. They can be seen as encompassing probability theory, fuzzy sets, and interval analysis. Random set models for data uncertainty are typically used to obtain robust upper and lower bounds for the reliability of structures in engineering models. The goal of this paper is to show how random set models can be constructed from measurement data by non-parametric methods using inequalities of Tchebycheff type. Relations with sensitivity analysis are also highlighted. We demonstrate the application of the methods in an FE model for the excavation of a cantilever sheet pile wall.
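One way such non-parametric bounds arise is from the two-sided Tchebycheff inequality, P(|X - m| >= k*s) <= 1/k^2. As a hedged sketch (treating the sample mean and standard deviation as if they were the true moments, which the paper's rigorous construction would not do), nested intervals of increasing confidence can serve as the focal elements of a random set:

```python
import math
import statistics

def tchebycheff_interval(data, confidence):
    """Distribution-free interval [m - k*s, m + k*s] with coverage at
    least `confidence` by Tchebycheff's inequality, k = 1/sqrt(1 - confidence).
    Illustrative only: sample moments stand in for the true ones."""
    m = statistics.mean(data)
    s = statistics.stdev(data)
    k = 1.0 / math.sqrt(1.0 - confidence)
    return (m - k * s, m + k * s)

lo, hi = tchebycheff_interval([1, 2, 3, 4, 5], 0.75)  # k = 2
print(lo, hi)
```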

16.
Zhang Nan, Chen Rong, Guo Shikai. 《计算机科学》 (Computer Science), 2015, 42(5): 1-9, 23
Social choice theory studies how to express and aggregate individual preferences. Its fusion with computer science has produced the interdisciplinary field of computational social choice, now an important research topic within social computing that has attracted wide attention in artificial intelligence, economics, and the theory of computation. On the one hand, it applies techniques common in computer science, such as complexity analysis and algorithm design, to the study of social choice mechanisms; on the other hand, it advances computer science by importing concepts from social choice theory, with notably successful applications in multi-agent systems. Voting is one of the most important research topics in computational social choice. This paper first introduces common voting rules and the formal framework of voting theory; it then analyzes the manipulation problems studied in voting theory; next, it surveys voting in combinatorial domains; finally, it briefly reviews other related problems and discusses the future development and applications of the field.
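Two of the common voting rules referred to above can be sketched in a few lines (a minimal illustration, not the paper's formal framework):

```python
from collections import Counter

def plurality(profile):
    """Winner under plurality: the candidate ranked first by most voters."""
    return Counter(ranking[0] for ranking in profile).most_common(1)[0][0]

def borda(profile):
    """Winner under the Borda count: with m candidates, a candidate in
    position p of a ranking earns m - 1 - p points."""
    scores = Counter()
    m = len(profile[0])
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] += m - 1 - pos
    return max(scores, key=scores.get)

# 3 voters prefer a>b>c, 2 prefer b>c>a, 2 prefer c>b>a.
profile = [('a', 'b', 'c')] * 3 + [('b', 'c', 'a')] * 2 + [('c', 'b', 'a')] * 2
print(plurality(profile), borda(profile))  # a b
```

The two rules disagree on this profile; such divergences, together with the complexity of manipulating each rule, are exactly what computational social choice analyzes.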

17.
Rationality is a fundamental concept in economics. Most researchers will accept that human beings are not fully rational. Herbert Simon suggested that we are "boundedly rational". However, it is very difficult to quantify "bounded rationality", and therefore it is difficult to pinpoint its impact on all those economic theories that depend on the assumption of full rationality. Ariel Rubinstein proposed to model bounded rationality by explicitly specifying the decision makers' decision-making procedures. This paper takes a computational point of view on Rubinstein's approach. From a computational point of view, decision procedures can be encoded in algorithms and heuristics. We argue that, everything else being equal, the effective rationality of an agent is determined by its computational power; we refer to this as the computational intelligence determines effective rationality (CIDER) theory. This is not an attempt to propose a unifying definition of bounded rationality, merely a proposal of a computational point of view on it. This way of interpreting bounded rationality enables us to (computationally) reason about economic systems when the full rationality assumption is relaxed.

18.
While in classical scheduling theory the locations of machines are assumed to be fixed, we show how to tackle location and scheduling problems simultaneously. Obviously, this integrated approach enhances the modeling power of scheduling for various real-life problems. In this paper, we introduce, in an exemplary way, theory and three polynomial solution algorithms for the planar ScheLoc makespan problem, which combines a specific type of scheduling problem with a rather general planar location problem. Finally, we report on numerical tests and present a generalization of this specific ScheLoc problem.

19.
20.
In considering identity management, the first issue is—What is identity? This is, of course, an issue that has plagued poets, philosophers, and playwrights for centuries. We're concerned with a more prosaic version of the question: How does an entity recognize another entity? This important question occurs when access to resources, such as health or financial records, services, or benefits, is limited to specific entities. The entity in question could be a person, a computer, or even a device with quite limited memory and computational power. In this issue of IEEE Security & Privacy—the first of what we suspect will be several special issues on identity management—we have chosen to focus on identity management in which the entity being identified is a person.
