Similar Literature
 20 similar articles found; search took 18 ms
1.
2.
3.
4.
5.
6.
7.
Is there a need for fuzzy logic?   (Cited by: 1; self-citations: 0; citations by others: 1)
“Is there a need for fuzzy logic?” is an issue which is associated with a long history of spirited discussions and debate. There are many misconceptions about fuzzy logic. Fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning. More specifically, fuzzy logic may be viewed as an attempt at formalization/mechanization of two remarkable human capabilities. First, the capability to converse, reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, conflicting information, partiality of truth and partiality of possibility - in short, in an environment of imperfect information. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations [L.A. Zadeh, From computing with numbers to computing with words - from manipulation of measurements to manipulation of perceptions, IEEE Transactions on Circuits and Systems 45 (1999) 105-119; L.A. Zadeh, A new direction in AI - toward a computational theory of perceptions, AI Magazine 22 (1) (2001) 73-84]. In fact, one of the principal contributions of fuzzy logic - a contribution which is widely unrecognized - is its high power of precisiation.

Fuzzy logic is much more than a logical system. It has many facets. The principal facets are: logical, fuzzy-set-theoretic, epistemic and relational. Most of the practical applications of fuzzy logic are associated with its relational facet.

In this paper, fuzzy logic is viewed in a nonstandard perspective. In this perspective, the cornerstones of fuzzy logic - and its principal distinguishing features - are: graduation, granulation, precisiation and the concept of a generalized constraint.

A concept which has a position of centrality in the nontraditional view of fuzzy logic is that of precisiation. Informally, precisiation is an operation which transforms an object, p, into an object, p*, which in some specified sense is defined more precisely than p. The object of precisiation and the result of precisiation are referred to as precisiend and precisiand, respectively. In fuzzy logic, a differentiation is made between two meanings of precision - precision of value, v-precision, and precision of meaning, m-precision. Furthermore, in the case of m-precisiation a differentiation is made between mh-precisiation, which is human-oriented (nonmathematical), and mm-precisiation, which is machine-oriented (mathematical). A dictionary definition is a form of mh-precisiation, with the definiendum and definiens playing the roles of precisiend and precisiand, respectively. Cointension is a qualitative measure of the proximity of meanings of the precisiend and precisiand. A precisiand is cointensive if its meaning is close to the meaning of the precisiend.

A concept which plays a key role in the nontraditional view of fuzzy logic is that of a generalized constraint. If X is a variable then a generalized constraint on X, GC(X), is expressed as X isr R, where R is the constraining relation and r is an indexical variable which defines the modality of the constraint, that is, its semantics. The primary constraints are: possibilistic (r = blank), probabilistic (r = p) and veristic (r = v). The standard constraints are: bivalent possibilistic, probabilistic and bivalent veristic. In large measure, science is based on standard constraints.

Generalized constraints may be combined, qualified, projected, propagated and counterpropagated. The set of all generalized constraints, together with the rules which govern generation of generalized constraints, is referred to as the generalized constraint language, GCL. The standard constraint language, SCL, is a subset of GCL. In fuzzy logic, propositions, predicates and other semantic entities are precisiated through translation into GCL. Equivalently, a semantic entity, p, may be precisiated by representing its meaning as a generalized constraint.

By construction, fuzzy logic has a much higher level of generality than bivalent logic. It is the generality of fuzzy logic that underlies much of what fuzzy logic has to offer. Among the important contributions of fuzzy logic are the following:
1.
FL-generalization. Any bivalent-logic-based theory, T, may be FL-generalized, and hence upgraded, through addition to T of concepts and techniques drawn from fuzzy logic. Examples: fuzzy control, fuzzy linear programming, fuzzy probability theory and fuzzy topology.
2.
Linguistic variables and fuzzy if-then rules. The formalism of linguistic variables and fuzzy if-then rules is, in effect, a powerful modeling language which is widely used in applications of fuzzy logic. Basically, the formalism serves as a means of summarization and information compression through the use of granulation.
3.
Cointensive precisiation. Fuzzy logic has a high power of cointensive precisiation. This power is needed for a formulation of cointensive definitions of scientific concepts and cointensive formalization of human-centric fields such as economics, linguistics, law, conflict resolution, psychology and medicine.
4.
NL-Computation (computing with words). Fuzzy logic serves as a basis for NL-Computation, that is, computation with information described in natural language. NL-Computation is of direct relevance to mechanization of natural language understanding and computation with imprecise probabilities. More generally, NL-Computation is needed for dealing with second-order uncertainty, that is, uncertainty about uncertainty, or uncertainty² for short.
In summary, progression from bivalent logic to fuzzy logic is a significant positive step in the evolution of science. In large measure, the real world is a fuzzy world. To deal with fuzzy reality, what is needed is fuzzy logic. In coming years, fuzzy logic is likely to grow in visibility, importance and acceptance.
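Two of the cornerstones the abstract names, graduation and granulation, along with the fuzzy if-then rules of item 2, are easiest to see in a toy example. The following is a minimal sketch, not drawn from the paper: a linguistic variable "temperature" is granulated into fuzzy terms with graded membership, and a few Mamdani-style fuzzy if-then rules are combined by weighted-average defuzzification. All breakpoints and rule outputs are illustrative assumptions.

```python
# A minimal sketch (not from the paper) of graduation (fuzzy membership in
# place of bivalent membership) and fuzzy if-then rules over a linguistic
# variable. All names and numeric breakpoints are illustrative assumptions.

def triangular(a, b, c):
    """Membership function graded over [a, c] with peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Linguistic variable "temperature" granulated into fuzzy terms.
cold = triangular(-10.0, 0.0, 15.0)
warm = triangular(10.0, 20.0, 30.0)
hot  = triangular(25.0, 35.0, 50.0)

def heater_setting(temp):
    """Mamdani-style rules, e.g. "if temperature is cold then heater high".
    Rule strength = membership degree of the premise."""
    rules = [
        (cold(temp), 1.0),   # if cold then heater high
        (warm(temp), 0.4),   # if warm then heater low
        (hot(temp),  0.0),   # if hot then heater off
    ]
    # Weighted-average defuzzification over the rule consequents.
    total = sum(strength for strength, _ in rules)
    if total == 0.0:
        return 0.0
    return sum(strength * out for strength, out in rules) / total

if __name__ == "__main__":
    for t in (2.0, 18.0, 40.0):
        print(f"temperature={t:5.1f}  cold={cold(t):.2f} "
              f"warm={warm(t):.2f} hot={hot(t):.2f} "
              f"heater={heater_setting(t):.2f}")
```

The point of the sketch is that membership is a matter of degree (graduation) and that a handful of rules over granulated terms summarizes a control surface, which is the summarization/compression role the abstract ascribes to linguistic variables.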

8.
The popularity of online social networking has heightened academic interest in social capital. However, few studies have investigated the role of social capital in online learning. Using exploratory and confirmatory factor analyses of data from online MIS classes, we find that online learning facilitates social capital formation mostly in terms of the dimensions of community, trust, collective action and cooperation, communication, and sociability and inclusion, depending on the media-based human interaction forms of online learning employed. Structural equation modeling confirms a causal effect of social capital on student satisfaction. Social capital is also found to positively affect learning outcomes as measured by students’ group project scores, but not class scores. The study's contributions to the literature and to practice are discussed.

9.
Recently a quantum steganographic communication protocol based on quantum key distribution (QKD) was proposed; it is believed that QKD is a suitable cover for steganographic communication because QKD itself is not deterministic communication. Here we find that, as a special cryptographic application, the procedure of QKD can be used for deterministic secure communication, and consequently it is not suitable for steganography. For similar reasons, other quantum cryptographic schemes, including quantum secret sharing and quantum secure direct communication, are not suitable for steganography either.

10.
We describe two experiments that examine 3D pathway displays in a head-up location for aircraft landing and taxi. We address both guidance performance and pilot strategies in dividing, focusing, and allocating attention between flight path information and event monitoring. In Experiment 1 the 3D pathway head-up display (HUD) was compared with a conventional 2D HUD. The former was found to produce better guidance, with few costs to event detection. Some evidence was provided that attentional tunneling of the pathway HUD inhibits the detection of unexpected traffic events. In Experiment 2, the pathway display was compared in a head-up versus a head-down location. Excellent guidance was achieved in both locations. A slight HUD cost for vertical tracking in the air was offset by a HUD benefit for event detection and for lateral tracking during taxi (i.e., on the ground). The results of both experiments are interpreted within the framework of object- and space-based theories of visual attention and point to the conclusion that pathway HUDs combine the independent advantages of pathways and HUDs, particularly during ground operations. Actual or potential applications include understanding the costs and benefits of positioning a 3D pathway display in a head-up location.

11.
Checking whether a program has an answer set and, if so, computing its answer sets are among the important problems in answer set logic programming. Solving these problems using Gelfond and Lifschitz's original definition of answer sets is not an easy task. Alternative characterizations of answer sets for nested logic programs by Erdem and Lifschitz, Lee and Lifschitz, and You et al. are based on the completion semantics and various notions of tightness. However, tightness is a local notion in the sense that for different answer sets there are, in general, different level mappings capturing their tightness. This makes tightness hard to use in the design of algorithms for computing answer sets. This paper proposes a characterization of answer sets based on sets of generating rules. From this characterization new algorithms are derived for computing answer sets and for performing some other reasoning tasks. As an application of the characterization, a necessary and sufficient condition for the equivalence between answer set semantics and completion semantics has been proven, and a basic theorem is shown on computing answer sets for nested logic programs based on an extended notion of loop formulas. These results on tightness and loop formulas are more general than those in You and Lin's work.
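The Gelfond-Lifschitz definition that the abstract starts from can be made concrete in a few lines. Below is a minimal brute-force sketch, not the paper's algorithm: for a ground normal program, a candidate set S is an answer set iff S equals the least model of the reduct P^S. The example program and atom names are illustrative.

```python
from itertools import chain, combinations

# Ground normal rules: (head, positive_body, negative_body).
# Example program:  p :- not q.   q :- not p.   r :- p.
rules = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
    ("r", ["p"], []),
]
atoms = {a for h, pos, neg in rules for a in [h, *pos, *neg]}

def least_model(positive_rules):
    """Least model of a negation-free program via fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if head not in model and all(a in model for a in pos):
                model.add(head)
                changed = True
    return model

def is_answer_set(candidate):
    """Gelfond-Lifschitz check: candidate == least model of the reduct."""
    # Reduct: drop rules whose negative body intersects the candidate,
    # then delete the remaining negative literals.
    reduct = [(h, pos) for h, pos, neg in rules
              if not any(a in candidate for a in neg)]
    return least_model(reduct) == candidate

# Enumerate all subsets of atoms (exponential; fine for toy programs).
subsets = chain.from_iterable(
    combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
answer_sets = [set(s) for s in subsets if is_answer_set(set(s))]
print(answer_sets)   # the two answer sets: {'q'} and {'p', 'r'}
```

The exponential enumeration is exactly the inefficiency that characterizations via tightness, generating rules, and loop formulas aim to avoid in real solvers.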

12.
Result rankings from context-aware information retrieval are inherently dynamic, as the same query can lead to significantly different outcomes in different contexts. For example, the search term Digital Camera will lead to different - albeit potentially overlapping - results in the contexts customer reviews and shops, respectively. The comparison of such result rankings can provide useful insights into the effects of context changes on the information retrieval results. In particular, the impact of single aspects of the context in complex applications can be analyzed to identify the most (and least) influential context parameters. While a multitude of methods exists for assessing the relevance of a result ranking with respect to a given query, the question of how different two result rankings are from a user's point of view has not been tackled so far. This paper introduces DIR, a cognitively plausible dissimilarity measure for information retrieval result sets that is based solely on the results and is thus applicable independently of the retrieval method. Unlike statistical correlation measures, this dissimilarity measure reflects how human users quantify the changes in information retrieval result rankings. The DIR measure supports cognitive engineering tasks for information retrieval, such as workflow and interface design: using the measure, developers can identify which aspects of context heavily influence the outcome of the retrieval task and should therefore be in the focus of the user's interaction with the system. The cognitive plausibility of DIR has been evaluated in two studies with human participants, which demonstrate a strong correlation with user judgments.
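The abstract contrasts DIR with statistical correlation measures but does not reproduce the DIR formula, so it is not implemented here. As a point of reference, the sketch below computes one such standard baseline, the Kendall tau distance between two rankings of the same result set; the item names are illustrative.

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Fraction of item pairs ordered differently by the two rankings:
    0.0 = identical order, 1.0 = fully reversed order."""
    items = set(rank_a) & set(rank_b)   # compare shared results only
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    pairs = list(combinations(sorted(items), 2))
    if not pairs:
        return 0.0
    discordant = sum(
        1 for x, y in pairs
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0)
    return discordant / len(pairs)

# Two context-dependent rankings for the query "Digital Camera".
reviews_ctx = ["cam1", "cam2", "cam3", "cam4"]
shops_ctx   = ["cam3", "cam1", "cam4", "cam2"]
print(kendall_tau_distance(reviews_ctx, shops_ctx))  # 0.5
```

A measure of this kind weights every pair swap equally, whereas the paper's claim is that human users do not; that gap is what a cognitively plausible measure such as DIR is designed to close.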

13.
Wu, Yanzhang; Liu, Hongzhe; Yuan, Jiazheng; Zhang, Qikun. Multimedia Tools and Applications (2018) 77(11): 13983-14006
In the real world, people often focus on the distinctive objects (Salient Regions, SR) in a scene. Thus, a number of saliency detection methods are introduced...

14.
This article describes the principles of the design of embedded electronic systems from the perspective of the entire system. Because this perspective is not restricted to the electrical domain, a more disciplined, unified methodology can lead to more efficient system-level design. In a world where myriad wirelessly interconnected appliances are going to impact our everyday lives, and where technology advances are posing fundamental problems at the nanodevice level, the most important challenge will be ensuring safe, secure, and effective design. A unified methodology is an essential ingredient for the successful use of technology in society.

15.
This paper evaluates the consequences of a central bank stabilizing alternative measures of inflation in a model with several exchange rate channels of transmission for monetary policy. The real exchange rate affects the equilibrium conditions, and the utility-based welfare objective places higher weight on output gap stabilization. There is an endogenous stabilization trade-off, and policy rules derived from private agents’ optimizing behavior perform better than alternative monetary policy arrangements. The optimal policy is a PPI inflation target, under which the exchange rate follows a controlled float. Contrary to central bank practice, a CPI target should be considered only by highly open economies.

16.
In this viewpoint article, the importance of renal tissue proteomics in health and disease is explored. The analysis of the urinary proteome and the potential clinical application of these findings are progressing. However, additional benefit would be gained from a detailed parallel exploration of the proteome of the renal parenchyma, in both models and clinical samples. With this aim, we briefly summarize the existing literature, compare the findings and propose future tasks. Special emphasis is placed on the importance of studying specific cellular compartments and cell types within the kidney. Recent technical advances are also discussed. It is anticipated that the combination of such technologies, especially proteomic analysis of material extracted by laser capture microdissection from paraffin-embedded tissue or direct mass spectrometric tissue imaging, will revolutionize the field.

17.
This study investigated student preference for overt vs. covert responding in a web-based tutorial using a within-subject design. Twenty-six social psychology students were exposed to the same two treatment conditions: covert question format (which required passive responding - “thinking” about an answer) and overt question format (which required active responding - “clicking” on an answer). The majority of students preferred the overt format. There was a small difference in mean times to complete covert and overt questions. A negative relationship was found between the degree of preference for overt questions and the percent of overt questions answered correctly, and between the total time taken to complete the program and the percent of overt questions answered correctly. Findings lend support to integrating high levels of responding in web-based instruction due to high user preference.

18.
Context: Data miners have been widely used in software engineering to, say, generate defect predictors from static code measures. Such static code defect predictors perform well compared to manual methods, and they are easy to use and useful. But one of the “black arts” of data mining is setting the tunings that control the miner.
Objective: We seek a simple, automatic, and very effective method for finding those tunings.
Method: For each experiment with different data sets (from open source JAVA systems), we ran differential evolution as an optimizer to explore the tuning space (as a first step), then tested the tunings using hold-out data.
Results: Contrary to our prior expectations, we found these tunings were remarkably simple: it only required tens, not thousands, of attempts to obtain very good results. For example, when learning software defect predictors, this method can quickly find tunings that alter detection precision from 0% to 60%.
Conclusion: Since (1) the improvements are so large, and (2) the tuning is so simple, we need to change standard methods in software analytics. At least for defect prediction, it is no longer enough to just run a data miner and present the result without conducting a tuning optimization study. The implication for other kinds of analytics is now an open and pressing issue.
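The recipe in the Method section, differential evolution over a learner's tuning space with a hold-out test, is easy to sketch. The following is a minimal illustration, not the paper's code: the synthetic data and the two tuned parameters are assumptions standing in for the paper's defect-prediction data and learner tunings.

```python
# Sketch of the recipe: differential evolution searches the tuning space
# using a validation split, and the winning tunings are judged on hold-out
# data. Data and tuned parameters are illustrative assumptions.
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_rest, X_hold, y_rest, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(
    X_rest, y_rest, test_size=0.3, random_state=0)

def neg_precision(params):
    """Objective for the optimizer: negate precision so that minimizing
    it maximizes detection precision on the validation split."""
    max_depth, min_samples_split = int(params[0]), int(params[1])
    model = DecisionTreeClassifier(
        max_depth=max_depth,
        min_samples_split=min_samples_split,
        random_state=0).fit(X_tr, y_tr)
    return -precision_score(y_val, model.predict(X_val), zero_division=0)

# Small budget: the paper reports tens, not thousands, of attempts suffice.
result = differential_evolution(
    neg_precision,
    bounds=[(1, 20), (2, 40)],   # (max_depth, min_samples_split)
    maxiter=5, popsize=8, seed=0)

# Test the winning tunings on hold-out data, as in the Method section.
best = DecisionTreeClassifier(
    max_depth=int(result.x[0]),
    min_samples_split=int(result.x[1]),
    random_state=0).fit(X_tr, y_tr)
print("tunings:", result.x.astype(int),
      "hold-out precision:", precision_score(y_hold, best.predict(X_hold)))
```

The deliberately small maxiter and popsize mirror the Results claim that good tunings can be found in tens of evaluations.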

19.
Label-free LC-MS methods are attractive for high-throughput quantitative proteomics, as the sample processing is straightforward and can be scaled to a large number of samples. Label-free methods therefore facilitate biomarker discovery in studies involving dozens of clinical samples. However, despite the increased popularity of label-free workflows, there is a hesitance in the research community to use it in clinical proteomics studies. Therefore, we here discuss pros and cons of label-free LC-MS/MS for biomarker discovery, and delineate the main prerequisites for its successful employment. Furthermore, we cite studies where label-free LC-MS/MS was successfully used to identify novel biomarkers, and foresee an increased acceptance of label-free techniques by the proteomics community in the near future.

20.