Similar Documents
20 similar documents found (search time: 46 ms)
1.
2.
This paper provides an overview of fuzzy measures, fuzzy integration theories and Choquet's capacity theory. Belief, plausibility, and possibility measures are characterized as Choquet capacities and as fuzzy measures. The relationship between possibility measures, fuzzy sets, and approximate reasoning is established. Recent results on extensions of fuzzy measures, structural characteristics of fuzzy measures, and convergence of function sequences on fuzzy measure spaces are presented. Fuzzy measure integration concepts due to Sugeno and Choquet and their applications are discussed. An extensive list of references to the literature of fuzzy measures, Sugeno and Choquet integrals, fuzzy probabilities, fuzzy random variables, probabilistic sets, and random sets is provided. Applications discussed or referenced include information fusion, information retrieval, approximate reasoning, artificial intelligence, uncertainty theory, and control and decision theory.
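The discrete Choquet integral mentioned in this abstract has a compact form: sort the arguments by value and weight the increments by the fuzzy measure of the upper level sets. A minimal sketch; the two-element measure `mu` below is an invented example, not taken from the paper:

```python
def choquet_integral(f, mu):
    """Discrete Choquet integral of f: X -> [0, inf) w.r.t. fuzzy measure mu.

    f  : dict mapping each element of X to its value.
    mu : function taking a frozenset A of X and returning mu(A), with
         mu(empty set) = 0, mu(X) = 1, and mu monotone w.r.t. inclusion.
    """
    # Sort elements by increasing value: f(x_(1)) <= ... <= f(x_(n)).
    xs = sorted(f, key=f.get)
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        # A_i = {x_(i), ..., x_(n)} is the upper level set at f(x_(i)).
        upper = frozenset(xs[i:])
        total += (f[x] - prev) * mu(upper)
        prev = f[x]
    return total

# Example: a simple non-additive (fuzzy) measure on X = {a, b}.
def mu(A):
    table = {frozenset(): 0.0, frozenset({'a'}): 0.3,
             frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 1.0}
    return table[A]

print(choquet_integral({'a': 0.2, 'b': 0.8}, mu))  # 0.2*1.0 + 0.6*0.5 = 0.5
```

Note that `mu({'a'}) + mu({'b'}) = 0.8 < 1`, so this measure is genuinely non-additive, which is exactly the situation where the Choquet integral differs from an ordinary weighted sum.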

3.
The relationship is studied between possibility and necessity measures defined on arbitrary spaces, the theory of imprecise probabilities, and elementary random set theory. It is shown how special random sets can be used to generate normal possibility and necessity measures, as well as their natural extensions. This leads to interesting alternative formulas for the calculation of these natural extensions.

4.
The theoretical aspects of statistical inference with imprecise data, with a focus on random sets, are considered. In the setting of coarse data analysis, the imprecision and randomness in observed data are exhibited, and the relationship between probability and other types of uncertainty, such as belief functions and possibility measures, is analyzed. Coarsening schemes are viewed as models for perception-based information-gathering processes in which random fuzzy sets appear naturally. As an implication, fuzzy statistics is statistics with fuzzy data. That is, fuzzy sets are a new type of data and, as such, complementary to statistical analysis in the sense that they enlarge the domain of applications of statistical science.

5.
This paper describes list-processing primitives which form a component of an interactive graphics system. This system, developed at Bowling Green State University, has a plasma display panel as its central component. This system comprises a BASIC interpreter to which has been added a rich variety of graphic and list-processing primitives. These are implemented as additional BASIC verbs. The list-processing primitives permit creation of arbitrarily complex list structures: trees, multiply linked lists, etc. can all be constructed. It is possible to manipulate list and node names in a natural way. Lists, sublists, and nodes can be added and deleted, lists can be searched, and the list space dynamically reformatted if desired. The system also includes an extremely flexible mechanism for describing and manipulating data contained in a node. Several data types are available, ranging from floating point to variable-length integer. It is possible to manipulate data at the bit level. This facility greatly increases the flexibility of the graphic system, since it is possible to represent structural relationships between the components of a figure.

6.
We present a computational approach to high-order matching of data sets in R^d. These are matchings based on data affinity measures that score the matching of more than two pairs of points at a time. High-order affinities are represented by tensors, and the matching is then given by a rank-one approximation of the affinity tensor and a corresponding discretization. Our approach is rigorously justified by extending Zass and Shashua's hypergraph matching to high-order spectral matching. This paves the way for a computationally efficient dual-marginalization spectral matching scheme. We also show that, based on the spectral properties of random matrices, affinity tensors can be randomly sparsified while retaining the matching accuracy. Our contributions are experimentally validated by applying them to synthetic as well as real data sets.
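In the simplest symmetric case, a rank-one approximation of a third-order affinity tensor can be computed by higher-order power iteration. The sketch below is a generic NumPy illustration of that building block, not the paper's dual-marginalization scheme:

```python
import numpy as np

def rank_one_power_iteration(T, iters=50, seed=0):
    """Symmetric rank-one approximation lam * (u x u x u) of a
    super-symmetric 3rd-order tensor T via higher-order power iteration."""
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', T, u, u)   # contract two modes with u
        u = v / np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # Rayleigh-like score
    return lam, u

# Build a noiseless rank-one tensor and recover its factor.
x = np.array([3.0, 4.0])
x /= np.linalg.norm(x)
T = 2.5 * np.einsum('i,j,k->ijk', x, x, x)
lam, u = rank_one_power_iteration(T)
print(lam)  # approximately 2.5
```

On a noiseless rank-one input the iteration converges in one step; on a real affinity tensor it converges to a locally optimal rank-one factor, which is then discretized into a matching.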

7.
Distinguishability is a semantic property of fuzzy sets that has great relevance in the design of interpretable fuzzy models. Distinguishability has been mathematically defined through different measures, which are addressed in this paper. Special emphasis is given to similarity, which exhibits sound theoretical properties but is usually computationally intensive to calculate, and to possibility, whose calculation can be very efficient but which does not exhibit the same properties as similarity. It is shown that under mild conditions, usually met in interpretable fuzzy modeling, possibility can be used as a valid measure for assessing distinguishability, thus overcoming the computational inefficiencies of similarity measures. Moreover, procedures that minimize possibility also minimize similarity and, consequently, improve distinguishability. In this sense, the use of possibility is fully justified in interpretable fuzzy modeling.
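The efficiency gap is easy to see on discretized membership functions: possibility is sup_x min(A(x), B(x)), a single pass over the domain, while a Jaccard-style similarity needs the cardinalities of both the intersection and the union. A small illustration; the triangular shapes are invented for the example:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0.0, 10.0, 1001)
A = triangular(x, 0.0, 2.0, 4.0)
B = triangular(x, 3.0, 5.0, 7.0)

# Possibility: one elementwise min and one max over the domain.
poss = np.max(np.minimum(A, B))

# Jaccard-style similarity: ratio of intersection to union cardinality.
sim = np.sum(np.minimum(A, B)) / np.sum(np.maximum(A, B))

print(poss)  # peak of the overlap, here 0.25
print(sim)
```

For these two sets possibility (0.25) is larger than the similarity, but both shrink as the sets are pulled apart, which is the behaviour a distinguishability measure needs.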

8.
Stochastic dominance, which is based on the comparison of distribution functions, is one of the most popular preference measures. However, its use is limited to the case where the goal is to compare pairs of distribution functions, whereas in many cases it is interesting to compare sets of distribution functions: this may be the case, for instance, when the available information does not allow the probability distributions of the random variables to be fully elicited. To deal with these situations, a number of generalisations of the notion of stochastic dominance are proposed; their connection with an equivalent p-box representation of the sets of distribution functions is studied; a number of particular cases, such as sets of distributions associated with possibility measures, are investigated; and an application to the comparison of the Lorenz curves of countries within the same region is presented.
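As a baseline for the pairwise case the paper generalises, classical first-order stochastic dominance between two samples reduces to a pointwise comparison of empirical distribution functions: X dominates Y when F_X(t) <= F_Y(t) for every t. A minimal sketch of that baseline (the set-valued generalisations are not reproduced here):

```python
import numpy as np

def dominates_fsd(sample_x, sample_y, grid=None):
    """True if X first-order stochastically dominates Y, judged on the
    empirical CDFs: F_X(t) <= F_Y(t) for every t on the evaluation grid."""
    if grid is None:
        grid = np.union1d(sample_x, sample_y)
    fx = np.searchsorted(np.sort(sample_x), grid, side='right') / len(sample_x)
    fy = np.searchsorted(np.sort(sample_y), grid, side='right') / len(sample_y)
    return bool(np.all(fx <= fy))

x = np.array([2.0, 3.0, 5.0, 6.0])   # y shifted up by 1
y = np.array([1.0, 2.0, 4.0, 5.0])
print(dominates_fsd(x, y), dominates_fsd(y, x))  # True False
```

For sets of distributions one would instead compare the lower/upper envelopes of the CDFs (the p-box bounds), which is where the paper's generalised orderings come in.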

9.
Information Sciences, 2007, 177(16): 3239-3250
The results in this paper are about the convergence of capacity functionals of random sets. The motivation stems from asymptotic aspects in inference and decision-making with coarse data in biostatistics, set-valued observations, as well as connections between random sets and several emerging uncertainty calculi in intelligent systems, such as fuzziness, belief functions and possibility theory. Specifically, we study the counterpart of Billingsley’s Portmanteau Theorem for weak convergence of probability measures, namely, convergence of capacity functionals of random sets in terms of Choquet integrals.

10.
It is likely that customers issue requests based on out-of-date information in e-commerce application systems. Hence, transaction failure rates can increase greatly. In this paper, we present a preference update model to address this problem. A preference update is an extended SQL update statement in which a user can request the desired number of target data items by specifying multiple preferences. Moreover, the preference update allows easy extraction of criteria from a set of concurrent requests and, hence, optimal decisions for the data assignments can be made. We propose a group evaluation strategy for preference update processing in a multidatabase environment. The experimental results show that group evaluation can effectively increase the customer satisfaction level with acceptable cost. Peng Li is the Chief Software Architect of didiom LLC. Before that, he was a visiting assistant professor in the computer science department at Western Kentucky University. He received his Ph.D. in computer science from the University of Texas at Dallas. He also holds a B.Sc. and an M.S. in computer science from the Renmin University of China. His research interests include database systems, database security, transaction processing, distributed and Internet computing, and e-commerce. Manghui Tu received a Bachelor of Science degree from Wuhan University, P.R. China, in 1996, and a Master's degree in computer science from the University of Texas at Dallas in 2001. He is currently working toward the PhD degree in the Department of Computer Science at the University of Texas at Dallas. Mr. Tu’s research interests include distributed systems, grid computing, information security, mobile computing, and scientific computing. His PhD research focuses on data management in secure, high-performance data grids. He is a student member of the IEEE.
I-Ling Yen received her BS degree from Tsing-Hua University, Taiwan, and her MS and PhD degrees in computer science from the University of Houston. She is currently an Associate Professor of Computer Science at the University of Texas at Dallas. Dr. Yen’s research interests include fault-tolerant computing, security systems and algorithms, distributed systems, Internet technologies, e-commerce, and self-stabilizing systems. She has published over 100 technical papers in these research areas and received many research awards from NSF, DOD, NASA, and several industry companies. She has served as a Program Committee member for many conferences and as Program Chair/Co-Chair for the IEEE Symposium on Application-Specific Software and System Engineering & Technology, the IEEE High Assurance Systems Engineering Symposium, the IEEE International Computer Software and Applications Conference, and the IEEE International Symposium on Autonomous Decentralized Systems. She is a member of the IEEE. Zhonghang Xia received the B.S. degree in applied mathematics from Dalian University of Technology in 1990, the M.S. degree in operations research from Qufu Normal University in 1993, and the Ph.D. degree in computer science from the University of Texas at Dallas in 2004. He is now an assistant professor in the Department of Computer Science, Western Kentucky University, Bowling Green, KY. His research interests are in the areas of multimedia computing and networking, distributed systems, and data mining.

11.
The possibility calculus is shown to be a reasonable belief representation in Cox's sense, even though possibility is formally different from probability. So-called linear possibility measures satisfy the equations that appear in Cox's theorem. Linear possibilities are known to be related to the full range of possibility measures through a method for representing belief based on sets that is similar to a technique pioneered by Cox in the probabilistic domain. Exploring the relationship between possibility and Cox's belief measures provides an opportunity to discuss some of the ways in which Cox dissented from Bayesian orthodoxy, especially his tolerance of partially ordered belief and his rejection of prior probabilities for inference that begins in ignorance.

12.
High-order and multilayer perceptron initialization
Proper initialization is one of the most important prerequisites for fast convergence of feedforward neural networks like high-order and multilayer perceptrons. This publication aims at determining the optimal variance (or range) for the initial weights and biases, which is the principal parameter of random initialization methods for both types of neural networks. An overview of random weight initialization methods for multilayer perceptrons is presented. These methods are extensively tested using eight real-world benchmark data sets and a broad range of initial weight variances, by means of more than 30,000 simulations, with the aim of finding the best weight initialization method for multilayer perceptrons. For high-order networks, a large number of experiments (more than 200,000 simulations) were performed, using three weight distributions, three activation functions, several network orders, and the same eight data sets. The results of these experiments are compared to weight initialization techniques for multilayer perceptrons, which leads to the proposal of a suitable initialization method for high-order perceptrons. The conclusions on the initialization methods for both types of networks are justified by sufficiently small confidence intervals of the mean convergence times.
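Since a uniform distribution on [-r, r] has variance r^2/3, a target weight variance maps directly to an initialization range. The sketch below shows only that mechanism; the 1/fan_in default is a common heuristic, not the variance recommended by the paper's experiments:

```python
import numpy as np

def init_layer(fan_in, fan_out, variance=None, rng=None):
    """Draw an MLP weight matrix from a zero-mean uniform distribution.

    If no target variance is given, fall back to the common 1/fan_in
    heuristic so the pre-activation variance stays roughly constant.
    A uniform distribution on [-r, r] has variance r**2 / 3.
    """
    rng = rng or np.random.default_rng()
    if variance is None:
        variance = 1.0 / fan_in
    r = np.sqrt(3.0 * variance)
    W = rng.uniform(-r, r, size=(fan_out, fan_in))
    b = np.zeros(fan_out)   # biases commonly start at zero
    return W, b

W, b = init_layer(fan_in=256, fan_out=64, rng=np.random.default_rng(0))
print(W.std())  # close to sqrt(1/256) = 0.0625
```

Sweeping `variance` over a grid and timing convergence, as the paper does at scale, is then a one-line loop around this helper.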

13.
This paper discusses the formal connection between possibility distributions (Zadeh [21]) and the theory of random sets via Choquet's theorem. Based upon these relationships, it is suggested that plausible inferences and modeling of common sense can be derived from the statistics of random sets. The analysis of subjectivity in meaning representation of natural languages can be carried out by taking account of covariates of individuals as in the statistical analysis of survival data.

14.
In view of recent interest in random sets in information technology, such as models for imprecise data in intelligent systems and morphological analysis in image processing, we present, in this paper, some contributions to the foundation of random set theory, namely, a complete study of topological properties of capacity functionals of random sets, generalizing weak convergence of probability measures. These results are useful for investigating the concept of Choquet weak convergence of capacity functionals, leading to tractable criteria for convergence in distribution of random sets. The weak topology is defined on the space of all capacity functionals on R^d. We show that this topological space is separable and metrizable.

15.
Based on the classical theory of random sets, Feng and Nguyen (2007) [5] studied the convergence of capacity functionals of random sets in terms of Choquet integrals. In this paper, we consider an interval-valued capacity functional, motivated by the goal of generalizing the capacity functional and the Choquet integral with respect to an interval-valued capacity. In particular, we discuss some convergence theorems for interval-valued capacity functionals and interval-valued probability measures in the Hausdorff metric and in the Choquet weak sense.

16.
To meet the requirements of systematic, comprehensive evaluation of green universities, to effectively assess the results of green university construction, and to effectively evaluate the relevant green attributes in the process of creating a green university, J2EE technology was applied to study an architecture for a green-university evaluation support system based on the lightweight Struts + Spring + Hibernate framework. During development, the whole system was designed in logical layers: the presentation layer uses Struts; the business logic layer implements the algorithms of the coordination analysis method and the analytic hierarchy process, and realizes the interaction between J2EE and Matlab; and the data persistence layer uses Hibernate for object-relational mapping. The result is an open, Web-based system that is economical, portable, and extensible.

17.
In a random fuzzy information system, approximate representations of knowledge are established by introducing a fuzzy t-similarity relation on the object set for a subset of the attribute set. By discussing fuzzy belief measures and fuzzy plausibility measures defined by the lower and upper approximations in a random fuzzy approximation space, some equivalent conditions for knowledge reduction in a random fuzzy information system are proved. As in an information system, fuzzy-set-valued attribute discernibility matrices in a random fuzzy information system are constructed. Knowledge reduction is defined from the viewpoint of fuzzy belief measures and fuzzy plausibility measures, a heuristic knowledge reduction algorithm is proposed, and the time complexity of this algorithm is O(|U|^2 |A|). A running example illustrates the potential application of the algorithm, and experimental results on data sets with numerical attributes show that the proposed method is effective.

18.
In this article we investigate knowledge acquisition (KA) and its relationship to random sets. Based on random set theory, we develop some estimation theorems and procedures for set-valued statistics, such as nonparametric estimators. Under a random interval assumption, we establish some special possibility distributions that can be easily implemented in KA tools. The knowledge studied here consists of rules describing relationships between various concepts, as used in diagnosis (pattern recognition) expert systems. © 1996 John Wiley & Sons, Inc.

19.
As knowledge blocks in a knowledge base are fuzzy and obtained randomly, we propose a random fuzzy rough set model based on random fuzzy sets and fuzzy logic operators. We give some properties of the random fuzzy rough set, and investigate the relationship between fuzzy measures defined by the lower and upper approximations of a fuzzy set and fuzzy probability measures.

20.
Many clustering algorithms, including cluster ensembles, rely on a random component. Stability of the results across different runs is considered to be an asset of the algorithm. The cluster ensembles considered here are based on k-means clusterers. Each clusterer is assigned a random target number of clusters, k, and is started from a random initialization. Here, we use 10 artificial and 10 real data sets to study ensemble stability with respect to random k and random initialization. The data sets were chosen to have a small number of clusters (two to seven) and a moderate number of data points (up to a few hundred). Pairwise stability is defined as the adjusted Rand index between pairs of clusterers in the ensemble, averaged across all pairs. Nonpairwise stability is defined as the entropy of the consensus matrix of the ensemble. An experimental comparison with the stability of the standard k-means algorithm was carried out for k from 2 to 20. The results revealed that ensembles are generally more stable, markedly so for larger k. To establish whether stability can serve as a cluster validity index, we first looked at the relationship between stability and accuracy with respect to the number of clusters, k. We found that such a relationship strongly depends on the data set, varying from almost perfect positive correlation (0.97, for the glass data) to almost perfect negative correlation (-0.93, for the crabs data). We propose a new combined stability index, defined as the sum of the pairwise individual and ensemble stabilities. This index was found to correlate better with the ensemble accuracy. Following the hypothesis that a point of stability of a clustering algorithm corresponds to a structure found in the data, we used the stability measures to pick the number of clusters. The combined stability index gave the best results.
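The pairwise stability defined in this abstract transcribes directly: the adjusted Rand index (in its standard Hubert-Arabie form) averaged over all pairs of clusterers. A small sketch with an invented three-clusterer ensemble:

```python
from collections import Counter
from itertools import combinations
from math import comb

def adjusted_rand_index(a, b):
    """Adjusted Rand index between two labelings of the same points."""
    n = len(a)
    # Contingency counts over joint labels (label_in_a, label_in_b).
    cont = Counter(zip(a, b))
    sum_nij = sum(comb(c, 2) for c in cont.values())
    sum_a = sum(comb(c, 2) for c in Counter(a).values())
    sum_b = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2.0
    if max_index == expected:          # degenerate case: no pair structure
        return 1.0
    return (sum_nij - expected) / (max_index - expected)

def pairwise_stability(labelings):
    """Mean ARI over all pairs of clusterers in the ensemble."""
    pairs = list(combinations(labelings, 2))
    return sum(adjusted_rand_index(a, b) for a, b in pairs) / len(pairs)

# Two identical partitions (up to relabeling) plus one that splits a cluster.
ens = [[0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 1, 2]]
print(pairwise_stability(ens))
```

ARI is invariant to relabeling, so the first two clusterers score 1.0 against each other, and the averaged index reflects how much the third one disagrees.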


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号