Similar Articles (20 results)
1.
We extend Lutz's resource-bounded measure to probabilistic classes, and obtain notions of resource-bounded measure on probabilistic complexity classes such as BPE and BPEXP. Unlike former attempts, our resource-bounded measure notions satisfy all three basic measure properties: every singleton {L} has measure zero, the whole space has measure one, and “enumerable infinite unions” of measure-zero sets have measure zero.

2.
Intelligent Data Analysis, 1997, 1(1-4): 207-213
This article addresses the issue of quantitative information measurement within the Dempster-Shafer belief function formalism. Entropy computation in Dempster-Shafer theory depends on how uncertainty measures are conceptualized. However, freed of most probability constraints, uncertainty measures in Dempster-Shafer theory can lead to further advances in optimization in information theory, which in turn may have a wide impact on decision and control. This article examines one current line of development: the entropy measure induced from the measure of dissonance. For a significant period, the measure of dissonance has been taken as a measure of entropy. We present the entropy measure as a monotonically decreasing function, symmetrical to the measure of dissonance.
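The measure of dissonance referred to above is usually attributed to Yager. As an illustrative sketch (not necessarily this article's exact formulation), it can be computed as E(m) = -Σ_A m(A) log2 Pl(A) over the focal elements of a mass function, and it reduces to Shannon entropy for Bayesian mass functions:

```python
from math import log2

def plausibility(m, subset):
    # Pl(A) = sum of masses of all focal elements intersecting A
    return sum(v for a, v in m.items() if a & subset)

def dissonance(m):
    # Yager's measure of dissonance: E(m) = -sum_A m(A) * log2 Pl(A)
    return -sum(v * log2(plausibility(m, a)) for a, v in m.items() if v > 0)

# Bayesian mass function (all focal elements are singletons):
m = {frozenset({'x'}): 0.5, frozenset({'y'}): 0.5}
print(dissonance(m))  # 1.0, the Shannon entropy of {0.5, 0.5}
```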

3.
Uncertainty measure in evidence theory with its applications
Uncertainty measure in evidence theory supplies a new criterion to assess the quality and quantity of knowledge conveyed by belief structures. As generalizations of uncertainty measures in the probabilistic framework, several uncertainty measures for belief structures have been developed. Among them, the aggregate uncertainty AU and the ambiguity measure AM are well known. However, the inconsistency between the evidential and probabilistic frameworks limits existing measures: they are quite insensitive to changes in the belief function. In this paper, we define a novel uncertainty measure for belief structures based on belief intervals. Based on the relation between evidence theory and probability theory, belief structures are transformed to belief intervals on singleton subsets, with the belief function Bel and the plausibility function Pl as the lower and upper bounds, respectively. An uncertainty measure SU for belief structures is then defined based on interval probabilities within the framework of evidence theory, without changing the theoretical frameworks. The center and the span of each interval are used to define the total uncertainty degree of the belief structure. It is proved that SU is identical to Shannon entropy and to AM for Bayesian belief structures. Moreover, the proposed uncertainty measure has a wider range, determined by the cardinality of the frame of discernment, which is more practical. Numerical examples, applications and related analyses are provided to verify the rationality of our new measure.
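A belief-interval measure of this general shape (a sketch; the paper's exact definition of SU may differ) combines an entropy-like term on each interval's center with the interval's span, so that it collapses to Shannon entropy when Bel = Pl:

```python
from math import log2

def bel(m, theta):
    # Bel on a singleton {theta} is just the mass of that singleton
    return m.get(frozenset({theta}), 0.0)

def pl(m, theta):
    # Pl on a singleton: masses of all focal elements containing theta
    return sum(v for a, v in m.items() if theta in a)

def su(m, frame):
    # Entropy-like term on the interval center, plus the interval span
    total = 0.0
    for theta in frame:
        lo, hi = bel(m, theta), pl(m, theta)
        center, span = (lo + hi) / 2, (hi - lo) / 2
        if center > 0:
            total += -center * log2(center)
        total += span
    return total

# Bayesian case: Bel = Pl, span = 0, so the measure reduces to Shannon entropy
m = {frozenset({'a'}): 0.25, frozenset({'b'}): 0.75}
print(su(m, ['a', 'b']))  # ≈ 0.8113, the Shannon entropy of {0.25, 0.75}
```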

4.
The probabilistic linguistic term set is a powerful tool to express and characterize people's complex cognitive information, and has thus seen considerable development over the last several years. To better use probabilistic linguistic term sets in decision making, information measures such as the distance, similarity, entropy and correlation measures should be defined. However, the inclusion measure, an important kind of information measure, has not yet been defined. This study proposes the inclusion measure for probabilistic linguistic term sets. Formulas to calculate the inclusion degrees are put forward. Then, we introduce the normalized axiomatic definitions of the distance, similarity and entropy measures of probabilistic linguistic term sets to construct a unified framework of information measures for probabilistic linguistic term sets. Based on these definitions, we present the relationships and transformation functions among the distance, similarity, entropy and inclusion measures. We believe that more formulas to calculate the distance, similarity, inclusion degree and entropy can be induced from these transformation functions. Finally, we put forward an orthogonal clustering algorithm based on the inclusion measure and use it to classify cities in the Economic Zone of Chengdu Plain, China.

5.
Probabilistic timed automata can be used to model systems in which probabilistic and timing behaviour coexist. Verification of probabilistic timed automata models is generally performed with regard to a single reference valuation π0 of the timing parameters. Given such a parameter valuation, we present a method for automatically obtaining a constraint K0 on the timing parameters for which the reachability probabilities (1) remain invariant and (2) are equal to the reachability probabilities for the reference valuation. The method relies on a parametric analysis of a non-probabilistic version of the probabilistic timed automata model using the “inverse method”. The method presents the following advantages. First, since K0 corresponds to a dense domain around π0 on which the system behaves uniformly, it gives us a measure of the robustness of the system. Second, it allows us to obtain a valuation satisfying K0 which is as small as possible while preserving reachability probabilities, thus making the probabilistic analysis of the system easier and faster in practice. We provide examples of the application of our technique to models of randomized protocols, and introduce an extension of the method allowing the generation of a “probabilistic cartography” of a system.

6.
We study the problem of defining similarity measures on preferences from a decision-theoretic point of view. We propose a similarity measure, called the probabilistic distance, that originates from Kendall's tau, a well-known concept in the statistical literature. We compare this measure to other existing similarity measures on preferences. The key advantage of this measure is its extensibility to accommodate partial preferences and uncertainty. We develop efficient methods to compute this measure, exactly or approximately, under all circumstances. These methods make use of recent advances in the area of Markov chain Monte Carlo simulation. We discuss two applications of the probabilistic distance: in the construction of the Decision-Theoretic Video Advisor (diva), and in the robustness analysis of a theory refinement technique for preference elicitation.
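The Kendall's tau function underlying the probabilistic distance counts pairwise disagreements between two rankings. A minimal sketch for total (complete) preferences over the same item set:

```python
from itertools import combinations

def kendall_tau_distance(r1, r2, normalize=True):
    """Count pairs of items that the two rankings order differently.

    r1, r2: lists giving each ranking as an ordered sequence of the
    same items, most-preferred first.
    """
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    discordant = sum(
        1 for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)
    if normalize:
        n = len(r1)
        return discordant / (n * (n - 1) / 2)
    return discordant

print(kendall_tau_distance(list("abcd"), list("abcd")))  # 0.0
print(kendall_tau_distance(list("abcd"), list("dcba")))  # 1.0
```

The extension to partial preferences and uncertainty discussed in the abstract requires averaging this quantity over completions, which is where the Markov chain Monte Carlo machinery comes in.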

7.
Probabilistic flooding has frequently been considered as a suitable information dissemination approach for limiting the large message overhead associated with the traditional (full) flooding approaches used to disseminate information globally in unstructured peer-to-peer and other networks. A key challenge in using probabilistic flooding is the determination of the forwarding probability so that global network outreach is achieved while keeping the message overhead as low as possible. In this paper, we show that a probabilistic flooding network, generated by applying probabilistic flooding to a connected random graph network, can be (asymptotically) “bounded” by properly parameterized random graph networks. By invoking random graph theory results, we derive asymptotic values of the forwarding probability that guarantee (probabilistically) successful coverage while significantly reducing the message overhead with respect to traditional flooding. Asymptotic expressions for the average number of messages and the average time required to complete network coverage are also derived, illustrating the benefits of the properly parameterized probabilistic flooding scheme. Simulation results support the claims and expectations of the analytical results and reveal certain aspects of probabilistic flooding not covered by the analysis.
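A minimal simulation of one common variant of probabilistic flooding (each forwarding decision made independently per link, with the source always forwarding; the paper's exact model may differ) illustrates that setting p = 1 recovers traditional full flooding:

```python
import random

def probabilistic_flood(adj, source, p, rng):
    """Simulate probabilistic flooding on an undirected graph given as an
    adjacency dict. The source always forwards; every other node that
    receives the message forwards it over each link with probability p."""
    received = {source}
    frontier = [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in received and (u == source or rng.random() < p):
                    received.add(v)
                    nxt.append(v)
        frontier = nxt
    return received

# Ring of 6 nodes; with p = 1 this degenerates to full flooding
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
covered = probabilistic_flood(ring, 0, 1.0, random.Random(0))
print(len(covered))  # 6 (full coverage, as traditional flooding guarantees)
```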

8.
We propose the f-divergences of two probability distributions as measures of the organization of a probabilistic system with respect to its probabilistic uncertainty. A probabilistic system consists of stochastic objects on which statistically dependent random variables are defined. Using Shannon's f-divergence for the organization of a probabilistic system, we express it in terms of the probability distributions of the element random variables and their statistical linkages. We then find the maximum entropy of a probabilistic system when the statistical linkages between its elements are given as input data. We show that an important class of physical statistical systems can be described by probabilistic systems of Gibbsian type.
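For reference, the generic f-divergence D_f(P‖Q) = Σ_i q_i f(p_i/q_i) can be sketched as follows; choosing f(t) = t log t yields the Kullback-Leibler divergence, and f(t) = |t − 1|/2 the total variation distance:

```python
from math import log

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_i q_i * f(p_i / q_i); assumes q_i > 0 wherever p_i > 0."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q) if qi > 0)

kl = lambda t: t * log(t) if t > 0 else 0.0  # generator of KL divergence
tv = lambda t: 0.5 * abs(t - 1)              # generator of total variation

p = [0.5, 0.5]
q = [0.25, 0.75]
print(f_divergence(p, p, kl))  # 0.0: a distribution has zero divergence from itself
print(f_divergence(p, q, tv))  # 0.25
```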

9.
The properties of full meet contraction are investigated, and it is shown to be a useful building block in the construction of composite contraction operators. This is followed by a detailed investigation of specified meet contraction, i.e. the operation ÷ such that K ÷ p = K ∼ f(p), where ∼ is full meet contraction and f is a function from sentences to sentences. A number of realistic examples of belief contraction are discussed, and it is shown how the properties of contraction differ according to how the sentence to be contracted is situated in the structure of the belief state. Some plausible properties of f are proposed, and it is shown how they correspond to properties of the contraction operator.
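For a finite belief base, full meet contraction can be brute-forced as the intersection of all maximal subsets of K that do not entail p. In this sketch (an illustration, not the paper's construction), each sentence is represented by its set of models:

```python
from itertools import combinations

def entails(sentences, p, worlds):
    """A set of sentences entails p iff every world satisfying all of them
    also satisfies p (each sentence is modelled as its set of worlds)."""
    models = set(worlds)
    for s in sentences:
        models &= s
    return models <= p

def full_meet_contraction(K, p, worlds):
    """K ~ p: the intersection of all maximal subsets of K not entailing p."""
    if entails([], p, worlds):   # p is a tautology: contraction changes nothing
        return set(K)
    remainders = []
    # iterate subsets from largest to smallest, so only maximal ones are kept
    for size in range(len(K), -1, -1):
        for c in combinations(K, size):
            s = set(c)
            if not entails(s, p, worlds) and not any(s < r for r in remainders):
                remainders.append(s)
    result = remainders[0]
    for r in remainders[1:]:
        result &= r
    return result

# Two atoms a and b; each world is encoded by the atoms true in it
worlds = {'ab', 'a', 'b', ''}
A = frozenset({'ab', 'a'})   # models of atom a
B = frozenset({'ab', 'b'})   # models of atom b
K = {A, B}
print(full_meet_contraction(K, A, worlds) == {B})  # True: only b survives
```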

10.
We show that if RP does not have p-measure zero then ZPP = EXP. As corollaries we obtain a zero-one law for RP in EXP, and that both probabilistic classes ZPP and RP have the same measure in EXP. We also prove that if NP does not have p-measure zero then NP = AM.

11.
We consider the maximum entropy problems associated with Rényi Q-entropy, subject to two kinds of constraints on expected values: a constraint on the standard expectation, and a constraint on the generalized expectation as encountered in nonextensive statistics. The optimum maximum entropy probability distributions, which can exhibit power-law behaviour, are derived and characterized. The Rényi entropy of the optimum distributions can be viewed as a function of the constraint. This defines two families of entropy functionals in the space of possible expected values. General properties of these functionals, including nonnegativity, minima and convexity, are documented. Their relationships as well as numerical aspects are also discussed. Finally, we work out some specific cases for the reference measure Q(x) and recover some well-known entropies in a limit case.
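For concreteness, the discrete Rényi entropy of order α (with the counting measure as reference, i.e. without the general Q(x)) is H_α(p) = (1 − α)⁻¹ log2 Σ_i p_iᵅ, and it tends to the Shannon entropy as α → 1:

```python
from math import log2

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1), in bits."""
    return log2(sum(pi ** alpha for pi in p if pi > 0)) / (1 - alpha)

def shannon_entropy(p):
    return -sum(pi * log2(pi) for pi in p if pi > 0)

uniform = [0.25] * 4
print(renyi_entropy(uniform, 2))    # 2.0: equals log2(4) for every order
print(renyi_entropy(uniform, 0.5))  # 2.0

# As alpha -> 1 the Rényi entropy approaches the Shannon entropy
skewed = [0.7, 0.1, 0.1, 0.1]
print(abs(renyi_entropy(skewed, 1.000001) - shannon_entropy(skewed)) < 1e-4)  # True
```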

12.
Information Sciences, 1986, 40(2): 165-174
A new nonprobabilistic entropy measure is introduced in the context of fuzzy sets or messages. Fuzzy units, or fits, replace bits in a new framework of fuzzy information theory. An appropriate measure of the entropy or fuzziness of messages is shown to be a simple ratio of distances: the distances between the fuzzy message and its nearest and farthest nonfuzzy neighbors. Fuzzy conditioning is examined as the degree of subsethood (submessagehood) of one fuzzy set or message in another. This quantity is shown to behave as a conditional probability in many contexts. It is also shown that the entropy of A is the degree to which A ∪ Ac is a subset of A ∩ Ac, an intuitive relationship that cannot occur in probability theory. This theory of subsethood is then shown to solve one of the major problems with Bayes-theorem learning and its variants: the problem of requiring that the space of alternatives be partitioned into disjoint exhaustive hypotheses. Any fuzzy subsets will do. However, a rough inverse relationship holds between the number and fuzziness of partitions and the information gained from experience. All results reduce to fuzzy cardinality.
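The ratio-of-distances entropy can be sketched directly: with the l1 metric and sigma-counts, the distance to the nearest crisp set is Σ min(a_i, 1 − a_i) and to the farthest is Σ max(a_i, 1 − a_i), so the entropy equals the fuzzy cardinality of A ∩ Ac divided by that of A ∪ Ac:

```python
def fuzzy_entropy(A):
    """Ratio-of-distances entropy of a fuzzy set given by its membership
    values: count(A ∩ Ac) / count(A ∪ Ac) with min/max and sigma-counts."""
    near = sum(min(a, 1 - a) for a in A)  # l1 distance to the nearest crisp set
    far = sum(max(a, 1 - a) for a in A)   # l1 distance to the farthest crisp set
    return near / far

print(fuzzy_entropy([0.0, 1.0, 1.0]))  # 0.0: crisp sets carry no fuzziness
print(fuzzy_entropy([0.5, 0.5]))       # 1.0: the maximally fuzzy midpoint
```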

13.
Narrative complexity should be managed systematically during the creation process because it heavily influences recipients' levels of understanding and interest. Existing research has tended to depend on impressionistic criticism or to deal with narrative complexity at general and superficial levels. In this paper, we consider the creation and acceptance of a narrative as information processing mediated by a narrative text. Under this assumption, we propose a method to quantitatively evaluate narrative complexity at the recipient's cognitive level, and to effectively utilize the evaluation to aid the author's narrative creation process. Within our knowledge distribution model, a narrative is represented as a knowledge structure, and the knowledge state and knowledge flow of narrative agents are evaluated using a probabilistic reasoning model. From the knowledge flows, the amount of information processed and required by the recipient is calculated as a measure of narrative complexity using entropy theory. As a case study, we conduct a comparative analysis of an actual cinematic narrative and a structurally manipulated version of that narrative to show how the method captures characteristics and changes of the narrative; we also discuss the improvements presented in this paper compared to our previous research.

14.
This paper presents a framework for querying inconsistent databases in the presence of functional dependencies. Most works dealing with the problem of extracting reliable information from inconsistent databases are based on the notion of repair, a minimal set of tuple insertions and deletions which leads the database to a consistent state (called a repaired database), and the notion of consistent query answer, a query answer that can be obtained from every repaired database. In this work, both the notion of repair and that of query answer differ from the original ones. In the presence of functional dependencies, tuple deletions are the only operations performed to restore the consistency of an inconsistent database. However, deleting a tuple to remove an integrity violation potentially eliminates useful information in that tuple. To cope with this problem, we adopt a notion of repair, based on tuple updates, which allows us to better preserve information in the source database. A drawback of the notion of consistent query answer is that it does not allow us to discriminate among non-consistent answers, namely answers which can be obtained from a non-empty proper subset of the repaired databases. To obtain more informative query answers, we propose the notion of probabilistic query answer, that is, query answers are tuples associated with probabilities. This new semantics of query answering over inconsistent databases allows us to give a measure of uncertainty to query answers. We show that the problem of computing probabilistic query answers is FP^#P-complete. We also propose a technique for computing probabilistic answers to arbitrary relational algebra queries.
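The idea of probabilistic query answers can be illustrated by weighting each answer by the fraction of repairs that return it (using deletion-based repairs under a key dependency for simplicity; the paper itself adopts update-based repairs):

```python
from itertools import product

def repairs(tuples, key_index):
    """Deletion-based repairs under a key dependency: keep exactly one
    tuple from each group of tuples agreeing on the key attribute."""
    groups = {}
    for t in tuples:
        groups.setdefault(t[key_index], []).append(t)
    return [set(choice) for choice in product(*groups.values())]

def probabilistic_answers(tuples, key_index, query):
    """Probability of each answer = fraction of repairs returning it."""
    reps = repairs(tuples, key_index)
    counts = {}
    for rep in reps:
        for ans in query(rep):
            counts[ans] = counts.get(ans, 0) + 1
    return {ans: c / len(reps) for ans, c in counts.items()}

# emp(name, city): the two 'ann' tuples violate the FD name -> city
emp = [('ann', 'rome'), ('ann', 'oslo'), ('bob', 'rome')]
answers = probabilistic_answers(emp, 0, lambda db: {c for _, c in db})
print(answers)  # {'rome': 1.0, 'oslo': 0.5}
```

'rome' appears in every repair (via bob), while 'oslo' appears in only one of the two, which is exactly the discrimination among non-consistent answers that the plain consistent-answer semantics loses.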

15.
In our previous research, we proposed the artificial chromosomes with genetic algorithm (ACGA), which combines the concept of Estimation of Distribution Algorithms (EDAs) with genetic algorithms (GAs). The probabilistic model used in ACGA is a univariate probabilistic model, and we showed that ACGA is effective in solving scheduling problems. In this paper, a new probabilistic model is proposed to capture variable linkages together with the univariate probabilistic model, whereas most EDAs use only one kind of statistical information. The proposed algorithm is named extended artificial chromosomes with genetic algorithm (eACGA). We investigate the usefulness of the probabilistic models and compare eACGA with several well-known permutation-oriented EDAs on benchmark instances of the permutation flowshop scheduling problem (PFSP). eACGA yields better solution quality for the makespan criterion when the average error ratio is used as the performance measure. In addition, eACGA is further integrated with well-known heuristic algorithms, such as NEH and variable neighborhood search (VNS); the result is denoted eACGAhybrid. In terms of both solution quality and computational efficiency, the experimental results indicate that eACGAhybrid outperforms the other algorithms known in the literature. As a result, the proposed algorithms are very competitive in solving PFSPs.
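A univariate probabilistic model of the kind ACGA uses can be sketched as a position-frequency matrix learned from the population, from which new permutations are sampled position by position (an illustrative sketch; the actual ACGA update rule may differ):

```python
import random

def position_model(population, n):
    """Univariate model: freq[i][j] = how often job j occupies position i."""
    freq = [[0] * n for _ in range(n)]
    for perm in population:
        for i, j in enumerate(perm):
            freq[i][j] += 1
    return freq

def sample_permutation(freq, n, rng):
    """Sample position by position, renormalising over the unused jobs."""
    remaining = set(range(n))
    perm = []
    for i in range(n):
        jobs = sorted(remaining)
        weights = [freq[i][j] + 1e-9 for j in jobs]  # tiny smoothing term
        j = rng.choices(jobs, weights=weights)[0]
        perm.append(j)
        remaining.remove(j)
    return perm

pop = [[2, 0, 1, 3]] * 5  # degenerate population with one dominant ordering
model = position_model(pop, 4)
print(sample_permutation(model, 4, random.Random(1)))  # [2, 0, 1, 3]
```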

16.
We define a wash criterion as one where the decision-maker is indifferent among the alternatives when they are compared on that criterion. In view of the Belton–Gear example and other such anomalies associated with the analytic hierarchy process (AHP), we ask whether eliminating a wash criterion will affect the overall ranking of objects. In the case where there is only one level of criteria, the rank order of objects is unaffected by leaving out a wash criterion. However, in the case where the wash criterion is a subcriterion, the rank order may be affected by leaving it out.

Scope and purpose: A wash criterion is defined as a criterion where the decision-maker is indifferent among the alternatives when they are compared on that criterion. We would like to think that the overall rank order of objects would be unaffected when a wash criterion is excluded. We give an example of an AHP hierarchy where this is not the case. In our view this presents another challenge to the AHP methodology.
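The single-level claim is easy to check numerically: with additive aggregation, a criterion on which all alternatives receive equal local priorities shifts every overall score by the same amount, so dropping it and renormalising the remaining weights leaves the ranking unchanged (a sketch with hypothetical numbers):

```python
def weighted_scores(weights, scores):
    """scores[alt]: list of local priorities, one per criterion."""
    return {alt: sum(w * s for w, s in zip(weights, vals))
            for alt, vals in scores.items()}

def ranking(overall):
    return sorted(overall, key=overall.get, reverse=True)

# Criterion 2 is a "wash": every alternative gets the same local priority
weights = [0.5, 0.3, 0.2]
scores = {'A': [0.6, 1/3, 0.2], 'B': [0.3, 1/3, 0.5], 'C': [0.1, 1/3, 0.3]}
with_wash = ranking(weighted_scores(weights, scores))

# Drop the wash criterion and renormalise the remaining weights
w2 = [0.5 / 0.7, 0.2 / 0.7]
s2 = {alt: [v[0], v[2]] for alt, v in scores.items()}
without_wash = ranking(weighted_scores(w2, s2))

print(with_wash == without_wash)  # True: the single-level ranking is unaffected
```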

17.
To address the fact that a probabilistic uncertain linguistic set contains multiple uncertain linguistic terms, each with a different probability, a multi-attribute decision-making method based on probabilistic uncertain linguistic entropy is proposed. Four new probabilistic uncertain linguistic entropies are defined: fuzzy entropy, hesitancy entropy, incomplete-information entropy and total entropy, which respectively measure the fuzziness, hesitancy, information incompleteness and overall uncertainty of a probabilistic uncertain linguistic set. Axiomatic definitions and expressions for the four entropy measures are given. Based on the four entropies, a multi-attribute decision-making model that can handle unknown attribute weights is constructed, and the effectiveness and rationality of the model are verified through a case study and comparative analysis.

18.
A unified approach to ranking in probabilistic databases
Ranking is a fundamental operation in data analysis and decision support, and plays an even more crucial role if the dataset being explored exhibits uncertainty. This has led to much work in recent years on understanding how to rank the tuples in a probabilistic dataset. In this article, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criterion optimization problem and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. We contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called PRFω and PRFe, that generalize or can approximate many of the previously proposed ranking functions. We present novel generating-function-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and/xor trees or Markov networks. We further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. Finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially PRFe, at approximating other ranking functions, and the scalability of our proposed algorithms for exact or approximate ranking.
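The spirit of a parameterized ranking function can be sketched by brute force for independent tuples: PRFω(t) = Σ_i ω(i) · Pr(t is ranked i-th), summing over possible worlds (the article's generating-function algorithms avoid this exponential enumeration, so treat this only as a semantic illustration):

```python
from itertools import product

def prf_omega(tuples, omega):
    """PRF_omega(t) = sum over ranks i of omega(i) * Pr(rank(t) = i).

    tuples: list of (name, score, existence_probability); tuples are
    assumed independent. Enumerates all 2^n possible worlds.
    """
    values = {}
    for present in product([True, False], repeat=len(tuples)):
        prob = 1.0
        world = []
        for (name, score, p), keep in zip(tuples, present):
            prob *= p if keep else 1 - p
            if keep:
                world.append((score, name))
        world.sort(reverse=True)  # rank by descending score
        for rank, (_, name) in enumerate(world, start=1):
            values[name] = values.get(name, 0.0) + prob * omega(rank)
    return values

data = [('t1', 100, 0.4), ('t2', 90, 1.0)]
# With omega(i) = 1 for i <= 1, PRF becomes Pr(tuple is the top answer)
top1 = prf_omega(data, lambda i: 1.0 if i <= 1 else 0.0)
print(top1)  # {'t1': 0.4, 't2': 0.6}
```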

19.
Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where non-deterministic choice is combined with probabilistic transitions, and we study the related decidability problems under partial- or complete-observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable, but the question whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.

20.
Adaptive multilevel rough entropy evolutionary thresholding
In this study, comprehensive research into rough-set entropy-based thresholding image segmentation techniques has been performed, producing new and robust algorithmic schemes. Segmentation is the low-level image transformation routine that partitions an input image into distinct, disjoint and homogeneous regions. Thresholding algorithms are the ones most often applied in practical situations, especially when there is a pressing need for implementation simplicity, high segmentation quality, and robustness. Combining entropy-based thresholding with rough sets results in the rough entropy thresholding (RET) algorithm. The authors propose a new algorithm based on granular multilevel rough entropy evolutionary thresholding (MRET) that operates on a multilevel domain. The performance of the MRET algorithm has been compared to the iterative RET algorithm and standard k-means clustering methods on the basis of the β-index as a representative validation measure. The experimental assessment suggests that granular multilevel rough entropy threshold-based segmentations (MRET) are of high quality, comparable with and often better than k-means clustering based segmentations. In this context, the rough entropy evolutionary thresholding MRET algorithm is suitable for specific segmentation tasks, when seeking solutions that incorporate spatial data features with particular characteristics.
