Similar Documents
20 similar documents found.
1.
The nature of a contradiction (conflict) between two belief functions is investigated. Alternative ways of distributing the contradiction among nonempty subsets of the frame of discernment are studied. The paper employs a new approach to understanding contradictions and introduces an original notion of potential contradiction. A method of associative combination of generalized belief functions – the minC combination – and its derivation are presented as part of the new approach. A proportionalization of generalized results is suggested as well. Support by Grant No. 1030803 of the GA AV ČR is acknowledged. I am grateful to Philippe Smets for a fruitful discussion on the topic.
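As a minimal illustration of the two ingredients the abstract names, the sketch below computes the unnormalized conjunctive combination of two mass functions (the mass landing on the empty set is the contradiction) and then redistributes that mass proportionally over the nonempty focal sets. This simple proportionalization is only a stand-in for the paper's more refined minC rule:

```python
def conjunctive_combine(m1, m2):
    """Unnormalized conjunctive combination of two mass functions.
    Masses are dicts mapping frozenset focal elements to mass values."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            out[c] = out.get(c, 0.0) + wa * wb
    return out

def proportionalize(m):
    """Redistribute the mass of the empty set proportionally over the
    nonempty focal sets (one simple proportionalization; minC distributes
    contradictions among subsets in a more refined way)."""
    k = m.pop(frozenset(), 0.0)
    total = sum(m.values())
    return {a: w + k * w / total for a, w in m.items()}

# Two conflicting mass functions on the frame {x, y, z}
m1 = {frozenset('x'): 0.8, frozenset('yz'): 0.2}
m2 = {frozenset('y'): 0.7, frozenset('xz'): 0.3}
combined = conjunctive_combine(m1, m2)
print(combined[frozenset()])      # mass of the contradiction: 0.8*0.7 = 0.56
print(proportionalize(combined))  # conflict redistributed over {x}, {y}, {z}
```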

2.
This paper addresses classification problems in which the class memberships of training data are only partially known. Each learning sample is assumed to consist of a feature vector x_i ∈ X and an imprecise and/or uncertain "soft" label m_i defined as a Dempster-Shafer basic belief assignment over the set of classes. This framework thus generalizes many kinds of learning problems, including supervised, unsupervised and semi-supervised learning. Here, it is assumed that the feature vectors are generated from a mixture model. Using the generalized Bayesian theorem, an extension of Bayes' theorem in the belief function framework, we derive a criterion generalizing the likelihood function. A variant of the expectation-maximization (EM) algorithm, dedicated to the optimization of this criterion, is proposed, allowing us to compute estimates of the model parameters. Experimental results demonstrate the ability of this approach to exploit partial information about class labels.
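A sketch in the spirit of this criterion (not the paper's exact algorithm): in the E-step of a Gaussian mixture EM, each component's responsibility is additionally weighted by the plausibility the soft label m_i gives to that class, so a vacuous label reduces to unsupervised learning and a precise label to supervised learning:

```python
import numpy as np
from scipy.stats import norm

def e_step(x, pi, mu, sigma, pl):
    """One E-step of an EM variant for partially labeled data: the
    responsibility of component k for sample i is proportional to
    pi_k * phi(x_i; mu_k, sigma_k) * pl_ik, where pl_ik is the
    plausibility of class k under the soft label m_i."""
    lik = norm.pdf(x[:, None], loc=mu, scale=sigma)    # (n, K) densities
    unnorm = pi * lik * pl                             # weight by label plausibility
    return unnorm / unnorm.sum(axis=1, keepdims=True)  # normalized responsibilities

# Toy data: 1-D samples, two classes; the second sample has a vacuous label
x = np.array([0.1, 1.0, 2.1])
pl = np.array([[1.0, 0.2],    # almost certainly class 1
               [1.0, 1.0],    # vacuous: both classes fully plausible
               [0.1, 1.0]])   # almost certainly class 2
t = e_step(x, pi=np.array([0.5, 0.5]), mu=np.array([0.0, 2.0]),
           sigma=np.array([1.0, 1.0]), pl=pl)
print(t)  # soft assignments combining the mixture model and the soft labels
```

The M-step would then update pi, mu and sigma from these responsibilities exactly as in standard EM.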

3.
In this paper we present a new credal classification rule (CCR) based on belief functions to deal with uncertain data. CCR allows objects to belong (with different masses of belief) not only to specific classes, but also to sets of classes called meta-classes, which correspond to the disjunction of several specific classes. Each specific class is characterized by a class center (i.e. prototype) and consists of all the objects that are sufficiently close to that center. The belief in the assignment of a given object to a specific class is determined from the Mahalanobis distance between the object and the center of the corresponding class. The meta-classes are used to capture the imprecision in the classification of objects that are difficult to classify correctly because of the poor quality of the available attributes. The selection of meta-classes depends on the application and the context, and a measure of the degree of indistinguishability between classes is introduced. In this new CCR approach, the objects assigned to a meta-class should be close to the center of this meta-class, having similar distances to all the involved specific classes' centers, and objects too far from all centers are considered outliers (noise). CCR provides robust credal classification results with a relatively low computational burden. Several experiments using both artificial and real data sets are presented at the end of this paper to evaluate and compare the performance of this CCR method with respect to other classification methods.
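A minimal sketch of the distance-to-mass step (the mapping from distance to mass below, and the constant gamma, are illustrative assumptions, not the paper's exact formulas):

```python
import numpy as np

def class_masses(x, centers, covs, gamma=1.0):
    """Turn Mahalanobis distances between an object and class centers into
    normalized masses of belief for the specific classes; gamma is an
    assumed tuning constant."""
    masses = {}
    for k, (c, S) in enumerate(zip(centers, covs)):
        d = x - c
        d2 = float(d @ np.linalg.inv(S) @ d)    # squared Mahalanobis distance
        masses[k] = np.exp(-gamma * d2)         # closer center -> larger mass
    total = sum(masses.values())
    return {k: v / total for k, v in masses.items()}

x = np.array([1.0, 1.0])
centers = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs = [np.eye(2), np.eye(2)]
print(class_masses(x, centers, covs))  # equidistant -> equal masses: the kind
                                       # of ambiguous case CCR sends to a meta-class
```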

4.
Analyzing the degree of conflict among belief functions
The study of alternative combination rules in DS theory when evidence is in conflict has recently re-emerged as an interesting topic, especially in data/information fusion applications. These studies have mainly focused on investigating which alternative would be appropriate for which conflicting situation, under the assumption that a conflict has already been identified. The issue of detecting (or identifying) conflict among evidence has been ignored. In this paper, we formally define when two basic belief assignments are in conflict. This definition deploys quantitative measures of both the mass of the combined belief assigned to the empty set before normalization and the distance between the betting commitments of the beliefs. We argue that only when both measures are high is it safe to say the evidence is in conflict. This definition can serve as a prerequisite for selecting appropriate combination rules.
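The two quantities the definition rests on are easy to compute on a small frame. A sketch (exhaustive enumeration of subsets is only practical for small frames):

```python
from itertools import combinations

def conjunctive(m1, m2):
    """Unnormalized conjunctive combination of two mass functions."""
    out = {}
    for a, wa in m1.items():
        for b, wb in m2.items():
            out[a & b] = out.get(a & b, 0.0) + wa * wb
    return out

def betp(m, frame):
    """Pignistic (betting) probability of each singleton."""
    return {x: sum(w / len(a) for a, w in m.items() if x in a) for x in frame}

def conflict_measures(m1, m2, frame):
    """The paper's two-dimensional conflict measure: the combined mass on the
    empty set before normalization, and the maximal difference between the
    betting commitments of the two mass functions over all subsets."""
    k = conjunctive(m1, m2).get(frozenset(), 0.0)
    p1, p2 = betp(m1, frame), betp(m2, frame)
    dif = 0.0
    for r in range(1, len(frame) + 1):
        for a in combinations(frame, r):
            dif = max(dif, abs(sum(p1[x] - p2[x] for x in a)))
    return k, dif

frame = ['x', 'y', 'z']
m1 = {frozenset('x'): 0.9, frozenset('xyz'): 0.1}
m2 = {frozenset('y'): 0.9, frozenset('xyz'): 0.1}
k, dif = conflict_measures(m1, m2, frame)
print(k, dif)  # 0.81, 0.9: both high -> the evidence is genuinely in conflict
```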

5.
In this paper, the concept of stochastic ordering is extended to belief functions on the real line defined by random closed intervals. In this context, the usual stochastic ordering is shown to break down into four distinct ordering relations, called credal orderings, which correspond to the four basic ordering structures between intervals. These orderings are characterized in terms of lower and upper expectations. We then derive the expressions of the least committed (least informative) belief function credally less (respectively, greater) than or equal to a given belief function. In each case, the solution is a consonant belief function that can be described by a possibility distribution. A simple application to reliability analysis is used as an example throughout the paper.
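For a belief function with finitely many focal intervals, the lower and upper expectations used to characterize these orderings reduce to mass-weighted averages of the interval endpoints. A minimal sketch under that finiteness assumption:

```python
def lower_upper_expectation(m):
    """Lower and upper expectations of a belief function on the real line
    defined by a random closed interval: m maps intervals (a, b) to masses.
    The lower expectation averages the left endpoints, the upper
    expectation the right endpoints."""
    lo = sum(w * a for (a, b), w in m.items())
    hi = sum(w * b for (a, b), w in m.items())
    return lo, hi

# A random interval: [1, 3] with mass 0.5, [2, 6] with mass 0.3, [4, 4] with 0.2
m = {(1.0, 3.0): 0.5, (2.0, 6.0): 0.3, (4.0, 4.0): 0.2}
print(lower_upper_expectation(m))  # (1.9, 4.1)
```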

6.
This paper applies the Transferable Belief Model (TBM) interpretation of the Dempster-Shafer theory of evidence to estimate parameter distributions for probabilistic structural reliability assessment based on information from previous analyses, expert opinion, or qualitative assessments (i.e., evidence). Treating model parameters as credal variables, the suggested approach constructs a set of least-committed belief functions for each parameter defined on a continuous frame of real numbers that represent beliefs induced by the evidence in the credal state, discounts them based on the relevance and reliability of the supporting evidence, and combines them to obtain belief functions that represent the aggregate state of belief in the true value of each parameter. Within the TBM framework, beliefs held in the credal state can then be transformed to a pignistic state where they are represented by pignistic probability distributions. The value of this approach lies in its ability to leverage results from previous analyses to estimate distributions for use within a probabilistic reliability and risk assessment framework. The proposed methodology is demonstrated in an example problem that estimates the physical vulnerability of a notional office building to blast loading.
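The credal-to-pignistic step of the TBM is standard and compact: the mass of each focal set is split equally among its elements, after conditioning away any mass on the empty set. A sketch on a discretized frame (the example frame is illustrative):

```python
def pignistic(m):
    """Pignistic transformation of the TBM: transfer the mass of each focal
    set equally to its elements (normalizing out any mass on the empty set),
    yielding a probability distribution usable for decision making."""
    k = m.get(frozenset(), 0.0)
    betp = {}
    for a, w in m.items():
        if not a:
            continue
        for x in a:
            betp[x] = betp.get(x, 0.0) + w / (len(a) * (1.0 - k))
    return betp

# Beliefs about a parameter discretized into three bins
m = {frozenset({'low'}): 0.5, frozenset({'low', 'mid'}): 0.3,
     frozenset({'low', 'mid', 'high'}): 0.2}
print(pignistic(m))  # {'low': 0.716..., 'mid': 0.216..., 'high': 0.066...}
```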

7.
This paper investigates the issues of combination and normalization of interval-valued belief structures within the framework of the Dempster-Shafer theory (DST) of evidence. Existing approaches are reviewed, examined and critically analysed. They either ignore normalization or separate it from the combination process, leading to irrational or suboptimal interval-valued belief structures. A new, logically correct optimality approach is developed, in which combination and normalization are optimised together rather than separately. Numerical examples are provided throughout the paper.

8.
This paper addresses supervised learning in which the class memberships of the training data are subject to ambiguity. The problem is tackled within the frameworks of ensemble learning and the Dempster-Shafer theory of evidence. The initial labels of the training data are ignored and, using the main classes' prototypes, each training pattern is reassigned to one class or a subset of the main classes according to the level of ambiguity concerning its class label. A multilayer perceptron neural network is employed to learn the characteristics of the relabeled data, and for a given test pattern its outputs are treated as a basic belief assignment. Experiments with artificial and real data demonstrate that taking the ambiguity of the training labels into account can provide better classification results than single and ensemble classifiers that solve the classification problem using the data with their initial imperfect labels.

9.
The well-known Fuzzy C-Means (FCM) algorithm for data clustering has been extended to the Evidential C-Means (ECM) algorithm in order to work in the belief functions framework with credal partitions of the data. Depending on the clustering problem, some barycenters of clusters given by ECM can become very close to one another, which can seriously degrade the clustering performance of ECM. To circumvent this problem, we introduce the notion of an imprecise cluster. The principle of our approach is that objects lying midway between the barycenters of specific classes (clusters) should be committed with equal belief to each specific cluster, instead of belonging to an imprecise meta-cluster as is done classically in ECM. Outlier objects far away from the centers of two (or more) specific clusters that are hard to distinguish are committed to the imprecise cluster (a disjunctive meta-cluster) composed of these specific clusters. The new Belief C-Means (BCM) algorithm proposed in this paper follows this very simple principle. In BCM, the mass of belief of a specific cluster for each object is computed from the distance between the object and the center of the cluster it may belong to. Both the distances between the object and the centers of the specific clusters and the distances among these centers are taken into account in determining the mass of belief of a meta-cluster. Unlike ECM, the BCM algorithm does not use the barycenter of the meta-cluster. In this paper we also present several examples to illustrate the interest of BCM and to show its main differences with respect to clustering techniques based on FCM and ECM.
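For context, the standard FCM membership update that both ECM and BCM generalize (this is the classical baseline, not the BCM mass computation itself):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard Fuzzy C-Means membership update:
    u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)), with fuzzifier m."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, c)
    d = np.fmax(d, 1e-12)                       # avoid division by zero
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
print(fcm_memberships(X, centers))  # the middle point gets ~equal memberships:
                                    # exactly the ambiguity BCM handles explicitly
```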

10.
When conjunctively merging two belief functions concerning a single variable but coming from different sources, Dempster's rule of combination is justified only when the information sources can be considered independent. When the dependencies between sources are ill-known, it is usual to require the property of idempotence for the merging of belief functions, as this property captures the possible redundancy of dependent sources. To study idempotent merging, different strategies can be followed. One strategy is to rely on idempotent rules used in either more general or more specific frameworks and to study, respectively, their particularization or extension to belief functions. In this paper, we study the feasibility of extending the idempotent fusion rule of possibility theory (the minimum) to belief functions. We first investigate how comparisons of information content, in the form of inclusion and least commitment, can be exploited to relate idempotent merging in possibility theory to evidence theory. We reach the conclusion that unless we accept the idea that the result of the fusion process can be a family of belief functions, such an extension is not always possible. As handling such families seems impractical, we then turn our attention to a more quantitative criterion and consider those combinations that maximize the expected cardinality of the joint belief function, among the least committed ones, taking advantage of the fact that the expected cardinality of a belief function depends only on its contour function.
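The fact exploited in the final step is the identity sum_A m(A)|A| = sum_x pl({x}): expected cardinality depends only on the contour function. A quick numerical check:

```python
def expected_cardinality(m):
    """Expected cardinality of a belief function: sum_A m(A) * |A|."""
    return sum(w * len(a) for a, w in m.items())

def contour(m, frame):
    """Contour function: pl({x}) = total mass of focal sets containing x."""
    return {x: sum(w for a, w in m.items() if x in a) for x in frame}

frame = ['a', 'b', 'c']
m = {frozenset('ab'): 0.6, frozenset('c'): 0.3, frozenset('abc'): 0.1}
print(expected_cardinality(m))          # 0.6*2 + 0.3*1 + 0.1*3 = 1.8
print(sum(contour(m, frame).values()))  # also 1.8: the two quantities agree
```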

11.
The OMR Abstract Model
Representing real-world things in the computing domain is impossible without the support of an effective model. Such a model is, on the one hand, an abstraction of concrete real-world things and, on the other hand, convenient for computer implementation. The OMR model proposed here reflects the way the real world operates: it abstracts and describes real-world things in terms of three elements – objects, messages and relationships – and is an effective tool for aiding computer implementation. To describe the OMR model formally and mathematically, intra-object Petri nets and inter-object Petri nets are also explored, further fleshing out the OMR model.

12.
We introduce a new probabilistic approach to dealing with uncertainty, based on the observation that probability theory does not require that every event be assigned a probability. For a nonmeasurable event (one to which we do not assign a probability), we can talk only about the inner measure and outer measure of the event. In addition to removing the requirement that every event be assigned a probability, our approach circumvents other criticisms of probability-based approaches to uncertainty. For example, the measure of belief in an event turns out to be represented by an interval (defined by the inner and outer measures), rather than by a single number. Further, this approach allows us to assign a belief (inner measure) to an event E without committing to a belief about its negation ¬E (since the inner measure of an event plus the inner measure of its negation is not necessarily one). Interestingly enough, inner measures induced by probability measures turn out to correspond in a precise sense to Dempster-Shafer belief functions. Hence, in addition to providing promising new conceptual tools for dealing with uncertainty, our approach shows that a key part of the important Dempster-Shafer theory of evidence is firmly rooted in classical probability theory.
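The correspondence is easy to see computationally: the inner and outer measures of an event coincide with belief and plausibility computed from a basic probability assignment over the measurable (focal) sets. A sketch:

```python
def bel_pl(m, event):
    """Belief (inner measure) and plausibility (outer measure) of an event,
    given a basic probability assignment m over focal sets: belief sums the
    masses of focal sets inside the event, plausibility those meeting it."""
    bel = sum(w for a, w in m.items() if a <= event)   # focal sets inside E
    pl = sum(w for a, w in m.items() if a & event)     # focal sets meeting E
    return bel, pl

# Probability is only known on coarse (measurable) sets of the frame {1,2,3,4}
m = {frozenset({1, 2}): 0.5, frozenset({3}): 0.2, frozenset({4}): 0.3}
E = frozenset({1, 3})
bel, pl = bel_pl(m, E)
print(bel, pl)  # 0.2, 0.7 -- an interval [Bel(E), Pl(E)], not a single number
bel_not, _ = bel_pl(m, frozenset({2, 4}))
print(bel + bel_not)  # 0.5 < 1: belief in E and in ¬E need not sum to one
```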

13.
We study the decision-making problem with the Dempster-Shafer theory of evidence. We analyze how to deal with this model when the available information is uncertain and can be represented with fuzzy numbers. We use different types of aggregation operators that aggregate fuzzy numbers, such as the fuzzy weighted average (FWA), the fuzzy ordered weighted averaging (FOWA) operator and the fuzzy hybrid averaging (FHA) operator. As a result, we obtain the belief structure fuzzy weighted average (BS-FWA), the belief structure fuzzy ordered weighted averaging (BS-FOWA) operator and the belief structure fuzzy hybrid averaging (BS-FHA) operator. We further generalize this new approach by using generalized and quasi-arithmetic means. We also develop an illustrative example regarding the selection of investments, where we can see the different results obtained by using different types of fuzzy aggregation operators.
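A hedged sketch of the FOWA building block, using triangular fuzzy numbers (a, b, c) and ranking them by their modal value b (one common convention; the paper may use a different ranking):

```python
def fowa(tfns, weights):
    """Fuzzy ordered weighted averaging (FOWA) with triangular fuzzy numbers:
    reorder the arguments in descending order (here by modal value) and take
    the weighted average component-wise."""
    order = sorted(tfns, key=lambda t: t[1], reverse=True)
    return tuple(sum(w * t[i] for w, t in zip(weights, order)) for i in range(3))

# Payoffs of an investment under three states, as triangular fuzzy numbers
payoffs = [(10, 20, 30), (40, 50, 60), (0, 10, 20)]
weights = [0.5, 0.3, 0.2]   # OWA weights applied to the reordered arguments
print(fowa(payoffs, weights))  # (23.0, 33.0, 43.0)
```

In the belief-structure variants, such an operator is applied inside each focal element and the results are weighted by the basic probability assignment.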

14.
A natural-language-like business rule language is designed for business users. Based on its grammar, an online prompting algorithm for editing rule statements is designed; the algorithm quickly provides a list of word options compatible with the grammar and semantics, guiding the user through rule entry.
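A minimal sketch of the prompting idea: given the tokens typed so far and a grammar mapping states to allowed successor words, return the options that keep the rule statement valid. The tiny grammar below is hypothetical, not the paper's language:

```python
def suggest_next(tokens, grammar):
    """Walk the typed tokens through the grammar's states, then return all
    word options allowed in the state reached (the online prompt list)."""
    state = 'START'
    for t in tokens:
        for nxt, words in grammar[state].items():
            if t in words:
                state = nxt
                break
        else:
            return []            # typed text no longer matches the grammar
    return [w for words in grammar.get(state, {}).values() for w in words]

grammar = {
    'START': {'SUBJ': ['customer', 'order']},
    'SUBJ':  {'VERB': ['has', 'exceeds']},
    'VERB':  {'OBJ':  ['credit limit', 'discount']},
    'OBJ':   {},
}
print(suggest_next(['customer', 'has'], grammar))  # ['credit limit', 'discount']
```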

15.
One major goal of active object recognition systems is to extract useful information from multiple measurements. We compare three frameworks for information fusion and view planning using different uncertainty calculi: probability theory, possibility theory and the Dempster-Shafer theory of evidence. The system dynamically repositions the camera to capture additional views in order to improve the classification result obtained from a single view. The active recognition problem can be tackled successfully by all the considered approaches, with sometimes only slight differences in performance. Extensive experiments confirm that recognition rates can be improved considerably by performing active steps. Random selection of the next action is much less efficient than planning, both in recognition rate and in the average number of steps required for recognition. As long as the rate of wrong object-pose classifications stays low, the probabilistic implementation always outperforms the other approaches. If the outlier rate increases, averaging fusion schemes outperform conjunctive approaches to information integration. We use an appearance-based object representation, namely the parametric eigenspace, but the planning algorithm is independent of the details of the specific object recognition environment.
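The probabilistic (conjunctive) fusion scheme compared here amounts, in its simplest form, to multiplying per-view class likelihoods into a running posterior. A generic sketch with illustrative numbers:

```python
import numpy as np

def fuse_views(likelihoods, prior):
    """Conjunctive (Bayesian) multi-view fusion: multiply each view's class
    likelihoods into the running posterior and renormalize."""
    post = np.asarray(prior, dtype=float)
    for lik in likelihoods:
        post = post * lik
        post = post / post.sum()
    return post

# Three views of an object, likelihoods over classes [A, B, C]
views = [np.array([0.5, 0.3, 0.2]),
         np.array([0.6, 0.3, 0.1]),
         np.array([0.4, 0.4, 0.2])]
print(fuse_views(views, prior=[1/3, 1/3, 1/3]))  # evidence accumulates on class A
```

An averaging scheme would instead average the per-view distributions, which is exactly what makes it more robust to outlier views, as the abstract notes.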

16.
This paper presents a visual object tracking system that is tolerant to external imaging factors such as illumination, scale, rotation, occlusion and background changes. Specifically, an online version of a projection network based on total-error-rate minimization is integrated with the observation model of a particle filter to effectively distinguish between the target object and the background. A re-weighting technique is proposed to stabilize the sampling of the particle filter for stochastic propagation. For self-adaptation, an automatic updating scheme and extraction of training samples are proposed to adjust to system changes online. Our qualitative and quantitative experiments on 16 public video sequences show convincing performance in terms of tracking accuracy and computational efficiency over competing state-of-the-art algorithms.
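For readers unfamiliar with the particle-filter machinery being stabilized here, this is the standard systematic resampling step that follows observation-model weighting; the paper's own re-weighting scheme is more specific than this generic sketch:

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    """Standard systematic resampling: particles with large observation-model
    weights are duplicated, low-weight ones are dropped, and the weights are
    reset to uniform."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights / weights.sum())
    idx = np.searchsorted(cumsum, positions)
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
particles = np.array([[0.0], [1.0], [2.0], [3.0]])   # hypothesized object states
weights = np.array([0.05, 0.7, 0.2, 0.05])           # from the observation model
new_particles, new_weights = systematic_resample(particles, weights, rng)
print(new_particles.ravel())   # mass concentrates on the likely states
```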

17.
In this paper we introduce the notion of Dynamic Generalized Controllability and Observability functions for nonlinear systems. These functions are called dynamic and generalized since they make use of additional states (dynamic extension) and are such that partial differential inequalities are solved in place of equations. The presence of the dynamic extension permits the construction of classes of canonical controllability and observability functions without relying on the solution of any partial differential equation or inequality. The effectiveness of the proposed concept is validated by means of two applications: the model reduction problem via balancing and the sensor deployment problem in a continuous stirred tank reactor (CSTR).
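For reference, the classical (static) controllability and observability functions that the dynamic, generalized versions extend; these are the standard textbook definitions, given here as background rather than the paper's new constructions:

```latex
% Classical controllability and observability functions:
\mathcal{L}_c(x_0) = \min_{\substack{u \in L_2(-\infty,0] \\ x(-\infty)=0,\; x(0)=x_0}}
    \frac{1}{2}\int_{-\infty}^{0} \|u(t)\|^2 \, dt ,
\qquad
\mathcal{L}_o(x_0) = \frac{1}{2}\int_{0}^{\infty} \|y(t)\|^2 \, dt ,
\quad x(0)=x_0,\; u \equiv 0 .
```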

18.
In this investigation, Model Order Reduction (MOR) of second-order systems having cubic nonlinearity in stiffness is developed for the first time using Krylov subspace methods and the associated symmetric transfer functions. In doing so, new second-order Krylov subspaces will be defined for the MOR procedure, which avoids the need to transform the second-order system to its state-space form and thus preserves the main characteristics of the second-order system, such as the symmetry and positive definiteness of the mass and stiffness matrices. To show the efficacy of the presented method, three examples will be considered as practical case studies. The first example is a nonlinear shear-beam building model subjected to a seismic disturbance. The second and third examples are the nonlinear longitudinal vibration of a rod and the vibration of a cantilever beam resting on a nonlinear elastic foundation, respectively. Simulation results in all cases show good accuracy of the vibrational response of the reduced-order models when compared with the original ones, while reducing the computational load.
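A hedged sketch of the linear skeleton of such a method (one-sided projection for an undamped system M q'' + K q = f u, with illustrative random matrices; the paper's subspaces for the cubic nonlinearity are more elaborate). It shows the property the abstract emphasizes: projecting with an orthonormal basis keeps the reduced mass and stiffness matrices symmetric:

```python
import numpy as np

def second_order_krylov_basis(M, K, f, r):
    """Orthonormal basis of span{K^-1 f, (K^-1 M) K^-1 f, ...}, which matches
    moments of the second-order transfer function about s = 0 without any
    state-space transformation."""
    v = np.linalg.solve(K, f)
    V = [v / np.linalg.norm(v)]
    for _ in range(r - 1):
        w = np.linalg.solve(K, M @ V[-1])
        for u in V:                      # Gram-Schmidt orthogonalization
            w = w - (u @ w) * u
        V.append(w / np.linalg.norm(w))
    return np.column_stack(V)

n, r = 6, 3
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)              # SPD mass matrix (illustrative)
K = 2 * M + np.eye(n)                    # SPD stiffness matrix (illustrative)
V = second_order_krylov_basis(M, K, rng.standard_normal(n), r)
Mr, Kr = V.T @ M @ V, V.T @ K @ V        # reduced-order matrices (r x r)
print(np.allclose(Mr, Mr.T), Kr.shape)   # symmetry preserved, (3, 3)
```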

19.
We show how Bayesian belief networks (BNs) can be used to model common temporal knowledge. Two approaches to their structuring are proposed. The first leads to BNs with nodes representing states of a process and times spent in such states, and with a graphical structure reflecting the conditional independence assumptions of a Markovian process. A second approach leads to BNs whose topology represents a conditional independence structure between event times. Once the required distributional specifications are stored within the nodes of a BN, it becomes a powerful inference machine capable, for example, of reasoning backwards in time. We discuss the computational difficulties associated with the propagation algorithms necessary to perform these inferences, and the reasons why we chose to adopt Monte Carlo-based propagation algorithms. Two improvements to existing Monte Carlo algorithms are proposed: an enhancement based on the principle of importance sampling, and a combined technique that exploits both forward and Markov sampling. Finally, we consider Petri nets, a very interesting and general representation of temporal knowledge. A combined approach is proposed, in which the user structures temporal knowledge in the Petri net formalism. The obtained Petri net is then automatically translated into an equivalent BN for probability propagation. Inferred conclusions may finally be explained with the aid of Petri nets again.
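A minimal sketch of the kind of Monte Carlo propagation involved, on a two-node temporal BN: forward-sample the earlier state and weight each sample by the likelihood of the later, observed state (likelihood weighting, a basic form of importance sampling); the probabilities below are illustrative, not from the paper:

```python
import numpy as np

def likelihood_weighting(n_samples, rng):
    """Infer P(S1 | S2 = faulty) in the chain S1 -> S2 by reasoning
    backwards in time with weighted forward samples."""
    p_s1 = 0.3                                   # P(S1 = faulty)
    p_s2_given_s1 = {True: 0.9, False: 0.2}      # P(S2 faulty | S1)
    weights = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        s1 = rng.random() < p_s1                 # forward-sample the root
        weights[s1] += p_s2_given_s1[s1]         # weight by the evidence
    return weights[True] / (weights[True] + weights[False])

rng = np.random.default_rng(1)
print(likelihood_weighting(100_000, rng))        # exact value: 0.27/0.41 ~ 0.659
```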

20.
Virtually all applications which provide or require a security service need a secret key. In an ambient world, where (potentially) sensitive information is continually being gathered about us, it is critical that those keys be both securely deployed and safeguarded from compromise. In this paper, we provide solutions for secure key deployment and storage of keys in sensor networks and radio frequency identification systems based on the use of Physical Unclonable Functions (PUFs). In addition to providing an overview of different existing PUF realizations, we introduce a PUF realization aimed at ultra-low-cost applications. We then show how the properties of Fuzzy Extractors or Helper Data algorithms can be used to securely deploy secret keys to a low-cost wireless node. Our protocols are more efficient (round complexity) and allow for lower costs compared to previously proposed ones. We also provide an overview of PUF applications aimed at solving the counterfeiting of goods and devices.
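A sketch of the classical code-offset Helper Data construction (one standard fuzzy-extractor instantiation, not necessarily the paper's exact scheme), using a repetition code so that a noisy re-measurement of the PUF still recovers the same key:

```python
import secrets

def majority(bits):
    return int(sum(bits) > len(bits) // 2)

def fe_gen(puf_response, k_bits, rep=5):
    """Enrollment: encode a random key with a repetition code and XOR it
    with the PUF response; the helper data is safe to store publicly."""
    key = [secrets.randbelow(2) for _ in range(k_bits)]
    code = [b for b in key for _ in range(rep)]                # repetition encode
    helper = [c ^ r for c, r in zip(code, puf_response)]
    return key, helper

def fe_rep(noisy_response, helper, k_bits, rep=5):
    """Reconstruction: XOR the helper data with a noisy re-measurement of the
    same PUF and decode by majority vote within each repetition block."""
    code = [h ^ r for h, r in zip(helper, noisy_response)]     # noisy codeword
    return [majority(code[i*rep:(i+1)*rep]) for i in range(k_bits)]

k_bits, rep = 4, 5
response = [secrets.randbelow(2) for _ in range(k_bits * rep)]
key, helper = fe_gen(response, k_bits, rep)
noisy = list(response)
noisy[0] ^= 1; noisy[7] ^= 1                 # two bit flips from measurement noise
assert fe_rep(noisy, helper, k_bits, rep) == key   # noise corrected, key recovered
```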

Jorge Guajardo is a senior scientist in the Information and System Security Department at Philips Research Europe. There he led the efforts to design new and efficient methodologies to secure RFID systems, and since 2007 he has focused on the design of new anti-counterfeiting methodologies based on Physical Unclonable Functions (PUFs) and their applications to secure key storage and wireless sensor networks. Before joining Philips Research, Jorge worked for GTE Government Systems, RSA Laboratories, cv cryptovision gmbh, and Infineon Technologies AG. His interests include the efficient implementation of cryptographic algorithms in constrained environments, the development of hardware architectures for private- and public-key algorithms, provable security of cryptographic protocols under various assumptions, and the interplay of physics and cryptography to attain security goals. Jorge holds a B.Sc. degree in physics and electrical engineering and an M.S. in electrical engineering from Worcester Polytechnic Institute, and a Ph.D. degree in electrical engineering and information sciences from the Ruhr-Universitaet Bochum, obtained under the supervision of Prof. Christof Paar.

Boris Škorić received a PhD in theoretical physics from the University of Amsterdam, the Netherlands, in 1999. From 1999 to 2008 he was a research scientist at Philips Research in Eindhoven, working first on display physics and later on security topics. In 2008 he joined the faculty of Mathematics and Computer Science of Eindhoven Technical University, the Netherlands, as an assistant professor.

Pim Tuyls studied Theoretical Physics at the Katholieke Universiteit Leuven, where he obtained a Ph.D. on Quantum Dynamical Entropy in 1997. Currently he works as Chief Technologist at Philips Intrinsic ID in the Netherlands, where he leads the crypto development activities. Since 2004 he has also been a visiting professor at the COSIC institute in Leuven. His main interests are in key extraction from noisy data (Physical Unclonable Functions and private biometrics, quantum cryptography) and in applications of secure multi-party computation.

Sandeep S. Kumar is a Senior Researcher at Philips Research Europe. Kumar received both his B.Tech. and M.Tech. degrees in Electrical Engineering from IIT-Bombay, India, in 2002, and his Ph.D. degree in Communication Security from Ruhr University Bochum, Germany, in 2006. His research interests include hardware and software architectures for implementations of cryptographic systems, in particular elliptic-curve cryptography on constrained devices. At Philips Research he has been working on hardware implementations of physically unclonable functions for anti-counterfeiting and presently on identity management systems for lifestyle applications. He is a member of the IACR.

Thijs Bel studied Chemical Differentiation at the IHBO of Eindhoven, obtaining his certificate in 1984. In 1985 he joined Philips Research, working first on lithography for ICs and later on lithography for several kinds of displays. In 2007 he joined the Thin Film Facilities group, where he worked on PUFs, and in 2008 he joined the Device Processing Facilities group, working on OLEDs.

Antoon H. M. Blom studied electrotechnology at the Technical High School of 's-Hertogenbosch, where he graduated in 1978. In 1979 he joined the Philips Company at the mechanization department of the Volt site in Tilburg, a production site for wire-wound components. After an intermediate period at the laboratory for tuning units and transformers within the consumer electronics department in Eindhoven, he joined the centre for manufacturing technologies, which has recently been merged with the Philips Applied Technologies department, where he works in the Optics & Sensors group of the Process Technology department.

Geert-Jan Schrijen obtained his M.Sc. degree in Electrical Engineering from the University of Twente (Enschede) in December 2000. During his studies he specialized in digital signal processing and active noise cancellation. In April 2001 he joined Philips Research. As a research scientist he became interested in the fields of cryptography and information theory and worked for several years on security technologies such as Digital Rights Management (DRM) systems, low-power authentication protocols and private biometric systems. From 2005 he has been involved in the work on Physical Unclonable Functions (PUFs). Geert-Jan was appointed Chief Algorithm Development at the Philips Intrinsic ID lab venture in April 2007, where he focuses on the development of signal processing algorithms and security architectures around PUFs.
