Similar Documents
20 similar documents found.
1.
In the last few years, evidence theory, also known as Dempster-Shafer theory or belief functions theory, has received growing attention in many fields such as artificial intelligence, computer vision, telecommunications and networks, robotics, and finance. This is because imperfect information permeates real-world applications and must therefore be incorporated into any information system that aims to provide a complete and accurate model of the real world. Although it is at an early stage of development relative to classical probability theory, evidence theory has proved particularly useful for representing and reasoning with imperfect information in a wide range of real-world applications. In such cases, evidence theory provides a flexible framework for handling and mining uncertainty and imprecision, as well as for combining evidence obtained from multiple sources and modeling the conflict between them. The purpose of this paper is threefold. First, it introduces the basics of belief functions theory, with emphasis on the transferable belief model. Second, it provides a practical case study showing how belief functions theory was used in a real network application, thereby offering guidelines for how evidence theory may be used in telecommunications and networks. Lastly, it surveys and discusses a number of applications of evidence theory in telecommunications and network technologies.
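As a concrete illustration of the combination mechanism this survey builds on, below is a minimal sketch of Dempster's rule of combination; the two-element frame, the mass values, and the function names are illustrative assumptions rather than material from the paper.

```python
# Minimal sketch of Dempster's rule of combination (illustrative example only).
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

# Two illustrative sources over the frame {a, b}
m1 = {frozenset("a"): 0.6, frozenset("b"): 0.1, frozenset("ab"): 0.3}
m2 = {frozenset("a"): 0.5, frozenset("b"): 0.2, frozenset("ab"): 0.3}
fused, k = combine(m1, m2)
print(fused, "conflict =", k)
```

In the transferable belief model emphasized by the paper, the unnormalized conjunctive variant (which keeps the conflict mass on the empty set) would be used instead of the normalized rule shown here.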

2.
The authors address sleep staging as a medical decision problem. They develop a model for automated sleep staging by combining signal information, human heuristic knowledge in the form of rules, and a mathematical framework. The EEG/EOG/EMG (electroencephalogram/electro-oculogram/electromyogram) events relevant for sleep staging are detected in real time by an existing front-end system and are summarized per minute. These token data are translated and normalized, and constitute the input alphabet to a finite-state machine (automaton). The processed token events are used as partial beliefs in a set of anthropomimetic rules, which encode human knowledge about the occurrence of a particular sleep stage. The Dempster-Shafer theory of evidence weighs the partial beliefs and attributes the minute's sleep stage to the machine state transition that displays the highest final belief. Results are briefly presented.

3.
The D-S evidence combination rule can fuse information without any prior information, an advantage that has made D-S evidence theory very widely used in multi-sensor fusion systems. However, the BPA (basic probability assignment) function is defined over the power set of the basic events and cannot be obtained by simple statistical methods. To solve this problem, a more adaptable BPA generation method based on the similarity of fuzzy numbers is proposed. A data classification example demonstrates the effectiveness and applicability of the method.
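The paper's exact construction is not reproduced here; the sketch below shows one hedged way to turn fuzzy-number memberships into a BPA by normalizing a sample's similarity to triangular class models. The membership function, the class models, and the attribute values are illustrative assumptions.

```python
# Hedged sketch: generate a BPA from similarity to triangular fuzzy class models.
def tri_membership(x, a, b, c):
    """Membership of x in the triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bpa_from_similarity(sample, class_models):
    """Normalize the sample's similarity to each class model into a BPA."""
    sims = {cls: tri_membership(sample, *abc) for cls, abc in class_models.items()}
    total = sum(sims.values())
    if total == 0.0:
        return {frozenset(class_models): 1.0}        # no support: total ignorance
    return {frozenset([cls]): s / total for cls, s in sims.items()}

# Illustrative fuzzy models of one attribute for three classes
models = {"A": (1.0, 1.5, 2.0), "B": (3.0, 4.3, 5.1), "C": (4.5, 5.5, 6.9)}
print(bpa_from_similarity(4.8, models))
```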

4.
Probability parameter learning methods in Bayesian networks
薛万欣, 刘大有, 张弘. 《电子学报》, 2003, 31(11): 1686-1689
Bayesian networks have become a research hotspot in the AI field and play a crucial role in modern expert systems, diagnostic systems, and decision support systems. Research on Bayesian networks concentrates on three aspects: knowledge representation, learning, and inference. Probabilistic knowledge forms the solid mathematical foundation of Bayesian networks, and learning distribution parameters from data is what moves Bayesian networks toward practical application. This paper introduces and compares the commonly used methods for learning probability parameters and examines their advantages and disadvantages under different application settings. Methods based on classical statistics are theoretically mature and computationally simple, but they use only the information provided by the sample data, cannot incorporate expert knowledge, and depend heavily on the data. Bayesian methods organically combine the two kinds of information, so the dependence on sample data is reduced and the learning results are more accurate. Parameter learning is the foundation of Bayesian network learning and an indispensable part of Bayesian network structure learning.
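To illustrate the contrast drawn above between classical and Bayesian parameter learning, here is a small sketch for a single conditional probability table entry: maximum likelihood uses the counts alone, while the Bayesian estimate blends the counts with a Dirichlet prior encoding expert knowledge. The counts and prior strengths are made-up numbers.

```python
# Illustrative comparison of MLE and Bayesian (Dirichlet) parameter estimates.
def mle(counts):
    total = sum(counts)
    return [c / total for c in counts]

def bayesian(counts, alphas):
    """Posterior mean of a Dirichlet(alphas) prior updated with the observed counts."""
    total = sum(counts) + sum(alphas)
    return [(c + a) / total for c, a in zip(counts, alphas)]

counts = [2, 0, 8]     # observed cases of X = x1, x2, x3 for one parent configuration
prior  = [1, 1, 1]     # uniform Dirichlet prior standing in for weak expert knowledge
print("MLE      :", mle(counts))              # the unseen state x2 gets probability 0
print("Bayesian :", bayesian(counts, prior))  # the prior smooths the estimate
```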

5.
This paper applies the combined use of qualitative Markov trees and belief functions (otherwise known as the Dempster-Shafer theory of evidence) to pavement management decision-making. The basic concepts of the belief function approach (basic probability assignments, belief functions, and plausibility functions) are discussed. This paper also discusses the construction of the qualitative Markov tree (join tree). The combined use of the two methods provides a framework for dealing with uncertainty, incomplete data, and imprecise information in the presence of multiple pieces of evidence on decision variables. The approach is appropriate because it offers an improved methodology and analysis compared with the traditional probability methods applied in pavement management decision-making. Traditional probability theory, as a mathematical framework for conceptualizing uncertainty, incomplete data, and imprecise information, has several shortcomings that alternative theories have been developed to address. An example is presented to illustrate the construction of qualitative Markov trees from the evidential network and the solution algorithm. The purpose of the paper is to demonstrate how the evidential network and the qualitative Markov tree can be constructed, and how the propagation of m-values can be handled in the network.

6.
Security is a key concern in a cloud environment, as the servers are distributed throughout the globe and handle the circulation of highly sensitive data. Intrusions in the cloud are common because the huge network traffic paves the way for intruders with sophisticated software to breach traditional security systems. To avoid such problems, intrusion detection systems (IDSs) have been introduced by various researchers. Each IDS was developed to achieve a particular objective, that is, providing security by detecting intrusions. Most of the available IDSs are inefficient and unable to provide accurate classification, and some of them are too computationally expensive to implement in practical scenarios. This article proposes a new and efficient IDS framework that addresses these drawbacks by accurately classifying the intrusion type through effective training. The proposed framework, named flow directed deep belief network (FD-DBN), involves three main phases: pre-processing, clustering, and classification. In pre-processing, certain data mining operations are carried out to clean the data. The clustering phase uses the game-based k-means (GBKM) clustering algorithm. The clustered data is then provided as input to the FD-DBN classification framework, where the training process is carried out. The deep belief network (DBN) is trained on the dataset features, and the flow direction algorithm is adopted for tuning the weight parameters of the DBN. Through this tuning, the model yields accurate classification outcomes. The simulations are done in Python 3.6, and the results show that the proposed framework is much more effective than the existing IDS frameworks.
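The game-based k-means (GBKM) algorithm itself is not reproduced here; the sketch below only shows the shape of the clustering phase, using ordinary k-means from scikit-learn on made-up flow features, before the clustered data would be handed to the DBN classifier. The feature matrix and cluster count are illustrative assumptions.

```python
# Illustrative clustering stage using plain k-means (not GBKM) on synthetic flow features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
flows = rng.normal(size=(200, 5))          # stand-in for pre-processed flow records

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(flows)
print("records per cluster:", np.bincount(labels))
```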

7.
The simplest form of a system consists of an input, a process which adds value to the input, and an output containing actionable information. The value of the output from a system is a function not only of the quality of the input but also of the appropriateness, validity, and reliability of the transforming process. For certain classes of decisions, such as those that correspond to crisis management, even the most simplistic version of a system presents problems for system developers and hence for decision makers. Unlike highly structured decision settings, where precise models exist and high-quality data is readily available, decision making in crisis settings involves ill-structured tasks that pose considerable problems for those responsible for investing a limited information resource budget. This paper presents a framework for analyzing the information monitoring decision support system tradeoff dilemma that occurs in crisis management settings. It concludes with several insights and recommendations for future research.

8.
孙伟超, 许爱强, 李文海. 《电子学报》, 2016, 44(11): 2726-2734
When evidence theory is used for data fusion, a precise belief structure is sometimes difficult to obtain, and interval-valued beliefs must be combined instead. This paper analyzes the problem of combining interval-valued evidence under both the DST and DSmT frameworks and briefly reviews the methods currently in use. Building on a study of optimization methods, four optimization-based methods for combining interval-valued beliefs are proposed. The CDI1 to CDI4 methods can all be applied under both the DST and DSmT frameworks to combine imprecise, uncertain, and conflicting information, with the accuracy of the combination results improving progressively from one method to the next. Numerical examples are given for verification, and the results are compared with those of other interval-valued belief combination methods.

9.
This paper presents a framework for developing part failure-rate models. It is a partial result of an effort sponsored by the US Air Force to develop reliability prediction models for military avionics. Published data show that the existing reliability prediction methods fall far short of providing the required accuracy. One of the problems with the existing methods is the exclusion of critical factors. The new framework is based on the premise that essentially all failures are caused by the interactions of built-in flaws, failure mechanisms, and stresses. These three ingredients combine to form the failure distributions, which are functions of stress application duration (e.g., aging time), number of thermal cycles, and vibration duration. The Weibull distribution has been selected as the general distribution. The distribution is then modified by critical factors such as flaw quantities, effects of environmental stress screening, calendar-time reliability improvements, and vendor quality differences, to provide the part failure-rate functions. To lend credibility to the framework, only well published data and information have been used.
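As a worked example of the Weibull building block named in the abstract, the snippet below evaluates the Weibull hazard (failure-rate) function h(t) = (beta/eta)(t/eta)^(beta-1); the shape and scale values are illustrative, not parameters of the Air Force models.

```python
# Weibull hazard (failure-rate) function with illustrative parameters.
def weibull_hazard(t, beta, eta):
    """h(t) = (beta / eta) * (t / eta) ** (beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

for t in (100.0, 1000.0, 5000.0):
    # beta < 1 gives a decreasing hazard, i.e. the infant-mortality regime
    print(t, weibull_hazard(t, beta=0.8, eta=2000.0))
```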

10.
This paper examines the problem of multidimensional classification, an automated learning process where "rules" are to be inferred on separate but related aspects of a problem, using identical or overlapping data sets. A general framework describing the various types of multidimensional classification is provided. The paper specifically concentrates on conditional classification, wherein the order of classification is based on domain semantics. Drawing from concept learning and information theory, algorithms are presented for acquiring tree-structured knowledge from available data. An application to manufacturing scheduling is presented. Results indicate that conditional classification may provide some ability to better interpret related decisions in automated manufacturing contexts. Further work is necessary to ascertain whether the approach is robust, particularly on more complex decisions, larger data sets, and noisy data.
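To make the information-theoretic criterion behind tree-structured rule induction concrete, here is a small sketch computing the information gain of a binary split on made-up scheduling labels; the labels and the split are illustrative assumptions.

```python
# Entropy and information gain of a binary split (illustrative data).
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, split):
    """Entropy reduction when labels are partitioned by the boolean split."""
    left  = [y for y, s in zip(labels, split) if s]
    right = [y for y, s in zip(labels, split) if not s]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

labels = ["late", "late", "on_time", "on_time", "on_time", "late"]
split  = [True,   True,   False,     False,     True,      True]
print("information gain:", information_gain(labels, split))
```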

11.
Context awareness and activity recognition are becoming a hot research topic in ambient intelligence (AmI) and ubiquitous robotics, due to the latest advances in wireless sensor network research, which provide a richer set of context data and allow a wide coverage of AmI environments. However, using raw sensor data for activity recognition is subject to different constraints and makes activity recognition inaccurate and uncertain. The Dempster-Shafer theory of evidence, known as belief functions, gives a convenient mathematical framework for handling uncertainty issues in sensor information fusion and facilitates decision making for the activity recognition process. Dempster-Shafer theory is increasingly applied to represent and manipulate contextual information under uncertainty in a wide range of activity-aware systems. However, using this theory requires solving the issue of mapping sensor data into high-level activity knowledge. The present paper contributes new ways to apply the Dempster-Shafer theory using binary discrete sensor information for activity recognition under uncertainty. We propose an efficient mapping technique that allows converting and aggregating the raw data captured using a wireless sensor network into high-level activity knowledge. In addition, we propose a conflict resolution technique to optimize decision making in the presence of conflicting activities. For the validation of our approach, we used a real dataset captured with sensors deployed in a smart home. Our results demonstrate that the improvement of activity recognition provided by our approaches is up to 79 %. These results also demonstrate that the accuracy of activity recognition using the Dempster-Shafer theory with the proposed mappings outperforms both the naïve Bayes classifier and the J48 decision tree.
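The paper's own mapping technique is not reproduced here; the sketch below shows one simple, hedged way to map a binary sensor reading to a mass function over activities, discounted by an assumed sensor reliability. The activity frame, the sensor-to-activity association, and the reliability value are illustrative assumptions.

```python
# Hedged sketch: map a binary sensor reading to a discounted mass function.
def sensor_to_mass(fired, supported, frame, reliability=0.9):
    """A firing sensor supports its associated activities, discounted by reliability."""
    if not fired:
        return {frozenset(frame): 1.0}              # no evidence: total ignorance
    return {frozenset(supported): reliability,      # mass on the supported activities
            frozenset(frame): 1.0 - reliability}    # remaining mass on the whole frame

frame = {"cooking", "sleeping", "watching_tv"}
print(sensor_to_mass(True, {"cooking"}, frame))
```

Mass functions obtained this way for several sensors could then be fused with Dempster's rule, as sketched under the first abstract above.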

12.
Site characterization is the most essential task to be undertaken prior to the reclamation of a potentially contaminated site, and it is composed of sampling, laboratory analysis, and data evaluation phases. We are primarily concerned with the data evaluation phase, and we utilize a recently developed adaptive areal partitioning algorithm to characterize the site. Here, we enhance this approach by integrating expert knowledge (expert belief) into the fuzzy areal assessment scheme, which derives information from sample data. We propose to allocate an adaptive weight to expert belief during the assessment. We compare the belief-integrated approach with the nonintegrated one on synthetically generated sites where both uniform and biased sampling have been applied independently. In biased sampling, the zones claimed by the expert to be highly contaminated are allocated a higher sampling density. We demonstrate that the belief-integrated approach outperforms the nonintegrated one whether the expert is correct or mistaken in his/her judgment, irrespective of the sampling methodology.

13.
A framework for fuzzy recognition technology
Presents a scheme for object recognition by classificatory problem solving in the framework of fuzzy sets and possibility theory. The scheme has a particular focus on handling the imperfection problems that are common in application domains where the objects to be recognized (detected and identified) represent undesirable situations, referred to as crises. Crises develop over time, and observations typically increase in number and precision as the crisis develops. Early detection and precise recognition of crises is desired, since it increases the possibility of an effective treatment. The crisis recognition problem is central in several areas of decision support, such as medical diagnosis, financial decision making and early warning systems. The problem is characterized by vague knowledge and observations suffering from several kinds of imperfections, such as missing information, imprecision, uncertainty, unreliability of the source, and mutual (possibly conflicting or reinforcing) observations of the same phenomena. The problem of handling possibly imperfect observations from multiple sources includes the problems of information fusion and multiple-sensor data fusion. The different kinds of imperfection are handled in the framework of fuzzy sets and possibility theory.

14.
The lifetime distribution of an engineering system is a powerful tool for analysing the system with respect to its reliability characteristics. The analysis can be improved if the experimenter has, and is able to combine, prior belief about the system with the operational or experimental data. Moreover, in many situations, operational data on the complete system may be either costly to obtain or non-existent. The problem can still be tackled by making use of the information available on the subunits or components of the system. Pursuing these concepts, the present study deals with the analysis of posterior availability distributions for a series and a parallel system. Time-truncated failure and repair information and prior beliefs about the failure and repair rates of the components of the systems have been employed in the analysis.
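For reference, the series and parallel structure functions underlying the availability analysis are sketched below with made-up component availabilities.

```python
# Steady-state availability of simple series and parallel structures (illustrative values).
def series_availability(avails):
    prod = 1.0
    for a in avails:
        prod *= a                    # every component must be up
    return prod

def parallel_availability(avails):
    prod = 1.0
    for a in avails:
        prod *= (1.0 - a)            # system is down only if all components are down
    return 1.0 - prod

components = [0.95, 0.90, 0.99]
print("series  :", series_availability(components))
print("parallel:", parallel_availability(components))
```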

15.
The Spacelab Data Processing Facility (SLDPF) has developed expert system prototypes to aid in the quality assurance of Spacelab and/or Attached Shuttle Payloads (ASP) processed telemetry data. The SLDPF is an integral part of the Space Shuttle data network for missions carrying scientific payloads. Its functions include the capturing, quality monitoring, processing, accounting, and forwarding of data from Spacelab and ASP missions to various user facilities. The SLDPF consists of two functional elements: the Spacelab Input Processing System (SIPS) and the Spacelab Output Processing System (SOPS). The two expert system prototypes were designed to determine their feasibility and potential in the quality assurance of processed telemetry data. The SIPS expert system, the Knowledge System Prototype (KSP), uses an IBM PC/AT with the commercial expert system shell OPS5+. Knowledge was extracted from SIPS experts by emulating the duties of quality assurance analysts. In an interactive mode, an analyst responds to queries, resulting in instructions and decisions governing the reprocessing, releasing, or further analysis/troubleshooting of data. Released data are forwarded for further processing on the SOPS Sperry 1100/82. The data are edited, time-ordered with overlapping data removed, decommutated, and quality checked before shipment. The SOPS QA analysts isolate problems and select the appropriate action: either accept the data or request that the data be reprocessed. The SOPS expert system emulates this process using the expert system shell CLIPS and the Macintosh personal computer. To date, these prototypes indicate potentially beneficial results, e.g., increased analyst productivity, a decreased burden of tedious analysis, consistent evaluations of data, concise historical records, training for new analysts, and faster operational retraining of reassigned Spacelab analysts. The logic implemented in the prototypes, the limitations of the personal computers utilized, and the degree of accessibility to input data have led to an operational configuration. This configuration is currently under development and, on completion, will enhance the efficiency, both in time and quality, of releasing Spacelab/ASP data.

16.
Fault-tree analysis: a knowledge-engineering approach
This paper deals with the application of knowledge engineering and a methodology for the assessment and measurement of the reliability, availability, maintainability, and safety of industrial systems using fault-tree representation. Object-oriented structures, production rules representing the expert's heuristics, algorithms, and database structures are the basic elements of the system. The blackboard architecture of the system supports qualitative and quantitative evaluation of the fault tree. A fuzzy set approach analyzes problems with few failure data or much fuzziness or imprecision. Fault-tree analysis is a knowledge acquisition structure that has been extensively explored by knowledge engineers. Reliability engineers can apply the techniques developed in this area of computer science to: (1) improve the data acquisition process; (2) explore the benefits of object-oriented expert systems for reliability applications; (3) integrate the several sources of knowledge into a unique system; (4) explore approximate reasoning to handle uncertainty; and (5) develop hybrid solution strategies combining expert heuristics, conventional procedures, and available failure data.
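As a minimal illustration of the quantitative side of fault-tree evaluation, the sketch below computes the top-event probability of a tiny tree with one AND gate feeding an OR gate, assuming independent basic events with made-up probabilities.

```python
# Top-event probability of a toy fault tree (independent basic events, illustrative numbers).
def and_gate(probs):
    p = 1.0
    for q in probs:
        p *= q                       # all inputs must occur
    return p

def or_gate(probs):
    p = 1.0
    for q in probs:
        p *= (1.0 - q)               # complement of "no input occurs"
    return 1.0 - p

pump_fails, valve_fails, sensor_fails = 0.01, 0.02, 0.05
top_event = or_gate([and_gate([pump_fails, valve_fails]), sensor_fails])
print("P(top event) =", top_event)
```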

17.
A dynamic reliability problem is considered in which system components are operating in time. A general framework is developed for analyzing the relationship between prior information and the variance of a Monte Carlo estimator. The variance of an estimator based on less prior information is less than that of an estimator based on more prior information. The first application derives a sequential destruction method as a special case within this general framework. The method uses the order of component failure as prior information instead of the time to failure of components. The second application shows that the use of less prior information than the order of component failure can circumvent difficulties faced by a state transition method. A numerical example is presented.
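For context, here is a plain Monte Carlo sketch of the kind of dynamic-reliability quantity whose estimator variance the paper studies: the probability that a two-out-of-three system has failed by time t, with made-up exponential component lifetimes. The system structure, rates, and time horizon are illustrative assumptions.

```python
# Plain Monte Carlo estimate of a 2-out-of-3 system's failure probability by time t.
import random

def system_failed_by(t, rates, k=2):
    """Sample component lifetimes; the system fails once k components have failed."""
    times = sorted(random.expovariate(r) for r in rates)
    return times[k - 1] <= t

random.seed(0)
rates = (0.001, 0.002, 0.0015)       # illustrative failure rates per hour
n = 100_000
estimate = sum(system_failed_by(500.0, rates) for _ in range(n)) / n
print("P(system failed by t = 500 h) ~", estimate)
```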

18.
To address the shortcomings of traditional agricultural expert systems, namely their lack of dynamic prediction capability and mechanistic explanation and their low utilization rate, an integration framework combining a virtual plant growth model, an agricultural expert system, and distributed virtual reality technology is designed. The organic integration of the virtual plant growth model and the agricultural expert system helps the inference engine obtain the data needed for reasoning and decision making and reduces the risk of the system's decisions. In addition, a multi-user distributed virtual environment enables the virtual-plant-based agricultural expert system to provide users with richer content and more intuitive, convenient, and fast services.

19.
This is a review of the more important theoretical and experimental results on the intensity-dependent refractive index in relation to self-focusing and nonlinear scattering of light in liquids. The part played within the framework of the molecular-statistical theory of these processes by radial and angular molecular correlations, molecular redistribution, various nonlinear polarizations induced by fluctuations of the electric fields of neighboring molecular multipoles, and the geometrical shape of the molecules is discussed. Numerical evaluations of the various contributions are given and compared with the available experimental data. It is found that while investigation of optically induced nonlinearities in gases and rarefied substances yields only values of the linear and nonlinear optical polarizabilities of atoms or molecules, work on condensed systems yields information on the values of molecular quadrupoles and octupoles as well as on the functions accounting for molecular correlations of various kinds.

20.
This paper summarizes a methodology for reliability prediction of new products where field data are sparse and the allowed number and length of experiments are limited. The methodology relies on estimating a set in which the unknown parameters are most likely to be found, calculating an upper bound for the reliability metric of interest conditioned on the parameters residing in the estimated set, and tightening the bounds via design of experiments. Models of failure propagation, failure acceleration, system operations, and time/cycles to failure at various levels of fidelity, together with expert-elicited information, may be incorporated to enhance the accuracy of the predictions. The application of the model is illustrated through numerical studies.

