Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
This paper presents a novel data fusion paradigm based on fuzzy evidential reasoning. A new fuzzy evidence structure model is first introduced to formulate probabilistic evidence and fuzzy evidence in a unified framework. A generalized Dempster's rule is then used to combine fuzzy evidence structures from multiple information sources. Finally, an effective decision rule is developed that accounts for the uncertainty of probabilistic and fuzzy evidence, quantified by Shannon entropy and fuzzy entropy respectively, handles conflict, and yields robust decisions. To demonstrate the effectiveness of the proposed paradigm, we apply it to classifying synthetic images and to segmenting multi-modality human brain MR images. The proposed paradigm outperforms both the traditional Dempster–Shafer evidence theory based approach and the fuzzy reasoning based approach.
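Several of the abstracts in this list lean on Dempster's rule of combination. As a point of reference, here is a minimal Python sketch of the classical (non-fuzzy) rule, not the generalized fuzzy variant the paper proposes; all names and the example masses are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule.

    Classical, non-fuzzy version: intersect every pair of focal elements,
    accumulate the product of their masses, and renormalize by 1 - K,
    where K is the total mass falling on the empty set (the conflict).
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence; Dempster's rule undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources over the frame {'a', 'b', 'c'}
m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
m2 = {frozenset({'a'}): 0.5, frozenset({'b', 'c'}): 0.5}
print(dempster_combine(m1, m2))  # {'a'}: ~0.714, {'b'}: ~0.286
```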

2.
Data with missing values, or incomplete information, poses challenges for classification, since incompleteness may significantly degrade classifier performance. In this paper, we handle missing values in both training and test sets with uncertainty and imprecision reasoning by proposing a new belief combination of classifier (BCC) method based on evidence theory. The proposed BCC method aims to improve classification performance on incomplete data by characterizing the uncertainty and imprecision brought by incompleteness. In BCC, different attributes are regarded as independent sources, and the collection of each attribute is considered as a subset. Multiple classifiers are then trained on each subset independently, allowing each observed attribute to provide a sub-classification result for the query pattern. Finally, these sub-classification results, weighted by discounting factors, provide supplementary information that jointly determines the final classes of the query patterns. The weights have two components: a global weight, calculated by an optimization function, represents the reliability of each classifier, while a local weight, obtained by mining attribute distribution characteristics, quantifies the importance of the observed attributes to the pattern classification. Extensive comparative experiments covering seven methods on twelve datasets demonstrate that BCC outperforms all baseline methods in terms of accuracy, precision, recall, and F1 measure, at reasonable computational cost.
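The discounting step that BCC uses to weight sub-classifiers follows the general pattern of Shafer discounting in evidence theory: scale each focal mass by a reliability factor and shift the remainder onto the whole frame. A minimal sketch under that assumption (the paper's global/local weighting scheme itself is not reproduced here; all names are illustrative):

```python
def discount(m, alpha, frame):
    """Shafer discounting: scale each focal mass by the reliability factor
    alpha in [0, 1] and move the remaining 1 - alpha onto the whole frame."""
    discounted = {s: alpha * v for s, v in m.items()}
    omega = frozenset(frame)
    discounted[omega] = discounted.get(omega, 0.0) + (1.0 - alpha)
    return discounted

m = {frozenset({'a'}): 0.7, frozenset({'b'}): 0.3}
print(discount(m, 0.8, {'a', 'b', 'c'}))  # {'a'}: 0.56, {'b'}: 0.24, frame: 0.2
```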

3.
In classifier combination, the relative values of the beliefs assigned to different hypotheses matter more than an accurate estimate of the combined belief function representing the joint observation space. For this reason, the independence requirement in Dempster's rule should be re-examined from the classifier-combination point of view. This study investigates whether there is a set of dependent classifiers that provides better combined accuracy than independent classifiers when Dempster's rule of combination is used. The analysis, carried out for three different representations of statistical evidence, shows that combining dependent classifiers with Dempster's rule may yield considerably better combined accuracy than combining independent ones.

4.
Classification is a pervasive task in many forms of human activity. Recent interest in the classification process has focused on ensemble classifier systems, which are based on the paradigm of combining the outputs of a number of individual classifiers. In this paper we propose a new approach for obtaining the final output of an ensemble classifier. The method uses the Dempster–Shafer concept of belief functions to represent the confidence in the outputs of the individual classifiers. The combining of these outputs is based on an aggregation process that can be seen as a fusion of the Dempster rule of combination with a generalized form of the OWA operator. The OWA operator provides an added degree of flexibility in expressing how the aggregation of the individual classifiers is performed.
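The generalized OWA operator mentioned above builds on the standard ordered weighted averaging operator, which sorts the inputs before weighting them. A minimal sketch of the standard operator, not the paper's specific fusion with Dempster's rule; names and values are illustrative:

```python
def owa(scores, weights):
    """Ordered weighted average: sort scores in descending order and take
    the dot product with the weight vector (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

# Weights (1, 0, 0) recover the max, (0, 0, 1) the min, uniform weights the mean
beliefs = [0.9, 0.4, 0.7]
print(owa(beliefs, [0.5, 0.3, 0.2]))  # optimism-leaning aggregation, ~0.74
```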

5.
Time series data are widely used in many applications, including critical decision support systems. The goodness of the dataset used in an analysis, called its Fitness of Use (FoU), has a direct bearing on the quality of the information and knowledge generated, and hence on the quality of the decisions based on them. Unlike traditional data quality, which is independent of the application in which the data is used, FoU is a function of the application. As the use of geospatial time series datasets increases in many critical applications, it is important to develop formal methodologies to compute their FoU and propagate it to the derived information, knowledge, and decisions. In this paper we propose a formal framework to compute the FoU of time series datasets. We present three techniques built on the Dempster–Shafer belief theory framework; they assess FoU through three aspects of the data: data attributes, data stability, and the impact of gap periods, respectively. The effectiveness of each approach is shown on hydrological datasets that measure streamflow. While hydrological information analysis is our application domain in this research, the techniques can be used in many other domains as well.
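The Dempster–Shafer framework underlying these FoU techniques scores a hypothesis with a belief/plausibility interval. A minimal sketch of those two standard functions, assuming mass functions over frozensets; the paper's actual FoU mass assignments for streamflow data are not reproduced here:

```python
def belief(m, A):
    """Bel(A): total mass of focal elements contained in A."""
    A = frozenset(A)
    return sum(v for s, v in m.items() if s <= A)

def plausibility(m, A):
    """Pl(A): total mass of focal elements intersecting A."""
    A = frozenset(A)
    return sum(v for s, v in m.items() if s & A)

# Illustrative mass function over data-quality grades
m = {frozenset({'good'}): 0.5, frozenset({'good', 'fair'}): 0.3,
     frozenset({'good', 'fair', 'poor'}): 0.2}
print(belief(m, {'good'}), plausibility(m, {'good'}))  # 0.5 1.0
```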

6.
A multi-evidence combination method based on classification-based correction
Since the traditional conflict measure cannot effectively gauge the similarity between pieces of evidence, a multi-evidence combination method based on classification-based correction is proposed to resolve the high-conflict paradox and the "zero" paradox in evidence combination. First, the evidence distance, the conflict measure, and the direction angle are used jointly to assess the similarity between pieces of evidence, classifying them into four categories: consistent, non-conflicting, low-conflict, and high-conflict. Then, the three parameters are used to assign a different correction coefficient to each class of evidence. Finally, Dempster's rule is applied to combine the corrected evidence. Numerical examples show that the proposed method handles the high-conflict paradox and the "zero" paradox well while preserving the favorable mathematical properties of evidence theory.
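Two of the similarity parameters the method relies on, the conflict measure and an evidence distance, have standard formulations in the literature. Below is a sketch of the classical conflict coefficient K and the widely used Jousselme distance; the paper's exact distance and direction-angle parameters may differ, and all names are illustrative:

```python
import math
from itertools import product

def conflict_k(m1, m2):
    """Classical conflict coefficient K: total mass of empty intersections."""
    return sum(ma * mb for (a, ma), (b, mb) in product(m1.items(), m2.items())
               if not (a & b))

def jousselme_distance(m1, m2):
    """Jousselme distance: d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)),
    where D(A, B) = |A & B| / |A | B| (Jaccard similarity of focal sets)."""
    focal = sorted(set(m1) | set(m2), key=sorted)
    diff = [m1.get(s, 0.0) - m2.get(s, 0.0) for s in focal]
    d = 0.0
    for i, a in enumerate(focal):
        for j, b in enumerate(focal):
            d += diff[i] * (len(a & b) / len(a | b)) * diff[j]
    return math.sqrt(0.5 * max(d, 0.0))

m1 = {frozenset({'a'}): 0.8, frozenset({'a', 'b'}): 0.2}
m2 = {frozenset({'b'}): 0.9, frozenset({'a', 'b'}): 0.1}
print(conflict_k(m1, m2), jousselme_distance(m1, m2))  # high conflict, large distance
```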

7.
A key issue in building fuzzy classification systems is the specification of rule conditions, which determine the structure of the knowledge base. This paper presents a new approach to automatically extracting classification knowledge from numerical data by means of premise learning. A genetic algorithm searches for the premise structure, in combination with the parameters of the membership functions of the input fuzzy sets, to yield optimal conditions for the classification rules. The major advantage of our approach is that it achieves a parsimonious knowledge base with a small number of rules. The practical applicability of the proposed method is examined through computer simulations on two well-known benchmark problems, Iris data and Cancer data classification. Received 11 February 1999 / Revised 13 January 2001 / Accepted in revised form 13 February 2001

8.
To combine highly conflicting evidence effectively, a new evidence combination method based on weighted evidence distance is proposed, building on Murphy's method and Deng Yong's weighted averaging approach. Murphy's method is used to determine the weight of each body of evidence; a modified, weighted City Block distance is used to compute the pairwise distances between pieces of evidence, from which the degree to which each piece of evidence is supported by all the others is obtained. The normalized total support of each piece of evidence then serves as its weight. The multi-source evidence is weighted and averaged, and Dempster's rule of combination is applied to fuse the information. Experimental results show that the method identifies targets more effectively and quickly, with a faster convergence rate.
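Murphy-style combination averages the bodies of evidence and then applies Dempster's rule n-1 times. A minimal self-contained sketch with a uniform-weight example; the City Block based support weights described above are replaced here by caller-supplied weights, so this is an illustration rather than the paper's exact procedure:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions (dict: frozenset -> mass)."""
    out, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        if a & b:
            out[a & b] = out.get(a & b, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in out.items()}

def murphy_combine(masses, weights):
    """Weighted average of the mass functions, then Dempster's rule n - 1 times."""
    total = sum(weights)
    avg = {}
    for m, w in zip(masses, weights):
        for s, v in m.items():
            avg[s] = avg.get(s, 0.0) + (w / total) * v
    result = avg
    for _ in range(len(masses) - 1):
        result = dempster_combine(result, avg)
    return result

# Three sources, the second highly conflicting with the other two
m1 = {frozenset({'t'}): 0.9, frozenset({'t', 'u'}): 0.1}
m2 = {frozenset({'u'}): 0.9, frozenset({'t', 'u'}): 0.1}
m3 = {frozenset({'t'}): 0.8, frozenset({'t', 'u'}): 0.2}
print(murphy_combine([m1, m2, m3], [1.0, 1.0, 1.0]))
```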

9.
A classifier combination method based on the Borda rule is proposed. The method casts classifier combination as a multi-objective, multi-party decision problem and operates on class rankings. Experiments on a standard handwritten-digit data set show that the algorithm improves the recognition rate noticeably over any single classifier and merits further study.
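The Borda rule itself is simple: each classifier ranks the candidate classes, each class earns points by rank position, and the class with the highest total wins. A minimal sketch with illustrative labels:

```python
from collections import defaultdict

def borda_combine(rankings):
    """Combine class rankings from several classifiers with the Borda rule.

    Each ranking lists class labels from best to worst; a label at position i
    among n candidates scores n - 1 - i points. The label with the highest
    total score wins.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, label in enumerate(ranking):
            scores[label] += n - 1 - i
    return max(scores, key=scores.get)

# Three classifiers rank the digits 3, 5, 8 for one test image
print(borda_combine([['3', '5', '8'], ['5', '3', '8'], ['3', '8', '5']]))  # '3'
```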

10.
Partitioning of feature space for pattern classification
The article proposes a simple approach for finding a fuzzy partitioning of a feature space for pattern classification problems. The feature space is initially decomposed into overlapping hyperboxes according to the relative positions of the pattern classes found in the training samples. A few fuzzy if-then rules, reflecting the pattern classes via the generated hyperboxes, are then obtained in terms of a relational matrix. The relational matrix is used in a modified compositional rule of inference to recognize an unknown pattern. The proposed system can handle imprecise information in both the learning and the processing phases, where the imprecise information may be incomplete, mixed, interval-valued, or linguistic in form; ways of handling such information are also discussed. The effectiveness of the system is demonstrated on synthetic data sets in a two-dimensional feature space, and its practical applicability is verified on four real data sets: the Iris data set, an appendicitis data set, a speech data set, and a hepatic disease data set.
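One common way to make hyperboxes fuzzy, which may differ from the paper's relational-matrix formulation, is to assign full membership inside the box and let membership decay as a point moves outside it. A minimal sketch under that assumption, with an illustrative decay parameter:

```python
def hyperbox_membership(x, vmin, vmax, gamma=4.0):
    """Fuzzy membership of point x in the hyperbox [vmin, vmax]:
    1 inside the box, decaying with the per-dimension overshoot outside it."""
    overshoot = 0.0
    for xi, lo, hi in zip(x, vmin, vmax):
        if xi < lo:
            overshoot += lo - xi
        elif xi > hi:
            overshoot += xi - hi
    return max(0.0, 1.0 - gamma * overshoot / len(x))

# A 2-D hyperbox covering [0.2, 0.6] x [0.1, 0.5]
print(hyperbox_membership((0.3, 0.3), (0.2, 0.1), (0.6, 0.5)))  # inside, 1.0
print(hyperbox_membership((0.7, 0.3), (0.2, 0.1), (0.6, 0.5)))  # outside, ~0.8
```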

11.
Document image processing is a crucial process in office automation; it begins at the OCR phase and runs into difficulties in document analysis and understanding. This paper presents a hybrid and comprehensive approach to document structure analysis: hybrid in the sense that it makes use of layout (geometrical) as well as textual features of a given document. These features form the basis of potential conditions, which in turn are used to express fuzzily matched rules in an underlying rule base. Rules can be formulated from features observed within one specific layout object, but they can also express dependencies between different layout objects. In addition to its rule-driven analysis, which allows easy adaptation to specific domains with their specific logical objects, the system contains domain-independent markup algorithms for common objects (e.g., lists). Received June 19, 2000 / Revised November 8, 2000

12.
This paper describes a new kind of neural network, the Quantum Neural Network (QNN), and its application to the recognition of handwritten numerals. QNN combines the advantages of neural modelling with fuzzy-theoretic principles. Novel experiments have been designed for in-depth study of applying the QNN both to real data and to confusing images synthesized by morphing. Tests on the synthesized data examine QNN's fuzzy decision boundary, illustrating its mechanism and characteristics, while studies on real data demonstrate its great potential as a handwritten numeral classifier and the special role it plays in multi-expert systems. An effective decision-fusion system is proposed, and a high reliability of 99.10% has been achieved. Received October 26, 1998 / Revised January 9, 1999

13.
This paper describes a performance evaluation study in which several efficient classifiers are tested in handwritten digit recognition. The evaluated classifiers include a statistical classifier (the modified quadratic discriminant function, MQDF), three neural classifiers, and an LVQ (learning vector quantization) classifier. They are efficient in that high accuracy can be achieved at moderate memory and computation cost. Performance is measured in terms of classification accuracy, sensitivity to training sample size, ambiguity rejection, and outlier resistance. The outlier resistance of the neural classifiers is enhanced by training with synthesized outlier data. The classifiers are tested on a large data set extracted from NIST SD19. The test accuracies of the evaluated classifiers are comparable to or higher than those of the nearest neighbor (1-NN) rule and regularized discriminant analysis (RDA). Neural classifiers prove more susceptible to small sample size than MQDF, although they yield higher accuracies with large sample sizes. Among the neural classifiers, the polynomial classifier (PC) gives the highest accuracy and performs best in ambiguity rejection; MQDF, on the other hand, is superior in outlier rejection even though it is not trained with outlier data. The results indicate that pattern classifiers have complementary advantages and should be appropriately combined to achieve higher performance. Received: July 18, 2001 / Accepted: September 28, 2001

14.
This paper looks at how human values influence the reception of technology in organisations. It suggests that we need to know what values are and how value systems evolve in order to manage technological change effectively. This proposition is based on research into the issues surrounding performance measurement as part of an information system, the cognition of which contains many parallels with that of technology. The analysis places human values theory within the context of systems thinking, where values are taken as system components, their groupings as systems, and the expectations and behaviour they produce as emergence.

15.
A labelling approach for the automatic recognition of tables of contents (ToC) is described in this paper. A prototype is used for the electronic consultation of scientific papers in a digital library system named Calliope. The method operates on a roughly structured ASCII file produced by OCR. The recognition approach labels text without using any a priori model. Labelling is based on part-of-speech (PoS) tagging, which is initiated by a primary labelling of text components using specific dictionaries. Significant tags are first grouped into homogeneous classes according to their grammatical categories and then reduced to canonical forms corresponding to the article fields "title" and "authors". Non-labelled tokens are integrated into one field or the other by either applying PoS correction rules or using a structure model generated from well-detected articles. The prototype performs very well across different ToC layouts and character recognition qualities. Without manual intervention, a 96.3% rate of correct segmentation was obtained on 38 journals comprising 2,020 articles, together with a 93.0% rate of correct field extraction. Received April 5, 2000 / Revised February 19, 2001

16.
This paper describes an adaptive recognition system for isolated handwritten characters and the experiments carried out with it. The characters used in our experiments are alphanumeric, including both the upper- and lower-case versions of the Latin alphabet and three Scandinavian diacriticals. The writers are allowed to use their own natural writing style. The recognition system is based on the k-nearest-neighbor rule, and the six character dissimilarity measures applied by the system are all based on dynamic time warping. The first experiments aim to choose the best combination of simple preprocessing and normalization operations and dissimilarity measure for a multi-writer system. The main focus of the work, however, is on online adaptation, whose purpose is to turn a writer-independent system into a writer-dependent one and increase recognition performance. Adaptation is carried out by modifying the classifier's prototype set according to its recognition performance and the user's writing style, in three ways: (1) adding new prototypes; (2) inactivating confusing prototypes; and (3) reshaping existing prototypes. The reshaping algorithm is based on Learning Vector Quantization. Four adaptation strategies, according to which the modifications of the prototype set are performed, have been studied both offline and online. Adaptation is carried out in a self-supervised fashion during normal use and thus remains unnoticed by the user. Received June 30, 1999 / Revised September 29, 2000
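The dynamic time warping underlying the six dissimilarity measures is standard. Below is a textbook sketch for one-dimensional sequences; the system's actual measures over pen trajectories differ in details not given in the abstract:

```python
def dtw_distance(a, b, dist=lambda p, q: abs(p - q)):
    """Dynamic time warping distance between two sequences: the minimum
    cumulative pointwise cost over all monotone alignments of a and b."""
    INF = float('inf')
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Two coordinate sequences of different lengths still align perfectly
print(dtw_distance([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0
```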

17.
This paper discusses the problem of establishing information requirements in changing, ongoing business organisations. Attempts within existing software development paradigms to cope with business change are identified and discussed, and their problems concerning business change are highlighted. An alternative, the spiral-of-change model of tailorable information systems, is proposed for thinking about establishing changing and ongoing information systems requirements. It is also proposed that information should be reconceptualised as tailorable; such a reconceptualisation would allow us to explore ways of establishing information systems requirements that cope with business change. Deferred system's design is proposed as a form of business software design and development that can cope with business change, as well as with the contextual and situational nature of tailorable information.

18.
One of the challenges in the design of a distributed multimedia system is devising suitable specification models for the various schemas at different levels of the system. Another important research issue is the integration and synchronization of heterogeneous multimedia objects. In this paper, we present our models for multimedia schemas along with transformation algorithms that turn high-level multimedia objects into schemas usable for the presentation and communication of those objects. A key module in the system is the Object Exchange Manager (OEM). We present the design and implementation of the OEM module and discuss in detail its interaction with the other modules in a distributed multimedia system.

19.
20.
In this paper, we study how to maximize the throughput of a continuous-media system given fixed amounts of buffer space and disk bandwidth, both predetermined at design time. Our approach is to maximize the utilization of the disk and buffers, in two ways. First, we analyze a scheme that allows multiple streams to share buffers; our analysis and preliminary simulation results indicate that buffer sharing can reduce the total buffer requirement by as much as 50%. Second, we develop three prefetching strategies: SP, IP1, and IP2. As demonstrated by SP, straightforward prefetching is not effective at all; in contrast, IP1 and IP2, which prefetch more intelligently than SP does, can be valuable in maximizing the effective use of buffers and disk. Our preliminary simulation results show that IP1 and IP2 can improve throughput by 40%.
