Similar Literature
2 similar articles found (search time: 0 ms)
1.
Of all the challenges facing the effective application of computational intelligence technologies to pattern recognition, dataset dimensionality is undoubtedly one of the primary impediments. For pattern classifiers to be efficient, a dimensionality reduction stage is usually performed prior to classification. Rough set theory has been widely used for this purpose because it is completely data-driven and requires no information beyond the data itself, whereas most other methods require some additional knowledge. However, traditional rough set-based methods in the literature are restricted by the requirement that all data must be discrete, so real-valued or noisy data cannot be handled directly. This is usually addressed by employing a discretisation method, which can result in information loss. This paper proposes a new approach based on the tolerance rough set model, which can deal with real-valued data whilst retaining dataset semantics. More significantly, this paper describes the underlying mechanism by which the new approach utilises the information contained within the boundary region, or region of uncertainty. Using this information can lead to the discovery of more compact feature subsets and improved classification accuracy. These results are supported by an experimental evaluation comparing the proposed approach with a number of existing feature selection techniques.
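To make the tolerance-based mechanism concrete, the sketch below implements a QuickReduct-style greedy feature selection driven by a tolerance relation on real-valued attributes. It is a minimal illustration, assuming a single global similarity threshold `tau` and a simple range-normalised similarity; the function names, stopping rule, and parameters are hypothetical and not taken from the paper.

```python
# Minimal sketch: tolerance-rough-set feature selection with a greedy
# QuickReduct-style search. All names and the threshold `tau` are
# illustrative assumptions, not the paper's actual implementation.
import numpy as np

def tolerance_class(X, i, features, tau):
    """Indices of objects similar to object i on every chosen feature.
    Similarity on feature a: 1 - |a(x) - a(y)| / range(a) >= tau."""
    ranges = X.max(axis=0) - X.min(axis=0) + 1e-12
    diffs = np.abs(X[:, features] - X[i, features]) / ranges[features]
    return np.where((1.0 - diffs >= tau).all(axis=1))[0]

def gamma(X, y, features, tau):
    """Dependency degree: fraction of objects whose tolerance class is
    consistent with a single decision label (the positive region)."""
    if not features:
        return 0.0
    pos = sum(
        1 for i in range(len(X))
        if len(set(y[tolerance_class(X, i, features, tau)])) == 1
    )
    return pos / len(X)

def quickreduct(X, y, tau=0.9):
    """Greedily add the feature that most increases the dependency degree."""
    remaining, reduct, best = set(range(X.shape[1])), [], 0.0
    while remaining:
        f, g = max(
            ((f, gamma(X, y, reduct + [f], tau)) for f in remaining),
            key=lambda t: t[1],
        )
        if g <= best:        # no remaining feature improves dependency: stop
            break
        reduct.append(f)
        remaining.discard(f)
        best = g
    return reduct, best
```

Because the tolerance relation works directly on the real-valued attributes, no discretisation step is needed, which is the information-loss issue the abstract highlights.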

2.
Bayesian networks are knowledge representation schemes that can capture probabilistic relationships among variables and perform probabilistic inference. The arrival of new evidence propagates through the network until all variables are updated; at the end of propagation, the network is a static snapshot of the state of the domain at that particular time. This weakness in capturing temporal semantics has limited the use of Bayesian networks to domains in which time dependency is not a critical factor. This paper describes a framework that combines Bayesian networks and case-based reasoning to create a knowledge representation scheme capable of dealing with time-varying processes. Static Bayesian network topologies are learned from previously available raw data and from sets of constraints describing significant events, where each constraint is defined as a set of variables assuming significant values. As new data are gathered, dynamic changes to the topology of a Bayesian network are assimilated using techniques that combine singular value decomposition and minimum description length. The new topologies can forecast the occurrence of significant events under specific conditions and monitor changes over time. Since environmental problems are good examples of temporal variation, the problem of forecasting ozone levels in Mexico City was used to test this framework.
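To illustrate the propagation step the abstract describes, here is a minimal sketch of evidence propagation in a tiny discrete Bayesian network, computed by brute-force enumeration. The structure (Season and Traffic as parents of Ozone) and all probability values are invented for illustration only; the paper learns its topologies from Mexico City ozone data rather than using hand-specified tables.

```python
# Minimal sketch: evidence propagation in a toy Bayesian network
# (Season -> Ozone <- Traffic) via enumeration. CPT values are hypothetical.
from itertools import product

p_season = {"warm": 0.5, "cold": 0.5}
p_traffic = {"heavy": 0.6, "light": 0.4}
p_ozone_high = {  # P(Ozone=high | Season, Traffic), keyed by (season, traffic)
    ("warm", "heavy"): 0.80, ("warm", "light"): 0.45,
    ("cold", "heavy"): 0.35, ("cold", "light"): 0.10,
}

def joint(season, traffic, ozone):
    """Full joint probability of one assignment of all three variables."""
    p_high = p_ozone_high[(season, traffic)]
    p_o = p_high if ozone == "high" else 1.0 - p_high
    return p_season[season] * p_traffic[traffic] * p_o

def posterior_ozone_high(evidence):
    """P(Ozone=high | evidence) by summing the joint over hidden variables."""
    num = den = 0.0
    for s, t, o in product(p_season, p_traffic, ("high", "low")):
        # Skip assignments inconsistent with the observed evidence.
        if any(evidence.get(k) not in (None, v)
               for k, v in (("season", s), ("traffic", t))):
            continue
        p = joint(s, t, o)
        den += p
        if o == "high":
            num += p
    return num / den

# New evidence "propagates": observing heavy traffic raises the posterior.
print(posterior_ozone_high({}))                    # prior: ~0.455
print(posterior_ozone_high({"traffic": "heavy"}))  # updated: ~0.575
```

After each update the network is exactly the static snapshot the abstract describes; capturing how such snapshots change over time is what the paper's combination of case-based reasoning, singular value decomposition, and minimum description length is meant to address.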
